* [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly
@ 2023-04-21 17:42 Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 01/24] selftests/bpf: Add notion of auxiliary programs for test_loader Eduard Zingerman
                   ` (25 more replies)
  0 siblings, 26 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

This is a follow-up to RFC [1]. It migrates a second batch of 23
verifier/*.c tests to inline assembly and uses ./test_progs for
actual execution. A link to the first batch is [2].

The migration is done by a Python script (see [3]) with minimal manual
adjustments.

Each migrated verifier/xxx.c file is mapped to progs/verifier_xxx.c
plus an entry in prog_tests/verifier.c, one patch per file.

The first patch in the series adds the notion of auxiliary programs to
test_loader. This is necessary to support tests that use
BPF_MAP_TYPE_PROG_ARRAY maps. Programs marked as auxiliary are always
loaded but are not treated as separate tests.
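
For illustration, a minimal sketch of the annotation in use (the full
example is in the commit message of the first patch):

    SEC("tc")
    __auxiliary
    __naked void dummy_prog1(void) {
            asm volatile ("r0 = 42; exit;");
    }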

The main differences compared to the previous batch:
- Tests that use the pseudo call instruction are migrated;
  called functions are translated as below:

    static __naked __noinline __attribute__((used))
    void bounded_recursion__1(void)
    {
    	asm volatile (...);
    }
    
  A pseudo call in the inline assembly looks as follows:

    __naked void bounded_recursion(void)
    {
        asm volatile ("                                 \
        r1 = 0;                                         \
        call bounded_recursion__1;                      \
        exit;                                           \
    "   ::: __clobber_all);
    }
    
  Interestingly enough, the callee declaration does not have to precede
  the caller definition when the callee is used from inline assembly
  (see the sketch below).
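
  A minimal sketch of this ordering (illustrative names, not taken from
  the series); the caller references the callee by symbol name only, so
  no C forward declaration is needed:

    __naked void caller(void)
    {
        asm volatile ("                                 \
        call callee__1;                                 \
        exit;                                           \
    "   ::: __clobber_all);
    }

    static __naked __noinline __attribute__((used))
    void callee__1(void)
    {
        asm volatile ("                                 \
        r0 = 0;                                         \
        exit;                                           \
    "   ::: __clobber_all);
    }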

- Tests that specify .expected_attach_type are migrated using the
  corresponding SEC() annotations. For example, the following test
  specification:

    {
            "reference tracking: ...",
            .insns = { ... },
            .prog_type = BPF_PROG_TYPE_LSM,
            .kfunc = "bpf",
            .expected_attach_type = BPF_LSM_MAC,
            .flags = BPF_F_SLEEPABLE,
            ...
    },
    
  Becomes:

    SEC("lsm.s/bpf")
    __description("reference tracking: ...")
    __success
    __naked void acquire_release_user_key_reference(void) { ... }
    
- Tests that use BPF_MAP_TYPE_PROG_ARRAY are migrated; the definitions
  of the map and of the dummy programs that populate it are repeated in
  each test.

The `__imm_insn` macro had to be used in a few tests because of
limitations of the clang BPF inline assembler (a short sketch follows
the list):
- For the BPF_ST_MEM instruction in verifier_precise.c and verifier_unpriv.c;
- For BPF_LD_IND with three arguments in verifier_ref_tracking.c.
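
A sketch of the `__imm_insn` workaround for BPF_ST_MEM (not an exact
test body; it assumes the BPF_ST_MEM() macro from filter.h is
available): the raw instruction encoding is emitted verbatim via a
.8byte directive:

    asm volatile ("                             \
    .8byte %[st_mem];                           \
    r0 = 0;                                     \
    exit;                                       \
    "   :
        : __imm_insn(st_mem, BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0))
        : __clobber_all);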

Migrated tests can be selected for execution using the following filter:

  ./test_progs -a verifier_*

While reviewing the changes I noticed the following irregularities in
the original tests:
- verifier_sock.c:
  Tests "bpf_sk_select_reuseport(ctx, sockhash, &key, flags)"
    and "bpf_sk_select_reuseport(ctx, sockmap, &key, flags)"
  have identical bodies.
- unpriv.c:
  Despite the name of the file, 12 tests defined in it have program
  types that are not considered for unprivileged execution by
  test_verifier (e.g. BPF_PROG_TYPE_TRACEPOINT).

Modifications were applied to the following tests:
- loops1.c:
  Some of the tests have the .retval field specified, but the program
  type is BPF_PROG_TYPE_TRACEPOINT, which does not support the
  BPF_PROG_TEST_RUN command. For these tests
  - either the program type is changed to "xdp", which supports
    BPF_PROG_TEST_RUN;
  - or the retval tag is removed, if the test run result is not actually
    predictable (e.g. it depends on bpf_get_prandom_u32()).
- unpriv.c:
  This file is split into two parts:
  - progs/verifier_unpriv.c
  - progs/verifier_unpriv_perf.c
  The first part requires inclusion of filter.h,
  the second part requires inclusion of vmlinux.h.
- value_ptr_arith.c:
  "sanitation: alu with different scalars 2" and
  "sanitation: alu with different scalars 3"
  are modified to avoid the retval "-EINVAL * 2", which cannot be
  encoded using the __retval tag.
  
Additional details are in the relevant commit messages.

[1] RFC
    https://lore.kernel.org/bpf/20230123145148.2791939-1-eddyz87@gmail.com/
[2] First batch of migrated tests
    https://lore.kernel.org/bpf/20230325025524.144043-1-eddyz87@gmail.com/
[3] Migration tool
    https://github.com/eddyz87/verifier-tests-migrator

Eduard Zingerman (24):
  selftests/bpf: Add notion of auxiliary programs for test_loader
  selftests/bpf: verifier/bounds converted to inline assembly
  selftests/bpf: verifier/bpf_get_stack converted to inline assembly
  selftests/bpf: verifier/btf_ctx_access converted to inline assembly
  selftests/bpf: verifier/ctx converted to inline assembly
  selftests/bpf: verifier/d_path converted to inline assembly
  selftests/bpf: verifier/direct_packet_access converted to inline
    assembly
  selftests/bpf: verifier/jeq_infer_not_null converted to inline
    assembly
  selftests/bpf: verifier/loops1 converted to inline assembly
  selftests/bpf: verifier/lwt converted to inline assembly
  selftests/bpf: verifier/map_in_map converted to inline assembly
  selftests/bpf: verifier/map_ptr_mixing converted to inline assembly
  selftests/bpf: verifier/precise converted to inline assembly
  selftests/bpf: verifier/prevent_map_lookup converted to inline
    assembly
  selftests/bpf: verifier/ref_tracking converted to inline assembly
  selftests/bpf: verifier/regalloc converted to inline assembly
  selftests/bpf: verifier/runtime_jit converted to inline assembly
  selftests/bpf: verifier/search_pruning converted to inline assembly
  selftests/bpf: verifier/sock converted to inline assembly
  selftests/bpf: verifier/spin_lock converted to inline assembly
  selftests/bpf: verifier/subreg converted to inline assembly
  selftests/bpf: verifier/unpriv converted to inline assembly
  selftests/bpf: verifier/value_illegal_alu converted to inline assembly
  selftests/bpf: verifier/value_ptr_arith converted to inline assembly

 .../selftests/bpf/prog_tests/verifier.c       |   80 +-
 tools/testing/selftests/bpf/progs/bpf_misc.h  |    6 +
 .../selftests/bpf/progs/verifier_bounds.c     | 1076 ++++++++++++
 .../bpf/progs/verifier_bpf_get_stack.c        |  124 ++
 .../bpf/progs/verifier_btf_ctx_access.c       |   32 +
 .../selftests/bpf/progs/verifier_ctx.c        |  221 +++
 .../selftests/bpf/progs/verifier_d_path.c     |   48 +
 .../bpf/progs/verifier_direct_packet_access.c |  803 +++++++++
 .../bpf/progs/verifier_jeq_infer_not_null.c   |  213 +++
 .../selftests/bpf/progs/verifier_loops1.c     |  259 +++
 .../selftests/bpf/progs/verifier_lwt.c        |  234 +++
 .../selftests/bpf/progs/verifier_map_in_map.c |  142 ++
 .../bpf/progs/verifier_map_ptr_mixing.c       |  265 +++
 .../selftests/bpf/progs/verifier_precise.c    |  269 +++
 .../bpf/progs/verifier_prevent_map_lookup.c   |   65 +
 .../bpf/progs/verifier_ref_tracking.c         | 1495 +++++++++++++++++
 .../selftests/bpf/progs/verifier_regalloc.c   |  364 ++++
 .../bpf/progs/verifier_runtime_jit.c          |  360 ++++
 .../bpf/progs/verifier_search_pruning.c       |  339 ++++
 .../selftests/bpf/progs/verifier_sock.c       |  980 +++++++++++
 .../selftests/bpf/progs/verifier_spin_lock.c  |  533 ++++++
 .../selftests/bpf/progs/verifier_subreg.c     |  673 ++++++++
 .../selftests/bpf/progs/verifier_unpriv.c     |  726 ++++++++
 .../bpf/progs/verifier_unpriv_perf.c          |   34 +
 .../bpf/progs/verifier_value_illegal_alu.c    |  149 ++
 .../bpf/progs/verifier_value_ptr_arith.c      | 1423 ++++++++++++++++
 tools/testing/selftests/bpf/test_loader.c     |   89 +-
 tools/testing/selftests/bpf/verifier/bounds.c |  884 ----------
 .../selftests/bpf/verifier/bpf_get_stack.c    |   87 -
 .../selftests/bpf/verifier/btf_ctx_access.c   |   25 -
 tools/testing/selftests/bpf/verifier/ctx.c    |  186 --
 tools/testing/selftests/bpf/verifier/d_path.c |   37 -
 .../bpf/verifier/direct_packet_access.c       |  710 --------
 .../bpf/verifier/jeq_infer_not_null.c         |  174 --
 tools/testing/selftests/bpf/verifier/loops1.c |  206 ---
 tools/testing/selftests/bpf/verifier/lwt.c    |  189 ---
 .../selftests/bpf/verifier/map_in_map.c       |   96 --
 .../selftests/bpf/verifier/map_ptr_mixing.c   |  100 --
 .../testing/selftests/bpf/verifier/precise.c  |  219 ---
 .../bpf/verifier/prevent_map_lookup.c         |   29 -
 .../selftests/bpf/verifier/ref_tracking.c     | 1082 ------------
 .../testing/selftests/bpf/verifier/regalloc.c |  277 ---
 .../selftests/bpf/verifier/runtime_jit.c      |  231 ---
 .../selftests/bpf/verifier/search_pruning.c   |  266 ---
 tools/testing/selftests/bpf/verifier/sock.c   |  706 --------
 .../selftests/bpf/verifier/spin_lock.c        |  447 -----
 tools/testing/selftests/bpf/verifier/subreg.c |  533 ------
 tools/testing/selftests/bpf/verifier/unpriv.c |  562 -------
 .../bpf/verifier/value_illegal_alu.c          |   95 --
 .../selftests/bpf/verifier/value_ptr_arith.c  | 1140 -------------
 50 files changed, 10974 insertions(+), 8309 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_bounds.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_bpf_get_stack.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_btf_ctx_access.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_ctx.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_d_path.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_jeq_infer_not_null.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_loops1.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_lwt.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_map_in_map.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_map_ptr_mixing.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_precise.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_prevent_map_lookup.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_ref_tracking.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_regalloc.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_runtime_jit.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_search_pruning.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_sock.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_spin_lock.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_subreg.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_unpriv.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_unpriv_perf.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_value_illegal_alu.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_value_ptr_arith.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/bounds.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/bpf_get_stack.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/btf_ctx_access.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/ctx.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/d_path.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/direct_packet_access.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/jeq_infer_not_null.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/loops1.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/lwt.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/map_in_map.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/map_ptr_mixing.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/precise.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/prevent_map_lookup.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/ref_tracking.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/regalloc.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/runtime_jit.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/search_pruning.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/sock.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/spin_lock.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/subreg.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/unpriv.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/value_illegal_alu.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/value_ptr_arith.c

-- 
2.40.0



* [PATCH bpf-next 01/24] selftests/bpf: Add notion of auxiliary programs for test_loader
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 02/24] selftests/bpf: verifier/bounds converted to inline assembly Eduard Zingerman
                   ` (24 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

In order to express test cases that use the bpf_tail_call() intrinsic,
it is necessary to have several programs loaded at a time.
This commit adds the __auxiliary annotation to the set of annotations
supported by test_loader.c. Programs marked as auxiliary are always
loaded but are not treated as separate tests.

For example:

    void dummy_prog1(void);

    struct {
            __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
            __uint(max_entries, 4);
            __uint(key_size, sizeof(int));
            __array(values, void (void));
    } prog_map SEC(".maps") = {
            .values = {
                    [0] = (void *) &dummy_prog1,
            },
    };

    SEC("tc")
    __auxiliary
    __naked void dummy_prog1(void) {
            asm volatile ("r0 = 42; exit;");
    }

    SEC("tc")
    __description("reference tracking: check reference or tail call")
    __success __retval(0)
    __naked void check_reference_or_tail_call(void)
    {
            asm volatile (
            "r2 = %[prog_map] ll;"
            "r3 = 0;"
            "call %[bpf_tail_call];"
            "r0 = 0;"
            "exit;"
            :: __imm(bpf_tail_call),
               __imm_addr(prog_map)
            :  __clobber_all);
    }

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 tools/testing/selftests/bpf/progs/bpf_misc.h |  6 ++
 tools/testing/selftests/bpf/test_loader.c    | 89 +++++++++++++++-----
 2 files changed, 73 insertions(+), 22 deletions(-)

diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h
index 3b307de8dab9..d3c1217ba79a 100644
--- a/tools/testing/selftests/bpf/progs/bpf_misc.h
+++ b/tools/testing/selftests/bpf/progs/bpf_misc.h
@@ -53,6 +53,10 @@
  *                   - A numeric value.
  *                   Multiple __flag attributes could be specified, the final flags
  *                   value is derived by applying binary "or" to all specified values.
+ *
+ * __auxiliary         Annotated program is not a separate test, but used as auxiliary
+ *                     for some other test cases and should always be loaded.
+ * __auxiliary_unpriv  Same, but load program in unprivileged mode.
  */
 #define __msg(msg)		__attribute__((btf_decl_tag("comment:test_expect_msg=" msg)))
 #define __failure		__attribute__((btf_decl_tag("comment:test_expect_failure")))
@@ -65,6 +69,8 @@
 #define __flag(flag)		__attribute__((btf_decl_tag("comment:test_prog_flags="#flag)))
 #define __retval(val)		__attribute__((btf_decl_tag("comment:test_retval="#val)))
 #define __retval_unpriv(val)	__attribute__((btf_decl_tag("comment:test_retval_unpriv="#val)))
+#define __auxiliary		__attribute__((btf_decl_tag("comment:test_auxiliary")))
+#define __auxiliary_unpriv	__attribute__((btf_decl_tag("comment:test_auxiliary_unpriv")))
 
 /* Convenience macro for use with 'asm volatile' blocks */
 #define __naked __attribute__((naked))
diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c
index 40c9b7d532c4..b4edd8454934 100644
--- a/tools/testing/selftests/bpf/test_loader.c
+++ b/tools/testing/selftests/bpf/test_loader.c
@@ -25,6 +25,8 @@
 #define TEST_TAG_DESCRIPTION_PFX "comment:test_description="
 #define TEST_TAG_RETVAL_PFX "comment:test_retval="
 #define TEST_TAG_RETVAL_PFX_UNPRIV "comment:test_retval_unpriv="
+#define TEST_TAG_AUXILIARY "comment:test_auxiliary"
+#define TEST_TAG_AUXILIARY_UNPRIV "comment:test_auxiliary_unpriv"
 
 /* Warning: duplicated in bpf_misc.h */
 #define POINTER_VALUE	0xcafe4all
@@ -59,6 +61,8 @@ struct test_spec {
 	int log_level;
 	int prog_flags;
 	int mode_mask;
+	bool auxiliary;
+	bool valid;
 };
 
 static int tester_init(struct test_loader *tester)
@@ -87,6 +91,11 @@ static void free_test_spec(struct test_spec *spec)
 	free(spec->unpriv.name);
 	free(spec->priv.expect_msgs);
 	free(spec->unpriv.expect_msgs);
+
+	spec->priv.name = NULL;
+	spec->unpriv.name = NULL;
+	spec->priv.expect_msgs = NULL;
+	spec->unpriv.expect_msgs = NULL;
 }
 
 static int push_msg(const char *msg, struct test_subspec *subspec)
@@ -204,6 +213,12 @@ static int parse_test_spec(struct test_loader *tester,
 			spec->unpriv.expect_failure = false;
 			spec->mode_mask |= UNPRIV;
 			has_unpriv_result = true;
+		} else if (strcmp(s, TEST_TAG_AUXILIARY) == 0) {
+			spec->auxiliary = true;
+			spec->mode_mask |= PRIV;
+		} else if (strcmp(s, TEST_TAG_AUXILIARY_UNPRIV) == 0) {
+			spec->auxiliary = true;
+			spec->mode_mask |= UNPRIV;
 		} else if (str_has_pfx(s, TEST_TAG_EXPECT_MSG_PFX)) {
 			msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX) - 1;
 			err = push_msg(msg, &spec->priv);
@@ -314,6 +329,8 @@ static int parse_test_spec(struct test_loader *tester,
 		}
 	}
 
+	spec->valid = true;
+
 	return 0;
 
 cleanup:
@@ -516,16 +533,18 @@ void run_subtest(struct test_loader *tester,
 		 struct bpf_object_open_opts *open_opts,
 		 const void *obj_bytes,
 		 size_t obj_byte_cnt,
+		 struct test_spec *specs,
 		 struct test_spec *spec,
 		 bool unpriv)
 {
 	struct test_subspec *subspec = unpriv ? &spec->unpriv : &spec->priv;
+	struct bpf_program *tprog, *tprog_iter;
+	struct test_spec *spec_iter;
 	struct cap_state caps = {};
-	struct bpf_program *tprog;
 	struct bpf_object *tobj;
 	struct bpf_map *map;
-	int retval;
-	int err;
+	int retval, err, i;
+	bool should_load;
 
 	if (!test__start_subtest(subspec->name))
 		return;
@@ -546,15 +565,23 @@ void run_subtest(struct test_loader *tester,
 	if (!ASSERT_OK_PTR(tobj, "obj_open_mem")) /* shouldn't happen */
 		goto subtest_cleanup;
 
-	bpf_object__for_each_program(tprog, tobj)
-		bpf_program__set_autoload(tprog, false);
+	i = 0;
+	bpf_object__for_each_program(tprog_iter, tobj) {
+		spec_iter = &specs[i++];
+		should_load = false;
+
+		if (spec_iter->valid) {
+			if (strcmp(bpf_program__name(tprog_iter), spec->prog_name) == 0) {
+				tprog = tprog_iter;
+				should_load = true;
+			}
 
-	bpf_object__for_each_program(tprog, tobj) {
-		/* only load specified program */
-		if (strcmp(bpf_program__name(tprog), spec->prog_name) == 0) {
-			bpf_program__set_autoload(tprog, true);
-			break;
+			if (spec_iter->auxiliary &&
+			    spec_iter->mode_mask & (unpriv ? UNPRIV : PRIV))
+				should_load = true;
 		}
+
+		bpf_program__set_autoload(tprog_iter, should_load);
 	}
 
 	prepare_case(tester, spec, tobj, tprog);
@@ -617,11 +644,12 @@ static void process_subtest(struct test_loader *tester,
 			    skel_elf_bytes_fn elf_bytes_factory)
 {
 	LIBBPF_OPTS(bpf_object_open_opts, open_opts, .object_name = skel_name);
+	struct test_spec *specs = NULL;
 	struct bpf_object *obj = NULL;
 	struct bpf_program *prog;
 	const void *obj_bytes;
+	int err, i, nr_progs;
 	size_t obj_byte_cnt;
-	int err;
 
 	if (tester_init(tester) < 0)
 		return; /* failed to initialize tester */
@@ -631,25 +659,42 @@ static void process_subtest(struct test_loader *tester,
 	if (!ASSERT_OK_PTR(obj, "obj_open_mem"))
 		return;
 
-	bpf_object__for_each_program(prog, obj) {
-		struct test_spec spec;
+	nr_progs = 0;
+	bpf_object__for_each_program(prog, obj)
+		++nr_progs;
+
+	specs = calloc(nr_progs, sizeof(struct test_spec));
+	if (!ASSERT_OK_PTR(specs, "Can't alloc specs array"))
+		return;
 
-		/* if we can't derive test specification, go to the next test */
-		err = parse_test_spec(tester, obj, prog, &spec);
-		if (err) {
+	i = 0;
+	bpf_object__for_each_program(prog, obj) {
+		/* ignore tests for which  we can't derive test specification */
+		err = parse_test_spec(tester, obj, prog, &specs[i++]);
+		if (err)
 			PRINT_FAIL("Can't parse test spec for program '%s'\n",
 				   bpf_program__name(prog));
+	}
+
+	i = 0;
+	bpf_object__for_each_program(prog, obj) {
+		struct test_spec *spec = &specs[i++];
+
+		if (!spec->valid || spec->auxiliary)
 			continue;
-		}
 
-		if (spec.mode_mask & PRIV)
-			run_subtest(tester, &open_opts, obj_bytes, obj_byte_cnt, &spec, false);
-		if (spec.mode_mask & UNPRIV)
-			run_subtest(tester, &open_opts, obj_bytes, obj_byte_cnt, &spec, true);
+		if (spec->mode_mask & PRIV)
+			run_subtest(tester, &open_opts, obj_bytes, obj_byte_cnt,
+				    specs, spec, false);
+		if (spec->mode_mask & UNPRIV)
+			run_subtest(tester, &open_opts, obj_bytes, obj_byte_cnt,
+				    specs, spec, true);
 
-		free_test_spec(&spec);
 	}
 
+	for (i = 0; i < nr_progs; ++i)
+		free_test_spec(&specs[i]);
+	free(specs);
 	bpf_object__close(obj);
 }
 
-- 
2.40.0



* [PATCH bpf-next 02/24] selftests/bpf: verifier/bounds converted to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 01/24] selftests/bpf: Add notion of auxiliary programs for test_loader Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 03/24] selftests/bpf: verifier/bpf_get_stack " Eduard Zingerman
                   ` (23 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/bounds automatically converted to use inline assembly.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |    2 +
 .../selftests/bpf/progs/verifier_bounds.c     | 1076 +++++++++++++++++
 tools/testing/selftests/bpf/verifier/bounds.c |  884 --------------
 3 files changed, 1078 insertions(+), 884 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_bounds.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/bounds.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 7c68d78da9ea..e61c9120e261 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -6,6 +6,7 @@
 #include "verifier_and.skel.h"
 #include "verifier_array_access.skel.h"
 #include "verifier_basic_stack.skel.h"
+#include "verifier_bounds.skel.h"
 #include "verifier_bounds_deduction.skel.h"
 #include "verifier_bounds_deduction_non_const.skel.h"
 #include "verifier_bounds_mix_sign_unsign.skel.h"
@@ -80,6 +81,7 @@ static void run_tests_aux(const char *skel_name,
 
 void test_verifier_and(void)                  { RUN(verifier_and); }
 void test_verifier_basic_stack(void)          { RUN(verifier_basic_stack); }
+void test_verifier_bounds(void)               { RUN(verifier_bounds); }
 void test_verifier_bounds_deduction(void)     { RUN(verifier_bounds_deduction); }
 void test_verifier_bounds_deduction_non_const(void)     { RUN(verifier_bounds_deduction_non_const); }
 void test_verifier_bounds_mix_sign_unsign(void) { RUN(verifier_bounds_mix_sign_unsign); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds.c b/tools/testing/selftests/bpf/progs/verifier_bounds.c
new file mode 100644
index 000000000000..c5588a14fe2e
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_bounds.c
@@ -0,0 +1,1076 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/bounds.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 1);
+	__type(key, long long);
+	__type(value, long long);
+} map_hash_8b SEC(".maps");
+
+SEC("socket")
+__description("subtraction bounds (map value) variant 1")
+__failure __msg("R0 max value is outside of the allowed memory range")
+__failure_unpriv
+__naked void bounds_map_value_variant_1(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = *(u8*)(r0 + 0);				\
+	if r1 > 0xff goto l0_%=;			\
+	r3 = *(u8*)(r0 + 1);				\
+	if r3 > 0xff goto l0_%=;			\
+	r1 -= r3;					\
+	r1 >>= 56;					\
+	r0 += r1;					\
+	r0 = *(u8*)(r0 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("subtraction bounds (map value) variant 2")
+__failure
+__msg("R0 min value is negative, either use unsigned index or do a if (index >=0) check.")
+__msg_unpriv("R1 has unknown scalar with mixed signed bounds")
+__naked void bounds_map_value_variant_2(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = *(u8*)(r0 + 0);				\
+	if r1 > 0xff goto l0_%=;			\
+	r3 = *(u8*)(r0 + 1);				\
+	if r3 > 0xff goto l0_%=;			\
+	r1 -= r3;					\
+	r0 += r1;					\
+	r0 = *(u8*)(r0 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check subtraction on pointers for unpriv")
+__success __failure_unpriv __msg_unpriv("R9 pointer -= pointer prohibited")
+__retval(0)
+__naked void subtraction_on_pointers_for_unpriv(void)
+{
+	asm volatile ("					\
+	r0 = 0;						\
+	r1 = %[map_hash_8b] ll;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r6 = 9;						\
+	*(u64*)(r2 + 0) = r6;				\
+	call %[bpf_map_lookup_elem];			\
+	r9 = r10;					\
+	r9 -= r0;					\
+	r1 = %[map_hash_8b] ll;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r6 = 0;						\
+	*(u64*)(r2 + 0) = r6;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	*(u64*)(r0 + 0) = r9;				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check based on zero-extended MOV")
+__success __success_unpriv __retval(0)
+__naked void based_on_zero_extended_mov(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	/* r2 = 0x0000'0000'ffff'ffff */		\
+	w2 = 0xffffffff;				\
+	/* r2 = 0 */					\
+	r2 >>= 32;					\
+	/* no-op */					\
+	r0 += r2;					\
+	/* access at offset 0 */			\
+	r0 = *(u8*)(r0 + 0);				\
+l0_%=:	/* exit */					\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check based on sign-extended MOV. test1")
+__failure __msg("map_value pointer and 4294967295")
+__failure_unpriv
+__naked void on_sign_extended_mov_test1(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	/* r2 = 0xffff'ffff'ffff'ffff */		\
+	r2 = 0xffffffff;				\
+	/* r2 = 0xffff'ffff */				\
+	r2 >>= 32;					\
+	/* r0 = <oob pointer> */			\
+	r0 += r2;					\
+	/* access to OOB pointer */			\
+	r0 = *(u8*)(r0 + 0);				\
+l0_%=:	/* exit */					\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check based on sign-extended MOV. test2")
+__failure __msg("R0 min value is outside of the allowed memory range")
+__failure_unpriv
+__naked void on_sign_extended_mov_test2(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	/* r2 = 0xffff'ffff'ffff'ffff */		\
+	r2 = 0xffffffff;				\
+	/* r2 = 0xfff'ffff */				\
+	r2 >>= 36;					\
+	/* r0 = <oob pointer> */			\
+	r0 += r2;					\
+	/* access to OOB pointer */			\
+	r0 = *(u8*)(r0 + 0);				\
+l0_%=:	/* exit */					\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("bounds check based on reg_off + var_off + insn_off. test1")
+__failure __msg("value_size=8 off=1073741825")
+__naked void var_off_insn_off_test1(void)
+{
+	asm volatile ("					\
+	r6 = *(u32*)(r1 + %[__sk_buff_mark]);		\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r6 &= 1;					\
+	r6 += %[__imm_0];				\
+	r0 += r6;					\
+	r0 += %[__imm_0];				\
+l0_%=:	r0 = *(u8*)(r0 + 3);				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b),
+	  __imm_const(__imm_0, (1 << 29) - 1),
+	  __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("bounds check based on reg_off + var_off + insn_off. test2")
+__failure __msg("value 1073741823")
+__naked void var_off_insn_off_test2(void)
+{
+	asm volatile ("					\
+	r6 = *(u32*)(r1 + %[__sk_buff_mark]);		\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r6 &= 1;					\
+	r6 += %[__imm_0];				\
+	r0 += r6;					\
+	r0 += %[__imm_1];				\
+l0_%=:	r0 = *(u8*)(r0 + 3);				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b),
+	  __imm_const(__imm_0, (1 << 30) - 1),
+	  __imm_const(__imm_1, (1 << 29) - 1),
+	  __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check after truncation of non-boundary-crossing range")
+__success __success_unpriv __retval(0)
+__naked void of_non_boundary_crossing_range(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	/* r1 = [0x00, 0xff] */				\
+	r1 = *(u8*)(r0 + 0);				\
+	r2 = 1;						\
+	/* r2 = 0x10'0000'0000 */			\
+	r2 <<= 36;					\
+	/* r1 = [0x10'0000'0000, 0x10'0000'00ff] */	\
+	r1 += r2;					\
+	/* r1 = [0x10'7fff'ffff, 0x10'8000'00fe] */	\
+	r1 += 0x7fffffff;				\
+	/* r1 = [0x00, 0xff] */				\
+	w1 -= 0x7fffffff;				\
+	/* r1 = 0 */					\
+	r1 >>= 8;					\
+	/* no-op */					\
+	r0 += r1;					\
+	/* access at offset 0 */			\
+	r0 = *(u8*)(r0 + 0);				\
+l0_%=:	/* exit */					\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check after truncation of boundary-crossing range (1)")
+__failure
+/* not actually fully unbounded, but the bound is very high */
+__msg("value -4294967168 makes map_value pointer be out of bounds")
+__failure_unpriv
+__naked void of_boundary_crossing_range_1(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	/* r1 = [0x00, 0xff] */				\
+	r1 = *(u8*)(r0 + 0);				\
+	r1 += %[__imm_0];				\
+	/* r1 = [0xffff'ff80, 0x1'0000'007f] */		\
+	r1 += %[__imm_0];				\
+	/* r1 = [0xffff'ff80, 0xffff'ffff] or		\
+	 *      [0x0000'0000, 0x0000'007f]		\
+	 */						\
+	w1 += 0;					\
+	r1 -= %[__imm_0];				\
+	/* r1 = [0x00, 0xff] or				\
+	 *      [0xffff'ffff'0000'0080, 0xffff'ffff'ffff'ffff]\
+	 */						\
+	r1 -= %[__imm_0];				\
+	/* error on OOB pointer computation */		\
+	r0 += r1;					\
+	/* exit */					\
+	r0 = 0;						\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b),
+	  __imm_const(__imm_0, 0xffffff80 >> 1)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check after truncation of boundary-crossing range (2)")
+__failure __msg("value -4294967168 makes map_value pointer be out of bounds")
+__failure_unpriv
+__naked void of_boundary_crossing_range_2(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	/* r1 = [0x00, 0xff] */				\
+	r1 = *(u8*)(r0 + 0);				\
+	r1 += %[__imm_0];				\
+	/* r1 = [0xffff'ff80, 0x1'0000'007f] */		\
+	r1 += %[__imm_0];				\
+	/* r1 = [0xffff'ff80, 0xffff'ffff] or		\
+	 *      [0x0000'0000, 0x0000'007f]		\
+	 * difference to previous test: truncation via MOV32\
+	 * instead of ALU32.				\
+	 */						\
+	w1 = w1;					\
+	r1 -= %[__imm_0];				\
+	/* r1 = [0x00, 0xff] or				\
+	 *      [0xffff'ffff'0000'0080, 0xffff'ffff'ffff'ffff]\
+	 */						\
+	r1 -= %[__imm_0];				\
+	/* error on OOB pointer computation */		\
+	r0 += r1;					\
+	/* exit */					\
+	r0 = 0;						\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b),
+	  __imm_const(__imm_0, 0xffffff80 >> 1)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check after wrapping 32-bit addition")
+__success __success_unpriv __retval(0)
+__naked void after_wrapping_32_bit_addition(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	/* r1 = 0x7fff'ffff */				\
+	r1 = 0x7fffffff;				\
+	/* r1 = 0xffff'fffe */				\
+	r1 += 0x7fffffff;				\
+	/* r1 = 0 */					\
+	w1 += 2;					\
+	/* no-op */					\
+	r0 += r1;					\
+	/* access at offset 0 */			\
+	r0 = *(u8*)(r0 + 0);				\
+l0_%=:	/* exit */					\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check after shift with oversized count operand")
+__failure __msg("R0 max value is outside of the allowed memory range")
+__failure_unpriv
+__naked void shift_with_oversized_count_operand(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r2 = 32;					\
+	r1 = 1;						\
+	/* r1 = (u32)1 << (u32)32 = ? */		\
+	w1 <<= w2;					\
+	/* r1 = [0x0000, 0xffff] */			\
+	r1 &= 0xffff;					\
+	/* computes unknown pointer, potentially OOB */	\
+	r0 += r1;					\
+	/* potentially OOB access */			\
+	r0 = *(u8*)(r0 + 0);				\
+l0_%=:	/* exit */					\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check after right shift of maybe-negative number")
+__failure __msg("R0 unbounded memory access")
+__failure_unpriv
+__naked void shift_of_maybe_negative_number(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	/* r1 = [0x00, 0xff] */				\
+	r1 = *(u8*)(r0 + 0);				\
+	/* r1 = [-0x01, 0xfe] */			\
+	r1 -= 1;					\
+	/* r1 = 0 or 0xff'ffff'ffff'ffff */		\
+	r1 >>= 8;					\
+	/* r1 = 0 or 0xffff'ffff'ffff */		\
+	r1 >>= 8;					\
+	/* computes unknown pointer, potentially OOB */	\
+	r0 += r1;					\
+	/* potentially OOB access */			\
+	r0 = *(u8*)(r0 + 0);				\
+l0_%=:	/* exit */					\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check after 32-bit right shift with 64-bit input")
+__failure __msg("math between map_value pointer and 4294967294 is not allowed")
+__failure_unpriv
+__naked void shift_with_64_bit_input(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = 2;						\
+	/* r1 = 1<<32 */				\
+	r1 <<= 31;					\
+	/* r1 = 0 (NOT 2!) */				\
+	w1 >>= 31;					\
+	/* r1 = 0xffff'fffe (NOT 0!) */			\
+	w1 -= 2;					\
+	/* error on computing OOB pointer */		\
+	r0 += r1;					\
+	/* exit */					\
+	r0 = 0;						\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check map access with off+size signed 32bit overflow. test1")
+__failure __msg("map_value pointer and 2147483646")
+__failure_unpriv
+__naked void size_signed_32bit_overflow_test1(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r0 += 0x7ffffffe;				\
+	r0 = *(u64*)(r0 + 0);				\
+	goto l1_%=;					\
+l1_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check map access with off+size signed 32bit overflow. test2")
+__failure __msg("pointer offset 1073741822")
+__msg_unpriv("R0 pointer arithmetic of map value goes out of range")
+__naked void size_signed_32bit_overflow_test2(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r0 += 0x1fffffff;				\
+	r0 += 0x1fffffff;				\
+	r0 += 0x1fffffff;				\
+	r0 = *(u64*)(r0 + 0);				\
+	goto l1_%=;					\
+l1_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check map access with off+size signed 32bit overflow. test3")
+__failure __msg("pointer offset -1073741822")
+__msg_unpriv("R0 pointer arithmetic of map value goes out of range")
+__naked void size_signed_32bit_overflow_test3(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r0 -= 0x1fffffff;				\
+	r0 -= 0x1fffffff;				\
+	r0 = *(u64*)(r0 + 2);				\
+	goto l1_%=;					\
+l1_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check map access with off+size signed 32bit overflow. test4")
+__failure __msg("map_value pointer and 1000000000000")
+__failure_unpriv
+__naked void size_signed_32bit_overflow_test4(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r1 = 1000000;					\
+	r1 *= 1000000;					\
+	r0 += r1;					\
+	r0 = *(u64*)(r0 + 2);				\
+	goto l1_%=;					\
+l1_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check mixed 32bit and 64bit arithmetic. test1")
+__success __failure_unpriv __msg_unpriv("R0 invalid mem access 'scalar'")
+__retval(0)
+__naked void _32bit_and_64bit_arithmetic_test1(void)
+{
+	asm volatile ("					\
+	r0 = 0;						\
+	r1 = -1;					\
+	r1 <<= 32;					\
+	r1 += 1;					\
+	/* r1 = 0xffffFFFF00000001 */			\
+	if w1 > 1 goto l0_%=;				\
+	/* check ALU64 op keeps 32bit bounds */		\
+	r1 += 1;					\
+	if w1 > 2 goto l0_%=;				\
+	goto l1_%=;					\
+l0_%=:	/* invalid ldx if bounds are lost above */	\
+	r0 = *(u64*)(r0 - 1);				\
+l1_%=:	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check mixed 32bit and 64bit arithmetic. test2")
+__success __failure_unpriv __msg_unpriv("R0 invalid mem access 'scalar'")
+__retval(0)
+__naked void _32bit_and_64bit_arithmetic_test2(void)
+{
+	asm volatile ("					\
+	r0 = 0;						\
+	r1 = -1;					\
+	r1 <<= 32;					\
+	r1 += 1;					\
+	/* r1 = 0xffffFFFF00000001 */			\
+	r2 = 3;						\
+	/* r1 = 0x2 */					\
+	w1 += 1;					\
+	/* check ALU32 op zero extends 64bit bounds */	\
+	if r1 > r2 goto l0_%=;				\
+	goto l1_%=;					\
+l0_%=:	/* invalid ldx if bounds are lost above */	\
+	r0 = *(u64*)(r0 - 1);				\
+l1_%=:	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("tc")
+__description("assigning 32bit bounds to 64bit for wA = 0, wB = wA")
+__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT)
+__naked void for_wa_0_wb_wa(void)
+{
+	asm volatile ("					\
+	r8 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r7 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	w9 = 0;						\
+	w2 = w9;					\
+	r6 = r7;					\
+	r6 += r2;					\
+	r3 = r6;					\
+	r3 += 8;					\
+	if r3 > r8 goto l0_%=;				\
+	r5 = *(u32*)(r6 + 0);				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check for reg = 0, reg xor 1")
+__success __failure_unpriv
+__msg_unpriv("R0 min value is outside of the allowed memory range")
+__retval(0)
+__naked void reg_0_reg_xor_1(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r1 = 0;						\
+	r1 ^= 1;					\
+	if r1 != 0 goto l1_%=;				\
+	r0 = *(u64*)(r0 + 8);				\
+l1_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check for reg32 = 0, reg32 xor 1")
+__success __failure_unpriv
+__msg_unpriv("R0 min value is outside of the allowed memory range")
+__retval(0)
+__naked void reg32_0_reg32_xor_1(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	w1 = 0;						\
+	w1 ^= 1;					\
+	if w1 != 0 goto l1_%=;				\
+	r0 = *(u64*)(r0 + 8);				\
+l1_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check for reg = 2, reg xor 3")
+__success __failure_unpriv
+__msg_unpriv("R0 min value is outside of the allowed memory range")
+__retval(0)
+__naked void reg_2_reg_xor_3(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r1 = 2;						\
+	r1 ^= 3;					\
+	if r1 > 0 goto l1_%=;				\
+	r0 = *(u64*)(r0 + 8);				\
+l1_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check for reg = any, reg xor 3")
+__failure __msg("invalid access to map value")
+__msg_unpriv("invalid access to map value")
+__naked void reg_any_reg_xor_3(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r1 = *(u64*)(r0 + 0);				\
+	r1 ^= 3;					\
+	if r1 != 0 goto l1_%=;				\
+	r0 = *(u64*)(r0 + 8);				\
+l1_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check for reg32 = any, reg32 xor 3")
+__failure __msg("invalid access to map value")
+__msg_unpriv("invalid access to map value")
+__naked void reg32_any_reg32_xor_3(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r1 = *(u64*)(r0 + 0);				\
+	w1 ^= 3;					\
+	if w1 != 0 goto l1_%=;				\
+	r0 = *(u64*)(r0 + 8);				\
+l1_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check for reg > 0, reg xor 3")
+__success __failure_unpriv
+__msg_unpriv("R0 min value is outside of the allowed memory range")
+__retval(0)
+__naked void reg_0_reg_xor_3(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r1 = *(u64*)(r0 + 0);				\
+	if r1 <= 0 goto l1_%=;				\
+	r1 ^= 3;					\
+	if r1 >= 0 goto l1_%=;				\
+	r0 = *(u64*)(r0 + 8);				\
+l1_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds check for reg32 > 0, reg32 xor 3")
+__success __failure_unpriv
+__msg_unpriv("R0 min value is outside of the allowed memory range")
+__retval(0)
+__naked void reg32_0_reg32_xor_3(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r1 = *(u64*)(r0 + 0);				\
+	if w1 <= 0 goto l1_%=;				\
+	w1 ^= 3;					\
+	if w1 >= 0 goto l1_%=;				\
+	r0 = *(u64*)(r0 + 8);				\
+l1_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds checks after 32-bit truncation. test 1")
+__success __failure_unpriv __msg_unpriv("R0 leaks addr")
+__retval(0)
+__naked void _32_bit_truncation_test_1(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = *(u32*)(r0 + 0);				\
+	/* This used to reduce the max bound to 0x7fffffff */\
+	if r1 == 0 goto l1_%=;				\
+	if r1 > 0x7fffffff goto l0_%=;			\
+l1_%=:	r0 = 0;						\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("bounds checks after 32-bit truncation. test 2")
+__success __failure_unpriv __msg_unpriv("R0 leaks addr")
+__retval(0)
+__naked void _32_bit_truncation_test_2(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = *(u32*)(r0 + 0);				\
+	if r1 s< 1 goto l1_%=;				\
+	if w1 s< 0 goto l0_%=;				\
+l1_%=:	r0 = 0;						\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("bound check with JMP_JLT for crossing 64-bit signed boundary")
+__success __retval(0)
+__naked void crossing_64_bit_signed_boundary_1(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[xdp_md_data]);		\
+	r3 = *(u32*)(r1 + %[xdp_md_data_end]);		\
+	r1 = r2;					\
+	r1 += 1;					\
+	if r1 > r3 goto l0_%=;				\
+	r1 = *(u8*)(r2 + 0);				\
+	r0 = 0x7fffffffffffff10 ll;			\
+	r1 += r0;					\
+	r0 = 0x8000000000000000 ll;			\
+l1_%=:	r0 += 1;					\
+	/* r1 unsigned range is [0x7fffffffffffff10, 0x800000000000000f] */\
+	if r0 < r1 goto l1_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(xdp_md_data, offsetof(struct xdp_md, data)),
+	  __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end))
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("bound check with JMP_JSLT for crossing 64-bit signed boundary")
+__success __retval(0)
+__naked void crossing_64_bit_signed_boundary_2(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[xdp_md_data]);		\
+	r3 = *(u32*)(r1 + %[xdp_md_data_end]);		\
+	r1 = r2;					\
+	r1 += 1;					\
+	if r1 > r3 goto l0_%=;				\
+	r1 = *(u8*)(r2 + 0);				\
+	r0 = 0x7fffffffffffff10 ll;			\
+	r1 += r0;					\
+	r2 = 0x8000000000000fff ll;			\
+	r0 = 0x8000000000000000 ll;			\
+l1_%=:	r0 += 1;					\
+	if r0 s> r2 goto l0_%=;				\
+	/* r1 signed range is [S64_MIN, S64_MAX] */	\
+	if r0 s< r1 goto l1_%=;				\
+	r0 = 1;						\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(xdp_md_data, offsetof(struct xdp_md, data)),
+	  __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end))
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("bound check for loop upper bound greater than U32_MAX")
+__success __retval(0)
+__naked void bound_greater_than_u32_max(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[xdp_md_data]);		\
+	r3 = *(u32*)(r1 + %[xdp_md_data_end]);		\
+	r1 = r2;					\
+	r1 += 1;					\
+	if r1 > r3 goto l0_%=;				\
+	r1 = *(u8*)(r2 + 0);				\
+	r0 = 0x100000000 ll;				\
+	r1 += r0;					\
+	r0 = 0x100000000 ll;				\
+l1_%=:	r0 += 1;					\
+	if r0 < r1 goto l1_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(xdp_md_data, offsetof(struct xdp_md, data)),
+	  __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end))
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("bound check with JMP32_JLT for crossing 32-bit signed boundary")
+__success __retval(0)
+__naked void crossing_32_bit_signed_boundary_1(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[xdp_md_data]);		\
+	r3 = *(u32*)(r1 + %[xdp_md_data_end]);		\
+	r1 = r2;					\
+	r1 += 1;					\
+	if r1 > r3 goto l0_%=;				\
+	r1 = *(u8*)(r2 + 0);				\
+	w0 = 0x7fffff10;				\
+	w1 += w0;					\
+	w0 = 0x80000000;				\
+l1_%=:	w0 += 1;					\
+	/* r1 unsigned range is [0, 0x8000000f] */	\
+	if w0 < w1 goto l1_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(xdp_md_data, offsetof(struct xdp_md, data)),
+	  __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end))
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("bound check with JMP32_JSLT for crossing 32-bit signed boundary")
+__success __retval(0)
+__naked void crossing_32_bit_signed_boundary_2(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[xdp_md_data]);		\
+	r3 = *(u32*)(r1 + %[xdp_md_data_end]);		\
+	r1 = r2;					\
+	r1 += 1;					\
+	if r1 > r3 goto l0_%=;				\
+	r1 = *(u8*)(r2 + 0);				\
+	w0 = 0x7fffff10;				\
+	w1 += w0;					\
+	w2 = 0x80000fff;				\
+	w0 = 0x80000000;				\
+l1_%=:	w0 += 1;					\
+	if w0 s> w2 goto l0_%=;				\
+	/* r1 signed range is [S32_MIN, S32_MAX] */	\
+	if w0 s< w1 goto l1_%=;				\
+	r0 = 1;						\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(xdp_md_data, offsetof(struct xdp_md, data)),
+	  __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end))
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/bounds.c b/tools/testing/selftests/bpf/verifier/bounds.c
deleted file mode 100644
index 43942ce8cf15..000000000000
--- a/tools/testing/selftests/bpf/verifier/bounds.c
+++ /dev/null
@@ -1,884 +0,0 @@
-{
-	"subtraction bounds (map value) variant 1",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 0xff, 7),
-	BPF_LDX_MEM(BPF_B, BPF_REG_3, BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_3, 0xff, 5),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_3),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 56),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.errstr = "R0 max value is outside of the allowed memory range",
-	.result = REJECT,
-},
-{
-	"subtraction bounds (map value) variant 2",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 0xff, 6),
-	BPF_LDX_MEM(BPF_B, BPF_REG_3, BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_3, 0xff, 4),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_3),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.errstr = "R0 min value is negative, either use unsigned index or do a if (index >=0) check.",
-	.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
-	.result = REJECT,
-},
-{
-	"check subtraction on pointers for unpriv",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_LD_MAP_FD(BPF_REG_ARG1, 0),
-	BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_FP),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -8),
-	BPF_ST_MEM(BPF_DW, BPF_REG_ARG2, 0, 9),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_MOV64_REG(BPF_REG_9, BPF_REG_FP),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_9, BPF_REG_0),
-	BPF_LD_MAP_FD(BPF_REG_ARG1, 0),
-	BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_FP),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -8),
-	BPF_ST_MEM(BPF_DW, BPF_REG_ARG2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 1, 9 },
-	.result = ACCEPT,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R9 pointer -= pointer prohibited",
-},
-{
-	"bounds check based on zero-extended MOV",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
-	/* r2 = 0x0000'0000'ffff'ffff */
-	BPF_MOV32_IMM(BPF_REG_2, 0xffffffff),
-	/* r2 = 0 */
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 32),
-	/* no-op */
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
-	/* access at offset 0 */
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
-	/* exit */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.result = ACCEPT
-},
-{
-	"bounds check based on sign-extended MOV. test1",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
-	/* r2 = 0xffff'ffff'ffff'ffff */
-	BPF_MOV64_IMM(BPF_REG_2, 0xffffffff),
-	/* r2 = 0xffff'ffff */
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 32),
-	/* r0 = <oob pointer> */
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
-	/* access to OOB pointer */
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
-	/* exit */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.errstr = "map_value pointer and 4294967295",
-	.result = REJECT
-},
-{
-	"bounds check based on sign-extended MOV. test2",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
-	/* r2 = 0xffff'ffff'ffff'ffff */
-	BPF_MOV64_IMM(BPF_REG_2, 0xffffffff),
-	/* r2 = 0xfff'ffff */
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 36),
-	/* r0 = <oob pointer> */
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
-	/* access to OOB pointer */
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
-	/* exit */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.errstr = "R0 min value is outside of the allowed memory range",
-	.result = REJECT
-},
-{
-	"bounds check based on reg_off + var_off + insn_off. test1",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
-		    offsetof(struct __sk_buff, mark)),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_6, 1),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, (1 << 29) - 1),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, (1 << 29) - 1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 3),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 4 },
-	.errstr = "value_size=8 off=1073741825",
-	.result = REJECT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"bounds check based on reg_off + var_off + insn_off. test2",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
-		    offsetof(struct __sk_buff, mark)),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_6, 1),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, (1 << 30) - 1),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, (1 << 29) - 1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 3),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 4 },
-	.errstr = "value 1073741823",
-	.result = REJECT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"bounds check after truncation of non-boundary-crossing range",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
-	/* r1 = [0x00, 0xff] */
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_2, 1),
-	/* r2 = 0x10'0000'0000 */
-	BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 36),
-	/* r1 = [0x10'0000'0000, 0x10'0000'00ff] */
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_2),
-	/* r1 = [0x10'7fff'ffff, 0x10'8000'00fe] */
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff),
-	/* r1 = [0x00, 0xff] */
-	BPF_ALU32_IMM(BPF_SUB, BPF_REG_1, 0x7fffffff),
-	/* r1 = 0 */
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8),
-	/* no-op */
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	/* access at offset 0 */
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
-	/* exit */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.result = ACCEPT
-},
-{
-	"bounds check after truncation of boundary-crossing range (1)",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
-	/* r1 = [0x00, 0xff] */
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1),
-	/* r1 = [0xffff'ff80, 0x1'0000'007f] */
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1),
-	/* r1 = [0xffff'ff80, 0xffff'ffff] or
-	 *      [0x0000'0000, 0x0000'007f]
-	 */
-	BPF_ALU32_IMM(BPF_ADD, BPF_REG_1, 0),
-	BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1),
-	/* r1 = [0x00, 0xff] or
-	 *      [0xffff'ffff'0000'0080, 0xffff'ffff'ffff'ffff]
-	 */
-	BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1),
-	/* error on OOB pointer computation */
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	/* exit */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	/* not actually fully unbounded, but the bound is very high */
-	.errstr = "value -4294967168 makes map_value pointer be out of bounds",
-	.result = REJECT,
-},
-{
-	"bounds check after truncation of boundary-crossing range (2)",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
-	/* r1 = [0x00, 0xff] */
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1),
-	/* r1 = [0xffff'ff80, 0x1'0000'007f] */
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1),
-	/* r1 = [0xffff'ff80, 0xffff'ffff] or
-	 *      [0x0000'0000, 0x0000'007f]
-	 * difference to previous test: truncation via MOV32
-	 * instead of ALU32.
-	 */
-	BPF_MOV32_REG(BPF_REG_1, BPF_REG_1),
-	BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1),
-	/* r1 = [0x00, 0xff] or
-	 *      [0xffff'ffff'0000'0080, 0xffff'ffff'ffff'ffff]
-	 */
-	BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1),
-	/* error on OOB pointer computation */
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	/* exit */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.errstr = "value -4294967168 makes map_value pointer be out of bounds",
-	.result = REJECT,
-},
-{
-	"bounds check after wrapping 32-bit addition",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
-	/* r1 = 0x7fff'ffff */
-	BPF_MOV64_IMM(BPF_REG_1, 0x7fffffff),
-	/* r1 = 0xffff'fffe */
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff),
-	/* r1 = 0 */
-	BPF_ALU32_IMM(BPF_ADD, BPF_REG_1, 2),
-	/* no-op */
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	/* access at offset 0 */
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
-	/* exit */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.result = ACCEPT
-},
-{
-	"bounds check after shift with oversized count operand",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
-	BPF_MOV64_IMM(BPF_REG_2, 32),
-	BPF_MOV64_IMM(BPF_REG_1, 1),
-	/* r1 = (u32)1 << (u32)32 = ? */
-	BPF_ALU32_REG(BPF_LSH, BPF_REG_1, BPF_REG_2),
-	/* r1 = [0x0000, 0xffff] */
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xffff),
-	/* computes unknown pointer, potentially OOB */
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	/* potentially OOB access */
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
-	/* exit */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.errstr = "R0 max value is outside of the allowed memory range",
-	.result = REJECT
-},
-{
-	"bounds check after right shift of maybe-negative number",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
-	/* r1 = [0x00, 0xff] */
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	/* r1 = [-0x01, 0xfe] */
-	BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1),
-	/* r1 = 0 or 0xff'ffff'ffff'ffff */
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8),
-	/* r1 = 0 or 0xffff'ffff'ffff */
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8),
-	/* computes unknown pointer, potentially OOB */
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	/* potentially OOB access */
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
-	/* exit */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.errstr = "R0 unbounded memory access",
-	.result = REJECT
-},
-{
-	"bounds check after 32-bit right shift with 64-bit input",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
-	/* r1 = 2 */
-	BPF_MOV64_IMM(BPF_REG_1, 2),
-	/* r1 = 1<<32 */
-	BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 31),
-	/* r1 = 0 (NOT 2!) */
-	BPF_ALU32_IMM(BPF_RSH, BPF_REG_1, 31),
-	/* r1 = 0xffff'fffe (NOT 0!) */
-	BPF_ALU32_IMM(BPF_SUB, BPF_REG_1, 2),
-	/* error on computing OOB pointer */
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	/* exit */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.errstr = "math between map_value pointer and 4294967294 is not allowed",
-	.result = REJECT,
-},
-{
-	"bounds check map access with off+size signed 32bit overflow. test1",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x7ffffffe),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
-	BPF_JMP_A(0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.errstr = "map_value pointer and 2147483646",
-	.result = REJECT
-},
-{
-	"bounds check map access with off+size signed 32bit overflow. test2",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x1fffffff),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x1fffffff),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x1fffffff),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
-	BPF_JMP_A(0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.errstr = "pointer offset 1073741822",
-	.errstr_unpriv = "R0 pointer arithmetic of map value goes out of range",
-	.result = REJECT
-},
-{
-	"bounds check map access with off+size signed 32bit overflow. test3",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 0x1fffffff),
-	BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 0x1fffffff),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 2),
-	BPF_JMP_A(0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.errstr = "pointer offset -1073741822",
-	.errstr_unpriv = "R0 pointer arithmetic of map value goes out of range",
-	.result = REJECT
-},
-{
-	"bounds check map access with off+size signed 32bit overflow. test4",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_IMM(BPF_REG_1, 1000000),
-	BPF_ALU64_IMM(BPF_MUL, BPF_REG_1, 1000000),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 2),
-	BPF_JMP_A(0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.errstr = "map_value pointer and 1000000000000",
-	.result = REJECT
-},
-{
-	"bounds check mixed 32bit and 64bit arithmetic. test1",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_1, -1),
-	BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 32),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
-	/* r1 = 0xffffFFFF00000001 */
-	BPF_JMP32_IMM(BPF_JGT, BPF_REG_1, 1, 3),
-	/* check ALU64 op keeps 32bit bounds */
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
-	BPF_JMP32_IMM(BPF_JGT, BPF_REG_1, 2, 1),
-	BPF_JMP_A(1),
-	/* invalid ldx if bounds are lost above */
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -1),
-	BPF_EXIT_INSN(),
-	},
-	.errstr_unpriv = "R0 invalid mem access 'scalar'",
-	.result_unpriv = REJECT,
-	.result = ACCEPT
-},
-{
-	"bounds check mixed 32bit and 64bit arithmetic. test2",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_1, -1),
-	BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 32),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
-	/* r1 = 0xffffFFFF00000001 */
-	BPF_MOV64_IMM(BPF_REG_2, 3),
-	/* r1 = 0x2 */
-	BPF_ALU32_IMM(BPF_ADD, BPF_REG_1, 1),
-	/* check ALU32 op zero extends 64bit bounds */
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 1),
-	BPF_JMP_A(1),
-	/* invalid ldx if bounds are lost above */
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -1),
-	BPF_EXIT_INSN(),
-	},
-	.errstr_unpriv = "R0 invalid mem access 'scalar'",
-	.result_unpriv = REJECT,
-	.result = ACCEPT
-},
-{
-	"assigning 32bit bounds to 64bit for wA = 0, wB = wA",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_MOV32_IMM(BPF_REG_9, 0),
-	BPF_MOV32_REG(BPF_REG_2, BPF_REG_9),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_7),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_2),
-	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_8, 1),
-	BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_6, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"bounds check for reg = 0, reg xor 1",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_IMM(BPF_REG_1, 0),
-	BPF_ALU64_IMM(BPF_XOR, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 8),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr_unpriv = "R0 min value is outside of the allowed memory range",
-	.result_unpriv = REJECT,
-	.fixup_map_hash_8b = { 3 },
-	.result = ACCEPT,
-},
-{
-	"bounds check for reg32 = 0, reg32 xor 1",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV32_IMM(BPF_REG_1, 0),
-	BPF_ALU32_IMM(BPF_XOR, BPF_REG_1, 1),
-	BPF_JMP32_IMM(BPF_JNE, BPF_REG_1, 0, 1),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 8),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr_unpriv = "R0 min value is outside of the allowed memory range",
-	.result_unpriv = REJECT,
-	.fixup_map_hash_8b = { 3 },
-	.result = ACCEPT,
-},
-{
-	"bounds check for reg = 2, reg xor 3",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_IMM(BPF_REG_1, 2),
-	BPF_ALU64_IMM(BPF_XOR, BPF_REG_1, 3),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 0, 1),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 8),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr_unpriv = "R0 min value is outside of the allowed memory range",
-	.result_unpriv = REJECT,
-	.fixup_map_hash_8b = { 3 },
-	.result = ACCEPT,
-},
-{
-	"bounds check for reg = any, reg xor 3",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_XOR, BPF_REG_1, 3),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 8),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.result = REJECT,
-	.errstr = "invalid access to map value",
-	.errstr_unpriv = "invalid access to map value",
-},
-{
-	"bounds check for reg32 = any, reg32 xor 3",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
-	BPF_ALU32_IMM(BPF_XOR, BPF_REG_1, 3),
-	BPF_JMP32_IMM(BPF_JNE, BPF_REG_1, 0, 1),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 8),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.result = REJECT,
-	.errstr = "invalid access to map value",
-	.errstr_unpriv = "invalid access to map value",
-},
-{
-	"bounds check for reg > 0, reg xor 3",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JLE, BPF_REG_1, 0, 3),
-	BPF_ALU64_IMM(BPF_XOR, BPF_REG_1, 3),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 1),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 8),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr_unpriv = "R0 min value is outside of the allowed memory range",
-	.result_unpriv = REJECT,
-	.fixup_map_hash_8b = { 3 },
-	.result = ACCEPT,
-},
-{
-	"bounds check for reg32 > 0, reg32 xor 3",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
-	BPF_JMP32_IMM(BPF_JLE, BPF_REG_1, 0, 3),
-	BPF_ALU32_IMM(BPF_XOR, BPF_REG_1, 3),
-	BPF_JMP32_IMM(BPF_JGE, BPF_REG_1, 0, 1),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 8),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr_unpriv = "R0 min value is outside of the allowed memory range",
-	.result_unpriv = REJECT,
-	.fixup_map_hash_8b = { 3 },
-	.result = ACCEPT,
-},
-{
-	"bounds checks after 32-bit truncation. test 1",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
-	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
-	/* This used to reduce the max bound to 0x7fffffff */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 0x7fffffff, 1),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.errstr_unpriv = "R0 leaks addr",
-	.result_unpriv = REJECT,
-	.result = ACCEPT,
-},
-{
-	"bounds checks after 32-bit truncation. test 2",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
-	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_1, 1, 1),
-	BPF_JMP32_IMM(BPF_JSLT, BPF_REG_1, 0, 1),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.errstr_unpriv = "R0 leaks addr",
-	.result_unpriv = REJECT,
-	.result = ACCEPT,
-},
-{
-	"bound check with JMP_JLT for crossing 64-bit signed boundary",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data_end)),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 8),
-
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0),
-	BPF_LD_IMM64(BPF_REG_0, 0x7fffffffffffff10),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-
-	BPF_LD_IMM64(BPF_REG_0, 0x8000000000000000),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-	/* r1 unsigned range is [0x7fffffffffffff10, 0x800000000000000f] */
-	BPF_JMP_REG(BPF_JLT, BPF_REG_0, BPF_REG_1, -2),
-
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_XDP,
-},
-{
-	"bound check with JMP_JSLT for crossing 64-bit signed boundary",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data_end)),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 13),
-
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0),
-	BPF_LD_IMM64(BPF_REG_0, 0x7fffffffffffff10),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-
-	BPF_LD_IMM64(BPF_REG_2, 0x8000000000000fff),
-	BPF_LD_IMM64(BPF_REG_0, 0x8000000000000000),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-	BPF_JMP_REG(BPF_JSGT, BPF_REG_0, BPF_REG_2, 3),
-	/* r1 signed range is [S64_MIN, S64_MAX] */
-	BPF_JMP_REG(BPF_JSLT, BPF_REG_0, BPF_REG_1, -3),
-
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_XDP,
-},
-{
-	"bound check for loop upper bound greater than U32_MAX",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data_end)),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 8),
-
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0),
-	BPF_LD_IMM64(BPF_REG_0, 0x100000000),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-
-	BPF_LD_IMM64(BPF_REG_0, 0x100000000),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_0, BPF_REG_1, -2),
-
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_XDP,
-},
-{
-	"bound check with JMP32_JLT for crossing 32-bit signed boundary",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data_end)),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 6),
-
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0),
-	BPF_MOV32_IMM(BPF_REG_0, 0x7fffff10),
-	BPF_ALU32_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-
-	BPF_MOV32_IMM(BPF_REG_0, 0x80000000),
-	BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, 1),
-	/* r1 unsigned range is [0, 0x8000000f] */
-	BPF_JMP32_REG(BPF_JLT, BPF_REG_0, BPF_REG_1, -2),
-
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_XDP,
-},
-{
-	"bound check with JMP32_JSLT for crossing 32-bit signed boundary",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data_end)),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 10),
-
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0),
-	BPF_MOV32_IMM(BPF_REG_0, 0x7fffff10),
-	BPF_ALU32_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-
-	BPF_MOV32_IMM(BPF_REG_2, 0x80000fff),
-	BPF_MOV32_IMM(BPF_REG_0, 0x80000000),
-	BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, 1),
-	BPF_JMP32_REG(BPF_JSGT, BPF_REG_0, BPF_REG_2, 3),
-	/* r1 signed range is [S32_MIN, S32_MAX] */
-	BPF_JMP32_REG(BPF_JSLT, BPF_REG_0, BPF_REG_1, -3),
-
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_XDP,
-},
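
For reference, the macro-based tests deleted above take the same shape as the other
conversions in this series once moved to progs/verifier_bounds.c. Below is a minimal
sketch of what "bounds check based on zero-extended MOV" looks like after conversion;
the SEC() name, function name and the map_hash_8b definition are illustrative and not
copied from the generated file:

    struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 1);
        __type(key, long long);
        __type(value, long long);
    } map_hash_8b SEC(".maps");

    SEC("socket")
    __description("bounds check based on zero-extended MOV")
    __success
    __naked void bounds_check_zero_extended_mov(void)
    {
        asm volatile ("                                 \
        r1 = 0;                                         \
        *(u64*)(r10 - 8) = r1;                          \
        r2 = r10;                                       \
        r2 += -8;                                       \
        r1 = %[map_hash_8b] ll;                         \
        call %[bpf_map_lookup_elem];                    \
        if r0 == 0 goto l0_%=;                          \
        w2 = 0xffffffff;     /* r2 = 0x0000'0000'ffff'ffff */\
        r2 >>= 32;           /* r2 = 0 */               \
        r0 += r2;            /* no-op */                \
        r0 = *(u8*)(r0 + 0); /* access at offset 0 */   \
    l0_%=: r0 = 0;                                      \
        exit;                                           \
    "   :
        : __imm(bpf_map_lookup_elem),
          __imm_addr(map_hash_8b)
        : __clobber_all);
    }
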
-- 
2.40.0


* [PATCH bpf-next 03/24] selftests/bpf: verifier/bpf_get_stack converted to inline assembly
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/bpf_get_stack automatically converted to use inline assembly.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../bpf/progs/verifier_bpf_get_stack.c        | 124 ++++++++++++++++++
 .../selftests/bpf/verifier/bpf_get_stack.c    |  87 ------------
 3 files changed, 126 insertions(+), 87 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_bpf_get_stack.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/bpf_get_stack.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index e61c9120e261..db55d125928c 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -10,6 +10,7 @@
 #include "verifier_bounds_deduction.skel.h"
 #include "verifier_bounds_deduction_non_const.skel.h"
 #include "verifier_bounds_mix_sign_unsign.skel.h"
+#include "verifier_bpf_get_stack.skel.h"
 #include "verifier_cfg.skel.h"
 #include "verifier_cgroup_inv_retcode.skel.h"
 #include "verifier_cgroup_skb.skel.h"
@@ -85,6 +86,7 @@ void test_verifier_bounds(void)               { RUN(verifier_bounds); }
 void test_verifier_bounds_deduction(void)     { RUN(verifier_bounds_deduction); }
 void test_verifier_bounds_deduction_non_const(void)     { RUN(verifier_bounds_deduction_non_const); }
 void test_verifier_bounds_mix_sign_unsign(void) { RUN(verifier_bounds_mix_sign_unsign); }
+void test_verifier_bpf_get_stack(void)        { RUN(verifier_bpf_get_stack); }
 void test_verifier_cfg(void)                  { RUN(verifier_cfg); }
 void test_verifier_cgroup_inv_retcode(void)   { RUN(verifier_cgroup_inv_retcode); }
 void test_verifier_cgroup_skb(void)           { RUN(verifier_cgroup_skb); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_bpf_get_stack.c b/tools/testing/selftests/bpf/progs/verifier_bpf_get_stack.c
new file mode 100644
index 000000000000..325a2bab4a71
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_bpf_get_stack.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/bpf_get_stack.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+#define MAX_ENTRIES 11
+
+struct test_val {
+	unsigned int index;
+	int foo[MAX_ENTRIES];
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, int);
+	__type(value, struct test_val);
+} map_array_48b SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 1);
+	__type(key, long long);
+	__type(value, struct test_val);
+} map_hash_48b SEC(".maps");
+
+SEC("tracepoint")
+__description("bpf_get_stack return R0 within range")
+__success
+__naked void stack_return_r0_within_range(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r7 = r0;					\
+	r9 = %[__imm_0];				\
+	r1 = r6;					\
+	r2 = r7;					\
+	r3 = %[__imm_0];				\
+	r4 = 256;					\
+	call %[bpf_get_stack];				\
+	r1 = 0;						\
+	r8 = r0;					\
+	r8 <<= 32;					\
+	r8 s>>= 32;					\
+	if r1 s> r8 goto l0_%=;				\
+	r9 -= r8;					\
+	r2 = r7;					\
+	r2 += r8;					\
+	r1 = r9;					\
+	r1 <<= 32;					\
+	r1 s>>= 32;					\
+	r3 = r2;					\
+	r3 += r1;					\
+	r1 = r7;					\
+	r5 = %[__imm_0];				\
+	r1 += r5;					\
+	if r3 >= r1 goto l0_%=;				\
+	r1 = r6;					\
+	r3 = r9;					\
+	r4 = 0;						\
+	call %[bpf_get_stack];				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_get_stack),
+	  __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_48b),
+	  __imm_const(__imm_0, sizeof(struct test_val) / 2)
+	: __clobber_all);
+}
+
+SEC("iter/task")
+__description("bpf_get_task_stack return R0 range is refined")
+__success
+__naked void return_r0_range_is_refined(void)
+{
+	asm volatile ("					\
+	r6 = *(u64*)(r1 + 0);				\
+	r6 = *(u64*)(r6 + 0);		/* ctx->meta->seq */\
+	r7 = *(u64*)(r1 + 8);		/* ctx->task */\
+	r1 = %[map_array_48b] ll;	/* fixup_map_array_48b */\
+	r2 = 0;						\
+	*(u64*)(r10 - 8) = r2;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	if r7 != 0 goto l1_%=;				\
+	r0 = 0;						\
+	exit;						\
+l1_%=:	r1 = r7;					\
+	r2 = r0;					\
+	r9 = r0;			/* keep buf for seq_write */\
+	r3 = 48;					\
+	r4 = 0;						\
+	call %[bpf_get_task_stack];			\
+	if r0 s> 0 goto l2_%=;				\
+	r0 = 0;						\
+	exit;						\
+l2_%=:	r1 = r6;					\
+	r2 = r9;					\
+	r3 = r0;					\
+	call %[bpf_seq_write];				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_task_stack),
+	  __imm(bpf_map_lookup_elem),
+	  __imm(bpf_seq_write),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c b/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
deleted file mode 100644
index 3e024c891178..000000000000
--- a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
+++ /dev/null
@@ -1,87 +0,0 @@
-{
-	"bpf_get_stack return R0 within range",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 28),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_MOV64_IMM(BPF_REG_9, sizeof(struct test_val)/2),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_7),
-	BPF_MOV64_IMM(BPF_REG_3, sizeof(struct test_val)/2),
-	BPF_MOV64_IMM(BPF_REG_4, 256),
-	BPF_EMIT_CALL(BPF_FUNC_get_stack),
-	BPF_MOV64_IMM(BPF_REG_1, 0),
-	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
-	BPF_ALU64_IMM(BPF_LSH, BPF_REG_8, 32),
-	BPF_ALU64_IMM(BPF_ARSH, BPF_REG_8, 32),
-	BPF_JMP_REG(BPF_JSGT, BPF_REG_1, BPF_REG_8, 16),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_9, BPF_REG_8),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_7),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_8),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_9),
-	BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 32),
-	BPF_ALU64_IMM(BPF_ARSH, BPF_REG_1, 32),
-	BPF_MOV64_REG(BPF_REG_3, BPF_REG_2),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_1),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
-	BPF_MOV64_IMM(BPF_REG_5, sizeof(struct test_val)/2),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_5),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 4),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_MOV64_REG(BPF_REG_3, BPF_REG_9),
-	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_EMIT_CALL(BPF_FUNC_get_stack),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_48b = { 4 },
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-},
-{
-	"bpf_get_task_stack return R0 range is refined",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_6, 0), // ctx->meta->seq
-	BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_1, 8), // ctx->task
-	BPF_LD_MAP_FD(BPF_REG_1, 0), // fixup_map_array_48b
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_9, BPF_REG_0), // keep buf for seq_write
-	BPF_MOV64_IMM(BPF_REG_3, 48),
-	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_EMIT_CALL(BPF_FUNC_get_task_stack),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_9),
-	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_seq_write),
-
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACING,
-	.expected_attach_type = BPF_TRACE_ITER,
-	.kfunc = "task",
-	.runs = -1, // Don't run, just load
-	.fixup_map_array_48b = { 3 },
-},
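
The %[...] operands used in the new asm templates are ordinary inline-assembly input
constraints. A sketch of how the relevant bpf_misc.h macros are defined, approximate and
limited to the three used in this test:

    /* address of a helper, e.g. bpf_map_lookup_elem or bpf_get_stack */
    #define __imm(name)             [name]"i"(name)
    /* address of a map or other global, e.g. &map_hash_48b */
    #define __imm_addr(name)        [name]"i"(&name)
    /* compile-time constant bound to a chosen operand name */
    #define __imm_const(name, expr) [name]"i"(expr)

So "r3 = %[__imm_0];" above loads sizeof(struct test_val) / 2, evaluated by the C
compiler instead of being hard-coded in the assembly template.
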
-- 
2.40.0


* [PATCH bpf-next 04/24] selftests/bpf: verifier/btf_ctx_access converted to inline assembly
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/btf_ctx_access automatically converted to use inline assembly.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |  2 ++
 .../bpf/progs/verifier_btf_ctx_access.c       | 32 +++++++++++++++++++
 .../selftests/bpf/verifier/btf_ctx_access.c   | 25 ---------------
 3 files changed, 34 insertions(+), 25 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_btf_ctx_access.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/btf_ctx_access.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index db55d125928c..b42601f7edcb 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -11,6 +11,7 @@
 #include "verifier_bounds_deduction_non_const.skel.h"
 #include "verifier_bounds_mix_sign_unsign.skel.h"
 #include "verifier_bpf_get_stack.skel.h"
+#include "verifier_btf_ctx_access.skel.h"
 #include "verifier_cfg.skel.h"
 #include "verifier_cgroup_inv_retcode.skel.h"
 #include "verifier_cgroup_skb.skel.h"
@@ -87,6 +88,7 @@ void test_verifier_bounds_deduction(void)     { RUN(verifier_bounds_deduction);
 void test_verifier_bounds_deduction_non_const(void)     { RUN(verifier_bounds_deduction_non_const); }
 void test_verifier_bounds_mix_sign_unsign(void) { RUN(verifier_bounds_mix_sign_unsign); }
 void test_verifier_bpf_get_stack(void)        { RUN(verifier_bpf_get_stack); }
+void test_verifier_btf_ctx_access(void)       { RUN(verifier_btf_ctx_access); }
 void test_verifier_cfg(void)                  { RUN(verifier_cfg); }
 void test_verifier_cgroup_inv_retcode(void)   { RUN(verifier_cgroup_inv_retcode); }
 void test_verifier_cgroup_skb(void)           { RUN(verifier_cgroup_skb); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_btf_ctx_access.c b/tools/testing/selftests/bpf/progs/verifier_btf_ctx_access.c
new file mode 100644
index 000000000000..a570e48b917a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_btf_ctx_access.c
@@ -0,0 +1,32 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/btf_ctx_access.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+SEC("fentry/bpf_modify_return_test")
+__description("btf_ctx_access accept")
+__success __retval(0)
+__naked void btf_ctx_access_accept(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + 8);		/* load 2nd argument value (int pointer) */\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("fentry/bpf_fentry_test9")
+__description("btf_ctx_access u32 pointer accept")
+__success __retval(0)
+__naked void ctx_access_u32_pointer_accept(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + 0);		/* load 1nd argument value (u32 pointer) */\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/btf_ctx_access.c b/tools/testing/selftests/bpf/verifier/btf_ctx_access.c
deleted file mode 100644
index 0484d3de040d..000000000000
--- a/tools/testing/selftests/bpf/verifier/btf_ctx_access.c
+++ /dev/null
@@ -1,25 +0,0 @@
-{
-	"btf_ctx_access accept",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 8),	/* load 2nd argument value (int pointer) */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACING,
-	.expected_attach_type = BPF_TRACE_FENTRY,
-	.kfunc = "bpf_modify_return_test",
-},
-
-{
-	"btf_ctx_access u32 pointer accept",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0),	/* load 1nd argument value (u32 pointer) */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACING,
-	.expected_attach_type = BPF_TRACE_FENTRY,
-	.kfunc = "bpf_fentry_test9",
-},
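
The SEC("fentry/<target>") names carry what the old .prog_type, .expected_attach_type and
.kfunc fields expressed. For comparison only, the second-argument access that the first
naked test performs with "r2 = *(u32*)(r1 + 8)" could be written in plain C roughly as
below; this is a hypothetical snippet, not part of the selftest, and the int/int-pointer
argument types of bpf_modify_return_test are assumed:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    SEC("fentry/bpf_modify_return_test")
    int BPF_PROG(fentry_args_demo, int a, int *b)
    {
        /* 'b' is the second ctx slot, i.e. what r1 + 8 points at in the naked test */
        return 0;
    }

    char _license[] SEC("license") = "GPL";
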
-- 
2.40.0


* [PATCH bpf-next 05/24] selftests/bpf: verifier/ctx converted to inline assembly
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/ctx automatically converted to use inline assembly.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../selftests/bpf/progs/verifier_ctx.c        | 221 ++++++++++++++++++
 tools/testing/selftests/bpf/verifier/ctx.c    | 186 ---------------
 3 files changed, 223 insertions(+), 186 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_ctx.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/ctx.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index b42601f7edcb..f559bc3f7c2f 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -17,6 +17,7 @@
 #include "verifier_cgroup_skb.skel.h"
 #include "verifier_cgroup_storage.skel.h"
 #include "verifier_const_or.skel.h"
+#include "verifier_ctx.skel.h"
 #include "verifier_ctx_sk_msg.skel.h"
 #include "verifier_direct_stack_access_wraparound.skel.h"
 #include "verifier_div0.skel.h"
@@ -94,6 +95,7 @@ void test_verifier_cgroup_inv_retcode(void)   { RUN(verifier_cgroup_inv_retcode)
 void test_verifier_cgroup_skb(void)           { RUN(verifier_cgroup_skb); }
 void test_verifier_cgroup_storage(void)       { RUN(verifier_cgroup_storage); }
 void test_verifier_const_or(void)             { RUN(verifier_const_or); }
+void test_verifier_ctx(void)                  { RUN(verifier_ctx); }
 void test_verifier_ctx_sk_msg(void)           { RUN(verifier_ctx_sk_msg); }
 void test_verifier_direct_stack_access_wraparound(void) { RUN(verifier_direct_stack_access_wraparound); }
 void test_verifier_div0(void)                 { RUN(verifier_div0); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_ctx.c b/tools/testing/selftests/bpf/progs/verifier_ctx.c
new file mode 100644
index 000000000000..a83809a1dbbf
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_ctx.c
@@ -0,0 +1,221 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/ctx.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+SEC("tc")
+__description("context stores via BPF_ATOMIC")
+__failure __msg("BPF_ATOMIC stores into R1 ctx is not allowed")
+__naked void context_stores_via_bpf_atomic(void)
+{
+	asm volatile ("					\
+	r0 = 0;						\
+	lock *(u32 *)(r1 + %[__sk_buff_mark]) += w0;	\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("arithmetic ops make PTR_TO_CTX unusable")
+__failure __msg("dereference of modified ctx ptr")
+__naked void make_ptr_to_ctx_unusable(void)
+{
+	asm volatile ("					\
+	r1 += %[__imm_0];				\
+	r0 = *(u32*)(r1 + %[__sk_buff_mark]);		\
+	exit;						\
+"	:
+	: __imm_const(__imm_0,
+		      offsetof(struct __sk_buff, data) - offsetof(struct __sk_buff, mark)),
+	  __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("pass unmodified ctx pointer to helper")
+__success __retval(0)
+__naked void unmodified_ctx_pointer_to_helper(void)
+{
+	asm volatile ("					\
+	r2 = 0;						\
+	call %[bpf_csum_update];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_csum_update)
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("pass modified ctx pointer to helper, 1")
+__failure __msg("negative offset ctx ptr R1 off=-612 disallowed")
+__naked void ctx_pointer_to_helper_1(void)
+{
+	asm volatile ("					\
+	r1 += -612;					\
+	r2 = 0;						\
+	call %[bpf_csum_update];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_csum_update)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("pass modified ctx pointer to helper, 2")
+__failure __msg("negative offset ctx ptr R1 off=-612 disallowed")
+__failure_unpriv __msg_unpriv("negative offset ctx ptr R1 off=-612 disallowed")
+__naked void ctx_pointer_to_helper_2(void)
+{
+	asm volatile ("					\
+	r1 += -612;					\
+	call %[bpf_get_socket_cookie];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_socket_cookie)
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("pass modified ctx pointer to helper, 3")
+__failure __msg("variable ctx access var_off=(0x0; 0x4)")
+__naked void ctx_pointer_to_helper_3(void)
+{
+	asm volatile ("					\
+	r3 = *(u32*)(r1 + 0);				\
+	r3 &= 4;					\
+	r1 += r3;					\
+	r2 = 0;						\
+	call %[bpf_csum_update];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_csum_update)
+	: __clobber_all);
+}
+
+SEC("cgroup/sendmsg6")
+__description("pass ctx or null check, 1: ctx")
+__success
+__naked void or_null_check_1_ctx(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_netns_cookie];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_netns_cookie)
+	: __clobber_all);
+}
+
+SEC("cgroup/sendmsg6")
+__description("pass ctx or null check, 2: null")
+__success
+__naked void or_null_check_2_null(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	call %[bpf_get_netns_cookie];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_netns_cookie)
+	: __clobber_all);
+}
+
+SEC("cgroup/sendmsg6")
+__description("pass ctx or null check, 3: 1")
+__failure __msg("R1 type=scalar expected=ctx")
+__naked void or_null_check_3_1(void)
+{
+	asm volatile ("					\
+	r1 = 1;						\
+	call %[bpf_get_netns_cookie];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_netns_cookie)
+	: __clobber_all);
+}
+
+SEC("cgroup/sendmsg6")
+__description("pass ctx or null check, 4: ctx - const")
+__failure __msg("negative offset ctx ptr R1 off=-612 disallowed")
+__naked void null_check_4_ctx_const(void)
+{
+	asm volatile ("					\
+	r1 += -612;					\
+	call %[bpf_get_netns_cookie];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_netns_cookie)
+	: __clobber_all);
+}
+
+SEC("cgroup/connect4")
+__description("pass ctx or null check, 5: null (connect)")
+__success
+__naked void null_check_5_null_connect(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	call %[bpf_get_netns_cookie];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_netns_cookie)
+	: __clobber_all);
+}
+
+SEC("cgroup/post_bind4")
+__description("pass ctx or null check, 6: null (bind)")
+__success
+__naked void null_check_6_null_bind(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	call %[bpf_get_netns_cookie];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_netns_cookie)
+	: __clobber_all);
+}
+
+SEC("cgroup/post_bind4")
+__description("pass ctx or null check, 7: ctx (bind)")
+__success
+__naked void null_check_7_ctx_bind(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_socket_cookie];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_socket_cookie)
+	: __clobber_all);
+}
+
+SEC("cgroup/post_bind4")
+__description("pass ctx or null check, 8: null (bind)")
+__failure __msg("R1 type=scalar expected=ctx")
+__naked void null_check_8_null_bind(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	call %[bpf_get_socket_cookie];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_socket_cookie)
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/ctx.c b/tools/testing/selftests/bpf/verifier/ctx.c
deleted file mode 100644
index 2fd31612c0b8..000000000000
--- a/tools/testing/selftests/bpf/verifier/ctx.c
+++ /dev/null
@@ -1,186 +0,0 @@
-{
-	"context stores via BPF_ATOMIC",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_ATOMIC_OP(BPF_W, BPF_ADD, BPF_REG_1, BPF_REG_0, offsetof(struct __sk_buff, mark)),
-	BPF_EXIT_INSN(),
-	},
-	.errstr = "BPF_ATOMIC stores into R1 ctx is not allowed",
-	.result = REJECT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"arithmetic ops make PTR_TO_CTX unusable",
-	.insns = {
-		BPF_ALU64_IMM(BPF_ADD, BPF_REG_1,
-			      offsetof(struct __sk_buff, data) -
-			      offsetof(struct __sk_buff, mark)),
-		BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
-			    offsetof(struct __sk_buff, mark)),
-		BPF_EXIT_INSN(),
-	},
-	.errstr = "dereference of modified ctx ptr",
-	.result = REJECT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"pass unmodified ctx pointer to helper",
-	.insns = {
-		BPF_MOV64_IMM(BPF_REG_2, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
-			     BPF_FUNC_csum_update),
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-},
-{
-	"pass modified ctx pointer to helper, 1",
-	.insns = {
-		BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -612),
-		BPF_MOV64_IMM(BPF_REG_2, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
-			     BPF_FUNC_csum_update),
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = REJECT,
-	.errstr = "negative offset ctx ptr R1 off=-612 disallowed",
-},
-{
-	"pass modified ctx pointer to helper, 2",
-	.insns = {
-		BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -612),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
-			     BPF_FUNC_get_socket_cookie),
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_EXIT_INSN(),
-	},
-	.result_unpriv = REJECT,
-	.result = REJECT,
-	.errstr_unpriv = "negative offset ctx ptr R1 off=-612 disallowed",
-	.errstr = "negative offset ctx ptr R1 off=-612 disallowed",
-},
-{
-	"pass modified ctx pointer to helper, 3",
-	.insns = {
-		BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 0),
-		BPF_ALU64_IMM(BPF_AND, BPF_REG_3, 4),
-		BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
-		BPF_MOV64_IMM(BPF_REG_2, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
-			     BPF_FUNC_csum_update),
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = REJECT,
-	.errstr = "variable ctx access var_off=(0x0; 0x4)",
-},
-{
-	"pass ctx or null check, 1: ctx",
-	.insns = {
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
-			     BPF_FUNC_get_netns_cookie),
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
-	.expected_attach_type = BPF_CGROUP_UDP6_SENDMSG,
-	.result = ACCEPT,
-},
-{
-	"pass ctx or null check, 2: null",
-	.insns = {
-		BPF_MOV64_IMM(BPF_REG_1, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
-			     BPF_FUNC_get_netns_cookie),
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
-	.expected_attach_type = BPF_CGROUP_UDP6_SENDMSG,
-	.result = ACCEPT,
-},
-{
-	"pass ctx or null check, 3: 1",
-	.insns = {
-		BPF_MOV64_IMM(BPF_REG_1, 1),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
-			     BPF_FUNC_get_netns_cookie),
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
-	.expected_attach_type = BPF_CGROUP_UDP6_SENDMSG,
-	.result = REJECT,
-	.errstr = "R1 type=scalar expected=ctx",
-},
-{
-	"pass ctx or null check, 4: ctx - const",
-	.insns = {
-		BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -612),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
-			     BPF_FUNC_get_netns_cookie),
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
-	.expected_attach_type = BPF_CGROUP_UDP6_SENDMSG,
-	.result = REJECT,
-	.errstr = "negative offset ctx ptr R1 off=-612 disallowed",
-},
-{
-	"pass ctx or null check, 5: null (connect)",
-	.insns = {
-		BPF_MOV64_IMM(BPF_REG_1, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
-			     BPF_FUNC_get_netns_cookie),
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
-	.expected_attach_type = BPF_CGROUP_INET4_CONNECT,
-	.result = ACCEPT,
-},
-{
-	"pass ctx or null check, 6: null (bind)",
-	.insns = {
-		BPF_MOV64_IMM(BPF_REG_1, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
-			     BPF_FUNC_get_netns_cookie),
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SOCK,
-	.expected_attach_type = BPF_CGROUP_INET4_POST_BIND,
-	.result = ACCEPT,
-},
-{
-	"pass ctx or null check, 7: ctx (bind)",
-	.insns = {
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
-			     BPF_FUNC_get_socket_cookie),
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SOCK,
-	.expected_attach_type = BPF_CGROUP_INET4_POST_BIND,
-	.result = ACCEPT,
-},
-{
-	"pass ctx or null check, 8: null (bind)",
-	.insns = {
-		BPF_MOV64_IMM(BPF_REG_1, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
-			     BPF_FUNC_get_socket_cookie),
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SOCK,
-	.expected_attach_type = BPF_CGROUP_INET4_POST_BIND,
-	.result = REJECT,
-	.errstr = "R1 type=scalar expected=ctx",
-},
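
In this file the old .prog_type/.expected_attach_type pairs are expressed purely through
section names ("tc", "cgroup/sendmsg6", "cgroup/post_bind4", ...), which libbpf resolves
at load time. A small illustrative user-space check of that mapping:

    #include <stdio.h>
    #include <bpf/libbpf.h>

    int main(void)
    {
        enum bpf_prog_type prog_type;
        enum bpf_attach_type attach_type;

        /* "cgroup/sendmsg6" resolves to BPF_PROG_TYPE_CGROUP_SOCK_ADDR /
         * BPF_CGROUP_UDP6_SENDMSG, matching the deleted .prog_type and
         * .expected_attach_type fields above.
         */
        if (!libbpf_prog_type_by_name("cgroup/sendmsg6", &prog_type, &attach_type))
            printf("prog_type=%d attach_type=%d\n", prog_type, attach_type);
        return 0;
    }
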
-- 
2.40.0


* [PATCH bpf-next 06/24] selftests/bpf: verifier/d_path converted to inline assembly
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/d_path automatically converted to use inline assembly.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |  2 +
 .../selftests/bpf/progs/verifier_d_path.c     | 48 +++++++++++++++++++
 tools/testing/selftests/bpf/verifier/d_path.c | 37 --------------
 3 files changed, 50 insertions(+), 37 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_d_path.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/d_path.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index f559bc3f7c2f..ae4da5f7598c 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -19,6 +19,7 @@
 #include "verifier_const_or.skel.h"
 #include "verifier_ctx.skel.h"
 #include "verifier_ctx_sk_msg.skel.h"
+#include "verifier_d_path.skel.h"
 #include "verifier_direct_stack_access_wraparound.skel.h"
 #include "verifier_div0.skel.h"
 #include "verifier_div_overflow.skel.h"
@@ -97,6 +98,7 @@ void test_verifier_cgroup_storage(void)       { RUN(verifier_cgroup_storage); }
 void test_verifier_const_or(void)             { RUN(verifier_const_or); }
 void test_verifier_ctx(void)                  { RUN(verifier_ctx); }
 void test_verifier_ctx_sk_msg(void)           { RUN(verifier_ctx_sk_msg); }
+void test_verifier_d_path(void)               { RUN(verifier_d_path); }
 void test_verifier_direct_stack_access_wraparound(void) { RUN(verifier_direct_stack_access_wraparound); }
 void test_verifier_div0(void)                 { RUN(verifier_div0); }
 void test_verifier_div_overflow(void)         { RUN(verifier_div_overflow); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_d_path.c b/tools/testing/selftests/bpf/progs/verifier_d_path.c
new file mode 100644
index 000000000000..ec79cbcfde91
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_d_path.c
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/d_path.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+SEC("fentry/dentry_open")
+__description("d_path accept")
+__success __retval(0)
+__naked void d_path_accept(void)
+{
+	asm volatile ("					\
+	r1 = *(u32*)(r1 + 0);				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r6 = 0;						\
+	*(u64*)(r2 + 0) = r6;				\
+	r3 = 8 ll;					\
+	call %[bpf_d_path];				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_d_path)
+	: __clobber_all);
+}
+
+SEC("fentry/d_path")
+__description("d_path reject")
+__failure __msg("helper call is not allowed in probe")
+__naked void d_path_reject(void)
+{
+	asm volatile ("					\
+	r1 = *(u32*)(r1 + 0);				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r6 = 0;						\
+	*(u64*)(r2 + 0) = r6;				\
+	r3 = 8 ll;					\
+	call %[bpf_d_path];				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_d_path)
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/d_path.c b/tools/testing/selftests/bpf/verifier/d_path.c
deleted file mode 100644
index b988396379a7..000000000000
--- a/tools/testing/selftests/bpf/verifier/d_path.c
+++ /dev/null
@@ -1,37 +0,0 @@
-{
-	"d_path accept",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_MOV64_IMM(BPF_REG_6, 0),
-	BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_6, 0),
-	BPF_LD_IMM64(BPF_REG_3, 8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_d_path),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACING,
-	.expected_attach_type = BPF_TRACE_FENTRY,
-	.kfunc = "dentry_open",
-},
-{
-	"d_path reject",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_MOV64_IMM(BPF_REG_6, 0),
-	BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_6, 0),
-	BPF_LD_IMM64(BPF_REG_3, 8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_d_path),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr = "helper call is not allowed in probe",
-	.result = REJECT,
-	.prog_type = BPF_PROG_TYPE_TRACING,
-	.expected_attach_type = BPF_TRACE_FENTRY,
-	.kfunc = "d_path",
-},
-- 
2.40.0

* [PATCH bpf-next 07/24] selftests/bpf: verifier/direct_packet_access converted to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (5 preceding siblings ...)
  2023-04-21 17:42 ` [PATCH bpf-next 06/24] selftests/bpf: verifier/d_path " Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 08/24] selftests/bpf: verifier/jeq_infer_not_null " Eduard Zingerman
                   ` (18 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/direct_packet_access automatically converted to use inline assembly.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
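
Two conventions used throughout the converted asm below may be worth
calling out. Context-field offsets are passed as immediate operands via
__imm_const(..., offsetof(struct __sk_buff, ...)), keeping magic numbers
out of the templates, and jump targets use local labels suffixed with
%=, which the assembler expands to a number unique to each asm statement
so that identical label names never clash between tests. A side-by-side
sketch of how a relative jump in the old macro form maps to a label in
the new form (illustrative, based on the hunks below):

    /* old macro form (relative offset)                 new asm form (named label)
     *
     * BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),       if r0 > r3 goto l0_%=;
     * BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),         r0 = *(u8*)(r2 + 0);
     * BPF_MOV64_IMM(BPF_REG_0, 0),                  l0_%=: r0 = 0;
     * BPF_EXIT_INSN(),                                      exit;
     */
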
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../bpf/progs/verifier_direct_packet_access.c | 803 ++++++++++++++++++
 .../bpf/verifier/direct_packet_access.c       | 710 ----------------
 3 files changed, 805 insertions(+), 710 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/direct_packet_access.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index ae4da5f7598c..2c9e61b9a83e 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -20,6 +20,7 @@
 #include "verifier_ctx.skel.h"
 #include "verifier_ctx_sk_msg.skel.h"
 #include "verifier_d_path.skel.h"
+#include "verifier_direct_packet_access.skel.h"
 #include "verifier_direct_stack_access_wraparound.skel.h"
 #include "verifier_div0.skel.h"
 #include "verifier_div_overflow.skel.h"
@@ -99,6 +100,7 @@ void test_verifier_const_or(void)             { RUN(verifier_const_or); }
 void test_verifier_ctx(void)                  { RUN(verifier_ctx); }
 void test_verifier_ctx_sk_msg(void)           { RUN(verifier_ctx_sk_msg); }
 void test_verifier_d_path(void)               { RUN(verifier_d_path); }
+void test_verifier_direct_packet_access(void) { RUN(verifier_direct_packet_access); }
 void test_verifier_direct_stack_access_wraparound(void) { RUN(verifier_direct_stack_access_wraparound); }
 void test_verifier_div0(void)                 { RUN(verifier_div0); }
 void test_verifier_div_overflow(void)         { RUN(verifier_div_overflow); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c b/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
new file mode 100644
index 000000000000..99a23dea8233
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
@@ -0,0 +1,803 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/direct_packet_access.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+SEC("tc")
+__description("pkt_end - pkt_start is allowed")
+__success __retval(TEST_DATA_LEN)
+__naked void end_pkt_start_is_allowed(void)
+{
+	asm volatile ("					\
+	r0 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r0 -= r2;					\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test1")
+__success __retval(0)
+__naked void direct_packet_access_test1(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r0 > r3 goto l0_%=;				\
+	r0 = *(u8*)(r2 + 0);				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test2")
+__success __retval(0)
+__naked void direct_packet_access_test2(void)
+{
+	asm volatile ("					\
+	r0 = 1;						\
+	r4 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r3 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r5 = r3;					\
+	r5 += 14;					\
+	if r5 > r4 goto l0_%=;				\
+	r0 = *(u8*)(r3 + 7);				\
+	r4 = *(u8*)(r3 + 12);				\
+	r4 *= 14;					\
+	r3 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 += r4;					\
+	r2 = *(u32*)(r1 + %[__sk_buff_len]);		\
+	r2 <<= 49;					\
+	r2 >>= 49;					\
+	r3 += r2;					\
+	r2 = r3;					\
+	r2 += 8;					\
+	r1 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	if r2 > r1 goto l1_%=;				\
+	r1 = *(u8*)(r3 + 4);				\
+l1_%=:	r0 = 0;						\
+l0_%=:	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)),
+	  __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("direct packet access: test3")
+__failure __msg("invalid bpf_context access off=76")
+__failure_unpriv
+__naked void direct_packet_access_test3(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test4 (write)")
+__success __retval(0)
+__naked void direct_packet_access_test4_write(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r0 > r3 goto l0_%=;				\
+	*(u8*)(r2 + 0) = r2;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test5 (pkt_end >= reg, good access)")
+__success __retval(0)
+__naked void pkt_end_reg_good_access(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r3 >= r0 goto l0_%=;				\
+	r0 = 1;						\
+	exit;						\
+l0_%=:	r0 = *(u8*)(r2 + 0);				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test6 (pkt_end >= reg, bad access)")
+__failure __msg("invalid access to packet")
+__naked void pkt_end_reg_bad_access(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r3 >= r0 goto l0_%=;				\
+	r0 = *(u8*)(r2 + 0);				\
+	r0 = 1;						\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test7 (pkt_end >= reg, both accesses)")
+__failure __msg("invalid access to packet")
+__naked void pkt_end_reg_both_accesses(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r3 >= r0 goto l0_%=;				\
+	r0 = *(u8*)(r2 + 0);				\
+	r0 = 1;						\
+	exit;						\
+l0_%=:	r0 = *(u8*)(r2 + 0);				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test8 (double test, variant 1)")
+__success __retval(0)
+__naked void test8_double_test_variant_1(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r3 >= r0 goto l0_%=;				\
+	if r0 > r3 goto l1_%=;				\
+	r0 = *(u8*)(r2 + 0);				\
+l1_%=:	r0 = 1;						\
+	exit;						\
+l0_%=:	r0 = *(u8*)(r2 + 0);				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test9 (double test, variant 2)")
+__success __retval(0)
+__naked void test9_double_test_variant_2(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r3 >= r0 goto l0_%=;				\
+	r0 = 1;						\
+	exit;						\
+l0_%=:	if r0 > r3 goto l1_%=;				\
+	r0 = *(u8*)(r2 + 0);				\
+l1_%=:	r0 = *(u8*)(r2 + 0);				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test10 (write invalid)")
+__failure __msg("invalid access to packet")
+__naked void packet_access_test10_write_invalid(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r0 > r3 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	*(u8*)(r2 + 0) = r2;				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test11 (shift, good access)")
+__success __retval(1)
+__naked void access_test11_shift_good_access(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 22;					\
+	if r0 > r3 goto l0_%=;				\
+	r3 = 144;					\
+	r5 = r3;					\
+	r5 += 23;					\
+	r5 >>= 3;					\
+	r6 = r2;					\
+	r6 += r5;					\
+	r0 = 1;						\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test12 (and, good access)")
+__success __retval(1)
+__naked void access_test12_and_good_access(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 22;					\
+	if r0 > r3 goto l0_%=;				\
+	r3 = 144;					\
+	r5 = r3;					\
+	r5 += 23;					\
+	r5 &= 15;					\
+	r6 = r2;					\
+	r6 += r5;					\
+	r0 = 1;						\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test13 (branches, good access)")
+__success __retval(1)
+__naked void access_test13_branches_good_access(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 22;					\
+	if r0 > r3 goto l0_%=;				\
+	r3 = *(u32*)(r1 + %[__sk_buff_mark]);		\
+	r4 = 1;						\
+	if r3 > r4 goto l1_%=;				\
+	r3 = 14;					\
+	goto l2_%=;					\
+l1_%=:	r3 = 24;					\
+l2_%=:	r5 = r3;					\
+	r5 += 23;					\
+	r5 &= 15;					\
+	r6 = r2;					\
+	r6 += r5;					\
+	r0 = 1;						\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)),
+	  __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test14 (pkt_ptr += 0, CONST_IMM, good access)")
+__success __retval(1)
+__naked void _0_const_imm_good_access(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 22;					\
+	if r0 > r3 goto l0_%=;				\
+	r5 = 12;					\
+	r5 >>= 4;					\
+	r6 = r2;					\
+	r6 += r5;					\
+	r0 = *(u8*)(r6 + 0);				\
+	r0 = 1;						\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test15 (spill with xadd)")
+__failure __msg("R2 invalid mem access 'scalar'")
+__flag(BPF_F_ANY_ALIGNMENT)
+__naked void access_test15_spill_with_xadd(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r0 > r3 goto l0_%=;				\
+	r5 = 4096;					\
+	r4 = r10;					\
+	r4 += -8;					\
+	*(u64*)(r4 + 0) = r2;				\
+	lock *(u64 *)(r4 + 0) += r5;			\
+	r2 = *(u64*)(r4 + 0);				\
+	*(u32*)(r2 + 0) = r5;				\
+	r0 = 0;						\
+l0_%=:	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test16 (arith on data_end)")
+__failure __msg("R3 pointer arithmetic on pkt_end")
+__naked void test16_arith_on_data_end(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	r3 += 16;					\
+	if r0 > r3 goto l0_%=;				\
+	*(u8*)(r2 + 0) = r2;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test17 (pruning, alignment)")
+__failure __msg("misaligned packet access off 2+(0x0; 0x0)+15+-4 size 4")
+__flag(BPF_F_STRICT_ALIGNMENT)
+__naked void packet_access_test17_pruning_alignment(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r7 = *(u32*)(r1 + %[__sk_buff_mark]);		\
+	r0 = r2;					\
+	r0 += 14;					\
+	if r7 > 1 goto l0_%=;				\
+l2_%=:	if r0 > r3 goto l1_%=;				\
+	*(u32*)(r0 - 4) = r0;				\
+l1_%=:	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 += 1;					\
+	goto l2_%=;					\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)),
+	  __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test18 (imm += pkt_ptr, 1)")
+__success __retval(0)
+__naked void test18_imm_pkt_ptr_1(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = 8;						\
+	r0 += r2;					\
+	if r0 > r3 goto l0_%=;				\
+	*(u8*)(r2 + 0) = r2;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test19 (imm += pkt_ptr, 2)")
+__success __retval(0)
+__naked void test19_imm_pkt_ptr_2(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r0 > r3 goto l0_%=;				\
+	r4 = 4;						\
+	r4 += r2;					\
+	*(u8*)(r4 + 0) = r4;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test20 (x += pkt_ptr, 1)")
+__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT)
+__naked void test20_x_pkt_ptr_1(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = 0xffffffff;				\
+	*(u64*)(r10 - 8) = r0;				\
+	r0 = *(u64*)(r10 - 8);				\
+	r0 &= 0x7fff;					\
+	r4 = r0;					\
+	r4 += r2;					\
+	r5 = r4;					\
+	r4 += %[__imm_0];				\
+	if r4 > r3 goto l0_%=;				\
+	*(u64*)(r5 + 0) = r4;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__imm_0, 0x7fff - 1),
+	  __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test21 (x += pkt_ptr, 2)")
+__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT)
+__naked void test21_x_pkt_ptr_2(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r0 > r3 goto l0_%=;				\
+	r4 = 0xffffffff;				\
+	*(u64*)(r10 - 8) = r4;				\
+	r4 = *(u64*)(r10 - 8);				\
+	r4 &= 0x7fff;					\
+	r4 += r2;					\
+	r5 = r4;					\
+	r4 += %[__imm_0];				\
+	if r4 > r3 goto l0_%=;				\
+	*(u64*)(r5 + 0) = r4;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__imm_0, 0x7fff - 1),
+	  __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test22 (x += pkt_ptr, 3)")
+__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT)
+__naked void test22_x_pkt_ptr_3(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	*(u64*)(r10 - 8) = r2;				\
+	*(u64*)(r10 - 16) = r3;				\
+	r3 = *(u64*)(r10 - 16);				\
+	if r0 > r3 goto l0_%=;				\
+	r2 = *(u64*)(r10 - 8);				\
+	r4 = 0xffffffff;				\
+	lock *(u64 *)(r10 - 8) += r4;			\
+	r4 = *(u64*)(r10 - 8);				\
+	r4 >>= 49;					\
+	r4 += r2;					\
+	r0 = r4;					\
+	r0 += 2;					\
+	if r0 > r3 goto l0_%=;				\
+	r2 = 1;						\
+	*(u16*)(r4 + 0) = r2;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test23 (x += pkt_ptr, 4)")
+__failure __msg("invalid access to packet, off=0 size=8, R5(id=2,off=0,r=0)")
+__flag(BPF_F_ANY_ALIGNMENT)
+__naked void test23_x_pkt_ptr_4(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = *(u32*)(r1 + %[__sk_buff_mark]);		\
+	*(u64*)(r10 - 8) = r0;				\
+	r0 = *(u64*)(r10 - 8);				\
+	r0 &= 0xffff;					\
+	r4 = r0;					\
+	r0 = 31;					\
+	r0 += r4;					\
+	r0 += r2;					\
+	r5 = r0;					\
+	r0 += %[__imm_0];				\
+	if r0 > r3 goto l0_%=;				\
+	*(u64*)(r5 + 0) = r0;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__imm_0, 0xffff - 1),
+	  __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)),
+	  __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test24 (x += pkt_ptr, 5)")
+__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT)
+__naked void test24_x_pkt_ptr_5(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = 0xffffffff;				\
+	*(u64*)(r10 - 8) = r0;				\
+	r0 = *(u64*)(r10 - 8);				\
+	r0 &= 0xff;					\
+	r4 = r0;					\
+	r0 = 64;					\
+	r0 += r4;					\
+	r0 += r2;					\
+	r5 = r0;					\
+	r0 += %[__imm_0];				\
+	if r0 > r3 goto l0_%=;				\
+	*(u64*)(r5 + 0) = r0;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__imm_0, 0x7fff - 1),
+	  __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test25 (marking on <, good access)")
+__success __retval(0)
+__naked void test25_marking_on_good_access(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r0 < r3 goto l0_%=;				\
+l1_%=:	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 = *(u8*)(r2 + 0);				\
+	goto l1_%=;					\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test26 (marking on <, bad access)")
+__failure __msg("invalid access to packet")
+__naked void test26_marking_on_bad_access(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r0 < r3 goto l0_%=;				\
+	r0 = *(u8*)(r2 + 0);				\
+l1_%=:	r0 = 0;						\
+	exit;						\
+l0_%=:	goto l1_%=;					\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test27 (marking on <=, good access)")
+__success __retval(1)
+__naked void test27_marking_on_good_access(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r3 <= r0 goto l0_%=;				\
+	r0 = *(u8*)(r2 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test28 (marking on <=, bad access)")
+__failure __msg("invalid access to packet")
+__naked void test28_marking_on_bad_access(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r3 <= r0 goto l0_%=;				\
+l1_%=:	r0 = 1;						\
+	exit;						\
+l0_%=:	r0 = *(u8*)(r2 + 0);				\
+	goto l1_%=;					\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test29 (reg > pkt_end in subprog)")
+__success __retval(0)
+__naked void reg_pkt_end_in_subprog(void)
+{
+	asm volatile ("					\
+	r6 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r2 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r3 = r6;					\
+	r3 += 8;					\
+	call reg_pkt_end_in_subprog__1;			\
+	if r0 == 0 goto l0_%=;				\
+	r0 = *(u8*)(r6 + 0);				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void reg_pkt_end_in_subprog__1(void)
+{
+	asm volatile ("					\
+	r0 = 0;						\
+	if r3 > r2 goto l0_%=;				\
+	r0 = 1;						\
+l0_%=:	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("tc")
+__description("direct packet access: test30 (check_id() in regsafe(), bad access)")
+__failure __msg("invalid access to packet, off=0 size=1, R2")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void id_in_regsafe_bad_access(void)
+{
+	asm volatile ("					\
+	/* r9 = ctx */					\
+	r9 = r1;					\
+	/* r7 = ktime_get_ns() */			\
+	call %[bpf_ktime_get_ns];			\
+	r7 = r0;					\
+	/* r6 = ktime_get_ns() */			\
+	call %[bpf_ktime_get_ns];			\
+	r6 = r0;					\
+	/* r2 = ctx->data				\
+	 * r3 = ctx->data				\
+	 * r4 = ctx->data_end				\
+	 */						\
+	r2 = *(u32*)(r9 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r9 + %[__sk_buff_data]);		\
+	r4 = *(u32*)(r9 + %[__sk_buff_data_end]);	\
+	/* if r6 > 100 goto exit			\
+	 * if r7 > 100 goto exit			\
+	 */						\
+	if r6 > 100 goto l0_%=;				\
+	if r7 > 100 goto l0_%=;				\
+	/* r2 += r6              ; this forces assignment of ID to r2\
+	 * r2 += 1               ; get some fixed off for r2\
+	 * r3 += r7              ; this forces assignment of ID to r3\
+	 * r3 += 1               ; get some fixed off for r3\
+	 */						\
+	r2 += r6;					\
+	r2 += 1;					\
+	r3 += r7;					\
+	r3 += 1;					\
+	/* if r6 > r7 goto +1    ; no new information about the state is derived from\
+	 *                       ; this check, thus produced verifier states differ\
+	 *                       ; only in 'insn_idx'	\
+	 * r2 = r3               ; optionally share ID between r2 and r3\
+	 */						\
+	if r6 != r7 goto l1_%=;				\
+	r2 = r3;					\
+l1_%=:	/* if r3 > ctx->data_end goto exit */		\
+	if r3 > r4 goto l0_%=;				\
+	/* r5 = *(u8 *) (r2 - 1) ; access packet memory using r2,\
+	 *                       ; this is not always safe\
+	 */						\
+	r5 = *(u8*)(r2 - 1);				\
+l0_%=:	/* exit(0) */					\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns),
+	  __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/direct_packet_access.c b/tools/testing/selftests/bpf/verifier/direct_packet_access.c
deleted file mode 100644
index dce2e28aeb43..000000000000
--- a/tools/testing/selftests/bpf/verifier/direct_packet_access.c
+++ /dev/null
@@ -1,710 +0,0 @@
-{
-	"pkt_end - pkt_start is allowed",
-	.insns = {
-		BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
-			    offsetof(struct __sk_buff, data_end)),
-		BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-			    offsetof(struct __sk_buff, data)),
-		BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_2),
-		BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = TEST_DATA_LEN,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"direct packet access: test1",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"direct packet access: test2",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_MOV64_REG(BPF_REG_5, BPF_REG_3),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 14),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_5, BPF_REG_4, 15),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_3, 7),
-	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_3, 12),
-	BPF_ALU64_IMM(BPF_MUL, BPF_REG_4, 14),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_4),
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, len)),
-	BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 49),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 49),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_2),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_3),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 8),
-	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_3, 4),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"direct packet access: test3",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr = "invalid bpf_context access off=76",
-	.result = REJECT,
-	.prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
-},
-{
-	"direct packet access: test4 (write)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
-	BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"direct packet access: test5 (pkt_end >= reg, good access)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"direct packet access: test6 (pkt_end >= reg, bad access)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 3),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr = "invalid access to packet",
-	.result = REJECT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"direct packet access: test7 (pkt_end >= reg, both accesses)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 3),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr = "invalid access to packet",
-	.result = REJECT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"direct packet access: test8 (double test, variant 1)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 4),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"direct packet access: test9 (double test, variant 2)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"direct packet access: test10 (write invalid)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr = "invalid access to packet",
-	.result = REJECT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"direct packet access: test11 (shift, good access)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 22),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 8),
-	BPF_MOV64_IMM(BPF_REG_3, 144),
-	BPF_MOV64_REG(BPF_REG_5, BPF_REG_3),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 23),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_5, 3),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_2),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_5),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.retval = 1,
-},
-{
-	"direct packet access: test12 (and, good access)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 22),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 8),
-	BPF_MOV64_IMM(BPF_REG_3, 144),
-	BPF_MOV64_REG(BPF_REG_5, BPF_REG_3),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 23),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_5, 15),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_2),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_5),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.retval = 1,
-},
-{
-	"direct packet access: test13 (branches, good access)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 22),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 13),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, mark)),
-	BPF_MOV64_IMM(BPF_REG_4, 1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_4, 2),
-	BPF_MOV64_IMM(BPF_REG_3, 14),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
-	BPF_MOV64_IMM(BPF_REG_3, 24),
-	BPF_MOV64_REG(BPF_REG_5, BPF_REG_3),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 23),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_5, 15),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_2),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_5),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.retval = 1,
-},
-{
-	"direct packet access: test14 (pkt_ptr += 0, CONST_IMM, good access)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 22),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 7),
-	BPF_MOV64_IMM(BPF_REG_5, 12),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_5, 4),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_2),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_5),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_6, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.retval = 1,
-},
-{
-	"direct packet access: test15 (spill with xadd)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 8),
-	BPF_MOV64_IMM(BPF_REG_5, 4096),
-	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
-	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_2, 0),
-	BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_4, BPF_REG_5, 0),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_4, 0),
-	BPF_STX_MEM(BPF_W, BPF_REG_2, BPF_REG_5, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr = "R2 invalid mem access 'scalar'",
-	.result = REJECT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"direct packet access: test16 (arith on data_end)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 16),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
-	BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr = "R3 pointer arithmetic on pkt_end",
-	.result = REJECT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"direct packet access: test17 (pruning, alignment)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1,
-		    offsetof(struct __sk_buff, mark)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 14),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_7, 1, 4),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
-	BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, -4),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-	BPF_JMP_A(-6),
-	},
-	.errstr = "misaligned packet access off 2+(0x0; 0x0)+15+-4 size 4",
-	.result = REJECT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.flags = F_LOAD_WITH_STRICT_ALIGNMENT,
-},
-{
-	"direct packet access: test18 (imm += pkt_ptr, 1)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_IMM(BPF_REG_0, 8),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
-	BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"direct packet access: test19 (imm += pkt_ptr, 2)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 3),
-	BPF_MOV64_IMM(BPF_REG_4, 4),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2),
-	BPF_STX_MEM(BPF_B, BPF_REG_4, BPF_REG_4, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"direct packet access: test20 (x += pkt_ptr, 1)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_IMM(BPF_REG_0, 0xffffffff),
-	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0x7fff),
-	BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2),
-	BPF_MOV64_REG(BPF_REG_5, BPF_REG_4),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 0x7fff - 1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1),
-	BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_4, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"direct packet access: test21 (x += pkt_ptr, 2)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 9),
-	BPF_MOV64_IMM(BPF_REG_4, 0xffffffff),
-	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_4, -8),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_4, 0x7fff),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2),
-	BPF_MOV64_REG(BPF_REG_5, BPF_REG_4),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 0x7fff - 1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1),
-	BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_4, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"direct packet access: test22 (x += pkt_ptr, 3)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -8),
-	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_3, -16),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_10, -16),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 11),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -8),
-	BPF_MOV64_IMM(BPF_REG_4, 0xffffffff),
-	BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_10, BPF_REG_4, -8),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_4, 49),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_4),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 2),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 2),
-	BPF_MOV64_IMM(BPF_REG_2, 1),
-	BPF_STX_MEM(BPF_H, BPF_REG_4, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"direct packet access: test23 (x += pkt_ptr, 4)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, mark)),
-	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0xffff),
-	BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
-	BPF_MOV64_IMM(BPF_REG_0, 31),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
-	BPF_MOV64_REG(BPF_REG_5, BPF_REG_0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0xffff - 1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
-	BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = REJECT,
-	.errstr = "invalid access to packet, off=0 size=8, R5(id=2,off=0,r=0)",
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"direct packet access: test24 (x += pkt_ptr, 5)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_IMM(BPF_REG_0, 0xffffffff),
-	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0xff),
-	BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
-	BPF_MOV64_IMM(BPF_REG_0, 64),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
-	BPF_MOV64_REG(BPF_REG_5, BPF_REG_0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x7fff - 1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
-	BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"direct packet access: test25 (marking on <, good access)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_0, BPF_REG_3, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -4),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"direct packet access: test26 (marking on <, bad access)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_0, BPF_REG_3, 3),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -3),
-	},
-	.result = REJECT,
-	.errstr = "invalid access to packet",
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"direct packet access: test27 (marking on <=, good access)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_0, 1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.retval = 1,
-},
-{
-	"direct packet access: test28 (marking on <=, bad access)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -4),
-	},
-	.result = REJECT,
-	.errstr = "invalid access to packet",
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"direct packet access: test29 (reg > pkt_end in subprog)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_6, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_2, 1),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"direct packet access: test30 (check_id() in regsafe(), bad access)",
-	.insns = {
-	/* r9 = ctx */
-	BPF_MOV64_REG(BPF_REG_9, BPF_REG_1),
-	/* r7 = ktime_get_ns() */
-	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	/* r6 = ktime_get_ns() */
-	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	/* r2 = ctx->data
-	 * r3 = ctx->data
-	 * r4 = ctx->data_end
-	 */
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_9, offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_9, offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_9, offsetof(struct __sk_buff, data_end)),
-	/* if r6 > 100 goto exit
-	 * if r7 > 100 goto exit
-	 */
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_6, 100, 9),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_7, 100, 8),
-	/* r2 += r6              ; this forces assignment of ID to r2
-	 * r2 += 1               ; get some fixed off for r2
-	 * r3 += r7              ; this forces assignment of ID to r3
-	 * r3 += 1               ; get some fixed off for r3
-	 */
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_6),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_7),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 1),
-	/* if r6 > r7 goto +1    ; no new information about the state is derived from
-	 *                       ; this check, thus produced verifier states differ
-	 *                       ; only in 'insn_idx'
-	 * r2 = r3               ; optionally share ID between r2 and r3
-	 */
-	BPF_JMP_REG(BPF_JNE, BPF_REG_6, BPF_REG_7, 1),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_3),
-	/* if r3 > ctx->data_end goto exit */
-	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_4, 1),
-	/* r5 = *(u8 *) (r2 - 1) ; access packet memory using r2,
-	 *                       ; this is not always safe
-	 */
-	BPF_LDX_MEM(BPF_B, BPF_REG_5, BPF_REG_2, -1),
-	/* exit(0) */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.flags = BPF_F_TEST_STATE_FREQ,
-	.result = REJECT,
-	.errstr = "invalid access to packet, off=0 size=1, R2",
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-- 
2.40.0

* [PATCH bpf-next 08/24] selftests/bpf: verifier/jeq_infer_not_null converted to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (6 preceding siblings ...)
  2023-04-21 17:42 ` [PATCH bpf-next 07/24] selftests/bpf: verifier/direct_packet_access " Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 09/24] selftests/bpf: verifier/loops1 " Eduard Zingerman
                   ` (17 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/jeq_infer_not_null automatically converted to use inline assembly.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
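
One detail specific to this conversion: the PTR_TO_MAP_VALUE_OR_NULL test
needs a real map, so a BPF_MAP_TYPE_XSKMAP is declared as an ordinary
.maps object in the new file and its address is materialized in the asm
with a 64-bit immediate ("ll") load bound through __imm_addr(). A minimal
sketch of that pattern (hypothetical function name; the map declaration
mirrors the one in the program below):

    struct {
    	__uint(type, BPF_MAP_TYPE_XSKMAP);
    	__uint(max_entries, 1);
    	__type(key, int);
    	__type(value, int);
    } map_xskmap SEC(".maps");

    SEC("xdp")
    __naked void map_addr_sketch(void)
    {
    	asm volatile ("					\
    	r1 = %[map_xskmap] ll;	/* BPF_LD_IMM64 of &map_xskmap */\
    	r0 = 0;						\
    	exit;						\
    "	:
    	: __imm_addr(map_xskmap)
    	: __clobber_all);
    }
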
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../bpf/progs/verifier_jeq_infer_not_null.c   | 213 ++++++++++++++++++
 .../bpf/verifier/jeq_infer_not_null.c         | 174 --------------
 3 files changed, 215 insertions(+), 174 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_jeq_infer_not_null.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/jeq_infer_not_null.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 2c9e61b9a83e..de5db0de98a1 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -29,6 +29,7 @@
 #include "verifier_helper_restricted.skel.h"
 #include "verifier_helper_value_access.skel.h"
 #include "verifier_int_ptr.skel.h"
+#include "verifier_jeq_infer_not_null.skel.h"
 #include "verifier_ld_ind.skel.h"
 #include "verifier_leak_ptr.skel.h"
 #include "verifier_map_ptr.skel.h"
@@ -109,6 +110,7 @@ void test_verifier_helper_packet_access(void) { RUN(verifier_helper_packet_acces
 void test_verifier_helper_restricted(void)    { RUN(verifier_helper_restricted); }
 void test_verifier_helper_value_access(void)  { RUN(verifier_helper_value_access); }
 void test_verifier_int_ptr(void)              { RUN(verifier_int_ptr); }
+void test_verifier_jeq_infer_not_null(void)   { RUN(verifier_jeq_infer_not_null); }
 void test_verifier_ld_ind(void)               { RUN(verifier_ld_ind); }
 void test_verifier_leak_ptr(void)             { RUN(verifier_leak_ptr); }
 void test_verifier_map_ptr(void)              { RUN(verifier_map_ptr); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_jeq_infer_not_null.c b/tools/testing/selftests/bpf/progs/verifier_jeq_infer_not_null.c
new file mode 100644
index 000000000000..bf16b00502f2
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_jeq_infer_not_null.c
@@ -0,0 +1,213 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/jeq_infer_not_null.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+struct {
+	__uint(type, BPF_MAP_TYPE_XSKMAP);
+	__uint(max_entries, 1);
+	__type(key, int);
+	__type(value, int);
+} map_xskmap SEC(".maps");
+
+/* This is equivalent to the following program:
+ *
+ *   r6 = skb->sk;
+ *   r7 = sk_fullsock(r6);
+ *   r0 = sk_fullsock(r6);
+ *   if (r0 == 0) return 0;    (a)
+ *   if (r0 != r7) return 0;   (b)
+ *   *r7->type;                (c)
+ *   return 0;
+ *
+ * It is safe to dereference r7 at point (c), because of (a) and (b).
+ * The test verifies that relation r0 == r7 is propagated from (b) to (c).
+ */
+SEC("cgroup/skb")
+__description("jne/jeq infer not null, PTR_TO_SOCKET_OR_NULL -> PTR_TO_SOCKET for JNE false branch")
+__success __failure_unpriv __msg_unpriv("R7 pointer comparison")
+__retval(0)
+__naked void socket_for_jne_false_branch(void)
+{
+	asm volatile ("					\
+	/* r6 = skb->sk; */				\
+	r6 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	/* if (r6 == 0) return 0; */			\
+	if r6 == 0 goto l0_%=;				\
+	/* r7 = sk_fullsock(skb); */			\
+	r1 = r6;					\
+	call %[bpf_sk_fullsock];			\
+	r7 = r0;					\
+	/* r0 = sk_fullsock(skb); */			\
+	r1 = r6;					\
+	call %[bpf_sk_fullsock];			\
+	/* if (r0 == null) return 0; */			\
+	if r0 == 0 goto l0_%=;				\
+	/* if (r0 == r7) r0 = *(r7->type); */		\
+	if r0 != r7 goto l0_%=;		/* Use ! JNE ! */\
+	r0 = *(u32*)(r7 + %[bpf_sock_type]);		\
+l0_%=:	/* return 0 */					\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type))
+	: __clobber_all);
+}
+
+/* Same as above, but verify that another branch of JNE still
+ * prohibits access to PTR_MAYBE_NULL.
+ */
+SEC("cgroup/skb")
+__description("jne/jeq infer not null, PTR_TO_SOCKET_OR_NULL unchanged for JNE true branch")
+__failure __msg("R7 invalid mem access 'sock_or_null'")
+__failure_unpriv __msg_unpriv("R7 pointer comparison")
+__naked void unchanged_for_jne_true_branch(void)
+{
+	asm volatile ("					\
+	/* r6 = skb->sk */				\
+	r6 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	/* if (r6 == 0) return 0; */			\
+	if r6 == 0 goto l0_%=;				\
+	/* r7 = sk_fullsock(skb); */			\
+	r1 = r6;					\
+	call %[bpf_sk_fullsock];			\
+	r7 = r0;					\
+	/* r0 = sk_fullsock(skb); */			\
+	r1 = r6;					\
+	call %[bpf_sk_fullsock];			\
+	/* if (r0 == null) return 0; */			\
+	if r0 != 0 goto l0_%=;				\
+	/* if (r0 == r7) return 0; */			\
+	if r0 != r7 goto l1_%=;		/* Use ! JNE ! */\
+	goto l0_%=;					\
+l1_%=:	/* r0 = *(r7->type); */				\
+	r0 = *(u32*)(r7 + %[bpf_sock_type]);		\
+l0_%=:	/* return 0 */					\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type))
+	: __clobber_all);
+}
+
+/* Same as a first test, but not null should be inferred for JEQ branch */
+SEC("cgroup/skb")
+__description("jne/jeq infer not null, PTR_TO_SOCKET_OR_NULL -> PTR_TO_SOCKET for JEQ true branch")
+__success __failure_unpriv __msg_unpriv("R7 pointer comparison")
+__retval(0)
+__naked void socket_for_jeq_true_branch(void)
+{
+	asm volatile ("					\
+	/* r6 = skb->sk; */				\
+	r6 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	/* if (r6 == null) return 0; */			\
+	if r6 == 0 goto l0_%=;				\
+	/* r7 = sk_fullsock(skb); */			\
+	r1 = r6;					\
+	call %[bpf_sk_fullsock];			\
+	r7 = r0;					\
+	/* r0 = sk_fullsock(skb); */			\
+	r1 = r6;					\
+	call %[bpf_sk_fullsock];			\
+	/* if (r0 == null) return 0; */			\
+	if r0 == 0 goto l0_%=;				\
+	/* if (r0 != r7) return 0; */			\
+	if r0 == r7 goto l1_%=;		/* Use ! JEQ ! */\
+	goto l0_%=;					\
+l1_%=:	/* r0 = *(r7->type); */				\
+	r0 = *(u32*)(r7 + %[bpf_sock_type]);		\
+l0_%=:	/* return 0; */					\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type))
+	: __clobber_all);
+}
+
+/* Same as above, but verify that another branch of JEQ still
+ * prohibits access to PTR_MAYBE_NULL.
+ */
+SEC("cgroup/skb")
+__description("jne/jeq infer not null, PTR_TO_SOCKET_OR_NULL unchanged for JEQ false branch")
+__failure __msg("R7 invalid mem access 'sock_or_null'")
+__failure_unpriv __msg_unpriv("R7 pointer comparison")
+__naked void unchanged_for_jeq_false_branch(void)
+{
+	asm volatile ("					\
+	/* r6 = skb->sk; */				\
+	r6 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	/* if (r6 == null) return 0; */			\
+	if r6 == 0 goto l0_%=;				\
+	/* r7 = sk_fullsock(skb); */			\
+	r1 = r6;					\
+	call %[bpf_sk_fullsock];			\
+	r7 = r0;					\
+	/* r0 = sk_fullsock(skb); */			\
+	r1 = r6;					\
+	call %[bpf_sk_fullsock];			\
+	/* if (r0 == null) return 0; */			\
+	if r0 == 0 goto l0_%=;				\
+	/* if (r0 != r7) r0 = *(r7->type); */		\
+	if r0 == r7 goto l0_%=;		/* Use ! JEQ ! */\
+	r0 = *(u32*)(r7 + %[bpf_sock_type]);		\
+l0_%=:	/* return 0; */					\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type))
+	: __clobber_all);
+}
+
+/* Maps are treated in a different branch of `mark_ptr_not_null_reg`,
+ * so separate test for maps case.
+ */
+SEC("xdp")
+__description("jne/jeq infer not null, PTR_TO_MAP_VALUE_OR_NULL -> PTR_TO_MAP_VALUE")
+__success __retval(0)
+__naked void null_ptr_to_map_value(void)
+{
+	asm volatile ("					\
+	/* r9 = &some stack to use as key */		\
+	r1 = 0;						\
+	*(u32*)(r10 - 8) = r1;				\
+	r9 = r10;					\
+	r9 += -8;					\
+	/* r8 = process local map */			\
+	r8 = %[map_xskmap] ll;				\
+	/* r6 = map_lookup_elem(r8, r9); */		\
+	r1 = r8;					\
+	r2 = r9;					\
+	call %[bpf_map_lookup_elem];			\
+	r6 = r0;					\
+	/* r7 = map_lookup_elem(r8, r9); */		\
+	r1 = r8;					\
+	r2 = r9;					\
+	call %[bpf_map_lookup_elem];			\
+	r7 = r0;					\
+	/* if (r6 == 0) return 0; */			\
+	if r6 == 0 goto l0_%=;				\
+	/* if (r6 != r7) return 0; */			\
+	if r6 != r7 goto l0_%=;				\
+	/* read *r7; */					\
+	r0 = *(u32*)(r7 + %[bpf_xdp_sock_queue_id]);	\
+l0_%=:	/* return 0; */					\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_xskmap),
+	  __imm_const(bpf_xdp_sock_queue_id, offsetof(struct bpf_xdp_sock, queue_id))
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/jeq_infer_not_null.c b/tools/testing/selftests/bpf/verifier/jeq_infer_not_null.c
deleted file mode 100644
index 67a1c07ead34..000000000000
--- a/tools/testing/selftests/bpf/verifier/jeq_infer_not_null.c
+++ /dev/null
@@ -1,174 +0,0 @@
-{
-	/* This is equivalent to the following program:
-	 *
-	 *   r6 = skb->sk;
-	 *   r7 = sk_fullsock(r6);
-	 *   r0 = sk_fullsock(r6);
-	 *   if (r0 == 0) return 0;    (a)
-	 *   if (r0 != r7) return 0;   (b)
-	 *   *r7->type;                (c)
-	 *   return 0;
-	 *
-	 * It is safe to dereference r7 at point (c), because of (a) and (b).
-	 * The test verifies that relation r0 == r7 is propagated from (b) to (c).
-	 */
-	"jne/jeq infer not null, PTR_TO_SOCKET_OR_NULL -> PTR_TO_SOCKET for JNE false branch",
-	.insns = {
-	/* r6 = skb->sk; */
-	BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	/* if (r6 == 0) return 0; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 8),
-	/* r7 = sk_fullsock(skb); */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	/* r0 = sk_fullsock(skb); */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	/* if (r0 == null) return 0; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
-	/* if (r0 == r7) r0 = *(r7->type); */
-	BPF_JMP_REG(BPF_JNE, BPF_REG_0, BPF_REG_7, 1), /* Use ! JNE ! */
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_sock, type)),
-	/* return 0 */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = ACCEPT,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R7 pointer comparison",
-},
-{
-	/* Same as above, but verify that another branch of JNE still
-	 * prohibits access to PTR_MAYBE_NULL.
-	 */
-	"jne/jeq infer not null, PTR_TO_SOCKET_OR_NULL unchanged for JNE true branch",
-	.insns = {
-	/* r6 = skb->sk */
-	BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	/* if (r6 == 0) return 0; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 9),
-	/* r7 = sk_fullsock(skb); */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	/* r0 = sk_fullsock(skb); */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	/* if (r0 == null) return 0; */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3),
-	/* if (r0 == r7) return 0; */
-	BPF_JMP_REG(BPF_JNE, BPF_REG_0, BPF_REG_7, 1), /* Use ! JNE ! */
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
-	/* r0 = *(r7->type); */
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_sock, type)),
-	/* return 0 */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = REJECT,
-	.errstr = "R7 invalid mem access 'sock_or_null'",
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R7 pointer comparison",
-},
-{
-	/* Same as a first test, but not null should be inferred for JEQ branch */
-	"jne/jeq infer not null, PTR_TO_SOCKET_OR_NULL -> PTR_TO_SOCKET for JEQ true branch",
-	.insns = {
-	/* r6 = skb->sk; */
-	BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	/* if (r6 == null) return 0; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 9),
-	/* r7 = sk_fullsock(skb); */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	/* r0 = sk_fullsock(skb); */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	/* if (r0 == null) return 0; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
-	/* if (r0 != r7) return 0; */
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_7, 1), /* Use ! JEQ ! */
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
-	/* r0 = *(r7->type); */
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_sock, type)),
-	/* return 0; */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = ACCEPT,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R7 pointer comparison",
-},
-{
-	/* Same as above, but verify that another branch of JNE still
-	 * prohibits access to PTR_MAYBE_NULL.
-	 */
-	"jne/jeq infer not null, PTR_TO_SOCKET_OR_NULL unchanged for JEQ false branch",
-	.insns = {
-	/* r6 = skb->sk; */
-	BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	/* if (r6 == null) return 0; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 8),
-	/* r7 = sk_fullsock(skb); */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	/* r0 = sk_fullsock(skb); */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	/* if (r0 == null) return 0; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
-	/* if (r0 != r7) r0 = *(r7->type); */
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_7, 1), /* Use ! JEQ ! */
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_sock, type)),
-	/* return 0; */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = REJECT,
-	.errstr = "R7 invalid mem access 'sock_or_null'",
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R7 pointer comparison",
-},
-{
-	/* Maps are treated in a different branch of `mark_ptr_not_null_reg`,
-	 * so separate test for maps case.
-	 */
-	"jne/jeq infer not null, PTR_TO_MAP_VALUE_OR_NULL -> PTR_TO_MAP_VALUE",
-	.insns = {
-	/* r9 = &some stack to use as key */
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_9, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_9, -8),
-	/* r8 = process local map */
-	BPF_LD_MAP_FD(BPF_REG_8, 0),
-	/* r6 = map_lookup_elem(r8, r9); */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_9),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	/* r7 = map_lookup_elem(r8, r9); */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_9),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	/* if (r6 == 0) return 0; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 2),
-	/* if (r6 != r7) return 0; */
-	BPF_JMP_REG(BPF_JNE, BPF_REG_6, BPF_REG_7, 1),
-	/* read *r7; */
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_xdp_sock, queue_id)),
-	/* return 0; */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_xskmap = { 3 },
-	.prog_type = BPF_PROG_TYPE_XDP,
-	.result = ACCEPT,
-},
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH bpf-next 09/24] selftests/bpf: verifier/loops1 converted to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (7 preceding siblings ...)
  2023-04-21 17:42 ` [PATCH bpf-next 08/24] selftests/bpf: verifier/jeq_infer_not_null " Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 10/24] selftests/bpf: verifier/lwt " Eduard Zingerman
                   ` (16 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/loops1 automatically converted to use inline assembly.

There are a few modifications for the converted tests.
"tracepoint" programs do not support test execution, so change the
program type to "xdp" (which supports test execution) for the
following tests that have __retval tags (see the excerpt below):
- bounded loop, count to 4
- bounded loop containing a forward jump
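For example, "bounded loop, count to 4" was originally specified in
test_verifier as:

    .prog_type = BPF_PROG_TYPE_TRACEPOINT,
    .retval = 4,

and after conversion is annotated as follows (excerpt from the
converted file; the __retval(4) check itself is preserved):

    SEC("xdp")
    __description("bounded loop, count to 4")
    __success __retval(4)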

Also, remove the __retval tag for the test:
- bounded loop, count from positive unknown to 4

Its return value depends on a call to bpf_get_prandom_u32() and thus
cannot be checked against a fixed value.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../selftests/bpf/progs/verifier_loops1.c     | 259 ++++++++++++++++++
 tools/testing/selftests/bpf/verifier/loops1.c | 206 --------------
 3 files changed, 261 insertions(+), 206 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_loops1.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/loops1.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index de5db0de98a1..33a50dbc2321 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -32,6 +32,7 @@
 #include "verifier_jeq_infer_not_null.skel.h"
 #include "verifier_ld_ind.skel.h"
 #include "verifier_leak_ptr.skel.h"
+#include "verifier_loops1.skel.h"
 #include "verifier_map_ptr.skel.h"
 #include "verifier_map_ret_val.skel.h"
 #include "verifier_masking.skel.h"
@@ -113,6 +114,7 @@ void test_verifier_int_ptr(void)              { RUN(verifier_int_ptr); }
 void test_verifier_jeq_infer_not_null(void)   { RUN(verifier_jeq_infer_not_null); }
 void test_verifier_ld_ind(void)               { RUN(verifier_ld_ind); }
 void test_verifier_leak_ptr(void)             { RUN(verifier_leak_ptr); }
+void test_verifier_loops1(void)               { RUN(verifier_loops1); }
 void test_verifier_map_ptr(void)              { RUN(verifier_map_ptr); }
 void test_verifier_map_ret_val(void)          { RUN(verifier_map_ret_val); }
 void test_verifier_masking(void)              { RUN(verifier_masking); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_loops1.c b/tools/testing/selftests/bpf/progs/verifier_loops1.c
new file mode 100644
index 000000000000..5bc86af80a9a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_loops1.c
@@ -0,0 +1,259 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/loops1.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+SEC("xdp")
+__description("bounded loop, count to 4")
+__success __retval(4)
+__naked void bounded_loop_count_to_4(void)
+{
+	asm volatile ("					\
+	r0 = 0;						\
+l0_%=:	r0 += 1;					\
+	if r0 < 4 goto l0_%=;				\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("bounded loop, count to 20")
+__success
+__naked void bounded_loop_count_to_20(void)
+{
+	asm volatile ("					\
+	r0 = 0;						\
+l0_%=:	r0 += 3;					\
+	if r0 < 20 goto l0_%=;				\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("bounded loop, count from positive unknown to 4")
+__success
+__naked void from_positive_unknown_to_4(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	if r0 s< 0 goto l0_%=;				\
+l1_%=:	r0 += 1;					\
+	if r0 < 4 goto l1_%=;				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("bounded loop, count from totally unknown to 4")
+__success
+__naked void from_totally_unknown_to_4(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+l0_%=:	r0 += 1;					\
+	if r0 < 4 goto l0_%=;				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("bounded loop, count to 4 with equality")
+__success
+__naked void count_to_4_with_equality(void)
+{
+	asm volatile ("					\
+	r0 = 0;						\
+l0_%=:	r0 += 1;					\
+	if r0 != 4 goto l0_%=;				\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("bounded loop, start in the middle")
+__failure __msg("back-edge")
+__naked void loop_start_in_the_middle(void)
+{
+	asm volatile ("					\
+	r0 = 0;						\
+	goto l0_%=;					\
+l1_%=:	r0 += 1;					\
+l0_%=:	if r0 < 4 goto l1_%=;				\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("xdp")
+__description("bounded loop containing a forward jump")
+__success __retval(4)
+__naked void loop_containing_a_forward_jump(void)
+{
+	asm volatile ("					\
+	r0 = 0;						\
+l1_%=:	r0 += 1;					\
+	if r0 == r0 goto l0_%=;				\
+l0_%=:	if r0 < 4 goto l1_%=;				\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("bounded loop that jumps out rather than in")
+__success
+__naked void jumps_out_rather_than_in(void)
+{
+	asm volatile ("					\
+	r6 = 0;						\
+l1_%=:	r6 += 1;					\
+	if r6 > 10000 goto l0_%=;			\
+	call %[bpf_get_prandom_u32];			\
+	goto l1_%=;					\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("infinite loop after a conditional jump")
+__failure __msg("program is too large")
+__naked void loop_after_a_conditional_jump(void)
+{
+	asm volatile ("					\
+	r0 = 5;						\
+	if r0 < 4 goto l0_%=;				\
+l1_%=:	r0 += 1;					\
+	goto l1_%=;					\
+l0_%=:	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("bounded recursion")
+__failure __msg("back-edge")
+__naked void bounded_recursion(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	call bounded_recursion__1;			\
+	exit;						\
+"	::: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void bounded_recursion__1(void)
+{
+	asm volatile ("					\
+	r1 += 1;					\
+	r0 = r1;					\
+	if r1 < 4 goto l0_%=;				\
+	exit;						\
+l0_%=:	call bounded_recursion__1;			\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("infinite loop in two jumps")
+__failure __msg("loop detected")
+__naked void infinite_loop_in_two_jumps(void)
+{
+	asm volatile ("					\
+	r0 = 0;						\
+l1_%=:	goto l0_%=;					\
+l0_%=:	if r0 < 4 goto l1_%=;				\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("infinite loop: three-jump trick")
+__failure __msg("loop detected")
+__naked void infinite_loop_three_jump_trick(void)
+{
+	asm volatile ("					\
+	r0 = 0;						\
+l2_%=:	r0 += 1;					\
+	r0 &= 1;					\
+	if r0 < 2 goto l0_%=;				\
+	exit;						\
+l0_%=:	r0 += 1;					\
+	r0 &= 1;					\
+	if r0 < 2 goto l1_%=;				\
+	exit;						\
+l1_%=:	r0 += 1;					\
+	r0 &= 1;					\
+	if r0 < 2 goto l2_%=;				\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("xdp")
+__description("not-taken loop with back jump to 1st insn")
+__success __retval(123)
+__naked void back_jump_to_1st_insn_1(void)
+{
+	asm volatile ("					\
+l0_%=:	r0 = 123;					\
+	if r0 == 4 goto l0_%=;				\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("xdp")
+__description("taken loop with back jump to 1st insn")
+__success __retval(55)
+__naked void back_jump_to_1st_insn_2(void)
+{
+	asm volatile ("					\
+	r1 = 10;					\
+	r2 = 0;						\
+	call back_jump_to_1st_insn_2__1;		\
+	exit;						\
+"	::: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void back_jump_to_1st_insn_2__1(void)
+{
+	asm volatile ("					\
+l0_%=:	r2 += r1;					\
+	r1 -= 1;					\
+	if r1 != 0 goto l0_%=;				\
+	r0 = r2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("xdp")
+__description("taken loop with back jump to 1st insn, 2")
+__success __retval(55)
+__naked void jump_to_1st_insn_2(void)
+{
+	asm volatile ("					\
+	r1 = 10;					\
+	r2 = 0;						\
+	call jump_to_1st_insn_2__1;			\
+	exit;						\
+"	::: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void jump_to_1st_insn_2__1(void)
+{
+	asm volatile ("					\
+l0_%=:	r2 += r1;					\
+	r1 -= 1;					\
+	if w1 != 0 goto l0_%=;				\
+	r0 = r2;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/loops1.c b/tools/testing/selftests/bpf/verifier/loops1.c
deleted file mode 100644
index 1af37187dc12..000000000000
--- a/tools/testing/selftests/bpf/verifier/loops1.c
+++ /dev/null
@@ -1,206 +0,0 @@
-{
-	"bounded loop, count to 4",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-	.retval = 4,
-},
-{
-	"bounded loop, count to 20",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 3),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 20, -2),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-},
-{
-	"bounded loop, count from positive unknown to 4",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_0, 0, 2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-	.retval = 4,
-},
-{
-	"bounded loop, count from totally unknown to 4",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-},
-{
-	"bounded loop, count to 4 with equality",
-	.insns = {
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 4, -2),
-		BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-},
-{
-	"bounded loop, start in the middle",
-	.insns = {
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_JMP_A(1),
-		BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-		BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2),
-		BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	.errstr = "back-edge",
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-	.retval = 4,
-},
-{
-	"bounded loop containing a forward jump",
-	.insns = {
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-		BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_0, 0),
-		BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -3),
-		BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-	.retval = 4,
-},
-{
-	"bounded loop that jumps out rather than in",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_6, 0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_6, 10000, 2),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_JMP_A(-4),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-},
-{
-	"infinite loop after a conditional jump",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 5),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, 2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-	BPF_JMP_A(-2),
-	BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	.errstr = "program is too large",
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-},
-{
-	"bounded recursion",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_1, 4, 1),
-	BPF_EXIT_INSN(),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, -5),
-	BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	.errstr = "back-edge",
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-},
-{
-	"infinite loop in two jumps",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_JMP_A(0),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2),
-	BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	.errstr = "loop detected",
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-},
-{
-	"infinite loop: three-jump trick",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 2, 1),
-	BPF_EXIT_INSN(),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 2, 1),
-	BPF_EXIT_INSN(),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 2, -11),
-	BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	.errstr = "loop detected",
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-},
-{
-	"not-taken loop with back jump to 1st insn",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 123),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 4, -2),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_XDP,
-	.retval = 123,
-},
-{
-	"taken loop with back jump to 1st insn",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_1, 10),
-	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_1),
-	BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, -3),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_XDP,
-	.retval = 55,
-},
-{
-	"taken loop with back jump to 1st insn, 2",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_1, 10),
-	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_1),
-	BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1),
-	BPF_JMP32_IMM(BPF_JNE, BPF_REG_1, 0, -3),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_XDP,
-	.retval = 55,
-},
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH bpf-next 10/24] selftests/bpf: verifier/lwt converted to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (8 preceding siblings ...)
  2023-04-21 17:42 ` [PATCH bpf-next 09/24] selftests/bpf: verifier/loops1 " Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 11/24] selftests/bpf: verifier/map_in_map " Eduard Zingerman
                   ` (15 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/lwt automatically converted to use inline assembly.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../selftests/bpf/progs/verifier_lwt.c        | 234 ++++++++++++++++++
 tools/testing/selftests/bpf/verifier/lwt.c    | 189 --------------
 3 files changed, 236 insertions(+), 189 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_lwt.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/lwt.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 33a50dbc2321..54c30fe1b693 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -33,6 +33,7 @@
 #include "verifier_ld_ind.skel.h"
 #include "verifier_leak_ptr.skel.h"
 #include "verifier_loops1.skel.h"
+#include "verifier_lwt.skel.h"
 #include "verifier_map_ptr.skel.h"
 #include "verifier_map_ret_val.skel.h"
 #include "verifier_masking.skel.h"
@@ -115,6 +116,7 @@ void test_verifier_jeq_infer_not_null(void)   { RUN(verifier_jeq_infer_not_null)
 void test_verifier_ld_ind(void)               { RUN(verifier_ld_ind); }
 void test_verifier_leak_ptr(void)             { RUN(verifier_leak_ptr); }
 void test_verifier_loops1(void)               { RUN(verifier_loops1); }
+void test_verifier_lwt(void)                  { RUN(verifier_lwt); }
 void test_verifier_map_ptr(void)              { RUN(verifier_map_ptr); }
 void test_verifier_map_ret_val(void)          { RUN(verifier_map_ret_val); }
 void test_verifier_masking(void)              { RUN(verifier_masking); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_lwt.c b/tools/testing/selftests/bpf/progs/verifier_lwt.c
new file mode 100644
index 000000000000..5ab746307309
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_lwt.c
@@ -0,0 +1,234 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/lwt.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+SEC("lwt_in")
+__description("invalid direct packet write for LWT_IN")
+__failure __msg("cannot write into packet")
+__naked void packet_write_for_lwt_in(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r0 > r3 goto l0_%=;				\
+	*(u8*)(r2 + 0) = r2;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("lwt_out")
+__description("invalid direct packet write for LWT_OUT")
+__failure __msg("cannot write into packet")
+__naked void packet_write_for_lwt_out(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r0 > r3 goto l0_%=;				\
+	*(u8*)(r2 + 0) = r2;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("lwt_xmit")
+__description("direct packet write for LWT_XMIT")
+__success __retval(0)
+__naked void packet_write_for_lwt_xmit(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r0 > r3 goto l0_%=;				\
+	*(u8*)(r2 + 0) = r2;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("lwt_in")
+__description("direct packet read for LWT_IN")
+__success __retval(0)
+__naked void packet_read_for_lwt_in(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r0 > r3 goto l0_%=;				\
+	r0 = *(u8*)(r2 + 0);				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("lwt_out")
+__description("direct packet read for LWT_OUT")
+__success __retval(0)
+__naked void packet_read_for_lwt_out(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r0 > r3 goto l0_%=;				\
+	r0 = *(u8*)(r2 + 0);				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("lwt_xmit")
+__description("direct packet read for LWT_XMIT")
+__success __retval(0)
+__naked void packet_read_for_lwt_xmit(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r0 > r3 goto l0_%=;				\
+	r0 = *(u8*)(r2 + 0);				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("lwt_xmit")
+__description("overlapping checks for direct packet access")
+__success __retval(0)
+__naked void checks_for_direct_packet_access(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 8;					\
+	if r0 > r3 goto l0_%=;				\
+	r1 = r2;					\
+	r1 += 6;					\
+	if r1 > r3 goto l0_%=;				\
+	r0 = *(u16*)(r2 + 6);				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("lwt_xmit")
+__description("make headroom for LWT_XMIT")
+__success __retval(0)
+__naked void make_headroom_for_lwt_xmit(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	r2 = 34;					\
+	r3 = 0;						\
+	call %[bpf_skb_change_head];			\
+	/* split for s390 to succeed */			\
+	r1 = r6;					\
+	r2 = 42;					\
+	r3 = 0;						\
+	call %[bpf_skb_change_head];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_skb_change_head)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("invalid access of tc_classid for LWT_IN")
+__failure __msg("invalid bpf_context access")
+__failure_unpriv
+__naked void tc_classid_for_lwt_in(void)
+{
+	asm volatile ("					\
+	r0 = *(u32*)(r1 + %[__sk_buff_tc_classid]);	\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_tc_classid, offsetof(struct __sk_buff, tc_classid))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("invalid access of tc_classid for LWT_OUT")
+__failure __msg("invalid bpf_context access")
+__failure_unpriv
+__naked void tc_classid_for_lwt_out(void)
+{
+	asm volatile ("					\
+	r0 = *(u32*)(r1 + %[__sk_buff_tc_classid]);	\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_tc_classid, offsetof(struct __sk_buff, tc_classid))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("invalid access of tc_classid for LWT_XMIT")
+__failure __msg("invalid bpf_context access")
+__failure_unpriv
+__naked void tc_classid_for_lwt_xmit(void)
+{
+	asm volatile ("					\
+	r0 = *(u32*)(r1 + %[__sk_buff_tc_classid]);	\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_tc_classid, offsetof(struct __sk_buff, tc_classid))
+	: __clobber_all);
+}
+
+SEC("lwt_in")
+__description("check skb->tc_classid half load not permitted for lwt prog")
+__failure __msg("invalid bpf_context access")
+__naked void not_permitted_for_lwt_prog(void)
+{
+	asm volatile (
+	"r0 = 0;"
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	"r0 = *(u16*)(r1 + %[__sk_buff_tc_classid]);"
+#else
+	"r0 = *(u16*)(r1 + %[__imm_0]);"
+#endif
+	"exit;"
+	:
+	: __imm_const(__imm_0, offsetof(struct __sk_buff, tc_classid) + 2),
+	  __imm_const(__sk_buff_tc_classid, offsetof(struct __sk_buff, tc_classid))
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/lwt.c b/tools/testing/selftests/bpf/verifier/lwt.c
deleted file mode 100644
index 5c8944d0b091..000000000000
--- a/tools/testing/selftests/bpf/verifier/lwt.c
+++ /dev/null
@@ -1,189 +0,0 @@
-{
-	"invalid direct packet write for LWT_IN",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
-	BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr = "cannot write into packet",
-	.result = REJECT,
-	.prog_type = BPF_PROG_TYPE_LWT_IN,
-},
-{
-	"invalid direct packet write for LWT_OUT",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
-	BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr = "cannot write into packet",
-	.result = REJECT,
-	.prog_type = BPF_PROG_TYPE_LWT_OUT,
-},
-{
-	"direct packet write for LWT_XMIT",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
-	BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_LWT_XMIT,
-},
-{
-	"direct packet read for LWT_IN",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_LWT_IN,
-},
-{
-	"direct packet read for LWT_OUT",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_LWT_OUT,
-},
-{
-	"direct packet read for LWT_XMIT",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_LWT_XMIT,
-},
-{
-	"overlapping checks for direct packet access",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 4),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
-	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_2, 6),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_LWT_XMIT,
-},
-{
-	"make headroom for LWT_XMIT",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_MOV64_IMM(BPF_REG_2, 34),
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_EMIT_CALL(BPF_FUNC_skb_change_head),
-	/* split for s390 to succeed */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_MOV64_IMM(BPF_REG_2, 42),
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_EMIT_CALL(BPF_FUNC_skb_change_head),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_LWT_XMIT,
-},
-{
-	"invalid access of tc_classid for LWT_IN",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, tc_classid)),
-	BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	.errstr = "invalid bpf_context access",
-},
-{
-	"invalid access of tc_classid for LWT_OUT",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, tc_classid)),
-	BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	.errstr = "invalid bpf_context access",
-},
-{
-	"invalid access of tc_classid for LWT_XMIT",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, tc_classid)),
-	BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	.errstr = "invalid bpf_context access",
-},
-{
-	"check skb->tc_classid half load not permitted for lwt prog",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
-	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, tc_classid)),
-#else
-	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, tc_classid) + 2),
-#endif
-	BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	.errstr = "invalid bpf_context access",
-	.prog_type = BPF_PROG_TYPE_LWT_IN,
-},
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH bpf-next 11/24] selftests/bpf: verifier/map_in_map converted to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (9 preceding siblings ...)
  2023-04-21 17:42 ` [PATCH bpf-next 10/24] selftests/bpf: verifier/lwt " Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 12/24] selftests/bpf: verifier/map_ptr_mixing " Eduard Zingerman
                   ` (14 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/map_in_map automatically converted to use inline assembly.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../selftests/bpf/progs/verifier_map_in_map.c | 142 ++++++++++++++++++
 .../selftests/bpf/verifier/map_in_map.c       |  96 ------------
 3 files changed, 144 insertions(+), 96 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_map_in_map.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/map_in_map.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 54c30fe1b693..95fc9cb231ad 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -34,6 +34,7 @@
 #include "verifier_leak_ptr.skel.h"
 #include "verifier_loops1.skel.h"
 #include "verifier_lwt.skel.h"
+#include "verifier_map_in_map.skel.h"
 #include "verifier_map_ptr.skel.h"
 #include "verifier_map_ret_val.skel.h"
 #include "verifier_masking.skel.h"
@@ -117,6 +118,7 @@ void test_verifier_ld_ind(void)               { RUN(verifier_ld_ind); }
 void test_verifier_leak_ptr(void)             { RUN(verifier_leak_ptr); }
 void test_verifier_loops1(void)               { RUN(verifier_loops1); }
 void test_verifier_lwt(void)                  { RUN(verifier_lwt); }
+void test_verifier_map_in_map(void)           { RUN(verifier_map_in_map); }
 void test_verifier_map_ptr(void)              { RUN(verifier_map_ptr); }
 void test_verifier_map_ret_val(void)          { RUN(verifier_map_ret_val); }
 void test_verifier_masking(void)              { RUN(verifier_masking); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_map_in_map.c b/tools/testing/selftests/bpf/progs/verifier_map_in_map.c
new file mode 100644
index 000000000000..4eaab1468eb7
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_map_in_map.c
@@ -0,0 +1,142 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/map_in_map.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
+	__uint(max_entries, 1);
+	__type(key, int);
+	__type(value, int);
+	__array(values, struct {
+		__uint(type, BPF_MAP_TYPE_ARRAY);
+		__uint(max_entries, 1);
+		__type(key, int);
+		__type(value, int);
+	});
+} map_in_map SEC(".maps");
+
+SEC("socket")
+__description("map in map access")
+__success __success_unpriv __retval(0)
+__naked void map_in_map_access(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_in_map] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = r0;					\
+	call %[bpf_map_lookup_elem];			\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_in_map)
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("map in map state pruning")
+__success __msg("processed 26 insns")
+__log_level(2) __retval(0) __flag(BPF_F_TEST_STATE_FREQ)
+__naked void map_in_map_state_pruning(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r6 = r10;					\
+	r6 += -4;					\
+	r2 = r6;					\
+	r1 = %[map_in_map] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r2 = r6;					\
+	r1 = r0;					\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l1_%=;				\
+	r2 = r6;					\
+	r1 = %[map_in_map] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l2_%=;				\
+	exit;						\
+l2_%=:	r2 = r6;					\
+	r1 = r0;					\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l1_%=;				\
+	exit;						\
+l1_%=:	r0 = *(u32*)(r0 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_in_map)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("invalid inner map pointer")
+__failure __msg("R1 pointer arithmetic on map_ptr prohibited")
+__failure_unpriv
+__naked void invalid_inner_map_pointer(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_in_map] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = r0;					\
+	r1 += 8;					\
+	call %[bpf_map_lookup_elem];			\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_in_map)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("forgot null checking on the inner map pointer")
+__failure __msg("R1 type=map_value_or_null expected=map_ptr")
+__failure_unpriv
+__naked void on_the_inner_map_pointer(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_in_map] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = r0;					\
+	call %[bpf_map_lookup_elem];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_in_map)
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/map_in_map.c b/tools/testing/selftests/bpf/verifier/map_in_map.c
deleted file mode 100644
index 128a348b762d..000000000000
--- a/tools/testing/selftests/bpf/verifier/map_in_map.c
+++ /dev/null
@@ -1,96 +0,0 @@
-{
-	"map in map access",
-	.insns = {
-	BPF_ST_MEM(0, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
-	BPF_ST_MEM(0, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_in_map = { 3 },
-	.result = ACCEPT,
-},
-{
-	"map in map state pruning",
-	.insns = {
-	BPF_ST_MEM(0, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -4),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_6),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_6),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 11),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_6),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_6),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_in_map = { 4, 14 },
-	.flags = BPF_F_TEST_STATE_FREQ,
-	.result = VERBOSE_ACCEPT,
-	.errstr = "processed 25 insns",
-	.prog_type = BPF_PROG_TYPE_XDP,
-},
-{
-	"invalid inner map pointer",
-	.insns = {
-	BPF_ST_MEM(0, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
-	BPF_ST_MEM(0, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_in_map = { 3 },
-	.errstr = "R1 pointer arithmetic on map_ptr prohibited",
-	.result = REJECT,
-},
-{
-	"forgot null checking on the inner map pointer",
-	.insns = {
-	BPF_ST_MEM(0, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_ST_MEM(0, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_in_map = { 3 },
-	.errstr = "R1 type=map_value_or_null expected=map_ptr",
-	.result = REJECT,
-},
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH bpf-next 12/24] selftests/bpf: verifier/map_ptr_mixing converted to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (10 preceding siblings ...)
  2023-04-21 17:42 ` [PATCH bpf-next 11/24] selftests/bpf: verifier/map_in_map " Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 13/24] selftests/bpf: verifier/precise " Eduard Zingerman
                   ` (13 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/map_ptr_mixing automatically converted to use inline assembly.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../bpf/progs/verifier_map_ptr_mixing.c       | 265 ++++++++++++++++++
 .../selftests/bpf/verifier/map_ptr_mixing.c   | 100 -------
 3 files changed, 267 insertions(+), 100 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_map_ptr_mixing.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/map_ptr_mixing.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 95fc9cb231ad..261567230dd0 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -36,6 +36,7 @@
 #include "verifier_lwt.skel.h"
 #include "verifier_map_in_map.skel.h"
 #include "verifier_map_ptr.skel.h"
+#include "verifier_map_ptr_mixing.skel.h"
 #include "verifier_map_ret_val.skel.h"
 #include "verifier_masking.skel.h"
 #include "verifier_meta_access.skel.h"
@@ -120,6 +121,7 @@ void test_verifier_loops1(void)               { RUN(verifier_loops1); }
 void test_verifier_lwt(void)                  { RUN(verifier_lwt); }
 void test_verifier_map_in_map(void)           { RUN(verifier_map_in_map); }
 void test_verifier_map_ptr(void)              { RUN(verifier_map_ptr); }
+void test_verifier_map_ptr_mixing(void)       { RUN(verifier_map_ptr_mixing); }
 void test_verifier_map_ret_val(void)          { RUN(verifier_map_ret_val); }
 void test_verifier_masking(void)              { RUN(verifier_masking); }
 void test_verifier_meta_access(void)          { RUN(verifier_meta_access); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_map_ptr_mixing.c b/tools/testing/selftests/bpf/progs/verifier_map_ptr_mixing.c
new file mode 100644
index 000000000000..c5a7c1ddc562
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_map_ptr_mixing.c
@@ -0,0 +1,265 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/map_ptr_mixing.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+#define MAX_ENTRIES 11
+
+struct test_val {
+	unsigned int index;
+	int foo[MAX_ENTRIES];
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, int);
+	__type(value, struct test_val);
+} map_array_48b SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 1);
+	__type(key, long long);
+	__type(value, struct test_val);
+} map_hash_48b SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
+	__uint(max_entries, 1);
+	__type(key, int);
+	__type(value, int);
+	__array(values, struct {
+		__uint(type, BPF_MAP_TYPE_ARRAY);
+		__uint(max_entries, 1);
+		__type(key, int);
+		__type(value, int);
+	});
+} map_in_map SEC(".maps");
+
+void dummy_prog_42_socket(void);
+void dummy_prog_24_socket(void);
+void dummy_prog_loop1_socket(void);
+void dummy_prog_loop2_socket(void);
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+	__uint(max_entries, 4);
+	__uint(key_size, sizeof(int));
+	__array(values, void (void));
+} map_prog1_socket SEC(".maps") = {
+	.values = {
+		[0] = (void *)&dummy_prog_42_socket,
+		[1] = (void *)&dummy_prog_loop1_socket,
+		[2] = (void *)&dummy_prog_24_socket,
+	},
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+	__uint(max_entries, 8);
+	__uint(key_size, sizeof(int));
+	__array(values, void (void));
+} map_prog2_socket SEC(".maps") = {
+	.values = {
+		[1] = (void *)&dummy_prog_loop2_socket,
+		[2] = (void *)&dummy_prog_24_socket,
+		[7] = (void *)&dummy_prog_42_socket,
+	},
+};
+
+SEC("socket")
+__auxiliary __auxiliary_unpriv
+__naked void dummy_prog_42_socket(void)
+{
+	asm volatile ("r0 = 42; exit;");
+}
+
+SEC("socket")
+__auxiliary __auxiliary_unpriv
+__naked void dummy_prog_24_socket(void)
+{
+	asm volatile ("r0 = 24; exit;");
+}
+
+SEC("socket")
+__auxiliary __auxiliary_unpriv
+__naked void dummy_prog_loop1_socket(void)
+{
+	asm volatile ("			\
+	r3 = 1;				\
+	r2 = %[map_prog1_socket] ll;	\
+	call %[bpf_tail_call];		\
+	r0 = 41;			\
+	exit;				\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_socket)
+	: __clobber_all);
+}
+
+SEC("socket")
+__auxiliary __auxiliary_unpriv
+__naked void dummy_prog_loop2_socket(void)
+{
+	asm volatile ("			\
+	r3 = 1;				\
+	r2 = %[map_prog2_socket] ll;	\
+	call %[bpf_tail_call];		\
+	r0 = 41;			\
+	exit;				\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog2_socket)
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("calls: two calls returning different map pointers for lookup (hash, array)")
+__success __retval(1)
+__naked void pointers_for_lookup_hash_array(void)
+{
+	asm volatile ("					\
+	/* main prog */					\
+	if r1 != 0 goto l0_%=;				\
+	call pointers_for_lookup_hash_array__1;		\
+	goto l1_%=;					\
+l0_%=:	call pointers_for_lookup_hash_array__2;		\
+l1_%=:	r1 = r0;					\
+	r2 = 0;						\
+	*(u64*)(r10 - 8) = r2;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l2_%=;				\
+	r1 = %[test_val_foo];				\
+	*(u64*)(r0 + 0) = r1;				\
+	r0 = 1;						\
+l2_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_const(test_val_foo, offsetof(struct test_val, foo))
+	: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void pointers_for_lookup_hash_array__1(void)
+{
+	asm volatile ("					\
+	r0 = %[map_hash_48b] ll;			\
+	exit;						\
+"	:
+	: __imm_addr(map_hash_48b)
+	: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void pointers_for_lookup_hash_array__2(void)
+{
+	asm volatile ("					\
+	r0 = %[map_array_48b] ll;			\
+	exit;						\
+"	:
+	: __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("calls: two calls returning different map pointers for lookup (hash, map in map)")
+__failure __msg("only read from bpf_array is supported")
+__naked void lookup_hash_map_in_map(void)
+{
+	asm volatile ("					\
+	/* main prog */					\
+	if r1 != 0 goto l0_%=;				\
+	call lookup_hash_map_in_map__1;			\
+	goto l1_%=;					\
+l0_%=:	call lookup_hash_map_in_map__2;			\
+l1_%=:	r1 = r0;					\
+	r2 = 0;						\
+	*(u64*)(r10 - 8) = r2;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l2_%=;				\
+	r1 = %[test_val_foo];				\
+	*(u64*)(r0 + 0) = r1;				\
+	r0 = 1;						\
+l2_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_const(test_val_foo, offsetof(struct test_val, foo))
+	: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void lookup_hash_map_in_map__1(void)
+{
+	asm volatile ("					\
+	r0 = %[map_array_48b] ll;			\
+	exit;						\
+"	:
+	: __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void lookup_hash_map_in_map__2(void)
+{
+	asm volatile ("					\
+	r0 = %[map_in_map] ll;				\
+	exit;						\
+"	:
+	: __imm_addr(map_in_map)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("cond: two branches returning different map pointers for lookup (tail, tail)")
+__success __failure_unpriv __msg_unpriv("tail_call abusing map_ptr")
+__retval(42)
+__naked void pointers_for_lookup_tail_tail_1(void)
+{
+	asm volatile ("					\
+	r6 = *(u32*)(r1 + %[__sk_buff_mark]);		\
+	if r6 != 0 goto l0_%=;				\
+	r2 = %[map_prog2_socket] ll;			\
+	goto l1_%=;					\
+l0_%=:	r2 = %[map_prog1_socket] ll;			\
+l1_%=:	r3 = 7;						\
+	call %[bpf_tail_call];				\
+	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_socket),
+	  __imm_addr(map_prog2_socket),
+	  __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("cond: two branches returning same map pointers for lookup (tail, tail)")
+__success __success_unpriv __retval(42)
+__naked void pointers_for_lookup_tail_tail_2(void)
+{
+	asm volatile ("					\
+	r6 = *(u32*)(r1 + %[__sk_buff_mark]);		\
+	if r6 == 0 goto l0_%=;				\
+	r2 = %[map_prog2_socket] ll;			\
+	goto l1_%=;					\
+l0_%=:	r2 = %[map_prog2_socket] ll;			\
+l1_%=:	r3 = 7;						\
+	call %[bpf_tail_call];				\
+	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog2_socket),
+	  __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark))
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/map_ptr_mixing.c b/tools/testing/selftests/bpf/verifier/map_ptr_mixing.c
deleted file mode 100644
index 1f2b8c4cb26d..000000000000
--- a/tools/testing/selftests/bpf/verifier/map_ptr_mixing.c
+++ /dev/null
@@ -1,100 +0,0 @@
-{
-	"calls: two calls returning different map pointers for lookup (hash, array)",
-	.insns = {
-	/* main prog */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_CALL_REL(11),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
-	BPF_CALL_REL(12),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
-	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	/* subprog 1 */
-	BPF_LD_MAP_FD(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	/* subprog 2 */
-	BPF_LD_MAP_FD(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.fixup_map_hash_48b = { 13 },
-	.fixup_map_array_48b = { 16 },
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"calls: two calls returning different map pointers for lookup (hash, map in map)",
-	.insns = {
-	/* main prog */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_CALL_REL(11),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
-	BPF_CALL_REL(12),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
-	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	/* subprog 1 */
-	BPF_LD_MAP_FD(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	/* subprog 2 */
-	BPF_LD_MAP_FD(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.fixup_map_in_map = { 16 },
-	.fixup_map_array_48b = { 13 },
-	.result = REJECT,
-	.errstr = "only read from bpf_array is supported",
-},
-{
-	"cond: two branches returning different map pointers for lookup (tail, tail)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
-		    offsetof(struct __sk_buff, mark)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 0, 3),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_3, 7),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog1 = { 5 },
-	.fixup_prog2 = { 2 },
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "tail_call abusing map_ptr",
-	.result = ACCEPT,
-	.retval = 42,
-},
-{
-	"cond: two branches returning same map pointers for lookup (tail, tail)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
-		    offsetof(struct __sk_buff, mark)),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 3),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_3, 7),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog2 = { 2, 5 },
-	.result_unpriv = ACCEPT,
-	.result = ACCEPT,
-	.retval = 42,
-},
-- 
2.40.0


* [PATCH bpf-next 13/24] selftests/bpf: verifier/precise converted to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (11 preceding siblings ...)
  2023-04-21 17:42 ` [PATCH bpf-next 12/24] selftests/bpf: verifier/map_ptr_mixing " Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 14/24] selftests/bpf: verifier/prevent_map_lookup " Eduard Zingerman
                   ` (12 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/precise automatically converted to use inline assembly.
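For reference, the conversion maps each BPF_*() macro from the old .insns
array to one line of BPF inline-assembly text, with helpers and map addresses
passed in through the __imm*() macros from bpf_misc.h. A minimal sketch of the
resulting shape (the section and function name below are illustrative only,
not part of the patch):

	/* old form:
	 *   BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
	 *   BPF_MOV64_IMM(BPF_REG_0, 0),
	 *   BPF_EXIT_INSN(),
	 * converted form:
	 */
	SEC("xdp")
	__naked void conversion_sketch(void)
	{
		asm volatile (
			"call %[bpf_get_prandom_u32];"
			"r0 = 0;"
			"exit;"
			:
			: __imm(bpf_get_prandom_u32)
			: __clobber_all);
	}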

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../selftests/bpf/progs/verifier_precise.c    | 269 ++++++++++++++++++
 .../testing/selftests/bpf/verifier/precise.c  | 219 --------------
 3 files changed, 271 insertions(+), 219 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_precise.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/precise.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 261567230dd0..d5ec9054c025 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -40,6 +40,7 @@
 #include "verifier_map_ret_val.skel.h"
 #include "verifier_masking.skel.h"
 #include "verifier_meta_access.skel.h"
+#include "verifier_precise.skel.h"
 #include "verifier_raw_stack.skel.h"
 #include "verifier_raw_tp_writable.skel.h"
 #include "verifier_reg_equal.skel.h"
@@ -125,6 +126,7 @@ void test_verifier_map_ptr_mixing(void)       { RUN(verifier_map_ptr_mixing); }
 void test_verifier_map_ret_val(void)          { RUN(verifier_map_ret_val); }
 void test_verifier_masking(void)              { RUN(verifier_masking); }
 void test_verifier_meta_access(void)          { RUN(verifier_meta_access); }
+void test_verifier_precise(void)              { RUN(verifier_precise); }
 void test_verifier_raw_stack(void)            { RUN(verifier_raw_stack); }
 void test_verifier_raw_tp_writable(void)      { RUN(verifier_raw_tp_writable); }
 void test_verifier_reg_equal(void)            { RUN(verifier_reg_equal); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_precise.c b/tools/testing/selftests/bpf/progs/verifier_precise.c
new file mode 100644
index 000000000000..81d58dc7d2d4
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_precise.c
@@ -0,0 +1,269 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/precise.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "../../../include/linux/filter.h"
+#include "bpf_misc.h"
+
+#define MAX_ENTRIES 11
+
+struct test_val {
+	unsigned int index;
+	int foo[MAX_ENTRIES];
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, int);
+	__type(value, struct test_val);
+} map_array_48b SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_RINGBUF);
+	__uint(max_entries, 4096);
+} map_ringbuf SEC(".maps");
+
+SEC("tracepoint")
+__description("precise: test 1")
+__success
+__msg("27: (85) call bpf_probe_read_kernel#113")
+__msg("last_idx 27 first_idx 21")
+__msg("regs=4 stack=0 before 26")
+__msg("regs=4 stack=0 before 25")
+__msg("regs=4 stack=0 before 24")
+__msg("regs=4 stack=0 before 23")
+__msg("regs=4 stack=0 before 21")
+__msg("parent didn't have regs=4 stack=0 marks")
+__msg("last_idx 20 first_idx 11")
+__msg("regs=4 stack=0 before 20")
+__msg("regs=200 stack=0 before 19")
+__msg("regs=300 stack=0 before 18")
+__msg("regs=201 stack=0 before 16")
+__msg("regs=201 stack=0 before 15")
+__msg("regs=200 stack=0 before 14")
+__msg("regs=200 stack=0 before 13")
+__msg("regs=200 stack=0 before 12")
+__msg("regs=200 stack=0 before 11")
+__msg("parent already had regs=0 stack=0 marks")
+__log_level(2)
+__naked void precise_test_1(void)
+{
+	asm volatile ("					\
+	r0 = 1;						\
+	r6 = %[map_array_48b] ll;			\
+	r1 = r6;					\
+	r2 = r10;					\
+	r2 += -8;					\
+	r7 = 0;						\
+	*(u64*)(r10 - 8) = r7;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r9 = r0;					\
+	r1 = r6;					\
+	r2 = r10;					\
+	r2 += -8;					\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l1_%=;				\
+	exit;						\
+l1_%=:	r8 = r0;					\
+	r9 -= r8;			/* map_value_ptr -= map_value_ptr */\
+	r2 = r9;					\
+	if r2 < 8 goto l2_%=;				\
+	exit;						\
+l2_%=:	r2 += 1;			/* R2=scalar(umin=1, umax=8) */\
+	r1 = r10;					\
+	r1 += -8;					\
+	r3 = 0;						\
+	call %[bpf_probe_read_kernel];			\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm(bpf_probe_read_kernel),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("precise: test 2")
+__success
+__msg("27: (85) call bpf_probe_read_kernel#113")
+__msg("last_idx 27 first_idx 23")
+__msg("regs=4 stack=0 before 26")
+__msg("regs=4 stack=0 before 25")
+__msg("regs=4 stack=0 before 24")
+__msg("regs=4 stack=0 before 25")
+__msg("parent didn't have regs=4 stack=0 marks")
+__msg("last_idx 21 first_idx 21")
+__msg("regs=4 stack=0 before 21")
+__msg("parent didn't have regs=4 stack=0 marks")
+__msg("last_idx 20 first_idx 18")
+__msg("regs=4 stack=0 before 20")
+__msg("regs=200 stack=0 before 19")
+__msg("regs=300 stack=0 before 18")
+__msg("parent already had regs=0 stack=0 marks")
+__log_level(2) __flag(BPF_F_TEST_STATE_FREQ)
+__naked void precise_test_2(void)
+{
+	asm volatile ("					\
+	r0 = 1;						\
+	r6 = %[map_array_48b] ll;			\
+	r1 = r6;					\
+	r2 = r10;					\
+	r2 += -8;					\
+	r7 = 0;						\
+	*(u64*)(r10 - 8) = r7;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r9 = r0;					\
+	r1 = r6;					\
+	r2 = r10;					\
+	r2 += -8;					\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l1_%=;				\
+	exit;						\
+l1_%=:	r8 = r0;					\
+	r9 -= r8;			/* map_value_ptr -= map_value_ptr */\
+	r2 = r9;					\
+	if r2 < 8 goto l2_%=;				\
+	exit;						\
+l2_%=:	r2 += 1;			/* R2=scalar(umin=1, umax=8) */\
+	r1 = r10;					\
+	r1 += -8;					\
+	r3 = 0;						\
+	call %[bpf_probe_read_kernel];			\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm(bpf_probe_read_kernel),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("precise: cross frame pruning")
+__failure __msg("!read_ok")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void precise_cross_frame_pruning(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r8 = 0;						\
+	if r0 != 0 goto l0_%=;				\
+	r8 = 1;						\
+l0_%=:	call %[bpf_get_prandom_u32];			\
+	r9 = 0;						\
+	if r0 != 0 goto l1_%=;				\
+	r9 = 1;						\
+l1_%=:	r1 = r0;					\
+	call precise_cross_frame_pruning__1;		\
+	if r8 == 1 goto l2_%=;				\
+	r1 = *(u8*)(r2 + 0);				\
+l2_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void precise_cross_frame_pruning__1(void)
+{
+	asm volatile ("					\
+	if r1 == 0 goto l0_%=;				\
+l0_%=:	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("xdp")
+__description("precise: ST insn causing spi > allocated_stack")
+__success
+__msg("5: (2d) if r4 > r0 goto pc+0")
+__msg("last_idx 5 first_idx 5")
+__msg("parent didn't have regs=10 stack=0 marks")
+__msg("last_idx 4 first_idx 2")
+__msg("regs=10 stack=0 before 4")
+__msg("regs=10 stack=0 before 3")
+__msg("regs=0 stack=1 before 2")
+__msg("last_idx 5 first_idx 5")
+__msg("parent didn't have regs=1 stack=0 marks")
+__log_level(2) __retval(-1) __flag(BPF_F_TEST_STATE_FREQ)
+__naked void insn_causing_spi_allocated_stack_1(void)
+{
+	asm volatile ("					\
+	r3 = r10;					\
+	if r3 != 123 goto l0_%=;			\
+l0_%=:	.8byte %[st_mem];				\
+	r4 = *(u64*)(r10 - 8);				\
+	r0 = -1;					\
+	if r4 > r0 goto l1_%=;				\
+l1_%=:	exit;						\
+"	:
+	: __imm_insn(st_mem, BPF_ST_MEM(BPF_DW, BPF_REG_3, -8, 0))
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("precise: STX insn causing spi > allocated_stack")
+__success
+__msg("last_idx 6 first_idx 6")
+__msg("parent didn't have regs=10 stack=0 marks")
+__msg("last_idx 5 first_idx 3")
+__msg("regs=10 stack=0 before 5")
+__msg("regs=10 stack=0 before 4")
+__msg("regs=0 stack=1 before 3")
+__msg("last_idx 6 first_idx 6")
+__msg("parent didn't have regs=1 stack=0 marks")
+__msg("last_idx 5 first_idx 3")
+__msg("regs=1 stack=0 before 5")
+__log_level(2) __retval(-1) __flag(BPF_F_TEST_STATE_FREQ)
+__naked void insn_causing_spi_allocated_stack_2(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r3 = r10;					\
+	if r3 != 123 goto l0_%=;			\
+l0_%=:	*(u64*)(r3 - 8) = r0;				\
+	r4 = *(u64*)(r10 - 8);				\
+	r0 = -1;					\
+	if r4 > r0 goto l1_%=;				\
+l1_%=:	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("precise: mark_chain_precision for ARG_CONST_ALLOC_SIZE_OR_ZERO")
+__failure __msg("invalid access to memory, mem_size=1 off=42 size=8")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void const_alloc_size_or_zero(void)
+{
+	asm volatile ("					\
+	r4 = *(u32*)(r1 + %[xdp_md_ingress_ifindex]);	\
+	r6 = %[map_ringbuf] ll;				\
+	r1 = r6;					\
+	r2 = 1;						\
+	r3 = 0;						\
+	if r4 == 0 goto l0_%=;				\
+	r2 = 0x1000;					\
+l0_%=:	call %[bpf_ringbuf_reserve];			\
+	if r0 != 0 goto l1_%=;				\
+	exit;						\
+l1_%=:	r1 = r0;					\
+	r2 = *(u64*)(r0 + 42);				\
+	call %[bpf_ringbuf_submit];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_ringbuf_reserve),
+	  __imm(bpf_ringbuf_submit),
+	  __imm_addr(map_ringbuf),
+	  __imm_const(xdp_md_ingress_ifindex, offsetof(struct xdp_md, ingress_ifindex))
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/precise.c b/tools/testing/selftests/bpf/verifier/precise.c
deleted file mode 100644
index 6c03a7d805f9..000000000000
--- a/tools/testing/selftests/bpf/verifier/precise.c
+++ /dev/null
@@ -1,219 +0,0 @@
-{
-	"precise: test 1",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_LD_MAP_FD(BPF_REG_6, 0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_FP),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_ST_MEM(BPF_DW, BPF_REG_FP, -8, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-
-	BPF_MOV64_REG(BPF_REG_9, BPF_REG_0),
-
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_FP),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-
-	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
-
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_9, BPF_REG_8), /* map_value_ptr -= map_value_ptr */
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_9),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_2, 8, 1),
-	BPF_EXIT_INSN(),
-
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1), /* R2=scalar(umin=1, umax=8) */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_FP),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-	.fixup_map_array_48b = { 1 },
-	.result = VERBOSE_ACCEPT,
-	.errstr =
-	"26: (85) call bpf_probe_read_kernel#113\
-	last_idx 26 first_idx 20\
-	regs=4 stack=0 before 25\
-	regs=4 stack=0 before 24\
-	regs=4 stack=0 before 23\
-	regs=4 stack=0 before 22\
-	regs=4 stack=0 before 20\
-	parent didn't have regs=4 stack=0 marks\
-	last_idx 19 first_idx 10\
-	regs=4 stack=0 before 19\
-	regs=200 stack=0 before 18\
-	regs=300 stack=0 before 17\
-	regs=201 stack=0 before 15\
-	regs=201 stack=0 before 14\
-	regs=200 stack=0 before 13\
-	regs=200 stack=0 before 12\
-	regs=200 stack=0 before 11\
-	regs=200 stack=0 before 10\
-	parent already had regs=0 stack=0 marks",
-},
-{
-	"precise: test 2",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_LD_MAP_FD(BPF_REG_6, 0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_FP),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_ST_MEM(BPF_DW, BPF_REG_FP, -8, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-
-	BPF_MOV64_REG(BPF_REG_9, BPF_REG_0),
-
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_FP),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-
-	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
-
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_9, BPF_REG_8), /* map_value_ptr -= map_value_ptr */
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_9),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_2, 8, 1),
-	BPF_EXIT_INSN(),
-
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1), /* R2=scalar(umin=1, umax=8) */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_FP),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-	.fixup_map_array_48b = { 1 },
-	.result = VERBOSE_ACCEPT,
-	.flags = BPF_F_TEST_STATE_FREQ,
-	.errstr =
-	"26: (85) call bpf_probe_read_kernel#113\
-	last_idx 26 first_idx 22\
-	regs=4 stack=0 before 25\
-	regs=4 stack=0 before 24\
-	regs=4 stack=0 before 23\
-	regs=4 stack=0 before 22\
-	parent didn't have regs=4 stack=0 marks\
-	last_idx 20 first_idx 20\
-	regs=4 stack=0 before 20\
-	parent didn't have regs=4 stack=0 marks\
-	last_idx 19 first_idx 17\
-	regs=4 stack=0 before 19\
-	regs=200 stack=0 before 18\
-	regs=300 stack=0 before 17\
-	parent already had regs=0 stack=0 marks",
-},
-{
-	"precise: cross frame pruning",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_IMM(BPF_REG_8, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_MOV64_IMM(BPF_REG_8, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_IMM(BPF_REG_9, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_MOV64_IMM(BPF_REG_9, 1),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_8, 1, 1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_XDP,
-	.flags = BPF_F_TEST_STATE_FREQ,
-	.errstr = "!read_ok",
-	.result = REJECT,
-},
-{
-	"precise: ST insn causing spi > allocated_stack",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_3, 123, 0),
-	BPF_ST_MEM(BPF_DW, BPF_REG_3, -8, 0),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
-	BPF_MOV64_IMM(BPF_REG_0, -1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_XDP,
-	.flags = BPF_F_TEST_STATE_FREQ,
-	.errstr = "5: (2d) if r4 > r0 goto pc+0\
-	last_idx 5 first_idx 5\
-	parent didn't have regs=10 stack=0 marks\
-	last_idx 4 first_idx 2\
-	regs=10 stack=0 before 4\
-	regs=10 stack=0 before 3\
-	regs=0 stack=1 before 2\
-	last_idx 5 first_idx 5\
-	parent didn't have regs=1 stack=0 marks",
-	.result = VERBOSE_ACCEPT,
-	.retval = -1,
-},
-{
-	"precise: STX insn causing spi > allocated_stack",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_3, 123, 0),
-	BPF_STX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, -8),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
-	BPF_MOV64_IMM(BPF_REG_0, -1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_XDP,
-	.flags = BPF_F_TEST_STATE_FREQ,
-	.errstr = "last_idx 6 first_idx 6\
-	parent didn't have regs=10 stack=0 marks\
-	last_idx 5 first_idx 3\
-	regs=10 stack=0 before 5\
-	regs=10 stack=0 before 4\
-	regs=0 stack=1 before 3\
-	last_idx 6 first_idx 6\
-	parent didn't have regs=1 stack=0 marks\
-	last_idx 5 first_idx 3\
-	regs=1 stack=0 before 5",
-	.result = VERBOSE_ACCEPT,
-	.retval = -1,
-},
-{
-	"precise: mark_chain_precision for ARG_CONST_ALLOC_SIZE_OR_ZERO",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1, offsetof(struct xdp_md, ingress_ifindex)),
-	BPF_LD_MAP_FD(BPF_REG_6, 0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_MOV64_IMM(BPF_REG_2, 1),
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 0, 1),
-	BPF_MOV64_IMM(BPF_REG_2, 0x1000),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 42),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_ringbuf = { 1 },
-	.prog_type = BPF_PROG_TYPE_XDP,
-	.flags = BPF_F_TEST_STATE_FREQ,
-	.errstr = "invalid access to memory, mem_size=1 off=42 size=8",
-	.result = REJECT,
-},
-- 
2.40.0


* [PATCH bpf-next 14/24] selftests/bpf: verifier/prevent_map_lookup converted to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (12 preceding siblings ...)
  2023-04-21 17:42 ` [PATCH bpf-next 13/24] selftests/bpf: verifier/precise " Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 15/24] selftests/bpf: verifier/ref_tracking " Eduard Zingerman
                   ` (11 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/prevent_map_lookup automatically converted to use inline assembly.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |  2 +
 .../bpf/progs/verifier_prevent_map_lookup.c   | 65 +++++++++++++++++++
 .../bpf/verifier/prevent_map_lookup.c         | 29 ---------
 3 files changed, 67 insertions(+), 29 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_prevent_map_lookup.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/prevent_map_lookup.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index d5ec9054c025..7627893dd849 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -41,6 +41,7 @@
 #include "verifier_masking.skel.h"
 #include "verifier_meta_access.skel.h"
 #include "verifier_precise.skel.h"
+#include "verifier_prevent_map_lookup.skel.h"
 #include "verifier_raw_stack.skel.h"
 #include "verifier_raw_tp_writable.skel.h"
 #include "verifier_reg_equal.skel.h"
@@ -127,6 +128,7 @@ void test_verifier_map_ret_val(void)          { RUN(verifier_map_ret_val); }
 void test_verifier_masking(void)              { RUN(verifier_masking); }
 void test_verifier_meta_access(void)          { RUN(verifier_meta_access); }
 void test_verifier_precise(void)              { RUN(verifier_precise); }
+void test_verifier_prevent_map_lookup(void)   { RUN(verifier_prevent_map_lookup); }
 void test_verifier_raw_stack(void)            { RUN(verifier_raw_stack); }
 void test_verifier_raw_tp_writable(void)      { RUN(verifier_raw_tp_writable); }
 void test_verifier_reg_equal(void)            { RUN(verifier_reg_equal); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_prevent_map_lookup.c b/tools/testing/selftests/bpf/progs/verifier_prevent_map_lookup.c
new file mode 100644
index 000000000000..e85f5b0d60d7
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_prevent_map_lookup.c
@@ -0,0 +1,65 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/prevent_map_lookup.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+struct {
+	__uint(type, BPF_MAP_TYPE_STACK_TRACE);
+	__uint(max_entries, 1);
+	__type(key, __u32);
+	__type(value, __u64);
+} map_stacktrace SEC(".maps");
+
+void dummy_prog_42_socket(void);
+void dummy_prog_24_socket(void);
+void dummy_prog_loop2_socket(void);
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+	__uint(max_entries, 8);
+	__uint(key_size, sizeof(int));
+	__array(values, void (void));
+} map_prog2_socket SEC(".maps");
+
+SEC("perf_event")
+__description("prevent map lookup in stack trace")
+__failure __msg("cannot pass map_type 7 into func bpf_map_lookup_elem")
+__naked void map_lookup_in_stack_trace(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_stacktrace] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_stacktrace)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("prevent map lookup in prog array")
+__failure __msg("cannot pass map_type 3 into func bpf_map_lookup_elem")
+__failure_unpriv
+__naked void map_lookup_in_prog_array(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_prog2_socket] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_prog2_socket)
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/prevent_map_lookup.c b/tools/testing/selftests/bpf/verifier/prevent_map_lookup.c
deleted file mode 100644
index fc4e301260f6..000000000000
--- a/tools/testing/selftests/bpf/verifier/prevent_map_lookup.c
+++ /dev/null
@@ -1,29 +0,0 @@
-{
-	"prevent map lookup in stack trace",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_stacktrace = { 3 },
-	.result = REJECT,
-	.errstr = "cannot pass map_type 7 into func bpf_map_lookup_elem",
-	.prog_type = BPF_PROG_TYPE_PERF_EVENT,
-},
-{
-	"prevent map lookup in prog array",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog2 = { 3 },
-	.result = REJECT,
-	.errstr = "cannot pass map_type 3 into func bpf_map_lookup_elem",
-},
-- 
2.40.0


* [PATCH bpf-next 15/24] selftests/bpf: verifier/ref_tracking converted to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (13 preceding siblings ...)
  2023-04-21 17:42 ` [PATCH bpf-next 14/24] selftests/bpf: verifier/prevent_map_lookup " Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 16/24] selftests/bpf: verifier/regalloc " Eduard Zingerman
                   ` (10 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/ref_tracking automatically converted to use inline assembly.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |    2 +
 .../bpf/progs/verifier_ref_tracking.c         | 1495 +++++++++++++++++
 .../selftests/bpf/verifier/ref_tracking.c     | 1082 ------------
 3 files changed, 1497 insertions(+), 1082 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_ref_tracking.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/ref_tracking.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 7627893dd849..5941ef59ed76 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -45,6 +45,7 @@
 #include "verifier_raw_stack.skel.h"
 #include "verifier_raw_tp_writable.skel.h"
 #include "verifier_reg_equal.skel.h"
+#include "verifier_ref_tracking.skel.h"
 #include "verifier_ringbuf.skel.h"
 #include "verifier_spill_fill.skel.h"
 #include "verifier_stack_ptr.skel.h"
@@ -132,6 +133,7 @@ void test_verifier_prevent_map_lookup(void)   { RUN(verifier_prevent_map_lookup)
 void test_verifier_raw_stack(void)            { RUN(verifier_raw_stack); }
 void test_verifier_raw_tp_writable(void)      { RUN(verifier_raw_tp_writable); }
 void test_verifier_reg_equal(void)            { RUN(verifier_reg_equal); }
+void test_verifier_ref_tracking(void)         { RUN(verifier_ref_tracking); }
 void test_verifier_ringbuf(void)              { RUN(verifier_ringbuf); }
 void test_verifier_spill_fill(void)           { RUN(verifier_spill_fill); }
 void test_verifier_stack_ptr(void)            { RUN(verifier_stack_ptr); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c b/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c
new file mode 100644
index 000000000000..c4c6da21265e
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c
@@ -0,0 +1,1495 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/ref_tracking.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "../../../include/linux/filter.h"
+#include "bpf_misc.h"
+
+#define BPF_SK_LOOKUP(func) \
+	/* struct bpf_sock_tuple tuple = {} */ \
+	"r2 = 0;"			\
+	"*(u32*)(r10 - 8) = r2;"	\
+	"*(u64*)(r10 - 16) = r2;"	\
+	"*(u64*)(r10 - 24) = r2;"	\
+	"*(u64*)(r10 - 32) = r2;"	\
+	"*(u64*)(r10 - 40) = r2;"	\
+	"*(u64*)(r10 - 48) = r2;"	\
+	/* sk = func(ctx, &tuple, sizeof tuple, 0, 0) */ \
+	"r2 = r10;"			\
+	"r2 += -48;"			\
+	"r3 = %[sizeof_bpf_sock_tuple];"\
+	"r4 = 0;"			\
+	"r5 = 0;"			\
+	"call %[" #func "];"
+
+struct bpf_key {} __attribute__((preserve_access_index));
+
+extern void bpf_key_put(struct bpf_key *key) __ksym;
+extern struct bpf_key *bpf_lookup_system_key(__u64 id) __ksym;
+extern struct bpf_key *bpf_lookup_user_key(__u32 serial, __u64 flags) __ksym;
+
+/* BTF FUNC records are not generated for kfuncs referenced
+ * from inline assembly. These records are necessary for
+ * libbpf to link the program. The function below is a hack
+ * to ensure that BTF FUNC records are generated.
+ */
+void __kfunc_btf_root(void)
+{
+	bpf_key_put(0);
+	bpf_lookup_system_key(0);
+	bpf_lookup_user_key(0, 0);
+}
+
+#define MAX_ENTRIES 11
+
+struct test_val {
+	unsigned int index;
+	int foo[MAX_ENTRIES];
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, int);
+	__type(value, struct test_val);
+} map_array_48b SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_RINGBUF);
+	__uint(max_entries, 4096);
+} map_ringbuf SEC(".maps");
+
+void dummy_prog_42_tc(void);
+void dummy_prog_24_tc(void);
+void dummy_prog_loop1_tc(void);
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+	__uint(max_entries, 4);
+	__uint(key_size, sizeof(int));
+	__array(values, void (void));
+} map_prog1_tc SEC(".maps") = {
+	.values = {
+		[0] = (void *)&dummy_prog_42_tc,
+		[1] = (void *)&dummy_prog_loop1_tc,
+		[2] = (void *)&dummy_prog_24_tc,
+	},
+};
+
+SEC("tc")
+__auxiliary
+__naked void dummy_prog_42_tc(void)
+{
+	asm volatile ("r0 = 42; exit;");
+}
+
+SEC("tc")
+__auxiliary
+__naked void dummy_prog_24_tc(void)
+{
+	asm volatile ("r0 = 24; exit;");
+}
+
+SEC("tc")
+__auxiliary
+__naked void dummy_prog_loop1_tc(void)
+{
+	asm volatile ("			\
+	r3 = 1;				\
+	r2 = %[map_prog1_tc] ll;	\
+	call %[bpf_tail_call];		\
+	r0 = 41;			\
+	exit;				\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_tc)
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: leak potential reference")
+__failure __msg("Unreleased reference")
+__naked void reference_tracking_leak_potential_reference(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r6 = r0;		/* leak reference */	\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: leak potential reference to sock_common")
+__failure __msg("Unreleased reference")
+__naked void potential_reference_to_sock_common_1(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_skc_lookup_tcp)
+"	r6 = r0;		/* leak reference */	\
+	exit;						\
+"	:
+	: __imm(bpf_skc_lookup_tcp),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: leak potential reference on stack")
+__failure __msg("Unreleased reference")
+__naked void leak_potential_reference_on_stack(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r4 = r10;					\
+	r4 += -8;					\
+	*(u64*)(r4 + 0) = r0;				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: leak potential reference on stack 2")
+__failure __msg("Unreleased reference")
+__naked void potential_reference_on_stack_2(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r4 = r10;					\
+	r4 += -8;					\
+	*(u64*)(r4 + 0) = r0;				\
+	r0 = 0;						\
+	r1 = 0;						\
+	*(u64*)(r4 + 0) = r1;				\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: zero potential reference")
+__failure __msg("Unreleased reference")
+__naked void reference_tracking_zero_potential_reference(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r0 = 0;			/* leak reference */	\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: zero potential reference to sock_common")
+__failure __msg("Unreleased reference")
+__naked void potential_reference_to_sock_common_2(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_skc_lookup_tcp)
+"	r0 = 0;			/* leak reference */	\
+	exit;						\
+"	:
+	: __imm(bpf_skc_lookup_tcp),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: copy and zero potential references")
+__failure __msg("Unreleased reference")
+__naked void copy_and_zero_potential_references(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r7 = r0;					\
+	r0 = 0;						\
+	r7 = 0;			/* leak reference */	\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("lsm.s/bpf")
+__description("reference tracking: acquire/release user key reference")
+__success
+__naked void acquire_release_user_key_reference(void)
+{
+	asm volatile ("					\
+	r1 = -3;					\
+	r2 = 0;						\
+	call %[bpf_lookup_user_key];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = r0;					\
+	call %[bpf_key_put];				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_key_put),
+	  __imm(bpf_lookup_user_key)
+	: __clobber_all);
+}
+
+SEC("lsm.s/bpf")
+__description("reference tracking: acquire/release system key reference")
+__success
+__naked void acquire_release_system_key_reference(void)
+{
+	asm volatile ("					\
+	r1 = 1;						\
+	call %[bpf_lookup_system_key];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = r0;					\
+	call %[bpf_key_put];				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_key_put),
+	  __imm(bpf_lookup_system_key)
+	: __clobber_all);
+}
+
+SEC("lsm.s/bpf")
+__description("reference tracking: release user key reference without check")
+__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__naked void user_key_reference_without_check(void)
+{
+	asm volatile ("					\
+	r1 = -3;					\
+	r2 = 0;						\
+	call %[bpf_lookup_user_key];			\
+	r1 = r0;					\
+	call %[bpf_key_put];				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_key_put),
+	  __imm(bpf_lookup_user_key)
+	: __clobber_all);
+}
+
+SEC("lsm.s/bpf")
+__description("reference tracking: release system key reference without check")
+__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__naked void system_key_reference_without_check(void)
+{
+	asm volatile ("					\
+	r1 = 1;						\
+	call %[bpf_lookup_system_key];			\
+	r1 = r0;					\
+	call %[bpf_key_put];				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_key_put),
+	  __imm(bpf_lookup_system_key)
+	: __clobber_all);
+}
+
+SEC("lsm.s/bpf")
+__description("reference tracking: release with NULL key pointer")
+__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__naked void release_with_null_key_pointer(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	call %[bpf_key_put];				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_key_put)
+	: __clobber_all);
+}
+
+SEC("lsm.s/bpf")
+__description("reference tracking: leak potential reference to user key")
+__failure __msg("Unreleased reference")
+__naked void potential_reference_to_user_key(void)
+{
+	asm volatile ("					\
+	r1 = -3;					\
+	r2 = 0;						\
+	call %[bpf_lookup_user_key];			\
+	exit;						\
+"	:
+	: __imm(bpf_lookup_user_key)
+	: __clobber_all);
+}
+
+SEC("lsm.s/bpf")
+__description("reference tracking: leak potential reference to system key")
+__failure __msg("Unreleased reference")
+__naked void potential_reference_to_system_key(void)
+{
+	asm volatile ("					\
+	r1 = 1;						\
+	call %[bpf_lookup_system_key];			\
+	exit;						\
+"	:
+	: __imm(bpf_lookup_system_key)
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: release reference without check")
+__failure __msg("type=sock_or_null expected=sock")
+__naked void tracking_release_reference_without_check(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	/* reference in r0 may be NULL */		\
+	r1 = r0;					\
+	r2 = 0;						\
+	call %[bpf_sk_release];				\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: release reference to sock_common without check")
+__failure __msg("type=sock_common_or_null expected=sock")
+__naked void to_sock_common_without_check(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_skc_lookup_tcp)
+"	/* reference in r0 may be NULL */		\
+	r1 = r0;					\
+	r2 = 0;						\
+	call %[bpf_sk_release];				\
+	exit;						\
+"	:
+	: __imm(bpf_sk_release),
+	  __imm(bpf_skc_lookup_tcp),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: release reference")
+__success __retval(0)
+__naked void reference_tracking_release_reference(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r1 = r0;					\
+	if r0 == 0 goto l0_%=;				\
+	call %[bpf_sk_release];				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: release reference to sock_common")
+__success __retval(0)
+__naked void release_reference_to_sock_common(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_skc_lookup_tcp)
+"	r1 = r0;					\
+	if r0 == 0 goto l0_%=;				\
+	call %[bpf_sk_release];				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_release),
+	  __imm(bpf_skc_lookup_tcp),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: release reference 2")
+__success __retval(0)
+__naked void reference_tracking_release_reference_2(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r1 = r0;					\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	call %[bpf_sk_release];				\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: release reference twice")
+__failure __msg("type=scalar expected=sock")
+__naked void reference_tracking_release_reference_twice(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r1 = r0;					\
+	r6 = r0;					\
+	if r0 == 0 goto l0_%=;				\
+	call %[bpf_sk_release];				\
+l0_%=:	r1 = r6;					\
+	call %[bpf_sk_release];				\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: release reference twice inside branch")
+__failure __msg("type=scalar expected=sock")
+__naked void release_reference_twice_inside_branch(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r1 = r0;					\
+	r6 = r0;					\
+	if r0 == 0 goto l0_%=;		/* goto end */	\
+	call %[bpf_sk_release];				\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: alloc, check, free in one subbranch")
+__failure __msg("Unreleased reference")
+__flag(BPF_F_ANY_ALIGNMENT)
+__naked void check_free_in_one_subbranch(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 16;					\
+	/* if (offsetof(skb, mark) > data_len) exit; */	\
+	if r0 <= r3 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = *(u32*)(r2 + %[__sk_buff_mark]);		\
+"	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	if r6 == 0 goto l1_%=;		/* mark == 0? */\
+	/* Leak reference in R0 */			\
+	exit;						\
+l1_%=:	if r0 == 0 goto l2_%=;		/* sk NULL? */	\
+	r1 = r0;					\
+	call %[bpf_sk_release];				\
+l2_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)),
+	  __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: alloc, check, free in both subbranches")
+__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT)
+__naked void check_free_in_both_subbranches(void)
+{
+	asm volatile ("					\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 16;					\
+	/* if (offsetof(skb, mark) > data_len) exit; */	\
+	if r0 <= r3 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = *(u32*)(r2 + %[__sk_buff_mark]);		\
+"	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	if r6 == 0 goto l1_%=;		/* mark == 0? */\
+	if r0 == 0 goto l2_%=;		/* sk NULL? */	\
+	r1 = r0;					\
+	call %[bpf_sk_release];				\
+l2_%=:	exit;						\
+l1_%=:	if r0 == 0 goto l3_%=;		/* sk NULL? */	\
+	r1 = r0;					\
+	call %[bpf_sk_release];				\
+l3_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)),
+	  __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking in call: free reference in subprog")
+__success __retval(0)
+__naked void call_free_reference_in_subprog(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r1 = r0;	/* unchecked reference */	\
+	call call_free_reference_in_subprog__1;		\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void call_free_reference_in_subprog__1(void)
+{
+	asm volatile ("					\
+	/* subprog 1 */					\
+	r2 = r1;					\
+	if r2 == 0 goto l0_%=;				\
+	call %[bpf_sk_release];				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_release)
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking in call: free reference in subprog and outside")
+__failure __msg("type=scalar expected=sock")
+__naked void reference_in_subprog_and_outside(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r1 = r0;	/* unchecked reference */	\
+	r6 = r0;					\
+	call reference_in_subprog_and_outside__1;	\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void reference_in_subprog_and_outside__1(void)
+{
+	asm volatile ("					\
+	/* subprog 1 */					\
+	r2 = r1;					\
+	if r2 == 0 goto l0_%=;				\
+	call %[bpf_sk_release];				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_release)
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking in call: alloc & leak reference in subprog")
+__failure __msg("Unreleased reference")
+__naked void alloc_leak_reference_in_subprog(void)
+{
+	asm volatile ("					\
+	r4 = r10;					\
+	r4 += -8;					\
+	call alloc_leak_reference_in_subprog__1;	\
+	r1 = r0;					\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void alloc_leak_reference_in_subprog__1(void)
+{
+	asm volatile ("					\
+	/* subprog 1 */					\
+	r6 = r4;					\
+"	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	/* spill unchecked sk_ptr into stack of caller */\
+	*(u64*)(r6 + 0) = r0;				\
+	r1 = r0;					\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking in call: alloc in subprog, release outside")
+__success __retval(POINTER_VALUE)
+__naked void alloc_in_subprog_release_outside(void)
+{
+	asm volatile ("					\
+	r4 = r10;					\
+	call alloc_in_subprog_release_outside__1;	\
+	r1 = r0;					\
+	if r0 == 0 goto l0_%=;				\
+	call %[bpf_sk_release];				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_release)
+	: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void alloc_in_subprog_release_outside__1(void)
+{
+	asm volatile ("					\
+	/* subprog 1 */					\
+"	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	exit;				/* return sk */	\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking in call: sk_ptr leak into caller stack")
+__failure __msg("Unreleased reference")
+__naked void ptr_leak_into_caller_stack(void)
+{
+	asm volatile ("					\
+	r4 = r10;					\
+	r4 += -8;					\
+	call ptr_leak_into_caller_stack__1;		\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void ptr_leak_into_caller_stack__1(void)
+{
+	asm volatile ("					\
+	/* subprog 1 */					\
+	r5 = r10;					\
+	r5 += -8;					\
+	*(u64*)(r5 + 0) = r4;				\
+	call ptr_leak_into_caller_stack__2;		\
+	/* spill unchecked sk_ptr into stack of caller */\
+	r5 = r10;					\
+	r5 += -8;					\
+	r4 = *(u64*)(r5 + 0);				\
+	*(u64*)(r4 + 0) = r0;				\
+	exit;						\
+"	::: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void ptr_leak_into_caller_stack__2(void)
+{
+	asm volatile ("					\
+	/* subprog 2 */					\
+"	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking in call: sk_ptr spill into caller stack")
+__success __retval(0)
+__naked void ptr_spill_into_caller_stack(void)
+{
+	asm volatile ("					\
+	r4 = r10;					\
+	r4 += -8;					\
+	call ptr_spill_into_caller_stack__1;		\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void ptr_spill_into_caller_stack__1(void)
+{
+	asm volatile ("					\
+	/* subprog 1 */					\
+	r5 = r10;					\
+	r5 += -8;					\
+	*(u64*)(r5 + 0) = r4;				\
+	call ptr_spill_into_caller_stack__2;		\
+	/* spill unchecked sk_ptr into stack of caller */\
+	r5 = r10;					\
+	r5 += -8;					\
+	r4 = *(u64*)(r5 + 0);				\
+	*(u64*)(r4 + 0) = r0;				\
+	if r0 == 0 goto l0_%=;				\
+	/* now the sk_ptr is verified, free the reference */\
+	r1 = *(u64*)(r4 + 0);				\
+	call %[bpf_sk_release];				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_release)
+	: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void ptr_spill_into_caller_stack__2(void)
+{
+	asm volatile ("					\
+	/* subprog 2 */					\
+"	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: allow LD_ABS")
+__success __retval(0)
+__naked void reference_tracking_allow_ld_abs(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+"	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r1 = r0;					\
+	if r0 == 0 goto l0_%=;				\
+	call %[bpf_sk_release];				\
+l0_%=:	r0 = *(u8*)skb[0];				\
+	r0 = *(u16*)skb[0];				\
+	r0 = *(u32*)skb[0];				\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: forbid LD_ABS while holding reference")
+__failure __msg("BPF_LD_[ABS|IND] cannot be mixed with socket references")
+__naked void ld_abs_while_holding_reference(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+"	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r0 = *(u8*)skb[0];				\
+	r0 = *(u16*)skb[0];				\
+	r0 = *(u32*)skb[0];				\
+	r1 = r0;					\
+	if r0 == 0 goto l0_%=;				\
+	call %[bpf_sk_release];				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: allow LD_IND")
+__success __retval(1)
+__naked void reference_tracking_allow_ld_ind(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+"	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r1 = r0;					\
+	if r0 == 0 goto l0_%=;				\
+	call %[bpf_sk_release];				\
+l0_%=:	r7 = 1;						\
+	.8byte %[ld_ind];				\
+	r0 = r7;					\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)),
+	  __imm_insn(ld_ind, BPF_LD_IND(BPF_W, BPF_REG_7, -0x200000))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: forbid LD_IND while holding reference")
+__failure __msg("BPF_LD_[ABS|IND] cannot be mixed with socket references")
+__naked void ld_ind_while_holding_reference(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+"	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r4 = r0;					\
+	r7 = 1;						\
+	.8byte %[ld_ind];				\
+	r0 = r7;					\
+	r1 = r4;					\
+	if r1 == 0 goto l0_%=;				\
+	call %[bpf_sk_release];				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)),
+	  __imm_insn(ld_ind, BPF_LD_IND(BPF_W, BPF_REG_7, -0x200000))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: check reference or tail call")
+__success __retval(0)
+__naked void check_reference_or_tail_call(void)
+{
+	asm volatile ("					\
+	r7 = r1;					\
+"	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	/* if (sk) bpf_sk_release() */			\
+	r1 = r0;					\
+	if r1 != 0 goto l0_%=;				\
+	/* bpf_tail_call() */				\
+	r3 = 3;						\
+	r2 = %[map_prog1_tc] ll;			\
+	r1 = r7;					\
+	call %[bpf_tail_call];				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_release];				\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_tc),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: release reference then tail call")
+__success __retval(0)
+__naked void release_reference_then_tail_call(void)
+{
+	asm volatile ("					\
+	r7 = r1;					\
+"	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	/* if (sk) bpf_sk_release() */			\
+	r1 = r0;					\
+	if r1 == 0 goto l0_%=;				\
+	call %[bpf_sk_release];				\
+l0_%=:	/* bpf_tail_call() */				\
+	r3 = 3;						\
+	r2 = %[map_prog1_tc] ll;			\
+	r1 = r7;					\
+	call %[bpf_tail_call];				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_tc),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: leak possible reference over tail call")
+__failure __msg("tail_call would lead to reference leak")
+__naked void possible_reference_over_tail_call(void)
+{
+	asm volatile ("					\
+	r7 = r1;					\
+	/* Look up socket and store in REG_6 */		\
+"	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	/* bpf_tail_call() */				\
+	r6 = r0;					\
+	r3 = 3;						\
+	r2 = %[map_prog1_tc] ll;			\
+	r1 = r7;					\
+	call %[bpf_tail_call];				\
+	r0 = 0;						\
+	/* if (sk) bpf_sk_release() */			\
+	r1 = r6;					\
+	if r1 == 0 goto l0_%=;				\
+	call %[bpf_sk_release];				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_tc),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: leak checked reference over tail call")
+__failure __msg("tail_call would lead to reference leak")
+__naked void checked_reference_over_tail_call(void)
+{
+	asm volatile ("					\
+	r7 = r1;					\
+	/* Look up socket and store in REG_6 */		\
+"	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r6 = r0;					\
+	/* if (!sk) goto end */				\
+	if r0 == 0 goto l0_%=;				\
+	/* bpf_tail_call() */				\
+	r3 = 0;						\
+	r2 = %[map_prog1_tc] ll;			\
+	r1 = r7;					\
+	call %[bpf_tail_call];				\
+	r0 = 0;						\
+	r1 = r6;					\
+l0_%=:	call %[bpf_sk_release];				\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_tc),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: mangle and release sock_or_null")
+__failure __msg("R1 pointer arithmetic on sock_or_null prohibited")
+__naked void and_release_sock_or_null(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r1 = r0;					\
+	r1 += 5;					\
+	if r0 == 0 goto l0_%=;				\
+	call %[bpf_sk_release];				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: mangle and release sock")
+__failure __msg("R1 pointer arithmetic on sock prohibited")
+__naked void tracking_mangle_and_release_sock(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r1 = r0;					\
+	if r0 == 0 goto l0_%=;				\
+	r1 += 5;					\
+	call %[bpf_sk_release];				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: access member")
+__success __retval(0)
+__naked void reference_tracking_access_member(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r6 = r0;					\
+	if r0 == 0 goto l0_%=;				\
+	r2 = *(u32*)(r0 + 4);				\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: write to member")
+__failure __msg("cannot write into sock")
+__naked void reference_tracking_write_to_member(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r6 = r0;					\
+	if r0 == 0 goto l0_%=;				\
+	r1 = r6;					\
+	r2 = 42 ll;					\
+	*(u32*)(r1 + %[bpf_sock_mark]) = r2;		\
+	r1 = r6;					\
+l0_%=:	call %[bpf_sk_release];				\
+	r0 = 0 ll;					\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(bpf_sock_mark, offsetof(struct bpf_sock, mark)),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: invalid 64-bit access of member")
+__failure __msg("invalid sock access off=0 size=8")
+__naked void _64_bit_access_of_member(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r6 = r0;					\
+	if r0 == 0 goto l0_%=;				\
+	r2 = *(u64*)(r0 + 0);				\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: access after release")
+__failure __msg("!read_ok")
+__naked void reference_tracking_access_after_release(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r1 = r0;					\
+	if r0 == 0 goto l0_%=;				\
+	call %[bpf_sk_release];				\
+	r2 = *(u32*)(r1 + 0);				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: direct access for lookup")
+__success __retval(0)
+__naked void tracking_direct_access_for_lookup(void)
+{
+	asm volatile ("					\
+	/* Check that the packet is at least 64B long */\
+	r2 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r0 = r2;					\
+	r0 += 64;					\
+	if r0 > r3 goto l0_%=;				\
+	/* sk = sk_lookup_tcp(ctx, skb->data, ...) */	\
+	r3 = %[sizeof_bpf_sock_tuple];			\
+	r4 = 0;						\
+	r5 = 0;						\
+	call %[bpf_sk_lookup_tcp];			\
+	r6 = r0;					\
+	if r0 == 0 goto l0_%=;				\
+	r2 = *(u32*)(r0 + 4);				\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: use ptr from bpf_tcp_sock() after release")
+__failure __msg("invalid mem access")
+__flag(BPF_F_ANY_ALIGNMENT)
+__naked void bpf_tcp_sock_after_release(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = r0;					\
+	r1 = r0;					\
+	call %[bpf_tcp_sock];				\
+	if r0 != 0 goto l1_%=;				\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+	exit;						\
+l1_%=:	r7 = r0;					\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+	r0 = *(u32*)(r7 + %[bpf_tcp_sock_snd_cwnd]);	\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm(bpf_tcp_sock),
+	  __imm_const(bpf_tcp_sock_snd_cwnd, offsetof(struct bpf_tcp_sock, snd_cwnd)),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: use ptr from bpf_sk_fullsock() after release")
+__failure __msg("invalid mem access")
+__flag(BPF_F_ANY_ALIGNMENT)
+__naked void bpf_sk_fullsock_after_release(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = r0;					\
+	r1 = r0;					\
+	call %[bpf_sk_fullsock];			\
+	if r0 != 0 goto l1_%=;				\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+	exit;						\
+l1_%=:	r7 = r0;					\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+	r0 = *(u32*)(r7 + %[bpf_sock_type]);		\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type)),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: use ptr from bpf_sk_fullsock(tp) after release")
+__failure __msg("invalid mem access")
+__flag(BPF_F_ANY_ALIGNMENT)
+__naked void sk_fullsock_tp_after_release(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = r0;					\
+	r1 = r0;					\
+	call %[bpf_tcp_sock];				\
+	if r0 != 0 goto l1_%=;				\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+	exit;						\
+l1_%=:	r1 = r0;					\
+	call %[bpf_sk_fullsock];			\
+	r1 = r6;					\
+	r6 = r0;					\
+	call %[bpf_sk_release];				\
+	if r6 != 0 goto l2_%=;				\
+	exit;						\
+l2_%=:	r0 = *(u32*)(r6 + %[bpf_sock_type]);		\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm(bpf_tcp_sock),
+	  __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type)),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: use sk after bpf_sk_release(tp)")
+__failure __msg("invalid mem access")
+__flag(BPF_F_ANY_ALIGNMENT)
+__naked void after_bpf_sk_release_tp(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = r0;					\
+	r1 = r0;					\
+	call %[bpf_tcp_sock];				\
+	if r0 != 0 goto l1_%=;				\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+	exit;						\
+l1_%=:	r1 = r0;					\
+	call %[bpf_sk_release];				\
+	r0 = *(u32*)(r6 + %[bpf_sock_type]);		\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm(bpf_tcp_sock),
+	  __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type)),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: use ptr from bpf_get_listener_sock() after bpf_sk_release(sk)")
+__success __retval(0)
+__naked void after_bpf_sk_release_sk(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = r0;					\
+	r1 = r0;					\
+	call %[bpf_get_listener_sock];			\
+	if r0 != 0 goto l1_%=;				\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+	exit;						\
+l1_%=:	r1 = r6;					\
+	r6 = r0;					\
+	call %[bpf_sk_release];				\
+	r0 = *(u32*)(r6 + %[bpf_sock_src_port]);	\
+	exit;						\
+"	:
+	: __imm(bpf_get_listener_sock),
+	  __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(bpf_sock_src_port, offsetof(struct bpf_sock, src_port)),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: bpf_sk_release(listen_sk)")
+__failure __msg("R1 must be referenced when passed to release function")
+__naked void bpf_sk_release_listen_sk(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = r0;					\
+	r1 = r0;					\
+	call %[bpf_get_listener_sock];			\
+	if r0 != 0 goto l1_%=;				\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+	exit;						\
+l1_%=:	r1 = r0;					\
+	call %[bpf_sk_release];				\
+	r0 = *(u32*)(r6 + %[bpf_sock_type]);		\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+	exit;						\
+"	:
+	: __imm(bpf_get_listener_sock),
+	  __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type)),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+/* !bpf_sk_fullsock(sk) is checked but !bpf_tcp_sock(sk) is not checked */
+SEC("tc")
+__description("reference tracking: tp->snd_cwnd after bpf_sk_fullsock(sk) and bpf_tcp_sock(sk)")
+__failure __msg("invalid mem access")
+__naked void and_bpf_tcp_sock_sk(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = r0;					\
+	r1 = r0;					\
+	call %[bpf_sk_fullsock];			\
+	r7 = r0;					\
+	r1 = r6;					\
+	call %[bpf_tcp_sock];				\
+	r8 = r0;					\
+	if r7 != 0 goto l1_%=;				\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+	exit;						\
+l1_%=:	r0 = *(u32*)(r8 + %[bpf_tcp_sock_snd_cwnd]);	\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm(bpf_tcp_sock),
+	  __imm_const(bpf_tcp_sock_snd_cwnd, offsetof(struct bpf_tcp_sock, snd_cwnd)),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: branch tracking valid pointer null comparison")
+__success __retval(0)
+__naked void tracking_valid_pointer_null_comparison(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r6 = r0;					\
+	r3 = 1;						\
+	if r6 != 0 goto l0_%=;				\
+	r3 = 0;						\
+l0_%=:	if r6 == 0 goto l1_%=;				\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+l1_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: branch tracking valid pointer value comparison")
+__failure __msg("Unreleased reference")
+__naked void tracking_valid_pointer_value_comparison(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r6 = r0;					\
+	r3 = 1;						\
+	if r6 == 0 goto l0_%=;				\
+	r3 = 0;						\
+	if r6 == 1234 goto l0_%=;			\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: bpf_sk_release(btf_tcp_sock)")
+__success
+__retval(0)
+__naked void sk_release_btf_tcp_sock(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = r0;					\
+	r1 = r0;					\
+	call %[bpf_skc_to_tcp_sock];			\
+	if r0 != 0 goto l1_%=;				\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+	exit;						\
+l1_%=:	r1 = r0;					\
+	call %[bpf_sk_release];				\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm(bpf_skc_to_tcp_sock),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("reference tracking: use ptr from bpf_skc_to_tcp_sock() after release")
+__failure __msg("invalid mem access")
+__naked void to_tcp_sock_after_release(void)
+{
+	asm volatile (
+	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = r0;					\
+	r1 = r0;					\
+	call %[bpf_skc_to_tcp_sock];			\
+	if r0 != 0 goto l1_%=;				\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+	exit;						\
+l1_%=:	r7 = r0;					\
+	r1 = r6;					\
+	call %[bpf_sk_release];				\
+	r0 = *(u8*)(r7 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm(bpf_skc_to_tcp_sock),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("reference tracking: try to leak released ptr reg")
+__success __failure_unpriv __msg_unpriv("R8 !read_ok")
+__retval(0)
+__naked void to_leak_released_ptr_reg(void)
+{
+	asm volatile ("					\
+	r0 = 0;						\
+	*(u32*)(r10 - 4) = r0;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r9 = r0;					\
+	r0 = 0;						\
+	r1 = %[map_ringbuf] ll;				\
+	r2 = 8;						\
+	r3 = 0;						\
+	call %[bpf_ringbuf_reserve];			\
+	if r0 != 0 goto l1_%=;				\
+	exit;						\
+l1_%=:	r8 = r0;					\
+	r1 = r8;					\
+	r2 = 0;						\
+	call %[bpf_ringbuf_discard];			\
+	r0 = 0;						\
+	*(u64*)(r9 + 0) = r8;				\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm(bpf_ringbuf_discard),
+	  __imm(bpf_ringbuf_reserve),
+	  __imm_addr(map_array_48b),
+	  __imm_addr(map_ringbuf)
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/ref_tracking.c b/tools/testing/selftests/bpf/verifier/ref_tracking.c
deleted file mode 100644
index 5a2e154dd1e0..000000000000
--- a/tools/testing/selftests/bpf/verifier/ref_tracking.c
+++ /dev/null
@@ -1,1082 +0,0 @@
-{
-	"reference tracking: leak potential reference",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), /* leak reference */
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "Unreleased reference",
-	.result = REJECT,
-},
-{
-	"reference tracking: leak potential reference to sock_common",
-	.insns = {
-	BPF_SK_LOOKUP(skc_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), /* leak reference */
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "Unreleased reference",
-	.result = REJECT,
-},
-{
-	"reference tracking: leak potential reference on stack",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
-	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "Unreleased reference",
-	.result = REJECT,
-},
-{
-	"reference tracking: leak potential reference on stack 2",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
-	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "Unreleased reference",
-	.result = REJECT,
-},
-{
-	"reference tracking: zero potential reference",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_IMM(BPF_REG_0, 0), /* leak reference */
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "Unreleased reference",
-	.result = REJECT,
-},
-{
-	"reference tracking: zero potential reference to sock_common",
-	.insns = {
-	BPF_SK_LOOKUP(skc_lookup_tcp),
-	BPF_MOV64_IMM(BPF_REG_0, 0), /* leak reference */
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "Unreleased reference",
-	.result = REJECT,
-},
-{
-	"reference tracking: copy and zero potential references",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_7, 0), /* leak reference */
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "Unreleased reference",
-	.result = REJECT,
-},
-{
-	"reference tracking: acquire/release user key reference",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_1, -3),
-	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_LSM,
-	.kfunc = "bpf",
-	.expected_attach_type = BPF_LSM_MAC,
-	.flags = BPF_F_SLEEPABLE,
-	.fixup_kfunc_btf_id = {
-		{ "bpf_lookup_user_key", 2 },
-		{ "bpf_key_put", 5 },
-	},
-	.result = ACCEPT,
-},
-{
-	"reference tracking: acquire/release system key reference",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_1, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_LSM,
-	.kfunc = "bpf",
-	.expected_attach_type = BPF_LSM_MAC,
-	.flags = BPF_F_SLEEPABLE,
-	.fixup_kfunc_btf_id = {
-		{ "bpf_lookup_system_key", 1 },
-		{ "bpf_key_put", 4 },
-	},
-	.result = ACCEPT,
-},
-{
-	"reference tracking: release user key reference without check",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_1, -3),
-	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_LSM,
-	.kfunc = "bpf",
-	.expected_attach_type = BPF_LSM_MAC,
-	.flags = BPF_F_SLEEPABLE,
-	.errstr = "Possibly NULL pointer passed to trusted arg0",
-	.fixup_kfunc_btf_id = {
-		{ "bpf_lookup_user_key", 2 },
-		{ "bpf_key_put", 4 },
-	},
-	.result = REJECT,
-},
-{
-	"reference tracking: release system key reference without check",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_1, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_LSM,
-	.kfunc = "bpf",
-	.expected_attach_type = BPF_LSM_MAC,
-	.flags = BPF_F_SLEEPABLE,
-	.errstr = "Possibly NULL pointer passed to trusted arg0",
-	.fixup_kfunc_btf_id = {
-		{ "bpf_lookup_system_key", 1 },
-		{ "bpf_key_put", 3 },
-	},
-	.result = REJECT,
-},
-{
-	"reference tracking: release with NULL key pointer",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_LSM,
-	.kfunc = "bpf",
-	.expected_attach_type = BPF_LSM_MAC,
-	.flags = BPF_F_SLEEPABLE,
-	.errstr = "Possibly NULL pointer passed to trusted arg0",
-	.fixup_kfunc_btf_id = {
-		{ "bpf_key_put", 1 },
-	},
-	.result = REJECT,
-},
-{
-	"reference tracking: leak potential reference to user key",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_1, -3),
-	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_LSM,
-	.kfunc = "bpf",
-	.expected_attach_type = BPF_LSM_MAC,
-	.flags = BPF_F_SLEEPABLE,
-	.errstr = "Unreleased reference",
-	.fixup_kfunc_btf_id = {
-		{ "bpf_lookup_user_key", 2 },
-	},
-	.result = REJECT,
-},
-{
-	"reference tracking: leak potential reference to system key",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_1, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_LSM,
-	.kfunc = "bpf",
-	.expected_attach_type = BPF_LSM_MAC,
-	.flags = BPF_F_SLEEPABLE,
-	.errstr = "Unreleased reference",
-	.fixup_kfunc_btf_id = {
-		{ "bpf_lookup_system_key", 1 },
-	},
-	.result = REJECT,
-},
-{
-	"reference tracking: release reference without check",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	/* reference in r0 may be NULL */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "type=sock_or_null expected=sock",
-	.result = REJECT,
-},
-{
-	"reference tracking: release reference to sock_common without check",
-	.insns = {
-	BPF_SK_LOOKUP(skc_lookup_tcp),
-	/* reference in r0 may be NULL */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "type=sock_common_or_null expected=sock",
-	.result = REJECT,
-},
-{
-	"reference tracking: release reference",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-},
-{
-	"reference tracking: release reference to sock_common",
-	.insns = {
-	BPF_SK_LOOKUP(skc_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-},
-{
-	"reference tracking: release reference 2",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-},
-{
-	"reference tracking: release reference twice",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "type=scalar expected=sock",
-	.result = REJECT,
-},
-{
-	"reference tracking: release reference twice inside branch",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), /* goto end */
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "type=scalar expected=sock",
-	.result = REJECT,
-},
-{
-	"reference tracking: alloc, check, free in one subbranch",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 16),
-	/* if (offsetof(skb, mark) > data_len) exit; */
-	BPF_JMP_REG(BPF_JLE, BPF_REG_0, BPF_REG_3, 1),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_2,
-		    offsetof(struct __sk_buff, mark)),
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 1), /* mark == 0? */
-	/* Leak reference in R0 */
-	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), /* sk NULL? */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "Unreleased reference",
-	.result = REJECT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"reference tracking: alloc, check, free in both subbranches",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 16),
-	/* if (offsetof(skb, mark) > data_len) exit; */
-	BPF_JMP_REG(BPF_JLE, BPF_REG_0, BPF_REG_3, 1),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_2,
-		    offsetof(struct __sk_buff, mark)),
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 4), /* mark == 0? */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), /* sk NULL? */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), /* sk NULL? */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"reference tracking in call: free reference in subprog",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), /* unchecked reference */
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-
-	/* subprog 1 */
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_2, 0, 1),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-},
-{
-	"reference tracking in call: free reference in subprog and outside",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), /* unchecked reference */
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-
-	/* subprog 1 */
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_2, 0, 1),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "type=scalar expected=sock",
-	.result = REJECT,
-},
-{
-	"reference tracking in call: alloc & leak reference in subprog",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-
-	/* subprog 1 */
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_4),
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	/* spill unchecked sk_ptr into stack of caller */
-	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_0, 0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "Unreleased reference",
-	.result = REJECT,
-},
-{
-	"reference tracking in call: alloc in subprog, release outside",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-
-	/* subprog 1 */
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_EXIT_INSN(), /* return sk */
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.retval = POINTER_VALUE,
-	.result = ACCEPT,
-},
-{
-	"reference tracking in call: sk_ptr leak into caller stack",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-
-	/* subprog 1 */
-	BPF_MOV64_REG(BPF_REG_5, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, -8),
-	BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_4, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 5),
-	/* spill unchecked sk_ptr into stack of caller */
-	BPF_MOV64_REG(BPF_REG_5, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, -8),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_5, 0),
-	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-
-	/* subprog 2 */
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "Unreleased reference",
-	.result = REJECT,
-},
-{
-	"reference tracking in call: sk_ptr spill into caller stack",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-
-	/* subprog 1 */
-	BPF_MOV64_REG(BPF_REG_5, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, -8),
-	BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_4, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 8),
-	/* spill unchecked sk_ptr into stack of caller */
-	BPF_MOV64_REG(BPF_REG_5, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, -8),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_5, 0),
-	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
-	/* now the sk_ptr is verified, free the reference */
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_4, 0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-
-	/* subprog 2 */
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-},
-{
-	"reference tracking: allow LD_ABS",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_LD_ABS(BPF_B, 0),
-	BPF_LD_ABS(BPF_H, 0),
-	BPF_LD_ABS(BPF_W, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-},
-{
-	"reference tracking: forbid LD_ABS while holding reference",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_LD_ABS(BPF_B, 0),
-	BPF_LD_ABS(BPF_H, 0),
-	BPF_LD_ABS(BPF_W, 0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "BPF_LD_[ABS|IND] cannot be mixed with socket references",
-	.result = REJECT,
-},
-{
-	"reference tracking: allow LD_IND",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_MOV64_IMM(BPF_REG_7, 1),
-	BPF_LD_IND(BPF_W, BPF_REG_7, -0x200000),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_7),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"reference tracking: forbid LD_IND while holding reference",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
-	BPF_MOV64_IMM(BPF_REG_7, 1),
-	BPF_LD_IND(BPF_W, BPF_REG_7, -0x200000),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_7),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "BPF_LD_[ABS|IND] cannot be mixed with socket references",
-	.result = REJECT,
-},
-{
-	"reference tracking: check reference or tail call",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_1),
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	/* if (sk) bpf_sk_release() */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 7),
-	/* bpf_tail_call() */
-	BPF_MOV64_IMM(BPF_REG_3, 3),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog1 = { 17 },
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-},
-{
-	"reference tracking: release reference then tail call",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_1),
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	/* if (sk) bpf_sk_release() */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	/* bpf_tail_call() */
-	BPF_MOV64_IMM(BPF_REG_3, 3),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog1 = { 18 },
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-},
-{
-	"reference tracking: leak possible reference over tail call",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_1),
-	/* Look up socket and store in REG_6 */
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	/* bpf_tail_call() */
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_IMM(BPF_REG_3, 3),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	/* if (sk) bpf_sk_release() */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog1 = { 16 },
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "tail_call would lead to reference leak",
-	.result = REJECT,
-},
-{
-	"reference tracking: leak checked reference over tail call",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_1),
-	/* Look up socket and store in REG_6 */
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	/* if (!sk) goto end */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
-	/* bpf_tail_call() */
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog1 = { 17 },
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "tail_call would lead to reference leak",
-	.result = REJECT,
-},
-{
-	"reference tracking: mangle and release sock_or_null",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 5),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "R1 pointer arithmetic on sock_or_null prohibited",
-	.result = REJECT,
-},
-{
-	"reference tracking: mangle and release sock",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 5),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "R1 pointer arithmetic on sock prohibited",
-	.result = REJECT,
-},
-{
-	"reference tracking: access member",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_0, 4),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-},
-{
-	"reference tracking: write to member",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_LD_IMM64(BPF_REG_2, 42),
-	BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_2,
-		    offsetof(struct bpf_sock, mark)),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_LD_IMM64(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "cannot write into sock",
-	.result = REJECT,
-},
-{
-	"reference tracking: invalid 64-bit access of member",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "invalid sock access off=0 size=8",
-	.result = REJECT,
-},
-{
-	"reference tracking: access after release",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "!read_ok",
-	.result = REJECT,
-},
-{
-	"reference tracking: direct access for lookup",
-	.insns = {
-	/* Check that the packet is at least 64B long */
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 64),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 9),
-	/* sk = sk_lookup_tcp(ctx, skb->data, ...) */
-	BPF_MOV64_IMM(BPF_REG_3, sizeof(struct bpf_sock_tuple)),
-	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_MOV64_IMM(BPF_REG_5, 0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_0, 4),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-},
-{
-	"reference tracking: use ptr from bpf_tcp_sock() after release",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_tcp_sock, snd_cwnd)),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = REJECT,
-	.errstr = "invalid mem access",
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"reference tracking: use ptr from bpf_sk_fullsock() after release",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_sock, type)),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = REJECT,
-	.errstr = "invalid mem access",
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"reference tracking: use ptr from bpf_sk_fullsock(tp) after release",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, offsetof(struct bpf_sock, type)),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = REJECT,
-	.errstr = "invalid mem access",
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"reference tracking: use sk after bpf_sk_release(tp)",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, offsetof(struct bpf_sock, type)),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = REJECT,
-	.errstr = "invalid mem access",
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"reference tracking: use ptr from bpf_get_listener_sock() after bpf_sk_release(sk)",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_get_listener_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, offsetof(struct bpf_sock, src_port)),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-},
-{
-	"reference tracking: bpf_sk_release(listen_sk)",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_get_listener_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, offsetof(struct bpf_sock, type)),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = REJECT,
-	.errstr = "R1 must be referenced when passed to release function",
-},
-{
-	/* !bpf_sk_fullsock(sk) is checked but !bpf_tcp_sock(sk) is not checked */
-	"reference tracking: tp->snd_cwnd after bpf_sk_fullsock(sk) and bpf_tcp_sock(sk)",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
-	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 3),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_8, offsetof(struct bpf_tcp_sock, snd_cwnd)),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = REJECT,
-	.errstr = "invalid mem access",
-},
-{
-	"reference tracking: branch tracking valid pointer null comparison",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_IMM(BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 0, 1),
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 2),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-},
-{
-	"reference tracking: branch tracking valid pointer value comparison",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_IMM(BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 4),
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 1234, 2),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.errstr = "Unreleased reference",
-	.result = REJECT,
-},
-{
-	"reference tracking: bpf_sk_release(btf_tcp_sock)",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_skc_to_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "unknown func",
-},
-{
-	"reference tracking: use ptr from bpf_skc_to_tcp_sock() after release",
-	.insns = {
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_skc_to_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_7, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = REJECT,
-	.errstr = "invalid mem access",
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "unknown func",
-},
-{
-	"reference tracking: try to leak released ptr reg",
-	.insns = {
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -4),
-		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-		BPF_LD_MAP_FD(BPF_REG_1, 0),
-		BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-		BPF_EXIT_INSN(),
-		BPF_MOV64_REG(BPF_REG_9, BPF_REG_0),
-
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_LD_MAP_FD(BPF_REG_1, 0),
-		BPF_MOV64_IMM(BPF_REG_2, 8),
-		BPF_MOV64_IMM(BPF_REG_3, 0),
-		BPF_EMIT_CALL(BPF_FUNC_ringbuf_reserve),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-		BPF_EXIT_INSN(),
-		BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
-
-		BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
-		BPF_MOV64_IMM(BPF_REG_2, 0),
-		BPF_EMIT_CALL(BPF_FUNC_ringbuf_discard),
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-
-		BPF_STX_MEM(BPF_DW, BPF_REG_9, BPF_REG_8, 0),
-		BPF_EXIT_INSN()
-	},
-	.fixup_map_array_48b = { 4 },
-	.fixup_map_ringbuf = { 11 },
-	.result = ACCEPT,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R8 !read_ok"
-},
-- 
2.40.0



* [PATCH bpf-next 16/24] selftests/bpf: verifier/regalloc converted to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (14 preceding siblings ...)
  2023-04-21 17:42 ` [PATCH bpf-next 15/24] selftests/bpf: verifier/ref_tracking " Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 17/24] selftests/bpf: verifier/runtime_jit " Eduard Zingerman
                   ` (9 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/regalloc automatically converted to use inline assembly.
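
For reference, the script emits one __naked function per test and
spells each BPF_*() macro as the corresponding assembly statement.
A sketch of the mapping, distilled from the "regalloc basic"
conversion in the diff below (numeric jump offsets become named
labels such as l0_%=):

    BPF_MOV64_REG(BPF_REG_6, BPF_REG_1)           ->  r6 = r1;
    BPF_LD_MAP_FD(BPF_REG_1, 0) + map fixup       ->  r1 = %[map_hash_48b] ll;
    BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem)       ->  call %[bpf_map_lookup_elem];
    BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8)         ->  if r0 == 0 goto l0_%=;
    BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0)  ->  r0 = *(u64*)(r7 + 0);

Map references and helper calls are passed into the asm block via the
__imm()/__imm_addr() macros, as in the rest of the series.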

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../selftests/bpf/progs/verifier_regalloc.c   | 364 ++++++++++++++++++
 .../testing/selftests/bpf/verifier/regalloc.c | 277 -------------
 3 files changed, 366 insertions(+), 277 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_regalloc.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/regalloc.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 5941ef59ed76..f0b9b74c43d7 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -46,6 +46,7 @@
 #include "verifier_raw_tp_writable.skel.h"
 #include "verifier_reg_equal.skel.h"
 #include "verifier_ref_tracking.skel.h"
+#include "verifier_regalloc.skel.h"
 #include "verifier_ringbuf.skel.h"
 #include "verifier_spill_fill.skel.h"
 #include "verifier_stack_ptr.skel.h"
@@ -134,6 +135,7 @@ void test_verifier_raw_stack(void)            { RUN(verifier_raw_stack); }
 void test_verifier_raw_tp_writable(void)      { RUN(verifier_raw_tp_writable); }
 void test_verifier_reg_equal(void)            { RUN(verifier_reg_equal); }
 void test_verifier_ref_tracking(void)         { RUN(verifier_ref_tracking); }
+void test_verifier_regalloc(void)             { RUN(verifier_regalloc); }
 void test_verifier_ringbuf(void)              { RUN(verifier_ringbuf); }
 void test_verifier_spill_fill(void)           { RUN(verifier_spill_fill); }
 void test_verifier_stack_ptr(void)            { RUN(verifier_stack_ptr); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_regalloc.c b/tools/testing/selftests/bpf/progs/verifier_regalloc.c
new file mode 100644
index 000000000000..ee5ddea87c91
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_regalloc.c
@@ -0,0 +1,364 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/regalloc.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+#define MAX_ENTRIES 11
+
+struct test_val {
+	unsigned int index;
+	int foo[MAX_ENTRIES];
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 1);
+	__type(key, long long);
+	__type(value, struct test_val);
+} map_hash_48b SEC(".maps");
+
+SEC("tracepoint")
+__description("regalloc basic")
+__success __flag(BPF_F_ANY_ALIGNMENT)
+__naked void regalloc_basic(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r7 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r2 = r0;					\
+	if r0 s> 20 goto l0_%=;				\
+	if r2 s< 0 goto l0_%=;				\
+	r7 += r0;					\
+	r7 += r2;					\
+	r0 = *(u64*)(r7 + 0);				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32),
+	  __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_48b)
+	: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("regalloc negative")
+__failure __msg("invalid access to map value, value_size=48 off=48 size=1")
+__naked void regalloc_negative(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r7 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r2 = r0;					\
+	if r0 s> 24 goto l0_%=;				\
+	if r2 s< 0 goto l0_%=;				\
+	r7 += r0;					\
+	r7 += r2;					\
+	r0 = *(u8*)(r7 + 0);				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32),
+	  __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_48b)
+	: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("regalloc src_reg mark")
+__success __flag(BPF_F_ANY_ALIGNMENT)
+__naked void regalloc_src_reg_mark(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r7 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r2 = r0;					\
+	if r0 s> 20 goto l0_%=;				\
+	r3 = 0;						\
+	if r3 s>= r2 goto l0_%=;			\
+	r7 += r0;					\
+	r7 += r2;					\
+	r0 = *(u64*)(r7 + 0);				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32),
+	  __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_48b)
+	: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("regalloc src_reg negative")
+__failure __msg("invalid access to map value, value_size=48 off=44 size=8")
+__flag(BPF_F_ANY_ALIGNMENT)
+__naked void regalloc_src_reg_negative(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r7 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r2 = r0;					\
+	if r0 s> 22 goto l0_%=;				\
+	r3 = 0;						\
+	if r3 s>= r2 goto l0_%=;			\
+	r7 += r0;					\
+	r7 += r2;					\
+	r0 = *(u64*)(r7 + 0);				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32),
+	  __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_48b)
+	: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("regalloc and spill")
+__success __flag(BPF_F_ANY_ALIGNMENT)
+__naked void regalloc_and_spill(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r7 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r2 = r0;					\
+	if r0 s> 20 goto l0_%=;				\
+	/* r0 has upper bound that should propagate into r2 */\
+	*(u64*)(r10 - 8) = r2;		/* spill r2 */	\
+	r0 = 0;						\
+	r2 = 0;				/* clear r0 and r2 */\
+	r3 = *(u64*)(r10 - 8);		/* fill r3 */	\
+	if r0 s>= r3 goto l0_%=;			\
+	/* r3 has lower and upper bounds */		\
+	r7 += r3;					\
+	r0 = *(u64*)(r7 + 0);				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32),
+	  __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_48b)
+	: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("regalloc and spill negative")
+__failure __msg("invalid access to map value, value_size=48 off=48 size=8")
+__flag(BPF_F_ANY_ALIGNMENT)
+__naked void regalloc_and_spill_negative(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r7 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r2 = r0;					\
+	if r0 s> 48 goto l0_%=;				\
+	/* r0 has upper bound that should propagate into r2 */\
+	*(u64*)(r10 - 8) = r2;		/* spill r2 */	\
+	r0 = 0;						\
+	r2 = 0;				/* clear r0 and r2 */\
+	r3 = *(u64*)(r10 - 8);		/* fill r3 */\
+	if r0 s>= r3 goto l0_%=;			\
+	/* r3 has lower and upper bounds */		\
+	r7 += r3;					\
+	r0 = *(u64*)(r7 + 0);				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32),
+	  __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_48b)
+	: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("regalloc three regs")
+__success __flag(BPF_F_ANY_ALIGNMENT)
+__naked void regalloc_three_regs(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r7 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r2 = r0;					\
+	r4 = r2;					\
+	if r0 s> 12 goto l0_%=;				\
+	if r2 s< 0 goto l0_%=;				\
+	r7 += r0;					\
+	r7 += r2;					\
+	r7 += r4;					\
+	r0 = *(u64*)(r7 + 0);				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32),
+	  __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_48b)
+	: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("regalloc after call")
+__success __flag(BPF_F_ANY_ALIGNMENT)
+__naked void regalloc_after_call(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r7 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r8 = r0;					\
+	r9 = r0;					\
+	call regalloc_after_call__1;			\
+	if r8 s> 20 goto l0_%=;				\
+	if r9 s< 0 goto l0_%=;				\
+	r7 += r8;					\
+	r7 += r9;					\
+	r0 = *(u64*)(r7 + 0);				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32),
+	  __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_48b)
+	: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void regalloc_after_call__1(void)
+{
+	asm volatile ("					\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("regalloc in callee")
+__success __flag(BPF_F_ANY_ALIGNMENT)
+__naked void regalloc_in_callee(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r7 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r2 = r0;					\
+	r3 = r7;					\
+	call regalloc_in_callee__1;			\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32),
+	  __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_48b)
+	: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void regalloc_in_callee__1(void)
+{
+	asm volatile ("					\
+	if r1 s> 20 goto l0_%=;				\
+	if r2 s< 0 goto l0_%=;				\
+	r3 += r1;					\
+	r3 += r2;					\
+	r0 = *(u64*)(r3 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("regalloc, spill, JEQ")
+__success
+__naked void regalloc_spill_jeq(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	*(u64*)(r10 - 8) = r0;		/* spill r0 */	\
+	if r0 == 0 goto l0_%=;				\
+l0_%=:	/* The verifier will walk the rest twice with r0 == 0 and r0 == map_value */\
+	call %[bpf_get_prandom_u32];			\
+	r2 = r0;					\
+	if r2 == 20 goto l1_%=;				\
+l1_%=:	/* The verifier will walk the rest two more times with r0 == 20 and r0 == unknown */\
+	r3 = *(u64*)(r10 - 8);		/* fill r3 with map_value */\
+	if r3 == 0 goto l2_%=;		/* skip ldx if map_value == NULL */\
+	/* Buggy verifier will think that r3 == 20 here */\
+	r0 = *(u64*)(r3 + 0);		/* read from map_value */\
+l2_%=:	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32),
+	  __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_48b)
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/regalloc.c b/tools/testing/selftests/bpf/verifier/regalloc.c
deleted file mode 100644
index bb0dd89dd212..000000000000
--- a/tools/testing/selftests/bpf/verifier/regalloc.c
+++ /dev/null
@@ -1,277 +0,0 @@
-{
-	"regalloc basic",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 20, 4),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_2, 0, 3),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_2),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_48b = { 4 },
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"regalloc negative",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 24, 4),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_2, 0, 3),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_2),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_7, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_48b = { 4 },
-	.result = REJECT,
-	.errstr = "invalid access to map value, value_size=48 off=48 size=1",
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-},
-{
-	"regalloc src_reg mark",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 20, 5),
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_JMP_REG(BPF_JSGE, BPF_REG_3, BPF_REG_2, 3),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_2),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_48b = { 4 },
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"regalloc src_reg negative",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 22, 5),
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_JMP_REG(BPF_JSGE, BPF_REG_3, BPF_REG_2, 3),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_2),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_48b = { 4 },
-	.result = REJECT,
-	.errstr = "invalid access to map value, value_size=48 off=44 size=8",
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"regalloc and spill",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 20, 7),
-	/* r0 has upper bound that should propagate into r2 */
-	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -8), /* spill r2 */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_2, 0), /* clear r0 and r2 */
-	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_10, -8), /* fill r3 */
-	BPF_JMP_REG(BPF_JSGE, BPF_REG_0, BPF_REG_3, 2),
-	/* r3 has lower and upper bounds */
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_3),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_48b = { 4 },
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"regalloc and spill negative",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 48, 7),
-	/* r0 has upper bound that should propagate into r2 */
-	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -8), /* spill r2 */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_2, 0), /* clear r0 and r2 */
-	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_10, -8), /* fill r3 */
-	BPF_JMP_REG(BPF_JSGE, BPF_REG_0, BPF_REG_3, 2),
-	/* r3 has lower and upper bounds */
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_3),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_48b = { 4 },
-	.result = REJECT,
-	.errstr = "invalid access to map value, value_size=48 off=48 size=8",
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"regalloc three regs",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 12, 5),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_2, 0, 4),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_2),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_4),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_48b = { 4 },
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"regalloc after call",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_9, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 6),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_8, 20, 4),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_9, 0, 3),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_8),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_9),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_48b = { 4 },
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"regalloc in callee",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_3, BPF_REG_7),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 20, 5),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_2, 0, 4),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_1),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_2),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_3, 0),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_48b = { 4 },
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"regalloc, spill, JEQ",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), /* spill r0 */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 0),
-	/* The verifier will walk the rest twice with r0 == 0 and r0 == map_value */
-	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_2, 20, 0),
-	/* The verifier will walk the rest two more times with r0 == 20 and r0 == unknown */
-	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_10, -8), /* fill r3 with map_value */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_3, 0, 1), /* skip ldx if map_value == NULL */
-	/* Buggy verifier will think that r3 == 20 here */
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_3, 0), /* read from map_value */
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_48b = { 4 },
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-},
-- 
2.40.0



* [PATCH bpf-next 17/24] selftests/bpf: verifier/runtime_jit converted to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (15 preceding siblings ...)
  2023-04-21 17:42 ` [PATCH bpf-next 16/24] selftests/bpf: verifier/regalloc " Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 18/24] selftests/bpf: verifier/search_pruning " Eduard Zingerman
                   ` (8 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/runtime_jit automatically converted to use inline assembly.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../bpf/progs/verifier_runtime_jit.c          | 360 ++++++++++++++++++
 .../selftests/bpf/verifier/runtime_jit.c      | 231 -----------
 3 files changed, 362 insertions(+), 231 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_runtime_jit.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/runtime_jit.c
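
Note: the prog-array maps and __auxiliary dummy programs in the diff below
replace the .fixup_prog1 / .fixup_prog2 indices used by the deleted
test_verifier file. As a sketch only (not part of the patch), the first
test, "tail_call within bounds, prog once", would look roughly as follows
in plain C; the entry_plain_c name is hypothetical, and the migrated test
keeps the __naked asm form so the instruction sequence stays under the
test author's control:

    /* Assumes the includes plus the map/program definitions from
     * progs/verifier_runtime_jit.c in the diff below.
     */
    SEC("socket")
    int entry_plain_c(struct __sk_buff *skb)
    {
            /* Slot 0 of map_prog1_socket is dummy_prog_42_socket, so a
             * successful tail call returns 42 (matching __retval(42));
             * fall through to return 1 if the call does not take place.
             */
            bpf_tail_call(skb, &map_prog1_socket, 0);
            return 1;
    }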

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index f0b9b74c43d7..072b0eb47391 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -48,6 +48,7 @@
 #include "verifier_ref_tracking.skel.h"
 #include "verifier_regalloc.skel.h"
 #include "verifier_ringbuf.skel.h"
+#include "verifier_runtime_jit.skel.h"
 #include "verifier_spill_fill.skel.h"
 #include "verifier_stack_ptr.skel.h"
 #include "verifier_uninit.skel.h"
@@ -137,6 +138,7 @@ void test_verifier_reg_equal(void)            { RUN(verifier_reg_equal); }
 void test_verifier_ref_tracking(void)         { RUN(verifier_ref_tracking); }
 void test_verifier_regalloc(void)             { RUN(verifier_regalloc); }
 void test_verifier_ringbuf(void)              { RUN(verifier_ringbuf); }
+void test_verifier_runtime_jit(void)          { RUN(verifier_runtime_jit); }
 void test_verifier_spill_fill(void)           { RUN(verifier_spill_fill); }
 void test_verifier_stack_ptr(void)            { RUN(verifier_stack_ptr); }
 void test_verifier_uninit(void)               { RUN(verifier_uninit); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_runtime_jit.c b/tools/testing/selftests/bpf/progs/verifier_runtime_jit.c
new file mode 100644
index 000000000000..27ebfc1fd9ee
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_runtime_jit.c
@@ -0,0 +1,360 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/runtime_jit.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+void dummy_prog_42_socket(void);
+void dummy_prog_24_socket(void);
+void dummy_prog_loop1_socket(void);
+void dummy_prog_loop2_socket(void);
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+	__uint(max_entries, 4);
+	__uint(key_size, sizeof(int));
+	__array(values, void (void));
+} map_prog1_socket SEC(".maps") = {
+	.values = {
+		[0] = (void *)&dummy_prog_42_socket,
+		[1] = (void *)&dummy_prog_loop1_socket,
+		[2] = (void *)&dummy_prog_24_socket,
+	},
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+	__uint(max_entries, 8);
+	__uint(key_size, sizeof(int));
+	__array(values, void (void));
+} map_prog2_socket SEC(".maps") = {
+	.values = {
+		[1] = (void *)&dummy_prog_loop2_socket,
+		[2] = (void *)&dummy_prog_24_socket,
+		[7] = (void *)&dummy_prog_42_socket,
+	},
+};
+
+SEC("socket")
+__auxiliary __auxiliary_unpriv
+__naked void dummy_prog_42_socket(void)
+{
+	asm volatile ("r0 = 42; exit;");
+}
+
+SEC("socket")
+__auxiliary __auxiliary_unpriv
+__naked void dummy_prog_24_socket(void)
+{
+	asm volatile ("r0 = 24; exit;");
+}
+
+SEC("socket")
+__auxiliary __auxiliary_unpriv
+__naked void dummy_prog_loop1_socket(void)
+{
+	asm volatile ("			\
+	r3 = 1;				\
+	r2 = %[map_prog1_socket] ll;	\
+	call %[bpf_tail_call];		\
+	r0 = 41;			\
+	exit;				\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_socket)
+	: __clobber_all);
+}
+
+SEC("socket")
+__auxiliary __auxiliary_unpriv
+__naked void dummy_prog_loop2_socket(void)
+{
+	asm volatile ("			\
+	r3 = 1;				\
+	r2 = %[map_prog2_socket] ll;	\
+	call %[bpf_tail_call];		\
+	r0 = 41;			\
+	exit;				\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog2_socket)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("runtime/jit: tail_call within bounds, prog once")
+__success __success_unpriv __retval(42)
+__naked void call_within_bounds_prog_once(void)
+{
+	asm volatile ("					\
+	r3 = 0;						\
+	r2 = %[map_prog1_socket] ll;			\
+	call %[bpf_tail_call];				\
+	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_socket)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("runtime/jit: tail_call within bounds, prog loop")
+__success __success_unpriv __retval(41)
+__naked void call_within_bounds_prog_loop(void)
+{
+	asm volatile ("					\
+	r3 = 1;						\
+	r2 = %[map_prog1_socket] ll;			\
+	call %[bpf_tail_call];				\
+	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_socket)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("runtime/jit: tail_call within bounds, no prog")
+__success __success_unpriv __retval(1)
+__naked void call_within_bounds_no_prog(void)
+{
+	asm volatile ("					\
+	r3 = 3;						\
+	r2 = %[map_prog1_socket] ll;			\
+	call %[bpf_tail_call];				\
+	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_socket)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("runtime/jit: tail_call within bounds, key 2")
+__success __success_unpriv __retval(24)
+__naked void call_within_bounds_key_2(void)
+{
+	asm volatile ("					\
+	r3 = 2;						\
+	r2 = %[map_prog1_socket] ll;			\
+	call %[bpf_tail_call];				\
+	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_socket)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("runtime/jit: tail_call within bounds, key 2 / key 2, first branch")
+__success __success_unpriv __retval(24)
+__naked void _2_key_2_first_branch(void)
+{
+	asm volatile ("					\
+	r0 = 13;					\
+	*(u8*)(r1 + %[__sk_buff_cb_0]) = r0;		\
+	r0 = *(u8*)(r1 + %[__sk_buff_cb_0]);		\
+	if r0 == 13 goto l0_%=;				\
+	r3 = 2;						\
+	r2 = %[map_prog1_socket] ll;			\
+	goto l1_%=;					\
+l0_%=:	r3 = 2;						\
+	r2 = %[map_prog1_socket] ll;			\
+l1_%=:	call %[bpf_tail_call];				\
+	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_socket),
+	  __imm_const(__sk_buff_cb_0, offsetof(struct __sk_buff, cb[0]))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("runtime/jit: tail_call within bounds, key 2 / key 2, second branch")
+__success __success_unpriv __retval(24)
+__naked void _2_key_2_second_branch(void)
+{
+	asm volatile ("					\
+	r0 = 14;					\
+	*(u8*)(r1 + %[__sk_buff_cb_0]) = r0;		\
+	r0 = *(u8*)(r1 + %[__sk_buff_cb_0]);		\
+	if r0 == 13 goto l0_%=;				\
+	r3 = 2;						\
+	r2 = %[map_prog1_socket] ll;			\
+	goto l1_%=;					\
+l0_%=:	r3 = 2;						\
+	r2 = %[map_prog1_socket] ll;			\
+l1_%=:	call %[bpf_tail_call];				\
+	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_socket),
+	  __imm_const(__sk_buff_cb_0, offsetof(struct __sk_buff, cb[0]))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("runtime/jit: tail_call within bounds, key 0 / key 2, first branch")
+__success __success_unpriv __retval(24)
+__naked void _0_key_2_first_branch(void)
+{
+	asm volatile ("					\
+	r0 = 13;					\
+	*(u8*)(r1 + %[__sk_buff_cb_0]) = r0;		\
+	r0 = *(u8*)(r1 + %[__sk_buff_cb_0]);		\
+	if r0 == 13 goto l0_%=;				\
+	r3 = 0;						\
+	r2 = %[map_prog1_socket] ll;			\
+	goto l1_%=;					\
+l0_%=:	r3 = 2;						\
+	r2 = %[map_prog1_socket] ll;			\
+l1_%=:	call %[bpf_tail_call];				\
+	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_socket),
+	  __imm_const(__sk_buff_cb_0, offsetof(struct __sk_buff, cb[0]))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("runtime/jit: tail_call within bounds, key 0 / key 2, second branch")
+__success __success_unpriv __retval(42)
+__naked void _0_key_2_second_branch(void)
+{
+	asm volatile ("					\
+	r0 = 14;					\
+	*(u8*)(r1 + %[__sk_buff_cb_0]) = r0;		\
+	r0 = *(u8*)(r1 + %[__sk_buff_cb_0]);		\
+	if r0 == 13 goto l0_%=;				\
+	r3 = 0;						\
+	r2 = %[map_prog1_socket] ll;			\
+	goto l1_%=;					\
+l0_%=:	r3 = 2;						\
+	r2 = %[map_prog1_socket] ll;			\
+l1_%=:	call %[bpf_tail_call];				\
+	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_socket),
+	  __imm_const(__sk_buff_cb_0, offsetof(struct __sk_buff, cb[0]))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("runtime/jit: tail_call within bounds, different maps, first branch")
+__success __failure_unpriv __msg_unpriv("tail_call abusing map_ptr")
+__retval(1)
+__naked void bounds_different_maps_first_branch(void)
+{
+	asm volatile ("					\
+	r0 = 13;					\
+	*(u8*)(r1 + %[__sk_buff_cb_0]) = r0;		\
+	r0 = *(u8*)(r1 + %[__sk_buff_cb_0]);		\
+	if r0 == 13 goto l0_%=;				\
+	r3 = 0;						\
+	r2 = %[map_prog1_socket] ll;			\
+	goto l1_%=;					\
+l0_%=:	r3 = 0;						\
+	r2 = %[map_prog2_socket] ll;			\
+l1_%=:	call %[bpf_tail_call];				\
+	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_socket),
+	  __imm_addr(map_prog2_socket),
+	  __imm_const(__sk_buff_cb_0, offsetof(struct __sk_buff, cb[0]))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("runtime/jit: tail_call within bounds, different maps, second branch")
+__success __failure_unpriv __msg_unpriv("tail_call abusing map_ptr")
+__retval(42)
+__naked void bounds_different_maps_second_branch(void)
+{
+	asm volatile ("					\
+	r0 = 14;					\
+	*(u8*)(r1 + %[__sk_buff_cb_0]) = r0;		\
+	r0 = *(u8*)(r1 + %[__sk_buff_cb_0]);		\
+	if r0 == 13 goto l0_%=;				\
+	r3 = 0;						\
+	r2 = %[map_prog1_socket] ll;			\
+	goto l1_%=;					\
+l0_%=:	r3 = 0;						\
+	r2 = %[map_prog2_socket] ll;			\
+l1_%=:	call %[bpf_tail_call];				\
+	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_socket),
+	  __imm_addr(map_prog2_socket),
+	  __imm_const(__sk_buff_cb_0, offsetof(struct __sk_buff, cb[0]))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("runtime/jit: tail_call out of bounds")
+__success __success_unpriv __retval(2)
+__naked void tail_call_out_of_bounds(void)
+{
+	asm volatile ("					\
+	r3 = 256;					\
+	r2 = %[map_prog1_socket] ll;			\
+	call %[bpf_tail_call];				\
+	r0 = 2;						\
+	exit;						\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_socket)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("runtime/jit: pass negative index to tail_call")
+__success __success_unpriv __retval(2)
+__naked void negative_index_to_tail_call(void)
+{
+	asm volatile ("					\
+	r3 = -1;					\
+	r2 = %[map_prog1_socket] ll;			\
+	call %[bpf_tail_call];				\
+	r0 = 2;						\
+	exit;						\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_socket)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("runtime/jit: pass > 32bit index to tail_call")
+__success __success_unpriv __retval(42)
+/* Verifier rewrite for unpriv skips tail call here. */
+__retval_unpriv(2)
+__naked void _32bit_index_to_tail_call(void)
+{
+	asm volatile ("					\
+	r3 = 0x100000000 ll;				\
+	r2 = %[map_prog1_socket] ll;			\
+	call %[bpf_tail_call];				\
+	r0 = 2;						\
+	exit;						\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_socket)
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/runtime_jit.c b/tools/testing/selftests/bpf/verifier/runtime_jit.c
deleted file mode 100644
index 94c399d1faca..000000000000
--- a/tools/testing/selftests/bpf/verifier/runtime_jit.c
+++ /dev/null
@@ -1,231 +0,0 @@
-{
-	"runtime/jit: tail_call within bounds, prog once",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog1 = { 1 },
-	.result = ACCEPT,
-	.retval = 42,
-},
-{
-	"runtime/jit: tail_call within bounds, prog loop",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_3, 1),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog1 = { 1 },
-	.result = ACCEPT,
-	.retval = 41,
-},
-{
-	"runtime/jit: tail_call within bounds, no prog",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_3, 3),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog1 = { 1 },
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"runtime/jit: tail_call within bounds, key 2",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_3, 2),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog1 = { 1 },
-	.result = ACCEPT,
-	.retval = 24,
-},
-{
-	"runtime/jit: tail_call within bounds, key 2 / key 2, first branch",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 13),
-	BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0,
-		    offsetof(struct __sk_buff, cb[0])),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, cb[0])),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 13, 4),
-	BPF_MOV64_IMM(BPF_REG_3, 2),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_3, 2),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog1 = { 5, 9 },
-	.result = ACCEPT,
-	.retval = 24,
-},
-{
-	"runtime/jit: tail_call within bounds, key 2 / key 2, second branch",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 14),
-	BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0,
-		    offsetof(struct __sk_buff, cb[0])),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, cb[0])),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 13, 4),
-	BPF_MOV64_IMM(BPF_REG_3, 2),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_3, 2),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog1 = { 5, 9 },
-	.result = ACCEPT,
-	.retval = 24,
-},
-{
-	"runtime/jit: tail_call within bounds, key 0 / key 2, first branch",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 13),
-	BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0,
-		    offsetof(struct __sk_buff, cb[0])),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, cb[0])),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 13, 4),
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_3, 2),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog1 = { 5, 9 },
-	.result = ACCEPT,
-	.retval = 24,
-},
-{
-	"runtime/jit: tail_call within bounds, key 0 / key 2, second branch",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 14),
-	BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0,
-		    offsetof(struct __sk_buff, cb[0])),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, cb[0])),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 13, 4),
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_3, 2),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog1 = { 5, 9 },
-	.result = ACCEPT,
-	.retval = 42,
-},
-{
-	"runtime/jit: tail_call within bounds, different maps, first branch",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 13),
-	BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0,
-		    offsetof(struct __sk_buff, cb[0])),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, cb[0])),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 13, 4),
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog1 = { 5 },
-	.fixup_prog2 = { 9 },
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "tail_call abusing map_ptr",
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"runtime/jit: tail_call within bounds, different maps, second branch",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 14),
-	BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0,
-		    offsetof(struct __sk_buff, cb[0])),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, cb[0])),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 13, 4),
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog1 = { 5 },
-	.fixup_prog2 = { 9 },
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "tail_call abusing map_ptr",
-	.result = ACCEPT,
-	.retval = 42,
-},
-{
-	"runtime/jit: tail_call out of bounds",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_3, 256),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 2),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog1 = { 1 },
-	.result = ACCEPT,
-	.retval = 2,
-},
-{
-	"runtime/jit: pass negative index to tail_call",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_3, -1),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 2),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog1 = { 1 },
-	.result = ACCEPT,
-	.retval = 2,
-},
-{
-	"runtime/jit: pass > 32bit index to tail_call",
-	.insns = {
-	BPF_LD_IMM64(BPF_REG_3, 0x100000000ULL),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 2),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog1 = { 2 },
-	.result = ACCEPT,
-	.retval = 42,
-	/* Verifier rewrite for unpriv skips tail call here. */
-	.retval_unpriv = 2,
-},
-- 
2.40.0



* [PATCH bpf-next 18/24] selftests/bpf: verifier/search_pruning converted to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (16 preceding siblings ...)
  2023-04-21 17:42 ` [PATCH bpf-next 17/24] selftests/bpf: verifier/runtime_jit " Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 19/24] selftests/bpf: verifier/sock " Eduard Zingerman
                   ` (7 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/search_pruning automatically converted to use inline assembly.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../bpf/progs/verifier_search_pruning.c       | 339 ++++++++++++++++++
 .../selftests/bpf/verifier/search_pruning.c   | 266 --------------
 3 files changed, 341 insertions(+), 266 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_search_pruning.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/search_pruning.c
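
Note: the last test in the diff below, "write tracking and register parent
chain bug", encodes the unsafe pattern sketched here in rough plain C, for
orientation only (parent_chain_sketch is a hypothetical name; a compiler
cannot be relied on to keep this exact store/load sequence on the stack,
which is why the test itself uses __naked asm plus BPF_F_TEST_STATE_FREQ):

    /* Assumes the usual selftest includes (<linux/bpf.h>, bpf_helpers.h). */
    SEC("socket")
    int parent_chain_sketch(void *ctx)
    {
            __u64 t0 = bpf_ktime_get_ns();
            __u64 slot;                     /* plays the role of fp[-8] */

            if (bpf_ktime_get_ns() > t0)
                    slot = 0xdeadbeef;      /* conditional 64-bit write */
            *(__u8 *)&slot = 42;            /* unconditional 8-bit write */
            return slot;                    /* bytes 1..7 may be uninitialized */
    }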

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 072b0eb47391..1ef44e699e9c 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -49,6 +49,7 @@
 #include "verifier_regalloc.skel.h"
 #include "verifier_ringbuf.skel.h"
 #include "verifier_runtime_jit.skel.h"
+#include "verifier_search_pruning.skel.h"
 #include "verifier_spill_fill.skel.h"
 #include "verifier_stack_ptr.skel.h"
 #include "verifier_uninit.skel.h"
@@ -139,6 +140,7 @@ void test_verifier_ref_tracking(void)         { RUN(verifier_ref_tracking); }
 void test_verifier_regalloc(void)             { RUN(verifier_regalloc); }
 void test_verifier_ringbuf(void)              { RUN(verifier_ringbuf); }
 void test_verifier_runtime_jit(void)          { RUN(verifier_runtime_jit); }
+void test_verifier_search_pruning(void)       { RUN(verifier_search_pruning); }
 void test_verifier_spill_fill(void)           { RUN(verifier_spill_fill); }
 void test_verifier_stack_ptr(void)            { RUN(verifier_stack_ptr); }
 void test_verifier_uninit(void)               { RUN(verifier_uninit); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_search_pruning.c b/tools/testing/selftests/bpf/progs/verifier_search_pruning.c
new file mode 100644
index 000000000000..5a14498d352f
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_search_pruning.c
@@ -0,0 +1,339 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/search_pruning.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+#define MAX_ENTRIES 11
+
+struct test_val {
+	unsigned int index;
+	int foo[MAX_ENTRIES];
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 1);
+	__type(key, long long);
+	__type(value, struct test_val);
+} map_hash_48b SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 1);
+	__type(key, long long);
+	__type(value, long long);
+} map_hash_8b SEC(".maps");
+
+SEC("socket")
+__description("pointer/scalar confusion in state equality check (way 1)")
+__success __failure_unpriv __msg_unpriv("R0 leaks addr as return value")
+__retval(POINTER_VALUE)
+__naked void state_equality_check_way_1(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r0 = *(u64*)(r0 + 0);				\
+	goto l1_%=;					\
+l0_%=:	r0 = r10;					\
+l1_%=:	goto l2_%=;					\
+l2_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("pointer/scalar confusion in state equality check (way 2)")
+__success __failure_unpriv __msg_unpriv("R0 leaks addr as return value")
+__retval(POINTER_VALUE)
+__naked void state_equality_check_way_2(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	r0 = r10;					\
+	goto l1_%=;					\
+l0_%=:	r0 = *(u64*)(r0 + 0);				\
+l1_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("lwt_in")
+__description("liveness pruning and write screening")
+__failure __msg("R0 !read_ok")
+__naked void liveness_pruning_and_write_screening(void)
+{
+	asm volatile ("					\
+	/* Get an unknown value */			\
+	r2 = *(u32*)(r1 + 0);				\
+	/* branch conditions teach us nothing about R2 */\
+	if r2 >= 0 goto l0_%=;				\
+	r0 = 0;						\
+l0_%=:	if r2 >= 0 goto l1_%=;				\
+	r0 = 0;						\
+l1_%=:	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("varlen_map_value_access pruning")
+__failure __msg("R0 unbounded memory access")
+__failure_unpriv __msg_unpriv("R0 leaks addr")
+__flag(BPF_F_ANY_ALIGNMENT)
+__naked void varlen_map_value_access_pruning(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = *(u64*)(r0 + 0);				\
+	w2 = %[max_entries];				\
+	if r2 s> r1 goto l1_%=;				\
+	w1 = 0;						\
+l1_%=:	w1 <<= 2;					\
+	r0 += r1;					\
+	goto l2_%=;					\
+l2_%=:	r1 = %[test_val_foo];				\
+	*(u64*)(r0 + 0) = r1;				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_48b),
+	  __imm_const(max_entries, MAX_ENTRIES),
+	  __imm_const(test_val_foo, offsetof(struct test_val, foo))
+	: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("search pruning: all branches should be verified (nop operation)")
+__failure __msg("R6 invalid mem access 'scalar'")
+__naked void should_be_verified_nop_operation(void)
+{
+	asm volatile ("					\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = 0;						\
+	*(u64*)(r2 + 0) = r1;				\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r3 = *(u64*)(r0 + 0);				\
+	if r3 == 0xbeef goto l1_%=;			\
+	r4 = 0;						\
+	goto l2_%=;					\
+l1_%=:	r4 = 1;						\
+l2_%=:	*(u64*)(r10 - 16) = r4;				\
+	call %[bpf_ktime_get_ns];			\
+	r5 = *(u64*)(r10 - 16);				\
+	if r5 == 0 goto l0_%=;				\
+	r6 = 0;						\
+	r1 = 0xdead;					\
+	*(u64*)(r6 + 0) = r1;				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns),
+	  __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("search pruning: all branches should be verified (invalid stack access)")
+/* in privileged mode reads from uninitialized stack locations are permitted */
+__success __failure_unpriv
+__msg_unpriv("invalid read from stack off -16+0 size 8")
+__retval(0)
+__naked void be_verified_invalid_stack_access(void)
+{
+	asm volatile ("					\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = 0;						\
+	*(u64*)(r2 + 0) = r1;				\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r3 = *(u64*)(r0 + 0);				\
+	r4 = 0;						\
+	if r3 == 0xbeef goto l1_%=;			\
+	*(u64*)(r10 - 16) = r4;				\
+	goto l2_%=;					\
+l1_%=:	*(u64*)(r10 - 24) = r4;				\
+l2_%=:	call %[bpf_ktime_get_ns];			\
+	r5 = *(u64*)(r10 - 16);				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns),
+	  __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("precision tracking for u32 spill/fill")
+__failure __msg("R0 min value is outside of the allowed memory range")
+__naked void tracking_for_u32_spill_fill(void)
+{
+	asm volatile ("					\
+	r7 = r1;					\
+	call %[bpf_get_prandom_u32];			\
+	w6 = 32;					\
+	if r0 == 0 goto l0_%=;				\
+	w6 = 4;						\
+l0_%=:	/* Additional insns to introduce a pruning point. */\
+	call %[bpf_get_prandom_u32];			\
+	r3 = 0;						\
+	r3 = 0;						\
+	if r0 == 0 goto l1_%=;				\
+	r3 = 0;						\
+l1_%=:	/* u32 spill/fill */				\
+	*(u32*)(r10 - 8) = r6;				\
+	r8 = *(u32*)(r10 - 8);				\
+	/* out-of-bound map value access for r6=32 */	\
+	r1 = 0;						\
+	*(u64*)(r10 - 16) = r1;				\
+	r2 = r10;					\
+	r2 += -16;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l2_%=;				\
+	r0 += r8;					\
+	r1 = *(u32*)(r0 + 0);				\
+l2_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32),
+	  __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("precision tracking for u32 spills, u64 fill")
+__failure __msg("div by zero")
+__naked void for_u32_spills_u64_fill(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r6 = r0;					\
+	w7 = 0xffffffff;				\
+	/* Additional insns to introduce a pruning point. */\
+	r3 = 1;						\
+	r3 = 1;						\
+	r3 = 1;						\
+	r3 = 1;						\
+	call %[bpf_get_prandom_u32];			\
+	if r0 == 0 goto l0_%=;				\
+	r3 = 1;						\
+l0_%=:	w3 /= 0;					\
+	/* u32 spills, u64 fill */			\
+	*(u32*)(r10 - 4) = r6;				\
+	*(u32*)(r10 - 8) = r7;				\
+	r8 = *(u64*)(r10 - 8);				\
+	/* if r8 != X goto pc+1  r8 known in fallthrough branch */\
+	if r8 != 0xffffffff goto l1_%=;			\
+	r3 = 1;						\
+l1_%=:	/* if r8 == X goto pc+1  condition always true on first\
+	 * traversal, so starts backtracking to mark r8 as requiring\
+	 * precision. r7 marked as needing precision. r6 not marked\
+	 * since it's not tracked.			\
+	 */						\
+	if r8 == 0xffffffff goto l2_%=;			\
+	/* fails if r8 correctly marked unknown after fill. */\
+	w3 /= 0;					\
+l2_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("allocated_stack")
+__success __msg("processed 15 insns")
+__success_unpriv __msg_unpriv("") __log_level(1) __retval(0)
+__naked void allocated_stack(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	call %[bpf_get_prandom_u32];			\
+	r7 = r0;					\
+	if r0 == 0 goto l0_%=;				\
+	r0 = 0;						\
+	*(u64*)(r10 - 8) = r6;				\
+	r6 = *(u64*)(r10 - 8);				\
+	*(u8*)(r10 - 9) = r7;				\
+	r7 = *(u8*)(r10 - 9);				\
+l0_%=:	if r0 != 0 goto l1_%=;				\
+l1_%=:	if r0 != 0 goto l2_%=;				\
+l2_%=:	if r0 != 0 goto l3_%=;				\
+l3_%=:	if r0 != 0 goto l4_%=;				\
+l4_%=:	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+/* The test performs a conditional 64-bit write to a stack location
+ * fp[-8], this is followed by an unconditional 8-bit write to fp[-8],
+ * then data is read from fp[-8]. This sequence is unsafe.
+ *
+ * The test would be mistakenly marked as safe w/o dst register parent
+ * preservation in verifier.c:copy_register_state() function.
+ *
+ * Note the usage of BPF_F_TEST_STATE_FREQ to force creation of the
+ * checkpoint state after conditional 64-bit assignment.
+ */
+
+SEC("socket")
+__description("write tracking and register parent chain bug")
+/* in privileged mode reads from uninitialized stack locations are permitted */
+__success __failure_unpriv
+__msg_unpriv("invalid read from stack off -8+1 size 8")
+__retval(0) __flag(BPF_F_TEST_STATE_FREQ)
+__naked void and_register_parent_chain_bug(void)
+{
+	asm volatile ("					\
+	/* r6 = ktime_get_ns() */			\
+	call %[bpf_ktime_get_ns];			\
+	r6 = r0;					\
+	/* r0 = ktime_get_ns() */			\
+	call %[bpf_ktime_get_ns];			\
+	/* if r0 > r6 goto +1 */			\
+	if r0 > r6 goto l0_%=;				\
+	/* *(u64 *)(r10 - 8) = 0xdeadbeef */		\
+	r0 = 0xdeadbeef;				\
+	*(u64*)(r10 - 8) = r0;				\
+l0_%=:	r1 = 42;					\
+	*(u8*)(r10 - 8) = r1;				\
+	r2 = *(u64*)(r10 - 8);				\
+	/* exit(0) */					\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/search_pruning.c b/tools/testing/selftests/bpf/verifier/search_pruning.c
deleted file mode 100644
index 745d6b5842fd..000000000000
--- a/tools/testing/selftests/bpf/verifier/search_pruning.c
+++ /dev/null
@@ -1,266 +0,0 @@
-{
-	"pointer/scalar confusion in state equality check (way 1)",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
-	BPF_JMP_A(1),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
-	BPF_JMP_A(0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.result = ACCEPT,
-	.retval = POINTER_VALUE,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R0 leaks addr as return value"
-},
-{
-	"pointer/scalar confusion in state equality check (way 2)",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
-	BPF_JMP_A(1),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.result = ACCEPT,
-	.retval = POINTER_VALUE,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R0 leaks addr as return value"
-},
-{
-	"liveness pruning and write screening",
-	.insns = {
-	/* Get an unknown value */
-	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0),
-	/* branch conditions teach us nothing about R2 */
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 1),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 1),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr = "R0 !read_ok",
-	.result = REJECT,
-	.prog_type = BPF_PROG_TYPE_LWT_IN,
-},
-{
-	"varlen_map_value_access pruning",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
-	BPF_MOV32_IMM(BPF_REG_2, MAX_ENTRIES),
-	BPF_JMP_REG(BPF_JSGT, BPF_REG_2, BPF_REG_1, 1),
-	BPF_MOV32_IMM(BPF_REG_1, 0),
-	BPF_ALU32_IMM(BPF_LSH, BPF_REG_1, 2),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
-	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_48b = { 3 },
-	.errstr_unpriv = "R0 leaks addr",
-	.errstr = "R0 unbounded memory access",
-	.result_unpriv = REJECT,
-	.result = REJECT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"search pruning: all branches should be verified (nop operation)",
-	.insns = {
-		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-		BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
-		BPF_LD_MAP_FD(BPF_REG_1, 0),
-		BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
-		BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_3, 0xbeef, 2),
-		BPF_MOV64_IMM(BPF_REG_4, 0),
-		BPF_JMP_A(1),
-		BPF_MOV64_IMM(BPF_REG_4, 1),
-		BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_4, -16),
-		BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
-		BPF_LDX_MEM(BPF_DW, BPF_REG_5, BPF_REG_10, -16),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_5, 0, 2),
-		BPF_MOV64_IMM(BPF_REG_6, 0),
-		BPF_ST_MEM(BPF_DW, BPF_REG_6, 0, 0xdead),
-		BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.errstr = "R6 invalid mem access 'scalar'",
-	.result = REJECT,
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-},
-{
-	"search pruning: all branches should be verified (invalid stack access)",
-	.insns = {
-		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-		BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
-		BPF_LD_MAP_FD(BPF_REG_1, 0),
-		BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
-		BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0),
-		BPF_MOV64_IMM(BPF_REG_4, 0),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_3, 0xbeef, 2),
-		BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_4, -16),
-		BPF_JMP_A(1),
-		BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_4, -24),
-		BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
-		BPF_LDX_MEM(BPF_DW, BPF_REG_5, BPF_REG_10, -16),
-		BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.errstr_unpriv = "invalid read from stack off -16+0 size 8",
-	.result_unpriv = REJECT,
-	/* in privileged mode reads from uninitialized stack locations are permitted */
-	.result = ACCEPT,
-},
-{
-	"precision tracking for u32 spill/fill",
-	.insns = {
-		BPF_MOV64_REG(BPF_REG_7, BPF_REG_1),
-		BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
-		BPF_MOV32_IMM(BPF_REG_6, 32),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
-		BPF_MOV32_IMM(BPF_REG_6, 4),
-		/* Additional insns to introduce a pruning point. */
-		BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
-		BPF_MOV64_IMM(BPF_REG_3, 0),
-		BPF_MOV64_IMM(BPF_REG_3, 0),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
-		BPF_MOV64_IMM(BPF_REG_3, 0),
-		/* u32 spill/fill */
-		BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_6, -8),
-		BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_10, -8),
-		/* out-of-bound map value access for r6=32 */
-		BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0),
-		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
-		BPF_LD_MAP_FD(BPF_REG_1, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
-		BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_8),
-		BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 15 },
-	.result = REJECT,
-	.errstr = "R0 min value is outside of the allowed memory range",
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-},
-{
-	"precision tracking for u32 spills, u64 fill",
-	.insns = {
-		BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
-		BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-		BPF_MOV32_IMM(BPF_REG_7, 0xffffffff),
-		/* Additional insns to introduce a pruning point. */
-		BPF_MOV64_IMM(BPF_REG_3, 1),
-		BPF_MOV64_IMM(BPF_REG_3, 1),
-		BPF_MOV64_IMM(BPF_REG_3, 1),
-		BPF_MOV64_IMM(BPF_REG_3, 1),
-		BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
-		BPF_MOV64_IMM(BPF_REG_3, 1),
-		BPF_ALU32_IMM(BPF_DIV, BPF_REG_3, 0),
-		/* u32 spills, u64 fill */
-		BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_6, -4),
-		BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_7, -8),
-		BPF_LDX_MEM(BPF_DW, BPF_REG_8, BPF_REG_10, -8),
-		/* if r8 != X goto pc+1  r8 known in fallthrough branch */
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_8, 0xffffffff, 1),
-		BPF_MOV64_IMM(BPF_REG_3, 1),
-		/* if r8 == X goto pc+1  condition always true on first
-		 * traversal, so starts backtracking to mark r8 as requiring
-		 * precision. r7 marked as needing precision. r6 not marked
-		 * since it's not tracked.
-		 */
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_8, 0xffffffff, 1),
-		/* fails if r8 correctly marked unknown after fill. */
-		BPF_ALU32_IMM(BPF_DIV, BPF_REG_3, 0),
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	.errstr = "div by zero",
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-},
-{
-	"allocated_stack",
-	.insns = {
-		BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_1),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-		BPF_ALU64_REG(BPF_MOV, BPF_REG_7, BPF_REG_0),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
-		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -8),
-		BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_10, -8),
-		BPF_STX_MEM(BPF_B, BPF_REG_10, BPF_REG_7, -9),
-		BPF_LDX_MEM(BPF_B, BPF_REG_7, BPF_REG_10, -9),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 0),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 0),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 0),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 0),
-		BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.result_unpriv = ACCEPT,
-	.insn_processed = 15,
-},
-/* The test performs a conditional 64-bit write to a stack location
- * fp[-8], this is followed by an unconditional 8-bit write to fp[-8],
- * then data is read from fp[-8]. This sequence is unsafe.
- *
- * The test would be mistakenly marked as safe w/o dst register parent
- * preservation in verifier.c:copy_register_state() function.
- *
- * Note the usage of BPF_F_TEST_STATE_FREQ to force creation of the
- * checkpoint state after conditional 64-bit assignment.
- */
-{
-	"write tracking and register parent chain bug",
-	.insns = {
-	/* r6 = ktime_get_ns() */
-	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	/* r0 = ktime_get_ns() */
-	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
-	/* if r0 > r6 goto +1 */
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_6, 1),
-	/* *(u64 *)(r10 - 8) = 0xdeadbeef */
-	BPF_ST_MEM(BPF_DW, BPF_REG_FP, -8, 0xdeadbeef),
-	/* r1 = 42 */
-	BPF_MOV64_IMM(BPF_REG_1, 42),
-	/* *(u8 *)(r10 - 8) = r1 */
-	BPF_STX_MEM(BPF_B, BPF_REG_FP, BPF_REG_1, -8),
-	/* r2 = *(u64 *)(r10 - 8) */
-	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_FP, -8),
-	/* exit(0) */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.flags = BPF_F_TEST_STATE_FREQ,
-	.errstr_unpriv = "invalid read from stack off -8+1 size 8",
-	.result_unpriv = REJECT,
-	/* in privileged mode reads from uninitialized stack locations are permitted */
-	.result = ACCEPT,
-},
-- 
2.40.0



* [PATCH bpf-next 19/24] selftests/bpf: verifier/sock converted to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (17 preceding siblings ...)
  2023-04-21 17:42 ` [PATCH bpf-next 18/24] selftests/bpf: verifier/search_pruning " Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 20/24] selftests/bpf: verifier/spin_lock " Eduard Zingerman
                   ` (6 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/sock automatically converted to use inline assembly.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../selftests/bpf/progs/verifier_sock.c       | 980 ++++++++++++++++++
 tools/testing/selftests/bpf/verifier/sock.c   | 706 -------------
 3 files changed, 982 insertions(+), 706 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_sock.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/sock.c
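
Note: most of the early tests in the diff below revolve around the pointer
type ladder for skb->sk. As a sketch only (not part of the patch, and with
a hypothetical function name), the same checks look as follows in plain C;
skipping either NULL check produces the "invalid mem access
'sock_common_or_null'" / "invalid sock_common access" errors that the
negative tests expect:

    /* Assumes the usual selftest includes (<linux/bpf.h>, bpf_helpers.h). */
    SEC("cgroup/skb")
    int sk_type_sketch(struct __sk_buff *skb)
    {
            struct bpf_sock *sk = skb->sk;  /* sock_common_or_null */

            if (!sk)
                    return 1;
            sk = bpf_sk_fullsock(sk);       /* sock_or_null */
            if (!sk)
                    return 1;
            /* sk->type is a fullsock-only field; sk->family may also be
             * read through the sock_common pointer before bpf_sk_fullsock().
             */
            return sk->type ? 1 : 0;
    }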

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 1ef44e699e9c..60bcff62d968 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -50,6 +50,7 @@
 #include "verifier_ringbuf.skel.h"
 #include "verifier_runtime_jit.skel.h"
 #include "verifier_search_pruning.skel.h"
+#include "verifier_sock.skel.h"
 #include "verifier_spill_fill.skel.h"
 #include "verifier_stack_ptr.skel.h"
 #include "verifier_uninit.skel.h"
@@ -141,6 +142,7 @@ void test_verifier_regalloc(void)             { RUN(verifier_regalloc); }
 void test_verifier_ringbuf(void)              { RUN(verifier_ringbuf); }
 void test_verifier_runtime_jit(void)          { RUN(verifier_runtime_jit); }
 void test_verifier_search_pruning(void)       { RUN(verifier_search_pruning); }
+void test_verifier_sock(void)                 { RUN(verifier_sock); }
 void test_verifier_spill_fill(void)           { RUN(verifier_spill_fill); }
 void test_verifier_stack_ptr(void)            { RUN(verifier_stack_ptr); }
 void test_verifier_uninit(void)               { RUN(verifier_uninit); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_sock.c b/tools/testing/selftests/bpf/progs/verifier_sock.c
new file mode 100644
index 000000000000..ee76b51005ab
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_sock.c
@@ -0,0 +1,980 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/sock.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+#define sizeof_field(TYPE, MEMBER) sizeof((((TYPE *)0)->MEMBER))
+#define offsetofend(TYPE, MEMBER) \
+	(offsetof(TYPE, MEMBER)	+ sizeof_field(TYPE, MEMBER))
+
+struct {
+	__uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
+	__uint(max_entries, 1);
+	__type(key, __u32);
+	__type(value, __u64);
+} map_reuseport_array SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SOCKHASH);
+	__uint(max_entries, 1);
+	__type(key, int);
+	__type(value, int);
+} map_sockhash SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SOCKMAP);
+	__uint(max_entries, 1);
+	__type(key, int);
+	__type(value, int);
+} map_sockmap SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_XSKMAP);
+	__uint(max_entries, 1);
+	__type(key, int);
+	__type(value, int);
+} map_xskmap SEC(".maps");
+
+struct val {
+	int cnt;
+	struct bpf_spin_lock l;
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SK_STORAGE);
+	__uint(max_entries, 0);
+	__type(key, int);
+	__type(value, struct val);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+} sk_storage_map SEC(".maps");
+
+SEC("cgroup/skb")
+__description("skb->sk: no NULL check")
+__failure __msg("invalid mem access 'sock_common_or_null'")
+__failure_unpriv
+__naked void skb_sk_no_null_check(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	r0 = *(u32*)(r1 + 0);				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("skb->sk: sk->family [non fullsock field]")
+__success __success_unpriv __retval(0)
+__naked void sk_family_non_fullsock_field_1(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 = *(u32*)(r1 + %[bpf_sock_family]);		\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_sock_family, offsetof(struct bpf_sock, family))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("skb->sk: sk->type [fullsock field]")
+__failure __msg("invalid sock_common access")
+__failure_unpriv
+__naked void sk_sk_type_fullsock_field_1(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	r0 = *(u32*)(r1 + %[bpf_sock_type]);		\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("bpf_sk_fullsock(skb->sk): no !skb->sk check")
+__failure __msg("type=sock_common_or_null expected=sock_common")
+__failure_unpriv
+__naked void sk_no_skb_sk_check_1(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	call %[bpf_sk_fullsock];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("sk_fullsock(skb->sk): no NULL check on ret")
+__failure __msg("invalid mem access 'sock_or_null'")
+__failure_unpriv
+__naked void no_null_check_on_ret_1(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_fullsock];			\
+	r0 = *(u32*)(r0 + %[bpf_sock_type]);		\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("sk_fullsock(skb->sk): sk->type [fullsock field]")
+__success __success_unpriv __retval(0)
+__naked void sk_sk_type_fullsock_field_2(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_fullsock];			\
+	if r0 != 0 goto l1_%=;				\
+	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u32*)(r0 + %[bpf_sock_type]);		\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("sk_fullsock(skb->sk): sk->family [non fullsock field]")
+__success __success_unpriv __retval(0)
+__naked void sk_family_non_fullsock_field_2(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_fullsock];			\
+	if r0 != 0 goto l1_%=;				\
+	exit;						\
+l1_%=:	r0 = *(u32*)(r0 + %[bpf_sock_family]);		\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_sock_family, offsetof(struct bpf_sock, family))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("sk_fullsock(skb->sk): sk->state [narrow load]")
+__success __success_unpriv __retval(0)
+__naked void sk_sk_state_narrow_load(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_fullsock];			\
+	if r0 != 0 goto l1_%=;				\
+	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u8*)(r0 + %[bpf_sock_state]);		\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_sock_state, offsetof(struct bpf_sock, state))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("sk_fullsock(skb->sk): sk->dst_port [word load] (backward compatibility)")
+__success __success_unpriv __retval(0)
+__naked void port_word_load_backward_compatibility(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_fullsock];			\
+	if r0 != 0 goto l1_%=;				\
+	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u32*)(r0 + %[bpf_sock_dst_port]);	\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_sock_dst_port, offsetof(struct bpf_sock, dst_port))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("sk_fullsock(skb->sk): sk->dst_port [half load]")
+__success __success_unpriv __retval(0)
+__naked void sk_dst_port_half_load(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_fullsock];			\
+	if r0 != 0 goto l1_%=;				\
+	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u16*)(r0 + %[bpf_sock_dst_port]);	\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_sock_dst_port, offsetof(struct bpf_sock, dst_port))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("sk_fullsock(skb->sk): sk->dst_port [half load] (invalid)")
+__failure __msg("invalid sock access")
+__failure_unpriv
+__naked void dst_port_half_load_invalid_1(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_fullsock];			\
+	if r0 != 0 goto l1_%=;				\
+	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u16*)(r0 + %[__imm_0]);			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm_const(__imm_0, offsetof(struct bpf_sock, dst_port) + 2),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("sk_fullsock(skb->sk): sk->dst_port [byte load]")
+__success __success_unpriv __retval(0)
+__naked void sk_dst_port_byte_load(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_fullsock];			\
+	if r0 != 0 goto l1_%=;				\
+	r0 = 0;						\
+	exit;						\
+l1_%=:	r2 = *(u8*)(r0 + %[bpf_sock_dst_port]);		\
+	r2 = *(u8*)(r0 + %[__imm_0]);			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm_const(__imm_0, offsetof(struct bpf_sock, dst_port) + 1),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_sock_dst_port, offsetof(struct bpf_sock, dst_port))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("sk_fullsock(skb->sk): sk->dst_port [byte load] (invalid)")
+__failure __msg("invalid sock access")
+__failure_unpriv
+__naked void dst_port_byte_load_invalid(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_fullsock];			\
+	if r0 != 0 goto l1_%=;				\
+	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u8*)(r0 + %[__imm_0]);			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm_const(__imm_0, offsetof(struct bpf_sock, dst_port) + 2),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("sk_fullsock(skb->sk): past sk->dst_port [half load] (invalid)")
+__failure __msg("invalid sock access")
+__failure_unpriv
+__naked void dst_port_half_load_invalid_2(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_fullsock];			\
+	if r0 != 0 goto l1_%=;				\
+	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u16*)(r0 + %[bpf_sock_dst_port__end]);	\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_sock_dst_port__end, offsetofend(struct bpf_sock, dst_port))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("sk_fullsock(skb->sk): sk->dst_ip6 [load 2nd byte]")
+__success __success_unpriv __retval(0)
+__naked void dst_ip6_load_2nd_byte(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_fullsock];			\
+	if r0 != 0 goto l1_%=;				\
+	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u8*)(r0 + %[__imm_0]);			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm_const(__imm_0, offsetof(struct bpf_sock, dst_ip6[0]) + 1),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("sk_fullsock(skb->sk): sk->type [narrow load]")
+__success __success_unpriv __retval(0)
+__naked void sk_sk_type_narrow_load(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_fullsock];			\
+	if r0 != 0 goto l1_%=;				\
+	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u8*)(r0 + %[bpf_sock_type]);		\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("sk_fullsock(skb->sk): sk->protocol [narrow load]")
+__success __success_unpriv __retval(0)
+__naked void sk_sk_protocol_narrow_load(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_fullsock];			\
+	if r0 != 0 goto l1_%=;				\
+	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u8*)(r0 + %[bpf_sock_protocol]);		\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_sock_protocol, offsetof(struct bpf_sock, protocol))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("sk_fullsock(skb->sk): beyond last field")
+__failure __msg("invalid sock access")
+__failure_unpriv
+__naked void skb_sk_beyond_last_field_1(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_fullsock];			\
+	if r0 != 0 goto l1_%=;				\
+	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u32*)(r0 + %[bpf_sock_rx_queue_mapping__end]);\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_sock_rx_queue_mapping__end, offsetofend(struct bpf_sock, rx_queue_mapping))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("bpf_tcp_sock(skb->sk): no !skb->sk check")
+__failure __msg("type=sock_common_or_null expected=sock_common")
+__failure_unpriv
+__naked void sk_no_skb_sk_check_2(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	call %[bpf_tcp_sock];				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_tcp_sock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("bpf_tcp_sock(skb->sk): no NULL check on ret")
+__failure __msg("invalid mem access 'tcp_sock_or_null'")
+__failure_unpriv
+__naked void no_null_check_on_ret_2(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_tcp_sock];				\
+	r0 = *(u32*)(r0 + %[bpf_tcp_sock_snd_cwnd]);	\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_tcp_sock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_tcp_sock_snd_cwnd, offsetof(struct bpf_tcp_sock, snd_cwnd))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("bpf_tcp_sock(skb->sk): tp->snd_cwnd")
+__success __success_unpriv __retval(0)
+__naked void skb_sk_tp_snd_cwnd_1(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_tcp_sock];				\
+	if r0 != 0 goto l1_%=;				\
+	exit;						\
+l1_%=:	r0 = *(u32*)(r0 + %[bpf_tcp_sock_snd_cwnd]);	\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_tcp_sock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_tcp_sock_snd_cwnd, offsetof(struct bpf_tcp_sock, snd_cwnd))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("bpf_tcp_sock(skb->sk): tp->bytes_acked")
+__success __success_unpriv __retval(0)
+__naked void skb_sk_tp_bytes_acked(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_tcp_sock];				\
+	if r0 != 0 goto l1_%=;				\
+	exit;						\
+l1_%=:	r0 = *(u64*)(r0 + %[bpf_tcp_sock_bytes_acked]);	\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_tcp_sock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_tcp_sock_bytes_acked, offsetof(struct bpf_tcp_sock, bytes_acked))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("bpf_tcp_sock(skb->sk): beyond last field")
+__failure __msg("invalid tcp_sock access")
+__failure_unpriv
+__naked void skb_sk_beyond_last_field_2(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_tcp_sock];				\
+	if r0 != 0 goto l1_%=;				\
+	exit;						\
+l1_%=:	r0 = *(u64*)(r0 + %[bpf_tcp_sock_bytes_acked__end]);\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_tcp_sock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_tcp_sock_bytes_acked__end, offsetofend(struct bpf_tcp_sock, bytes_acked))
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("bpf_tcp_sock(bpf_sk_fullsock(skb->sk)): tp->snd_cwnd")
+__success __success_unpriv __retval(0)
+__naked void skb_sk_tp_snd_cwnd_2(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_fullsock];			\
+	if r0 != 0 goto l1_%=;				\
+	exit;						\
+l1_%=:	r1 = r0;					\
+	call %[bpf_tcp_sock];				\
+	if r0 != 0 goto l2_%=;				\
+	exit;						\
+l2_%=:	r0 = *(u32*)(r0 + %[bpf_tcp_sock_snd_cwnd]);	\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm(bpf_tcp_sock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk)),
+	  __imm_const(bpf_tcp_sock_snd_cwnd, offsetof(struct bpf_tcp_sock, snd_cwnd))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("bpf_sk_release(skb->sk)")
+__failure __msg("R1 must be referenced when passed to release function")
+__naked void bpf_sk_release_skb_sk(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 == 0 goto l0_%=;				\
+	call %[bpf_sk_release];				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_release),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("bpf_sk_release(bpf_sk_fullsock(skb->sk))")
+__failure __msg("R1 must be referenced when passed to release function")
+__naked void bpf_sk_fullsock_skb_sk(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_fullsock];			\
+	if r0 != 0 goto l1_%=;				\
+	exit;						\
+l1_%=:	r1 = r0;					\
+	call %[bpf_sk_release];				\
+	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm(bpf_sk_release),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("bpf_sk_release(bpf_tcp_sock(skb->sk))")
+__failure __msg("R1 must be referenced when passed to release function")
+__naked void bpf_tcp_sock_skb_sk(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_tcp_sock];				\
+	if r0 != 0 goto l1_%=;				\
+	exit;						\
+l1_%=:	r1 = r0;					\
+	call %[bpf_sk_release];				\
+	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_release),
+	  __imm(bpf_tcp_sock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("sk_storage_get(map, skb->sk, NULL, 0): value == NULL")
+__success __retval(0)
+__naked void sk_null_0_value_null(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_fullsock];			\
+	if r0 != 0 goto l1_%=;				\
+	r0 = 0;						\
+	exit;						\
+l1_%=:	r4 = 0;						\
+	r3 = 0;						\
+	r2 = r0;					\
+	r1 = %[sk_storage_map] ll;			\
+	call %[bpf_sk_storage_get];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm(bpf_sk_storage_get),
+	  __imm_addr(sk_storage_map),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("sk_storage_get(map, skb->sk, 1, 1): value == 1")
+__failure __msg("R3 type=scalar expected=fp")
+__naked void sk_1_1_value_1(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_fullsock];			\
+	if r0 != 0 goto l1_%=;				\
+	r0 = 0;						\
+	exit;						\
+l1_%=:	r4 = 1;						\
+	r3 = 1;						\
+	r2 = r0;					\
+	r1 = %[sk_storage_map] ll;			\
+	call %[bpf_sk_storage_get];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm(bpf_sk_storage_get),
+	  __imm_addr(sk_storage_map),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("sk_storage_get(map, skb->sk, &stack_value, 1): stack_value")
+__success __retval(0)
+__naked void stack_value_1_stack_value(void)
+{
+	asm volatile ("					\
+	r2 = 0;						\
+	*(u64*)(r10 - 8) = r2;				\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	call %[bpf_sk_fullsock];			\
+	if r0 != 0 goto l1_%=;				\
+	r0 = 0;						\
+	exit;						\
+l1_%=:	r4 = 1;						\
+	r3 = r10;					\
+	r3 += -8;					\
+	r2 = r0;					\
+	r1 = %[sk_storage_map] ll;			\
+	call %[bpf_sk_storage_get];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_fullsock),
+	  __imm(bpf_sk_storage_get),
+	  __imm_addr(sk_storage_map),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("bpf_map_lookup_elem(smap, &key)")
+__failure __msg("cannot pass map_type 24 into func bpf_map_lookup_elem")
+__naked void map_lookup_elem_smap_key(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[sk_storage_map] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(sk_storage_map)
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("bpf_map_lookup_elem(xskmap, &key); xs->queue_id")
+__success __retval(0)
+__naked void xskmap_key_xs_queue_id(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_xskmap] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r0 = *(u32*)(r0 + %[bpf_xdp_sock_queue_id]);	\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_xskmap),
+	  __imm_const(bpf_xdp_sock_queue_id, offsetof(struct bpf_xdp_sock, queue_id))
+	: __clobber_all);
+}
+
+SEC("sk_skb")
+__description("bpf_map_lookup_elem(sockmap, &key)")
+__failure __msg("Unreleased reference id=2 alloc_insn=6")
+__naked void map_lookup_elem_sockmap_key(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_sockmap] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_sockmap)
+	: __clobber_all);
+}
+
+SEC("sk_skb")
+__description("bpf_map_lookup_elem(sockhash, &key)")
+__failure __msg("Unreleased reference id=2 alloc_insn=6")
+__naked void map_lookup_elem_sockhash_key(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_sockhash] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_sockhash)
+	: __clobber_all);
+}
+
+SEC("sk_skb")
+__description("bpf_map_lookup_elem(sockmap, &key); sk->type [fullsock field]; bpf_sk_release(sk)")
+__success
+__naked void field_bpf_sk_release_sk_1(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_sockmap] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r1 = r0;					\
+	r0 = *(u32*)(r0 + %[bpf_sock_type]);		\
+	call %[bpf_sk_release];				\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm(bpf_sk_release),
+	  __imm_addr(map_sockmap),
+	  __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type))
+	: __clobber_all);
+}
+
+SEC("sk_skb")
+__description("bpf_map_lookup_elem(sockhash, &key); sk->type [fullsock field]; bpf_sk_release(sk)")
+__success
+__naked void field_bpf_sk_release_sk_2(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_sockhash] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r1 = r0;					\
+	r0 = *(u32*)(r0 + %[bpf_sock_type]);		\
+	call %[bpf_sk_release];				\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm(bpf_sk_release),
+	  __imm_addr(map_sockhash),
+	  __imm_const(bpf_sock_type, offsetof(struct bpf_sock, type))
+	: __clobber_all);
+}
+
+SEC("sk_reuseport")
+__description("bpf_sk_select_reuseport(ctx, reuseport_array, &key, flags)")
+__success
+__naked void ctx_reuseport_array_key_flags(void)
+{
+	asm volatile ("					\
+	r4 = 0;						\
+	r2 = 0;						\
+	*(u32*)(r10 - 4) = r2;				\
+	r3 = r10;					\
+	r3 += -4;					\
+	r2 = %[map_reuseport_array] ll;			\
+	call %[bpf_sk_select_reuseport];		\
+	exit;						\
+"	:
+	: __imm(bpf_sk_select_reuseport),
+	  __imm_addr(map_reuseport_array)
+	: __clobber_all);
+}
+
+SEC("sk_reuseport")
+__description("bpf_sk_select_reuseport(ctx, sockmap, &key, flags)")
+__success
+__naked void reuseport_ctx_sockmap_key_flags(void)
+{
+	asm volatile ("					\
+	r4 = 0;						\
+	r2 = 0;						\
+	*(u32*)(r10 - 4) = r2;				\
+	r3 = r10;					\
+	r3 += -4;					\
+	r2 = %[map_sockmap] ll;				\
+	call %[bpf_sk_select_reuseport];		\
+	exit;						\
+"	:
+	: __imm(bpf_sk_select_reuseport),
+	  __imm_addr(map_sockmap)
+	: __clobber_all);
+}
+
+SEC("sk_reuseport")
+__description("bpf_sk_select_reuseport(ctx, sockhash, &key, flags)")
+__success
+__naked void reuseport_ctx_sockhash_key_flags(void)
+{
+	asm volatile ("					\
+	r4 = 0;						\
+	r2 = 0;						\
+	*(u32*)(r10 - 4) = r2;				\
+	r3 = r10;					\
+	r3 += -4;					\
+	r2 = %[map_sockmap] ll;				\
+	call %[bpf_sk_select_reuseport];		\
+	exit;						\
+"	:
+	: __imm(bpf_sk_select_reuseport),
+	  __imm_addr(map_sockmap)
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("mark null check on return value of bpf_skc_to helpers")
+__failure __msg("invalid mem access")
+__naked void of_bpf_skc_to_helpers(void)
+{
+	asm volatile ("					\
+	r1 = *(u64*)(r1 + %[__sk_buff_sk]);		\
+	if r1 != 0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:	r6 = r1;					\
+	call %[bpf_skc_to_tcp_sock];			\
+	r7 = r0;					\
+	r1 = r6;					\
+	call %[bpf_skc_to_tcp_request_sock];		\
+	r8 = r0;					\
+	if r8 != 0 goto l1_%=;				\
+	r0 = 0;						\
+	exit;						\
+l1_%=:	r0 = *(u8*)(r7 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_skc_to_tcp_request_sock),
+	  __imm(bpf_skc_to_tcp_sock),
+	  __imm_const(__sk_buff_sk, offsetof(struct __sk_buff, sk))
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/sock.c b/tools/testing/selftests/bpf/verifier/sock.c
deleted file mode 100644
index 108dd3ee1edd..000000000000
--- a/tools/testing/selftests/bpf/verifier/sock.c
+++ /dev/null
@@ -1,706 +0,0 @@
-{
-	"skb->sk: no NULL check",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = REJECT,
-	.errstr = "invalid mem access 'sock_common_or_null'",
-},
-{
-	"skb->sk: sk->family [non fullsock field]",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, offsetof(struct bpf_sock, family)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = ACCEPT,
-},
-{
-	"skb->sk: sk->type [fullsock field]",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, offsetof(struct bpf_sock, type)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = REJECT,
-	.errstr = "invalid sock_common access",
-},
-{
-	"bpf_sk_fullsock(skb->sk): no !skb->sk check",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = REJECT,
-	.errstr = "type=sock_common_or_null expected=sock_common",
-},
-{
-	"sk_fullsock(skb->sk): no NULL check on ret",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, type)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = REJECT,
-	.errstr = "invalid mem access 'sock_or_null'",
-},
-{
-	"sk_fullsock(skb->sk): sk->type [fullsock field]",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, type)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = ACCEPT,
-},
-{
-	"sk_fullsock(skb->sk): sk->family [non fullsock field]",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, family)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = ACCEPT,
-},
-{
-	"sk_fullsock(skb->sk): sk->state [narrow load]",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, state)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = ACCEPT,
-},
-{
-	"sk_fullsock(skb->sk): sk->dst_port [word load] (backward compatibility)",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = ACCEPT,
-},
-{
-	"sk_fullsock(skb->sk): sk->dst_port [half load]",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = ACCEPT,
-},
-{
-	"sk_fullsock(skb->sk): sk->dst_port [half load] (invalid)",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = REJECT,
-	.errstr = "invalid sock access",
-},
-{
-	"sk_fullsock(skb->sk): sk->dst_port [byte load]",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_B, BPF_REG_2, BPF_REG_0, offsetof(struct bpf_sock, dst_port)),
-	BPF_LDX_MEM(BPF_B, BPF_REG_2, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 1),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = ACCEPT,
-},
-{
-	"sk_fullsock(skb->sk): sk->dst_port [byte load] (invalid)",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = REJECT,
-	.errstr = "invalid sock access",
-},
-{
-	"sk_fullsock(skb->sk): past sk->dst_port [half load] (invalid)",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_0, offsetofend(struct bpf_sock, dst_port)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = REJECT,
-	.errstr = "invalid sock access",
-},
-{
-	"sk_fullsock(skb->sk): sk->dst_ip6 [load 2nd byte]",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_ip6[0]) + 1),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = ACCEPT,
-},
-{
-	"sk_fullsock(skb->sk): sk->type [narrow load]",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, type)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = ACCEPT,
-},
-{
-	"sk_fullsock(skb->sk): sk->protocol [narrow load]",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, protocol)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = ACCEPT,
-},
-{
-	"sk_fullsock(skb->sk): beyond last field",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetofend(struct bpf_sock, rx_queue_mapping)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = REJECT,
-	.errstr = "invalid sock access",
-},
-{
-	"bpf_tcp_sock(skb->sk): no !skb->sk check",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = REJECT,
-	.errstr = "type=sock_common_or_null expected=sock_common",
-},
-{
-	"bpf_tcp_sock(skb->sk): no NULL check on ret",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_tcp_sock, snd_cwnd)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = REJECT,
-	.errstr = "invalid mem access 'tcp_sock_or_null'",
-},
-{
-	"bpf_tcp_sock(skb->sk): tp->snd_cwnd",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_tcp_sock, snd_cwnd)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = ACCEPT,
-},
-{
-	"bpf_tcp_sock(skb->sk): tp->bytes_acked",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_tcp_sock, bytes_acked)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = ACCEPT,
-},
-{
-	"bpf_tcp_sock(skb->sk): beyond last field",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, offsetofend(struct bpf_tcp_sock, bytes_acked)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = REJECT,
-	.errstr = "invalid tcp_sock access",
-},
-{
-	"bpf_tcp_sock(bpf_sk_fullsock(skb->sk)): tp->snd_cwnd",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_tcp_sock, snd_cwnd)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.result = ACCEPT,
-},
-{
-	"bpf_sk_release(skb->sk)",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = REJECT,
-	.errstr = "R1 must be referenced when passed to release function",
-},
-{
-	"bpf_sk_release(bpf_sk_fullsock(skb->sk))",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = REJECT,
-	.errstr = "R1 must be referenced when passed to release function",
-},
-{
-	"bpf_sk_release(bpf_tcp_sock(skb->sk))",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = REJECT,
-	.errstr = "R1 must be referenced when passed to release function",
-},
-{
-	"sk_storage_get(map, skb->sk, NULL, 0): value == NULL",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_storage_get),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_sk_storage_map = { 11 },
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-},
-{
-	"sk_storage_get(map, skb->sk, 1, 1): value == 1",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_IMM(BPF_REG_4, 1),
-	BPF_MOV64_IMM(BPF_REG_3, 1),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_storage_get),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_sk_storage_map = { 11 },
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = REJECT,
-	.errstr = "R3 type=scalar expected=fp",
-},
-{
-	"sk_storage_get(map, skb->sk, &stack_value, 1): stack_value",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -8),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_IMM(BPF_REG_4, 1),
-	BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -8),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_storage_get),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_sk_storage_map = { 14 },
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-},
-{
-	"bpf_map_lookup_elem(smap, &key)",
-	.insns = {
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_sk_storage_map = { 3 },
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = REJECT,
-	.errstr = "cannot pass map_type 24 into func bpf_map_lookup_elem",
-},
-{
-	"bpf_map_lookup_elem(xskmap, &key); xs->queue_id",
-	.insns = {
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_xdp_sock, queue_id)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_xskmap = { 3 },
-	.prog_type = BPF_PROG_TYPE_XDP,
-	.result = ACCEPT,
-},
-{
-	"bpf_map_lookup_elem(sockmap, &key)",
-	.insns = {
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_sockmap = { 3 },
-	.prog_type = BPF_PROG_TYPE_SK_SKB,
-	.result = REJECT,
-	.errstr = "Unreleased reference id=2 alloc_insn=5",
-},
-{
-	"bpf_map_lookup_elem(sockhash, &key)",
-	.insns = {
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_sockhash = { 3 },
-	.prog_type = BPF_PROG_TYPE_SK_SKB,
-	.result = REJECT,
-	.errstr = "Unreleased reference id=2 alloc_insn=5",
-},
-{
-	"bpf_map_lookup_elem(sockmap, &key); sk->type [fullsock field]; bpf_sk_release(sk)",
-	.insns = {
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, type)),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_sockmap = { 3 },
-	.prog_type = BPF_PROG_TYPE_SK_SKB,
-	.result = ACCEPT,
-},
-{
-	"bpf_map_lookup_elem(sockhash, &key); sk->type [fullsock field]; bpf_sk_release(sk)",
-	.insns = {
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, type)),
-	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_sockhash = { 3 },
-	.prog_type = BPF_PROG_TYPE_SK_SKB,
-	.result = ACCEPT,
-},
-{
-	"bpf_sk_select_reuseport(ctx, reuseport_array, &key, flags)",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -4),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_select_reuseport),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_reuseport_array = { 4 },
-	.prog_type = BPF_PROG_TYPE_SK_REUSEPORT,
-	.result = ACCEPT,
-},
-{
-	"bpf_sk_select_reuseport(ctx, sockmap, &key, flags)",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -4),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_select_reuseport),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_sockmap = { 4 },
-	.prog_type = BPF_PROG_TYPE_SK_REUSEPORT,
-	.result = ACCEPT,
-},
-{
-	"bpf_sk_select_reuseport(ctx, sockhash, &key, flags)",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -4),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_EMIT_CALL(BPF_FUNC_sk_select_reuseport),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_sockmap = { 4 },
-	.prog_type = BPF_PROG_TYPE_SK_REUSEPORT,
-	.result = ACCEPT,
-},
-{
-	"mark null check on return value of bpf_skc_to helpers",
-	.insns = {
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_EMIT_CALL(BPF_FUNC_skc_to_tcp_sock),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_EMIT_CALL(BPF_FUNC_skc_to_tcp_request_sock),
-	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_8, 0, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_7, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = REJECT,
-	.errstr = "invalid mem access",
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "unknown func",
-},
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH bpf-next 20/24] selftests/bpf: verifier/spin_lock converted to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (18 preceding siblings ...)
  2023-04-21 17:42 ` [PATCH bpf-next 19/24] selftests/bpf: verifier/sock " Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 21/24] selftests/bpf: verifier/subreg " Eduard Zingerman
                   ` (5 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/spin_lock is automatically converted to use inline assembly.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../selftests/bpf/progs/verifier_spin_lock.c  | 533 ++++++++++++++++++
 .../selftests/bpf/verifier/spin_lock.c        | 447 ---------------
 3 files changed, 535 insertions(+), 447 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_spin_lock.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/spin_lock.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 60bcff62d968..0ea88282859d 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -52,6 +52,7 @@
 #include "verifier_search_pruning.skel.h"
 #include "verifier_sock.skel.h"
 #include "verifier_spill_fill.skel.h"
+#include "verifier_spin_lock.skel.h"
 #include "verifier_stack_ptr.skel.h"
 #include "verifier_uninit.skel.h"
 #include "verifier_value_adj_spill.skel.h"
@@ -144,6 +145,7 @@ void test_verifier_runtime_jit(void)          { RUN(verifier_runtime_jit); }
 void test_verifier_search_pruning(void)       { RUN(verifier_search_pruning); }
 void test_verifier_sock(void)                 { RUN(verifier_sock); }
 void test_verifier_spill_fill(void)           { RUN(verifier_spill_fill); }
+void test_verifier_spin_lock(void)            { RUN(verifier_spin_lock); }
 void test_verifier_stack_ptr(void)            { RUN(verifier_stack_ptr); }
 void test_verifier_uninit(void)               { RUN(verifier_uninit); }
 void test_verifier_value_adj_spill(void)      { RUN(verifier_value_adj_spill); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_spin_lock.c b/tools/testing/selftests/bpf/progs/verifier_spin_lock.c
new file mode 100644
index 000000000000..9c1aa69650f8
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_spin_lock.c
@@ -0,0 +1,533 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/spin_lock.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+struct val {
+	int cnt;
+	struct bpf_spin_lock l;
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, int);
+	__type(value, struct val);
+} map_spin_lock SEC(".maps");
+
+SEC("cgroup/skb")
+__description("spin_lock: test1 success")
+__success __failure_unpriv __msg_unpriv("")
+__retval(0)
+__naked void spin_lock_test1_success(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_spin_lock] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = r0;					\
+	r1 = r0;					\
+	r1 += 4;					\
+	call %[bpf_spin_lock];				\
+	r1 = r6;					\
+	r1 += 4;					\
+	r0 = *(u32*)(r6 + 0);				\
+	call %[bpf_spin_unlock];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm(bpf_spin_lock),
+	  __imm(bpf_spin_unlock),
+	  __imm_addr(map_spin_lock)
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("spin_lock: test2 direct ld/st")
+__failure __msg("cannot be accessed directly")
+__failure_unpriv __msg_unpriv("")
+__naked void lock_test2_direct_ld_st(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_spin_lock] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = r0;					\
+	r1 = r0;					\
+	r1 += 4;					\
+	call %[bpf_spin_lock];				\
+	r1 = r6;					\
+	r1 += 4;					\
+	r0 = *(u32*)(r1 + 0);				\
+	call %[bpf_spin_unlock];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm(bpf_spin_lock),
+	  __imm(bpf_spin_unlock),
+	  __imm_addr(map_spin_lock)
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("spin_lock: test3 direct ld/st")
+__failure __msg("cannot be accessed directly")
+__failure_unpriv __msg_unpriv("")
+__flag(BPF_F_ANY_ALIGNMENT)
+__naked void lock_test3_direct_ld_st(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_spin_lock] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = r0;					\
+	r1 = r0;					\
+	r1 += 4;					\
+	call %[bpf_spin_lock];				\
+	r1 = r6;					\
+	r1 += 4;					\
+	r0 = *(u32*)(r6 + 1);				\
+	call %[bpf_spin_unlock];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm(bpf_spin_lock),
+	  __imm(bpf_spin_unlock),
+	  __imm_addr(map_spin_lock)
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("spin_lock: test4 direct ld/st")
+__failure __msg("cannot be accessed directly")
+__failure_unpriv __msg_unpriv("")
+__flag(BPF_F_ANY_ALIGNMENT)
+__naked void lock_test4_direct_ld_st(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_spin_lock] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = r0;					\
+	r1 = r0;					\
+	r1 += 4;					\
+	call %[bpf_spin_lock];				\
+	r1 = r6;					\
+	r1 += 4;					\
+	r0 = *(u16*)(r6 + 3);				\
+	call %[bpf_spin_unlock];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm(bpf_spin_lock),
+	  __imm(bpf_spin_unlock),
+	  __imm_addr(map_spin_lock)
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("spin_lock: test5 call within a locked region")
+__failure __msg("calls are not allowed")
+__failure_unpriv __msg_unpriv("")
+__naked void call_within_a_locked_region(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_spin_lock] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = r0;					\
+	r1 = r0;					\
+	r1 += 4;					\
+	call %[bpf_spin_lock];				\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r6;					\
+	r1 += 4;					\
+	call %[bpf_spin_unlock];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32),
+	  __imm(bpf_map_lookup_elem),
+	  __imm(bpf_spin_lock),
+	  __imm(bpf_spin_unlock),
+	  __imm_addr(map_spin_lock)
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("spin_lock: test6 missing unlock")
+__failure __msg("unlock is missing")
+__failure_unpriv __msg_unpriv("")
+__naked void spin_lock_test6_missing_unlock(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_spin_lock] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = r0;					\
+	r1 = r0;					\
+	r1 += 4;					\
+	call %[bpf_spin_lock];				\
+	r1 = r6;					\
+	r1 += 4;					\
+	r0 = *(u32*)(r6 + 0);				\
+	if r0 != 0 goto l1_%=;				\
+	call %[bpf_spin_unlock];			\
+l1_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm(bpf_spin_lock),
+	  __imm(bpf_spin_unlock),
+	  __imm_addr(map_spin_lock)
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("spin_lock: test7 unlock without lock")
+__failure __msg("without taking a lock")
+__failure_unpriv __msg_unpriv("")
+__naked void lock_test7_unlock_without_lock(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_spin_lock] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = r0;					\
+	r1 = r0;					\
+	r1 += 4;					\
+	if r1 != 0 goto l1_%=;				\
+	call %[bpf_spin_lock];				\
+l1_%=:	r1 = r6;					\
+	r1 += 4;					\
+	r0 = *(u32*)(r6 + 0);				\
+	call %[bpf_spin_unlock];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm(bpf_spin_lock),
+	  __imm(bpf_spin_unlock),
+	  __imm_addr(map_spin_lock)
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("spin_lock: test8 double lock")
+__failure __msg("calls are not allowed")
+__failure_unpriv __msg_unpriv("")
+__naked void spin_lock_test8_double_lock(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_spin_lock] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = r0;					\
+	r1 = r0;					\
+	r1 += 4;					\
+	call %[bpf_spin_lock];				\
+	r1 = r6;					\
+	r1 += 4;					\
+	call %[bpf_spin_lock];				\
+	r1 = r6;					\
+	r1 += 4;					\
+	r0 = *(u32*)(r6 + 0);				\
+	call %[bpf_spin_unlock];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm(bpf_spin_lock),
+	  __imm(bpf_spin_unlock),
+	  __imm_addr(map_spin_lock)
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("spin_lock: test9 different lock")
+__failure __msg("unlock of different lock")
+__failure_unpriv __msg_unpriv("")
+__naked void spin_lock_test9_different_lock(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_spin_lock] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = r0;					\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_spin_lock] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l1_%=;				\
+	exit;						\
+l1_%=:	r7 = r0;					\
+	r1 = r6;					\
+	r1 += 4;					\
+	call %[bpf_spin_lock];				\
+	r1 = r7;					\
+	r1 += 4;					\
+	call %[bpf_spin_unlock];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm(bpf_spin_lock),
+	  __imm(bpf_spin_unlock),
+	  __imm_addr(map_spin_lock)
+	: __clobber_all);
+}
+
+SEC("cgroup/skb")
+__description("spin_lock: test10 lock in subprog without unlock")
+__failure __msg("unlock is missing")
+__failure_unpriv __msg_unpriv("")
+__naked void lock_in_subprog_without_unlock(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_spin_lock] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r6 = r0;					\
+	r1 = r0;					\
+	r1 += 4;					\
+	call lock_in_subprog_without_unlock__1;		\
+	r1 = r6;					\
+	r1 += 4;					\
+	call %[bpf_spin_unlock];			\
+	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm(bpf_spin_unlock),
+	  __imm_addr(map_spin_lock)
+	: __clobber_all);
+}
+
+static __naked __noinline __attribute__((used))
+void lock_in_subprog_without_unlock__1(void)
+{
+	asm volatile ("					\
+	call %[bpf_spin_lock];				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_spin_lock)
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("spin_lock: test11 ld_abs under lock")
+__failure __msg("inside bpf_spin_lock")
+__naked void test11_ld_abs_under_lock(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_spin_lock] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r7 = r0;					\
+	r1 = r0;					\
+	r1 += 4;					\
+	call %[bpf_spin_lock];				\
+	r0 = *(u8*)skb[0];				\
+	r1 = r7;					\
+	r1 += 4;					\
+	call %[bpf_spin_unlock];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm(bpf_spin_lock),
+	  __imm(bpf_spin_unlock),
+	  __imm_addr(map_spin_lock)
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("spin_lock: regsafe compare reg->id for map value")
+__failure __msg("bpf_spin_unlock of different lock")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void reg_id_for_map_value(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	r6 = *(u32*)(r6 + %[__sk_buff_mark]);		\
+	r1 = %[map_spin_lock] ll;			\
+	r9 = r1;					\
+	r2 = 0;						\
+	*(u32*)(r10 - 4) = r2;				\
+	r2 = r10;					\
+	r2 += -4;					\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r7 = r0;					\
+	r1 = r9;					\
+	r2 = r10;					\
+	r2 += -4;					\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l1_%=;				\
+	exit;						\
+l1_%=:	r8 = r0;					\
+	r1 = r7;					\
+	r1 += 4;					\
+	call %[bpf_spin_lock];				\
+	if r6 == 0 goto l2_%=;				\
+	goto l3_%=;					\
+l2_%=:	r7 = r8;					\
+l3_%=:	r1 = r7;					\
+	r1 += 4;					\
+	call %[bpf_spin_unlock];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm(bpf_spin_lock),
+	  __imm(bpf_spin_unlock),
+	  __imm_addr(map_spin_lock),
+	  __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark))
+	: __clobber_all);
+}
+
+/* Make sure that regsafe() compares ids for spin lock records using
+ * check_ids():
+ *  1: r9 = map_lookup_elem(...)  ; r9.id == 1
+ *  2: r8 = map_lookup_elem(...)  ; r8.id == 2
+ *  3: r7 = ktime_get_ns()
+ *  4: r6 = ktime_get_ns()
+ *  5: if r6 > r7 goto <9>
+ *  6: spin_lock(r8)
+ *  7: r9 = r8
+ *  8: goto <10>
+ *  9: spin_lock(r9)
+ * 10: spin_unlock(r9)             ; r9.id == 1 || r9.id == 2 and lock is active,
+ *                                 ; second visit to (10) should be considered safe
+ *                                 ; if check_ids() is used.
+ * 11: exit(0)
+ */
+
+SEC("cgroup/skb")
+__description("spin_lock: regsafe() check_ids() similar id mappings")
+__success __msg("29: safe")
+__failure_unpriv __msg_unpriv("")
+__log_level(2) __retval(0) __flag(BPF_F_TEST_STATE_FREQ)
+__naked void check_ids_similar_id_mappings(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u32*)(r10 - 4) = r1;				\
+	/* r9 = map_lookup_elem(...) */			\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_spin_lock] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r9 = r0;					\
+	/* r8 = map_lookup_elem(...) */			\
+	r2 = r10;					\
+	r2 += -4;					\
+	r1 = %[map_spin_lock] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l1_%=;				\
+	r8 = r0;					\
+	/* r7 = ktime_get_ns() */			\
+	call %[bpf_ktime_get_ns];			\
+	r7 = r0;					\
+	/* r6 = ktime_get_ns() */			\
+	call %[bpf_ktime_get_ns];			\
+	r6 = r0;					\
+	/* if r6 > r7 goto +5      ; no new information about the state is derived from\
+	 *                         ; this check, thus produced verifier states differ\
+	 *                         ; only in 'insn_idx'	\
+	 * spin_lock(r8)				\
+	 * r9 = r8					\
+	 * goto unlock					\
+	 */						\
+	if r6 > r7 goto l2_%=;				\
+	r1 = r8;					\
+	r1 += 4;					\
+	call %[bpf_spin_lock];				\
+	r9 = r8;					\
+	goto l3_%=;					\
+l2_%=:	/* spin_lock(r9) */				\
+	r1 = r9;					\
+	r1 += 4;					\
+	call %[bpf_spin_lock];				\
+l3_%=:	/* spin_unlock(r9) */				\
+	r1 = r9;					\
+	r1 += 4;					\
+	call %[bpf_spin_unlock];			\
+l0_%=:	/* exit(0) */					\
+	r0 = 0;						\
+l1_%=:	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns),
+	  __imm(bpf_map_lookup_elem),
+	  __imm(bpf_spin_lock),
+	  __imm(bpf_spin_unlock),
+	  __imm_addr(map_spin_lock)
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/spin_lock.c b/tools/testing/selftests/bpf/verifier/spin_lock.c
deleted file mode 100644
index eaf114f07e2e..000000000000
--- a/tools/testing/selftests/bpf/verifier/spin_lock.c
+++ /dev/null
@@ -1,447 +0,0 @@
-{
-	"spin_lock: test1 success",
-	.insns = {
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1,
-		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_spin_lock = { 3 },
-	.result = ACCEPT,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "",
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-},
-{
-	"spin_lock: test2 direct ld/st",
-	.insns = {
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1,
-		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_spin_lock = { 3 },
-	.result = REJECT,
-	.errstr = "cannot be accessed directly",
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "",
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-},
-{
-	"spin_lock: test3 direct ld/st",
-	.insns = {
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1,
-		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_spin_lock = { 3 },
-	.result = REJECT,
-	.errstr = "cannot be accessed directly",
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "",
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"spin_lock: test4 direct ld/st",
-	.insns = {
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1,
-		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_6, 3),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_spin_lock = { 3 },
-	.result = REJECT,
-	.errstr = "cannot be accessed directly",
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "",
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"spin_lock: test5 call within a locked region",
-	.insns = {
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1,
-		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_spin_lock = { 3 },
-	.result = REJECT,
-	.errstr = "calls are not allowed",
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "",
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-},
-{
-	"spin_lock: test6 missing unlock",
-	.insns = {
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1,
-		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_spin_lock = { 3 },
-	.result = REJECT,
-	.errstr = "unlock is missing",
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "",
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-},
-{
-	"spin_lock: test7 unlock without lock",
-	.insns = {
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1,
-		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_spin_lock = { 3 },
-	.result = REJECT,
-	.errstr = "without taking a lock",
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "",
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-},
-{
-	"spin_lock: test8 double lock",
-	.insns = {
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1,
-		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_spin_lock = { 3 },
-	.result = REJECT,
-	.errstr = "calls are not allowed",
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "",
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-},
-{
-	"spin_lock: test9 different lock",
-	.insns = {
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1,
-		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1,
-		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_spin_lock = { 3, 11 },
-	.result = REJECT,
-	.errstr = "unlock of different lock",
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "",
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-},
-{
-	"spin_lock: test10 lock in subprog without unlock",
-	.insns = {
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1,
-		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 5),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_spin_lock = { 3 },
-	.result = REJECT,
-	.errstr = "unlock is missing",
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "",
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-},
-{
-	"spin_lock: test11 ld_abs under lock",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1,
-		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
-	BPF_LD_ABS(BPF_B, 0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_spin_lock = { 4 },
-	.result = REJECT,
-	.errstr = "inside bpf_spin_lock",
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"spin_lock: regsafe compare reg->id for map value",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_6, offsetof(struct __sk_buff, mark)),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_MOV64_REG(BPF_REG_9, BPF_REG_1),
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_9),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_8),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_spin_lock = { 2 },
-	.result = REJECT,
-	.errstr = "bpf_spin_unlock of different lock",
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.flags = BPF_F_TEST_STATE_FREQ,
-},
-/* Make sure that regsafe() compares ids for spin lock records using
- * check_ids():
- *  1: r9 = map_lookup_elem(...)  ; r9.id == 1
- *  2: r8 = map_lookup_elem(...)  ; r8.id == 2
- *  3: r7 = ktime_get_ns()
- *  4: r6 = ktime_get_ns()
- *  5: if r6 > r7 goto <9>
- *  6: spin_lock(r8)
- *  7: r9 = r8
- *  8: goto <10>
- *  9: spin_lock(r9)
- * 10: spin_unlock(r9)             ; r9.id == 1 || r9.id == 2 and lock is active,
- *                                 ; second visit to (10) should be considered safe
- *                                 ; if check_ids() is used.
- * 11: exit(0)
- */
-{
-	"spin_lock: regsafe() check_ids() similar id mappings",
-	.insns = {
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
-	/* r9 = map_lookup_elem(...) */
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1,
-		      0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 24),
-	BPF_MOV64_REG(BPF_REG_9, BPF_REG_0),
-	/* r8 = map_lookup_elem(...) */
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_LD_MAP_FD(BPF_REG_1,
-		      0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 18),
-	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
-	/* r7 = ktime_get_ns() */
-	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	/* r6 = ktime_get_ns() */
-	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	/* if r6 > r7 goto +5      ; no new information about the state is derived from
-	 *                         ; this check, thus produced verifier states differ
-	 *                         ; only in 'insn_idx'
-	 * spin_lock(r8)
-	 * r9 = r8
-	 * goto unlock
-	 */
-	BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_7, 5),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_EMIT_CALL(BPF_FUNC_spin_lock),
-	BPF_MOV64_REG(BPF_REG_9, BPF_REG_8),
-	BPF_JMP_A(3),
-	/* spin_lock(r9) */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_9),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_EMIT_CALL(BPF_FUNC_spin_lock),
-	/* spin_unlock(r9) */
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_9),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_EMIT_CALL(BPF_FUNC_spin_unlock),
-	/* exit(0) */
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_spin_lock = { 3, 10 },
-	.result = VERBOSE_ACCEPT,
-	.errstr = "28: safe",
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "",
-	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
-	.flags = BPF_F_TEST_STATE_FREQ,
-},
-- 
2.40.0



* [PATCH bpf-next 21/24] selftests/bpf: verifier/subreg converted to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (19 preceding siblings ...)
  2023-04-21 17:42 ` [PATCH bpf-next 20/24] selftests/bpf: verifier/spin_lock " Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 22/24] selftests/bpf: verifier/unpriv " Eduard Zingerman
                   ` (4 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/subreg automatically converted to use inline assembly.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../selftests/bpf/progs/verifier_subreg.c     | 673 ++++++++++++++++++
 tools/testing/selftests/bpf/verifier/subreg.c | 533 --------------
 3 files changed, 675 insertions(+), 533 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_subreg.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/subreg.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 0ea88282859d..999b694850d3 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -54,6 +54,7 @@
 #include "verifier_spill_fill.skel.h"
 #include "verifier_spin_lock.skel.h"
 #include "verifier_stack_ptr.skel.h"
+#include "verifier_subreg.skel.h"
 #include "verifier_uninit.skel.h"
 #include "verifier_value_adj_spill.skel.h"
 #include "verifier_value.skel.h"
@@ -147,6 +148,7 @@ void test_verifier_sock(void)                 { RUN(verifier_sock); }
 void test_verifier_spill_fill(void)           { RUN(verifier_spill_fill); }
 void test_verifier_spin_lock(void)            { RUN(verifier_spin_lock); }
 void test_verifier_stack_ptr(void)            { RUN(verifier_stack_ptr); }
+void test_verifier_subreg(void)               { RUN(verifier_subreg); }
 void test_verifier_uninit(void)               { RUN(verifier_uninit); }
 void test_verifier_value_adj_spill(void)      { RUN(verifier_value_adj_spill); }
 void test_verifier_value(void)                { RUN(verifier_value); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_subreg.c b/tools/testing/selftests/bpf/progs/verifier_subreg.c
new file mode 100644
index 000000000000..8613ea160dcd
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_subreg.c
@@ -0,0 +1,673 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/subreg.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+/* This file contains sub-register zero extension checks for insns defining
+ * sub-registers, meaning:
+ *   - All insns under BPF_ALU class. Their BPF_ALU32 variants or narrow width
+ *     forms (BPF_END) could define sub-registers.
+ *   - Narrow direct loads, BPF_B/H/W | BPF_LDX.
+ *   - BPF_LD is not exposed to JIT back-ends, so no need for testing.
+ *
+ * "get_prandom_u32" is used to initialize low 32-bit of some registers to
+ * prevent potential optimizations done by verifier or JIT back-ends which could
+ * optimize register back into constant when range info shows one register is a
+ * constant.
+ */
+
+SEC("socket")
+__description("add32 reg zero extend check")
+__success __success_unpriv __retval(0)
+__naked void add32_reg_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r0 = 0x100000000 ll;				\
+	w0 += w1;					\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("add32 imm zero extend check")
+__success __success_unpriv __retval(0)
+__naked void add32_imm_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	/* An insn could have no effect on the low 32-bit, for example:\
+	 *   a = a + 0					\
+	 *   a = a | 0					\
+	 *   a = a & -1					\
+	 * But, they should still zero high 32-bit.	\
+	 */						\
+	w0 += 0;					\
+	r0 >>= 32;					\
+	r6 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 += -2;					\
+	r0 >>= 32;					\
+	r0 |= r6;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("sub32 reg zero extend check")
+__success __success_unpriv __retval(0)
+__naked void sub32_reg_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r0 = 0x1ffffffff ll;				\
+	w0 -= w1;					\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("sub32 imm zero extend check")
+__success __success_unpriv __retval(0)
+__naked void sub32_imm_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 -= 0;					\
+	r0 >>= 32;					\
+	r6 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 -= 1;					\
+	r0 >>= 32;					\
+	r0 |= r6;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("mul32 reg zero extend check")
+__success __success_unpriv __retval(0)
+__naked void mul32_reg_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r0 = 0x100000001 ll;				\
+	w0 *= w1;					\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("mul32 imm zero extend check")
+__success __success_unpriv __retval(0)
+__naked void mul32_imm_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 *= 1;					\
+	r0 >>= 32;					\
+	r6 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 *= -1;					\
+	r0 >>= 32;					\
+	r0 |= r6;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("div32 reg zero extend check")
+__success __success_unpriv __retval(0)
+__naked void div32_reg_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r0 = -1;					\
+	w0 /= w1;					\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("div32 imm zero extend check")
+__success __success_unpriv __retval(0)
+__naked void div32_imm_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 /= 1;					\
+	r0 >>= 32;					\
+	r6 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 /= 2;					\
+	r0 >>= 32;					\
+	r0 |= r6;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("or32 reg zero extend check")
+__success __success_unpriv __retval(0)
+__naked void or32_reg_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r0 = 0x100000001 ll;				\
+	w0 |= w1;					\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("or32 imm zero extend check")
+__success __success_unpriv __retval(0)
+__naked void or32_imm_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 |= 0;					\
+	r0 >>= 32;					\
+	r6 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 |= 1;					\
+	r0 >>= 32;					\
+	r0 |= r6;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("and32 reg zero extend check")
+__success __success_unpriv __retval(0)
+__naked void and32_reg_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x100000000 ll;				\
+	r1 |= r0;					\
+	r0 = 0x1ffffffff ll;				\
+	w0 &= w1;					\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("and32 imm zero extend check")
+__success __success_unpriv __retval(0)
+__naked void and32_imm_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 &= -1;					\
+	r0 >>= 32;					\
+	r6 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 &= -2;					\
+	r0 >>= 32;					\
+	r0 |= r6;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("lsh32 reg zero extend check")
+__success __success_unpriv __retval(0)
+__naked void lsh32_reg_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x100000000 ll;				\
+	r0 |= r1;					\
+	r1 = 1;						\
+	w0 <<= w1;					\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("lsh32 imm zero extend check")
+__success __success_unpriv __retval(0)
+__naked void lsh32_imm_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 <<= 0;					\
+	r0 >>= 32;					\
+	r6 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 <<= 1;					\
+	r0 >>= 32;					\
+	r0 |= r6;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("rsh32 reg zero extend check")
+__success __success_unpriv __retval(0)
+__naked void rsh32_reg_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	r1 = 1;						\
+	w0 >>= w1;					\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("rsh32 imm zero extend check")
+__success __success_unpriv __retval(0)
+__naked void rsh32_imm_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 >>= 0;					\
+	r0 >>= 32;					\
+	r6 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 >>= 1;					\
+	r0 >>= 32;					\
+	r0 |= r6;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("neg32 reg zero extend check")
+__success __success_unpriv __retval(0)
+__naked void neg32_reg_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 = -w0;					\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("mod32 reg zero extend check")
+__success __success_unpriv __retval(0)
+__naked void mod32_reg_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r0 = -1;					\
+	w0 %%= w1;					\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("mod32 imm zero extend check")
+__success __success_unpriv __retval(0)
+__naked void mod32_imm_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 %%= 1;					\
+	r0 >>= 32;					\
+	r6 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 %%= 2;					\
+	r0 >>= 32;					\
+	r0 |= r6;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("xor32 reg zero extend check")
+__success __success_unpriv __retval(0)
+__naked void xor32_reg_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = r0;					\
+	r0 = 0x100000000 ll;				\
+	w0 ^= w1;					\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("xor32 imm zero extend check")
+__success __success_unpriv __retval(0)
+__naked void xor32_imm_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 ^= 1;					\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("mov32 reg zero extend check")
+__success __success_unpriv __retval(0)
+__naked void mov32_reg_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x100000000 ll;				\
+	r1 |= r0;					\
+	r0 = 0x100000000 ll;				\
+	w0 = w1;					\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("mov32 imm zero extend check")
+__success __success_unpriv __retval(0)
+__naked void mov32_imm_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 = 0;						\
+	r0 >>= 32;					\
+	r6 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 = 1;						\
+	r0 >>= 32;					\
+	r0 |= r6;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("arsh32 reg zero extend check")
+__success __success_unpriv __retval(0)
+__naked void arsh32_reg_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	r1 = 1;						\
+	w0 s>>= w1;					\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("arsh32 imm zero extend check")
+__success __success_unpriv __retval(0)
+__naked void arsh32_imm_zero_extend_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 s>>= 0;					\
+	r0 >>= 32;					\
+	r6 = r0;					\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	w0 s>>= 1;					\
+	r0 >>= 32;					\
+	r0 |= r6;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("end16 (to_le) reg zero extend check")
+__success __success_unpriv __retval(0)
+__naked void le_reg_zero_extend_check_1(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r6 = r0;					\
+	r6 <<= 32;					\
+	call %[bpf_get_prandom_u32];			\
+	r0 |= r6;					\
+	r0 = le16 r0;					\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("end32 (to_le) reg zero extend check")
+__success __success_unpriv __retval(0)
+__naked void le_reg_zero_extend_check_2(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r6 = r0;					\
+	r6 <<= 32;					\
+	call %[bpf_get_prandom_u32];			\
+	r0 |= r6;					\
+	r0 = le32 r0;					\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("end16 (to_be) reg zero extend check")
+__success __success_unpriv __retval(0)
+__naked void be_reg_zero_extend_check_1(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r6 = r0;					\
+	r6 <<= 32;					\
+	call %[bpf_get_prandom_u32];			\
+	r0 |= r6;					\
+	r0 = be16 r0;					\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("end32 (to_be) reg zero extend check")
+__success __success_unpriv __retval(0)
+__naked void be_reg_zero_extend_check_2(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r6 = r0;					\
+	r6 <<= 32;					\
+	call %[bpf_get_prandom_u32];			\
+	r0 |= r6;					\
+	r0 = be32 r0;					\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("ldx_b zero extend check")
+__success __success_unpriv __retval(0)
+__naked void ldx_b_zero_extend_check(void)
+{
+	asm volatile ("					\
+	r6 = r10;					\
+	r6 += -4;					\
+	r7 = 0xfaceb00c;				\
+	*(u32*)(r6 + 0) = r7;				\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	r0 = *(u8*)(r6 + 0);				\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("ldx_h zero extend check")
+__success __success_unpriv __retval(0)
+__naked void ldx_h_zero_extend_check(void)
+{
+	asm volatile ("					\
+	r6 = r10;					\
+	r6 += -4;					\
+	r7 = 0xfaceb00c;				\
+	*(u32*)(r6 + 0) = r7;				\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	r0 = *(u16*)(r6 + 0);				\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("ldx_w zero extend check")
+__success __success_unpriv __retval(0)
+__naked void ldx_w_zero_extend_check(void)
+{
+	asm volatile ("					\
+	r6 = r10;					\
+	r6 += -4;					\
+	r7 = 0xfaceb00c;				\
+	*(u32*)(r6 + 0) = r7;				\
+	call %[bpf_get_prandom_u32];			\
+	r1 = 0x1000000000 ll;				\
+	r0 |= r1;					\
+	r0 = *(u32*)(r6 + 0);				\
+	r0 >>= 32;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/subreg.c b/tools/testing/selftests/bpf/verifier/subreg.c
deleted file mode 100644
index 4c4133c80440..000000000000
--- a/tools/testing/selftests/bpf/verifier/subreg.c
+++ /dev/null
@@ -1,533 +0,0 @@
-/* This file contains sub-register zero extension checks for insns defining
- * sub-registers, meaning:
- *   - All insns under BPF_ALU class. Their BPF_ALU32 variants or narrow width
- *     forms (BPF_END) could define sub-registers.
- *   - Narrow direct loads, BPF_B/H/W | BPF_LDX.
- *   - BPF_LD is not exposed to JIT back-ends, so no need for testing.
- *
- * "get_prandom_u32" is used to initialize low 32-bit of some registers to
- * prevent potential optimizations done by verifier or JIT back-ends which could
- * optimize register back into constant when range info shows one register is a
- * constant.
- */
-{
-	"add32 reg zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_LD_IMM64(BPF_REG_0, 0x100000000ULL),
-	BPF_ALU32_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"add32 imm zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	/* An insn could have no effect on the low 32-bit, for example:
-	 *   a = a + 0
-	 *   a = a | 0
-	 *   a = a & -1
-	 * But, they should still zero high 32-bit.
-	 */
-	BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, -2),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"sub32 reg zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_LD_IMM64(BPF_REG_0, 0x1ffffffffULL),
-	BPF_ALU32_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"sub32 imm zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_SUB, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_SUB, BPF_REG_0, 1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"mul32 reg zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_LD_IMM64(BPF_REG_0, 0x100000001ULL),
-	BPF_ALU32_REG(BPF_MUL, BPF_REG_0, BPF_REG_1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"mul32 imm zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_MUL, BPF_REG_0, 1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_MUL, BPF_REG_0, -1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"div32 reg zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_MOV64_IMM(BPF_REG_0, -1),
-	BPF_ALU32_REG(BPF_DIV, BPF_REG_0, BPF_REG_1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"div32 imm zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_DIV, BPF_REG_0, 1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_DIV, BPF_REG_0, 2),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"or32 reg zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_LD_IMM64(BPF_REG_0, 0x100000001ULL),
-	BPF_ALU32_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"or32 imm zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_OR, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_OR, BPF_REG_0, 1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"and32 reg zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x100000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_1, BPF_REG_0),
-	BPF_LD_IMM64(BPF_REG_0, 0x1ffffffffULL),
-	BPF_ALU32_REG(BPF_AND, BPF_REG_0, BPF_REG_1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"and32 imm zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_AND, BPF_REG_0, -1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_AND, BPF_REG_0, -2),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"lsh32 reg zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x100000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_MOV64_IMM(BPF_REG_1, 1),
-	BPF_ALU32_REG(BPF_LSH, BPF_REG_0, BPF_REG_1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"lsh32 imm zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_LSH, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_LSH, BPF_REG_0, 1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"rsh32 reg zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_MOV64_IMM(BPF_REG_1, 1),
-	BPF_ALU32_REG(BPF_RSH, BPF_REG_0, BPF_REG_1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"rsh32 imm zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_RSH, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_RSH, BPF_REG_0, 1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"neg32 reg zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_NEG, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"mod32 reg zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_MOV64_IMM(BPF_REG_0, -1),
-	BPF_ALU32_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"mod32 imm zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_MOD, BPF_REG_0, 1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_MOD, BPF_REG_0, 2),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"xor32 reg zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_LD_IMM64(BPF_REG_0, 0x100000000ULL),
-	BPF_ALU32_REG(BPF_XOR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"xor32 imm zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_XOR, BPF_REG_0, 1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"mov32 reg zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x100000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_1, BPF_REG_0),
-	BPF_LD_IMM64(BPF_REG_0, 0x100000000ULL),
-	BPF_MOV32_REG(BPF_REG_0, BPF_REG_1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"mov32 imm zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_MOV32_IMM(BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_MOV32_IMM(BPF_REG_0, 1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"arsh32 reg zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_MOV64_IMM(BPF_REG_1, 1),
-	BPF_ALU32_REG(BPF_ARSH, BPF_REG_0, BPF_REG_1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"arsh32 imm zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_ARSH, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_ALU32_IMM(BPF_ARSH, BPF_REG_0, 1),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"end16 (to_le) reg zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 32),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6),
-	BPF_ENDIAN(BPF_TO_LE, BPF_REG_0, 16),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"end32 (to_le) reg zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 32),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6),
-	BPF_ENDIAN(BPF_TO_LE, BPF_REG_0, 32),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"end16 (to_be) reg zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 32),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6),
-	BPF_ENDIAN(BPF_TO_BE, BPF_REG_0, 16),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"end32 (to_be) reg zero extend check",
-	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 32),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6),
-	BPF_ENDIAN(BPF_TO_BE, BPF_REG_0, 32),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"ldx_b zero extend check",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -4),
-	BPF_ST_MEM(BPF_W, BPF_REG_6, 0, 0xfaceb00c),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_6, 0),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"ldx_h zero extend check",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -4),
-	BPF_ST_MEM(BPF_W, BPF_REG_6, 0, 0xfaceb00c),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_6, 0),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"ldx_w zero extend check",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -4),
-	BPF_ST_MEM(BPF_W, BPF_REG_6, 0, 0xfaceb00c),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
-	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, 0),
-	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = 0,
-},
-- 
2.40.0



* [PATCH bpf-next 22/24] selftests/bpf: verifier/unpriv converted to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (20 preceding siblings ...)
  2023-04-21 17:42 ` [PATCH bpf-next 21/24] selftests/bpf: verifier/subreg " Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 23/24] selftests/bpf: verifier/value_illegal_alu " Eduard Zingerman
                   ` (3 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/unpriv semi-automatically converted to use inline assembly.

The verifier/unpriv.c had to be split into two parts:
- the bulk of the tests is in progs/verifier_unpriv.c;
- the single test that needs the `struct bpf_perf_event_data`
  definition is in progs/verifier_unpriv_perf.c.

The tests above can't be in a single file because:
- the first requires inclusion of the filter.h header
  (to get access to the BPF_ST_MEM macro, as the inline
   assembler does not support this instruction; see the
   sketch below);
- the second requires vmlinux.h, which contains definitions
  conflicting with filter.h.
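
For reference, the BPF_ST_MEM workaround looks roughly like the
minimal sketch below (the test name and description are placeholders,
not a verbatim copy of a test body from this patch): the raw
instruction is built with the BPF_ST_MEM() macro from filter.h and
passed into the assembly template via the __imm_insn() helper from
bpf_misc.h, where it is emitted through a .8byte directive:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>
    #include "../../../include/linux/filter.h"
    #include "bpf_misc.h"

    SEC("socket")
    __description("st_mem workaround sketch")
    __success
    __naked void st_mem_workaround_sketch(void)
    {
    	asm volatile (
    	"r1 = r10;"
    	"r1 += -8;"
    	/* Store immediate *(u32 *)(r1 + 0) = 42, emitted as a raw
    	 * instruction because the inline assembler cannot express it.
    	 */
    	".8byte %[st_mem];"
    	"r0 = 0;"
    	"exit;"
    	:
    	: __imm_insn(st_mem, BPF_ST_MEM(BPF_W, BPF_REG_1, 0, 42))
    	: __clobber_all);
    }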

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |   4 +
 .../selftests/bpf/progs/verifier_unpriv.c     | 726 ++++++++++++++++++
 .../bpf/progs/verifier_unpriv_perf.c          |  34 +
 tools/testing/selftests/bpf/verifier/unpriv.c | 562 --------------
 4 files changed, 764 insertions(+), 562 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_unpriv.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_unpriv_perf.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/unpriv.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 999b694850d3..94405bf00b47 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -56,6 +56,8 @@
 #include "verifier_stack_ptr.skel.h"
 #include "verifier_subreg.skel.h"
 #include "verifier_uninit.skel.h"
+#include "verifier_unpriv.skel.h"
+#include "verifier_unpriv_perf.skel.h"
 #include "verifier_value_adj_spill.skel.h"
 #include "verifier_value.skel.h"
 #include "verifier_value_or_null.skel.h"
@@ -150,6 +152,8 @@ void test_verifier_spin_lock(void)            { RUN(verifier_spin_lock); }
 void test_verifier_stack_ptr(void)            { RUN(verifier_stack_ptr); }
 void test_verifier_subreg(void)               { RUN(verifier_subreg); }
 void test_verifier_uninit(void)               { RUN(verifier_uninit); }
+void test_verifier_unpriv(void)               { RUN(verifier_unpriv); }
+void test_verifier_unpriv_perf(void)          { RUN(verifier_unpriv_perf); }
 void test_verifier_value_adj_spill(void)      { RUN(verifier_value_adj_spill); }
 void test_verifier_value(void)                { RUN(verifier_value); }
 void test_verifier_value_or_null(void)        { RUN(verifier_value_or_null); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_unpriv.c b/tools/testing/selftests/bpf/progs/verifier_unpriv.c
new file mode 100644
index 000000000000..7ea535bfbacd
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_unpriv.c
@@ -0,0 +1,726 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/unpriv.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "../../../include/linux/filter.h"
+#include "bpf_misc.h"
+
+#define BPF_SK_LOOKUP(func) \
+	/* struct bpf_sock_tuple tuple = {} */ \
+	"r2 = 0;"			\
+	"*(u32*)(r10 - 8) = r2;"	\
+	"*(u64*)(r10 - 16) = r2;"	\
+	"*(u64*)(r10 - 24) = r2;"	\
+	"*(u64*)(r10 - 32) = r2;"	\
+	"*(u64*)(r10 - 40) = r2;"	\
+	"*(u64*)(r10 - 48) = r2;"	\
+	/* sk = func(ctx, &tuple, sizeof tuple, 0, 0) */ \
+	"r2 = r10;"			\
+	"r2 += -48;"			\
+	"r3 = %[sizeof_bpf_sock_tuple];"\
+	"r4 = 0;"			\
+	"r5 = 0;"			\
+	"call %[" #func "];"
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 1);
+	__type(key, long long);
+	__type(value, long long);
+} map_hash_8b SEC(".maps");
+
+void dummy_prog_42_socket(void);
+void dummy_prog_24_socket(void);
+void dummy_prog_loop1_socket(void);
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+	__uint(max_entries, 4);
+	__uint(key_size, sizeof(int));
+	__array(values, void (void));
+} map_prog1_socket SEC(".maps") = {
+	.values = {
+		[0] = (void *)&dummy_prog_42_socket,
+		[1] = (void *)&dummy_prog_loop1_socket,
+		[2] = (void *)&dummy_prog_24_socket,
+	},
+};
+
+SEC("socket")
+__auxiliary __auxiliary_unpriv
+__naked void dummy_prog_42_socket(void)
+{
+	asm volatile ("r0 = 42; exit;");
+}
+
+SEC("socket")
+__auxiliary __auxiliary_unpriv
+__naked void dummy_prog_24_socket(void)
+{
+	asm volatile ("r0 = 24; exit;");
+}
+
+SEC("socket")
+__auxiliary __auxiliary_unpriv
+__naked void dummy_prog_loop1_socket(void)
+{
+	asm volatile ("			\
+	r3 = 1;				\
+	r2 = %[map_prog1_socket] ll;	\
+	call %[bpf_tail_call];		\
+	r0 = 41;			\
+	exit;				\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_socket)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: return pointer")
+__success __failure_unpriv __msg_unpriv("R0 leaks addr")
+__retval(POINTER_VALUE)
+__naked void unpriv_return_pointer(void)
+{
+	asm volatile ("					\
+	r0 = r10;					\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: add const to pointer")
+__success __success_unpriv __retval(0)
+__naked void unpriv_add_const_to_pointer(void)
+{
+	asm volatile ("					\
+	r1 += 8;					\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: add pointer to pointer")
+__failure __msg("R1 pointer += pointer")
+__failure_unpriv
+__naked void unpriv_add_pointer_to_pointer(void)
+{
+	asm volatile ("					\
+	r1 += r10;					\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: neg pointer")
+__success __failure_unpriv __msg_unpriv("R1 pointer arithmetic")
+__retval(0)
+__naked void unpriv_neg_pointer(void)
+{
+	asm volatile ("					\
+	r1 = -r1;					\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: cmp pointer with const")
+__success __failure_unpriv __msg_unpriv("R1 pointer comparison")
+__retval(0)
+__naked void unpriv_cmp_pointer_with_const(void)
+{
+	asm volatile ("					\
+	if r1 == 0 goto l0_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: cmp pointer with pointer")
+__success __failure_unpriv __msg_unpriv("R10 pointer comparison")
+__retval(0)
+__naked void unpriv_cmp_pointer_with_pointer(void)
+{
+	asm volatile ("					\
+	if r1 == r10 goto l0_%=;			\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("tracepoint")
+__description("unpriv: check that printk is disallowed")
+__success
+__naked void check_that_printk_is_disallowed(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r1 = r10;					\
+	r1 += -8;					\
+	r2 = 8;						\
+	r3 = r1;					\
+	call %[bpf_trace_printk];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_trace_printk)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: pass pointer to helper function")
+__success __failure_unpriv __msg_unpriv("R4 leaks addr")
+__retval(0)
+__naked void pass_pointer_to_helper_function(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	r3 = r2;					\
+	r4 = r2;					\
+	call %[bpf_map_update_elem];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_update_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: indirectly pass pointer on stack to helper function")
+__success __failure_unpriv
+__msg_unpriv("invalid indirect read from stack R2 off -8+0 size 8")
+__retval(0)
+__naked void on_stack_to_helper_function(void)
+{
+	asm volatile ("					\
+	*(u64*)(r10 - 8) = r10;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: mangle pointer on stack 1")
+__success __failure_unpriv __msg_unpriv("attempt to corrupt spilled")
+__retval(0)
+__naked void mangle_pointer_on_stack_1(void)
+{
+	asm volatile ("					\
+	*(u64*)(r10 - 8) = r10;				\
+	r0 = 0;						\
+	*(u32*)(r10 - 8) = r0;				\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: mangle pointer on stack 2")
+__success __failure_unpriv __msg_unpriv("attempt to corrupt spilled")
+__retval(0)
+__naked void mangle_pointer_on_stack_2(void)
+{
+	asm volatile ("					\
+	*(u64*)(r10 - 8) = r10;				\
+	r0 = 0;						\
+	*(u8*)(r10 - 1) = r0;				\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: read pointer from stack in small chunks")
+__failure __msg("invalid size")
+__failure_unpriv
+__naked void from_stack_in_small_chunks(void)
+{
+	asm volatile ("					\
+	*(u64*)(r10 - 8) = r10;				\
+	r0 = *(u32*)(r10 - 8);				\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: write pointer into ctx")
+__failure __msg("invalid bpf_context access")
+__failure_unpriv __msg_unpriv("R1 leaks addr")
+__naked void unpriv_write_pointer_into_ctx(void)
+{
+	asm volatile ("					\
+	*(u64*)(r1 + 0) = r1;				\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: spill/fill of ctx")
+__success __success_unpriv __retval(0)
+__naked void unpriv_spill_fill_of_ctx(void)
+{
+	asm volatile ("					\
+	r6 = r10;					\
+	r6 += -8;					\
+	*(u64*)(r6 + 0) = r1;				\
+	r1 = *(u64*)(r6 + 0);				\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("tc")
+__description("unpriv: spill/fill of ctx 2")
+__success __retval(0)
+__naked void spill_fill_of_ctx_2(void)
+{
+	asm volatile ("					\
+	r6 = r10;					\
+	r6 += -8;					\
+	*(u64*)(r6 + 0) = r1;				\
+	r1 = *(u64*)(r6 + 0);				\
+	call %[bpf_get_hash_recalc];			\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_hash_recalc)
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("unpriv: spill/fill of ctx 3")
+__failure __msg("R1 type=fp expected=ctx")
+__naked void spill_fill_of_ctx_3(void)
+{
+	asm volatile ("					\
+	r6 = r10;					\
+	r6 += -8;					\
+	*(u64*)(r6 + 0) = r1;				\
+	*(u64*)(r6 + 0) = r10;				\
+	r1 = *(u64*)(r6 + 0);				\
+	call %[bpf_get_hash_recalc];			\
+	exit;						\
+"	:
+	: __imm(bpf_get_hash_recalc)
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("unpriv: spill/fill of ctx 4")
+__failure __msg("R1 type=scalar expected=ctx")
+__naked void spill_fill_of_ctx_4(void)
+{
+	asm volatile ("					\
+	r6 = r10;					\
+	r6 += -8;					\
+	*(u64*)(r6 + 0) = r1;				\
+	r0 = 1;						\
+	lock *(u64 *)(r10 - 8) += r0;			\
+	r1 = *(u64*)(r6 + 0);				\
+	call %[bpf_get_hash_recalc];			\
+	exit;						\
+"	:
+	: __imm(bpf_get_hash_recalc)
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("unpriv: spill/fill of different pointers stx")
+__failure __msg("same insn cannot be used with different pointers")
+__naked void fill_of_different_pointers_stx(void)
+{
+	asm volatile ("					\
+	r3 = 42;					\
+	r6 = r10;					\
+	r6 += -8;					\
+	if r1 == 0 goto l0_%=;				\
+	r2 = r10;					\
+	r2 += -16;					\
+	*(u64*)(r6 + 0) = r2;				\
+l0_%=:	if r1 != 0 goto l1_%=;				\
+	*(u64*)(r6 + 0) = r1;				\
+l1_%=:	r1 = *(u64*)(r6 + 0);				\
+	*(u32*)(r1 + %[__sk_buff_mark]) = r3;		\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark))
+	: __clobber_all);
+}
+
+/* Same as above, but use BPF_ST_MEM to save 42
+ * instead of BPF_STX_MEM.
+ */
+SEC("tc")
+__description("unpriv: spill/fill of different pointers st")
+__failure __msg("same insn cannot be used with different pointers")
+__naked void fill_of_different_pointers_st(void)
+{
+	asm volatile ("					\
+	r6 = r10;					\
+	r6 += -8;					\
+	if r1 == 0 goto l0_%=;				\
+	r2 = r10;					\
+	r2 += -16;					\
+	*(u64*)(r6 + 0) = r2;				\
+l0_%=:	if r1 != 0 goto l1_%=;				\
+	*(u64*)(r6 + 0) = r1;				\
+l1_%=:	r1 = *(u64*)(r6 + 0);				\
+	.8byte %[st_mem];				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)),
+	  __imm_insn(st_mem,
+		     BPF_ST_MEM(BPF_W, BPF_REG_1, offsetof(struct __sk_buff, mark), 42))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("unpriv: spill/fill of different pointers stx - ctx and sock")
+__failure __msg("type=ctx expected=sock")
+__naked void pointers_stx_ctx_and_sock(void)
+{
+	asm volatile ("					\
+	r8 = r1;					\
+	/* struct bpf_sock *sock = bpf_sock_lookup(...); */\
+"	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r2 = r0;					\
+	/* u64 foo; */					\
+	/* void *target = &foo; */			\
+	r6 = r10;					\
+	r6 += -8;					\
+	r1 = r8;					\
+	/* if (skb == NULL) *target = sock; */		\
+	if r1 == 0 goto l0_%=;				\
+	*(u64*)(r6 + 0) = r2;				\
+l0_%=:	/* else *target = skb; */			\
+	if r1 != 0 goto l1_%=;				\
+	*(u64*)(r6 + 0) = r1;				\
+l1_%=:	/* struct __sk_buff *skb = *target; */		\
+	r1 = *(u64*)(r6 + 0);				\
+	/* skb->mark = 42; */				\
+	r3 = 42;					\
+	*(u32*)(r1 + %[__sk_buff_mark]) = r3;		\
+	/* if (sk) bpf_sk_release(sk) */		\
+	if r1 == 0 goto l2_%=;				\
+	call %[bpf_sk_release];				\
+l2_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("unpriv: spill/fill of different pointers stx - leak sock")
+__failure
+//.errstr = "same insn cannot be used with different pointers",
+__msg("Unreleased reference")
+__naked void different_pointers_stx_leak_sock(void)
+{
+	asm volatile ("					\
+	r8 = r1;					\
+	/* struct bpf_sock *sock = bpf_sock_lookup(...); */\
+"	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r2 = r0;					\
+	/* u64 foo; */					\
+	/* void *target = &foo; */			\
+	r6 = r10;					\
+	r6 += -8;					\
+	r1 = r8;					\
+	/* if (skb == NULL) *target = sock; */		\
+	if r1 == 0 goto l0_%=;				\
+	*(u64*)(r6 + 0) = r2;				\
+l0_%=:	/* else *target = skb; */			\
+	if r1 != 0 goto l1_%=;				\
+	*(u64*)(r6 + 0) = r1;				\
+l1_%=:	/* struct __sk_buff *skb = *target; */		\
+	r1 = *(u64*)(r6 + 0);				\
+	/* skb->mark = 42; */				\
+	r3 = 42;					\
+	*(u32*)(r1 + %[__sk_buff_mark]) = r3;		\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("unpriv: spill/fill of different pointers stx - sock and ctx (read)")
+__failure __msg("same insn cannot be used with different pointers")
+__naked void stx_sock_and_ctx_read(void)
+{
+	asm volatile ("					\
+	r8 = r1;					\
+	/* struct bpf_sock *sock = bpf_sock_lookup(...); */\
+"	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r2 = r0;					\
+	/* u64 foo; */					\
+	/* void *target = &foo; */			\
+	r6 = r10;					\
+	r6 += -8;					\
+	r1 = r8;					\
+	/* if (skb) *target = skb */			\
+	if r1 == 0 goto l0_%=;				\
+	*(u64*)(r6 + 0) = r1;				\
+l0_%=:	/* else *target = sock */			\
+	if r1 != 0 goto l1_%=;				\
+	*(u64*)(r6 + 0) = r2;				\
+l1_%=:	/* struct bpf_sock *sk = *target; */		\
+	r1 = *(u64*)(r6 + 0);				\
+	/* if (sk) u32 foo = sk->mark; bpf_sk_release(sk); */\
+	if r1 == 0 goto l2_%=;				\
+	r3 = *(u32*)(r1 + %[bpf_sock_mark]);		\
+	call %[bpf_sk_release];				\
+l2_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(bpf_sock_mark, offsetof(struct bpf_sock, mark)),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("unpriv: spill/fill of different pointers stx - sock and ctx (write)")
+__failure
+//.errstr = "same insn cannot be used with different pointers",
+__msg("cannot write into sock")
+__naked void stx_sock_and_ctx_write(void)
+{
+	asm volatile ("					\
+	r8 = r1;					\
+	/* struct bpf_sock *sock = bpf_sock_lookup(...); */\
+"	BPF_SK_LOOKUP(bpf_sk_lookup_tcp)
+"	r2 = r0;					\
+	/* u64 foo; */					\
+	/* void *target = &foo; */			\
+	r6 = r10;					\
+	r6 += -8;					\
+	r1 = r8;					\
+	/* if (skb) *target = skb */			\
+	if r1 == 0 goto l0_%=;				\
+	*(u64*)(r6 + 0) = r1;				\
+l0_%=:	/* else *target = sock */			\
+	if r1 != 0 goto l1_%=;				\
+	*(u64*)(r6 + 0) = r2;				\
+l1_%=:	/* struct bpf_sock *sk = *target; */		\
+	r1 = *(u64*)(r6 + 0);				\
+	/* if (sk) sk->mark = 42; bpf_sk_release(sk); */\
+	if r1 == 0 goto l2_%=;				\
+	r3 = 42;					\
+	*(u32*)(r1 + %[bpf_sock_mark]) = r3;		\
+	call %[bpf_sk_release];				\
+l2_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_sk_lookup_tcp),
+	  __imm(bpf_sk_release),
+	  __imm_const(bpf_sock_mark, offsetof(struct bpf_sock, mark)),
+	  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: write pointer into map elem value")
+__success __failure_unpriv __msg_unpriv("R0 leaks addr")
+__retval(0)
+__naked void pointer_into_map_elem_value(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_hash_8b] ll;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	*(u64*)(r0 + 0) = r0;				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("alu32: mov u32 const")
+__success __failure_unpriv __msg_unpriv("R7 invalid mem access 'scalar'")
+__retval(0)
+__naked void alu32_mov_u32_const(void)
+{
+	asm volatile ("					\
+	w7 = 0;						\
+	w7 &= 1;					\
+	w0 = w7;					\
+	if r0 == 0 goto l0_%=;				\
+	r0 = *(u64*)(r7 + 0);				\
+l0_%=:	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: partial copy of pointer")
+__success __failure_unpriv __msg_unpriv("R10 partial copy")
+__retval(0)
+__naked void unpriv_partial_copy_of_pointer(void)
+{
+	asm volatile ("					\
+	w1 = w10;					\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: pass pointer to tail_call")
+__success __failure_unpriv __msg_unpriv("R3 leaks addr into helper")
+__retval(0)
+__naked void pass_pointer_to_tail_call(void)
+{
+	asm volatile ("					\
+	r3 = r1;					\
+	r2 = %[map_prog1_socket] ll;			\
+	call %[bpf_tail_call];				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_tail_call),
+	  __imm_addr(map_prog1_socket)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: cmp map pointer with zero")
+__success __failure_unpriv __msg_unpriv("R1 pointer comparison")
+__retval(0)
+__naked void cmp_map_pointer_with_zero(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	r1 = %[map_hash_8b] ll;				\
+	if r1 == 0 goto l0_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_addr(map_hash_8b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: write into frame pointer")
+__failure __msg("frame pointer is read only")
+__failure_unpriv
+__naked void unpriv_write_into_frame_pointer(void)
+{
+	asm volatile ("					\
+	r10 = r1;					\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: spill/fill frame pointer")
+__failure __msg("frame pointer is read only")
+__failure_unpriv
+__naked void unpriv_spill_fill_frame_pointer(void)
+{
+	asm volatile ("					\
+	r6 = r10;					\
+	r6 += -8;					\
+	*(u64*)(r6 + 0) = r10;				\
+	r10 = *(u64*)(r6 + 0);				\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: cmp of frame pointer")
+__success __failure_unpriv __msg_unpriv("R10 pointer comparison")
+__retval(0)
+__naked void unpriv_cmp_of_frame_pointer(void)
+{
+	asm volatile ("					\
+	if r10 == 0 goto l0_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: adding of fp, reg")
+__success __failure_unpriv
+__msg_unpriv("R1 stack pointer arithmetic goes out of range")
+__retval(0)
+__naked void unpriv_adding_of_fp_reg(void)
+{
+	asm volatile ("					\
+	r0 = 0;						\
+	r1 = 0;						\
+	r1 += r10;					\
+	*(u64*)(r1 - 8) = r0;				\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: adding of fp, imm")
+__success __failure_unpriv
+__msg_unpriv("R1 stack pointer arithmetic goes out of range")
+__retval(0)
+__naked void unpriv_adding_of_fp_imm(void)
+{
+	asm volatile ("					\
+	r0 = 0;						\
+	r1 = r10;					\
+	r1 += 0;					\
+	*(u64*)(r1 - 8) = r0;				\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("socket")
+__description("unpriv: cmp of stack pointer")
+__success __failure_unpriv __msg_unpriv("R2 pointer comparison")
+__retval(0)
+__naked void unpriv_cmp_of_stack_pointer(void)
+{
+	asm volatile ("					\
+	r2 = r10;					\
+	r2 += -8;					\
+	if r2 == 0 goto l0_%=;				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/verifier_unpriv_perf.c b/tools/testing/selftests/bpf/progs/verifier_unpriv_perf.c
new file mode 100644
index 000000000000..4d77407a0a79
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_unpriv_perf.c
@@ -0,0 +1,34 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/unpriv.c */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+SEC("perf_event")
+__description("unpriv: spill/fill of different pointers ldx")
+__failure __msg("same insn cannot be used with different pointers")
+__naked void fill_of_different_pointers_ldx(void)
+{
+	asm volatile ("					\
+	r6 = r10;					\
+	r6 += -8;					\
+	if r1 == 0 goto l0_%=;				\
+	r2 = r10;					\
+	r2 += %[__imm_0];				\
+	*(u64*)(r6 + 0) = r2;				\
+l0_%=:	if r1 != 0 goto l1_%=;				\
+	*(u64*)(r6 + 0) = r1;				\
+l1_%=:	r1 = *(u64*)(r6 + 0);				\
+	r1 = *(u64*)(r1 + %[sample_period]);		\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__imm_0,
+		      -(__s32) offsetof(struct bpf_perf_event_data, sample_period) - 8),
+	  __imm_const(sample_period,
+		      offsetof(struct bpf_perf_event_data, sample_period))
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/unpriv.c b/tools/testing/selftests/bpf/verifier/unpriv.c
deleted file mode 100644
index af0c0f336625..000000000000
--- a/tools/testing/selftests/bpf/verifier/unpriv.c
+++ /dev/null
@@ -1,562 +0,0 @@
-{
-	"unpriv: return pointer",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R0 leaks addr",
-	.retval = POINTER_VALUE,
-},
-{
-	"unpriv: add const to pointer",
-	.insns = {
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-},
-{
-	"unpriv: add pointer to pointer",
-	.insns = {
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_10),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	.errstr = "R1 pointer += pointer",
-},
-{
-	"unpriv: neg pointer",
-	.insns = {
-	BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R1 pointer arithmetic",
-},
-{
-	"unpriv: cmp pointer with const",
-	.insns = {
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R1 pointer comparison",
-},
-{
-	"unpriv: cmp pointer with pointer",
-	.insns = {
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_1, BPF_REG_10, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R10 pointer comparison",
-},
-{
-	"unpriv: check that printk is disallowed",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
-	BPF_MOV64_IMM(BPF_REG_2, 8),
-	BPF_MOV64_REG(BPF_REG_3, BPF_REG_1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_trace_printk),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr_unpriv = "unknown func bpf_trace_printk#6",
-	.result_unpriv = REJECT,
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-},
-{
-	"unpriv: pass pointer to helper function",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_MOV64_REG(BPF_REG_3, BPF_REG_2),
-	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_update_elem),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.errstr_unpriv = "R4 leaks addr",
-	.result_unpriv = REJECT,
-	.result = ACCEPT,
-},
-{
-	"unpriv: indirectly pass pointer on stack to helper function",
-	.insns = {
-	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_10, -8),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.errstr_unpriv = "invalid indirect read from stack R2 off -8+0 size 8",
-	.result_unpriv = REJECT,
-	.result = ACCEPT,
-},
-{
-	"unpriv: mangle pointer on stack 1",
-	.insns = {
-	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_10, -8),
-	BPF_ST_MEM(BPF_W, BPF_REG_10, -8, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr_unpriv = "attempt to corrupt spilled",
-	.result_unpriv = REJECT,
-	.result = ACCEPT,
-},
-{
-	"unpriv: mangle pointer on stack 2",
-	.insns = {
-	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_10, -8),
-	BPF_ST_MEM(BPF_B, BPF_REG_10, -1, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr_unpriv = "attempt to corrupt spilled",
-	.result_unpriv = REJECT,
-	.result = ACCEPT,
-},
-{
-	"unpriv: read pointer from stack in small chunks",
-	.insns = {
-	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_10, -8),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -8),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr = "invalid size",
-	.result = REJECT,
-},
-{
-	"unpriv: write pointer into ctx",
-	.insns = {
-	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr_unpriv = "R1 leaks addr",
-	.result_unpriv = REJECT,
-	.errstr = "invalid bpf_context access",
-	.result = REJECT,
-},
-{
-	"unpriv: spill/fill of ctx",
-	.insns = {
-	BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
-	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-},
-{
-	"unpriv: spill/fill of ctx 2",
-	.insns = {
-	BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
-	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_hash_recalc),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"unpriv: spill/fill of ctx 3",
-	.insns = {
-	BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
-	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
-	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_10, 0),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_hash_recalc),
-	BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	.errstr = "R1 type=fp expected=ctx",
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"unpriv: spill/fill of ctx 4",
-	.insns = {
-	BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
-	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_RAW_INSN(BPF_STX | BPF_ATOMIC | BPF_DW,
-		     BPF_REG_10, BPF_REG_0, -8, BPF_ADD),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_hash_recalc),
-	BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	.errstr = "R1 type=scalar expected=ctx",
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"unpriv: spill/fill of different pointers stx",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_3, 42),
-	BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
-	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
-	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
-	BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_3,
-		    offsetof(struct __sk_buff, mark)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	.errstr = "same insn cannot be used with different pointers",
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	/* Same as above, but use BPF_ST_MEM to save 42
-	 * instead of BPF_STX_MEM.
-	 */
-	"unpriv: spill/fill of different pointers st",
-	.insns = {
-	BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
-	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
-	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
-	BPF_ST_MEM(BPF_W, BPF_REG_1, offsetof(struct __sk_buff, mark), 42),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	.errstr = "same insn cannot be used with different pointers",
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"unpriv: spill/fill of different pointers stx - ctx and sock",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_8, BPF_REG_1),
-	/* struct bpf_sock *sock = bpf_sock_lookup(...); */
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	/* u64 foo; */
-	/* void *target = &foo; */
-	BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
-	/* if (skb == NULL) *target = sock; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
-		BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0),
-	/* else *target = skb; */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
-		BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
-	/* struct __sk_buff *skb = *target; */
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
-	/* skb->mark = 42; */
-	BPF_MOV64_IMM(BPF_REG_3, 42),
-	BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_3,
-		    offsetof(struct __sk_buff, mark)),
-	/* if (sk) bpf_sk_release(sk) */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
-		BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	.errstr = "type=ctx expected=sock",
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"unpriv: spill/fill of different pointers stx - leak sock",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_8, BPF_REG_1),
-	/* struct bpf_sock *sock = bpf_sock_lookup(...); */
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	/* u64 foo; */
-	/* void *target = &foo; */
-	BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
-	/* if (skb == NULL) *target = sock; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
-		BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0),
-	/* else *target = skb; */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
-		BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
-	/* struct __sk_buff *skb = *target; */
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
-	/* skb->mark = 42; */
-	BPF_MOV64_IMM(BPF_REG_3, 42),
-	BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_3,
-		    offsetof(struct __sk_buff, mark)),
-	BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	//.errstr = "same insn cannot be used with different pointers",
-	.errstr = "Unreleased reference",
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"unpriv: spill/fill of different pointers stx - sock and ctx (read)",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_8, BPF_REG_1),
-	/* struct bpf_sock *sock = bpf_sock_lookup(...); */
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	/* u64 foo; */
-	/* void *target = &foo; */
-	BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
-	/* if (skb) *target = skb */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
-		BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
-	/* else *target = sock */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
-		BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0),
-	/* struct bpf_sock *sk = *target; */
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
-	/* if (sk) u32 foo = sk->mark; bpf_sk_release(sk); */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 2),
-		BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
-			    offsetof(struct bpf_sock, mark)),
-		BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	.errstr = "same insn cannot be used with different pointers",
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"unpriv: spill/fill of different pointers stx - sock and ctx (write)",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_8, BPF_REG_1),
-	/* struct bpf_sock *sock = bpf_sock_lookup(...); */
-	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	/* u64 foo; */
-	/* void *target = &foo; */
-	BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
-	/* if (skb) *target = skb */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
-		BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
-	/* else *target = sock */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
-		BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0),
-	/* struct bpf_sock *sk = *target; */
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
-	/* if (sk) sk->mark = 42; bpf_sk_release(sk); */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
-		BPF_MOV64_IMM(BPF_REG_3, 42),
-		BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_3,
-			    offsetof(struct bpf_sock, mark)),
-		BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	//.errstr = "same insn cannot be used with different pointers",
-	.errstr = "cannot write into sock",
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
-	"unpriv: spill/fill of different pointers ldx",
-	.insns = {
-	BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2,
-		      -(__s32)offsetof(struct bpf_perf_event_data,
-				       sample_period) - 8),
-	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
-	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1,
-		    offsetof(struct bpf_perf_event_data, sample_period)),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.result = REJECT,
-	.errstr = "same insn cannot be used with different pointers",
-	.prog_type = BPF_PROG_TYPE_PERF_EVENT,
-},
-{
-	"unpriv: write pointer into map elem value",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
-	BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 3 },
-	.errstr_unpriv = "R0 leaks addr",
-	.result_unpriv = REJECT,
-	.result = ACCEPT,
-},
-{
-	"alu32: mov u32 const",
-	.insns = {
-	BPF_MOV32_IMM(BPF_REG_7, 0),
-	BPF_ALU32_IMM(BPF_AND, BPF_REG_7, 1),
-	BPF_MOV32_REG(BPF_REG_0, BPF_REG_7),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr_unpriv = "R7 invalid mem access 'scalar'",
-	.result_unpriv = REJECT,
-	.result = ACCEPT,
-	.retval = 0,
-},
-{
-	"unpriv: partial copy of pointer",
-	.insns = {
-	BPF_MOV32_REG(BPF_REG_1, BPF_REG_10),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr_unpriv = "R10 partial copy",
-	.result_unpriv = REJECT,
-	.result = ACCEPT,
-},
-{
-	"unpriv: pass pointer to tail_call",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_3, BPF_REG_1),
-	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_prog1 = { 1 },
-	.errstr_unpriv = "R3 leaks addr into helper",
-	.result_unpriv = REJECT,
-	.result = ACCEPT,
-},
-{
-	"unpriv: cmp map pointer with zero",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_1, 0),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_8b = { 1 },
-	.errstr_unpriv = "R1 pointer comparison",
-	.result_unpriv = REJECT,
-	.result = ACCEPT,
-},
-{
-	"unpriv: write into frame pointer",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_10, BPF_REG_1),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr = "frame pointer is read only",
-	.result = REJECT,
-},
-{
-	"unpriv: spill/fill frame pointer",
-	.insns = {
-	BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
-	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_10, 0),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr = "frame pointer is read only",
-	.result = REJECT,
-},
-{
-	"unpriv: cmp of frame pointer",
-	.insns = {
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_10, 0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr_unpriv = "R10 pointer comparison",
-	.result_unpriv = REJECT,
-	.result = ACCEPT,
-},
-{
-	"unpriv: adding of fp, reg",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_1, 0),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_10),
-	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8),
-	BPF_EXIT_INSN(),
-	},
-	.errstr_unpriv = "R1 stack pointer arithmetic goes out of range",
-	.result_unpriv = REJECT,
-	.result = ACCEPT,
-},
-{
-	"unpriv: adding of fp, imm",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0),
-	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8),
-	BPF_EXIT_INSN(),
-	},
-	.errstr_unpriv = "R1 stack pointer arithmetic goes out of range",
-	.result_unpriv = REJECT,
-	.result = ACCEPT,
-},
-{
-	"unpriv: cmp of stack pointer",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_2, 0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.errstr_unpriv = "R2 pointer comparison",
-	.result_unpriv = REJECT,
-	.result = ACCEPT,
-},
-- 
2.40.0



* [PATCH bpf-next 23/24] selftests/bpf: verifier/value_illegal_alu converted to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (21 preceding siblings ...)
  2023-04-21 17:42 ` [PATCH bpf-next 22/24] selftests/bpf: verifier/unpriv " Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 17:42 ` [PATCH bpf-next 24/24] selftests/bpf: verifier/value_ptr_arith " Eduard Zingerman
                   ` (2 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/value_illegal_alu automatically converted to use inline assembly.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../bpf/progs/verifier_value_illegal_alu.c    | 149 ++++++++++++++++++
 .../bpf/verifier/value_illegal_alu.c          |  95 -----------
 3 files changed, 151 insertions(+), 95 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_value_illegal_alu.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/value_illegal_alu.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 94405bf00b47..56b9248a15c0 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -60,6 +60,7 @@
 #include "verifier_unpriv_perf.skel.h"
 #include "verifier_value_adj_spill.skel.h"
 #include "verifier_value.skel.h"
+#include "verifier_value_illegal_alu.skel.h"
 #include "verifier_value_or_null.skel.h"
 #include "verifier_var_off.skel.h"
 #include "verifier_xadd.skel.h"
@@ -156,6 +157,7 @@ void test_verifier_unpriv(void)               { RUN(verifier_unpriv); }
 void test_verifier_unpriv_perf(void)          { RUN(verifier_unpriv_perf); }
 void test_verifier_value_adj_spill(void)      { RUN(verifier_value_adj_spill); }
 void test_verifier_value(void)                { RUN(verifier_value); }
+void test_verifier_value_illegal_alu(void)    { RUN(verifier_value_illegal_alu); }
 void test_verifier_value_or_null(void)        { RUN(verifier_value_or_null); }
 void test_verifier_var_off(void)              { RUN(verifier_var_off); }
 void test_verifier_xadd(void)                 { RUN(verifier_xadd); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_value_illegal_alu.c b/tools/testing/selftests/bpf/progs/verifier_value_illegal_alu.c
new file mode 100644
index 000000000000..71814a753216
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_value_illegal_alu.c
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/value_illegal_alu.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+#define MAX_ENTRIES 11
+
+struct test_val {
+	unsigned int index;
+	int foo[MAX_ENTRIES];
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 1);
+	__type(key, long long);
+	__type(value, struct test_val);
+} map_hash_48b SEC(".maps");
+
+SEC("socket")
+__description("map element value illegal alu op, 1")
+__failure __msg("R0 bitwise operator &= on pointer")
+__failure_unpriv
+__naked void value_illegal_alu_op_1(void)
+{
+	asm volatile ("					\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = 0;						\
+	*(u64*)(r2 + 0) = r1;				\
+	r1 = %[map_hash_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r0 &= 8;					\
+	r1 = 22;					\
+	*(u64*)(r0 + 0) = r1;				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map element value illegal alu op, 2")
+__failure __msg("R0 32-bit pointer arithmetic prohibited")
+__failure_unpriv
+__naked void value_illegal_alu_op_2(void)
+{
+	asm volatile ("					\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = 0;						\
+	*(u64*)(r2 + 0) = r1;				\
+	r1 = %[map_hash_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	w0 += 0;					\
+	r1 = 22;					\
+	*(u64*)(r0 + 0) = r1;				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map element value illegal alu op, 3")
+__failure __msg("R0 pointer arithmetic with /= operator")
+__failure_unpriv
+__naked void value_illegal_alu_op_3(void)
+{
+	asm volatile ("					\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = 0;						\
+	*(u64*)(r2 + 0) = r1;				\
+	r1 = %[map_hash_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r0 /= 42;					\
+	r1 = 22;					\
+	*(u64*)(r0 + 0) = r1;				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map element value illegal alu op, 4")
+__failure __msg("invalid mem access 'scalar'")
+__failure_unpriv __msg_unpriv("R0 pointer arithmetic prohibited")
+__flag(BPF_F_ANY_ALIGNMENT)
+__naked void value_illegal_alu_op_4(void)
+{
+	asm volatile ("					\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = 0;						\
+	*(u64*)(r2 + 0) = r1;				\
+	r1 = %[map_hash_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r0 = be64 r0;					\
+	r1 = 22;					\
+	*(u64*)(r0 + 0) = r1;				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map element value illegal alu op, 5")
+__failure __msg("R0 invalid mem access 'scalar'")
+__msg_unpriv("leaking pointer from stack off -8")
+__flag(BPF_F_ANY_ALIGNMENT)
+__naked void value_illegal_alu_op_5(void)
+{
+	asm volatile ("					\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = 0;						\
+	*(u64*)(r2 + 0) = r1;				\
+	r1 = %[map_hash_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r3 = 4096;					\
+	r2 = r10;					\
+	r2 += -8;					\
+	*(u64*)(r2 + 0) = r0;				\
+	lock *(u64 *)(r2 + 0) += r3;			\
+	r0 = *(u64*)(r2 + 0);				\
+	r1 = 22;					\
+	*(u64*)(r0 + 0) = r1;				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_hash_48b)
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/value_illegal_alu.c b/tools/testing/selftests/bpf/verifier/value_illegal_alu.c
deleted file mode 100644
index d6f29eb4bd57..000000000000
--- a/tools/testing/selftests/bpf/verifier/value_illegal_alu.c
+++ /dev/null
@@ -1,95 +0,0 @@
-{
-	"map element value illegal alu op, 1",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 8),
-	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 22),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_48b = { 3 },
-	.errstr = "R0 bitwise operator &= on pointer",
-	.result = REJECT,
-},
-{
-	"map element value illegal alu op, 2",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
-	BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, 0),
-	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 22),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_48b = { 3 },
-	.errstr = "R0 32-bit pointer arithmetic prohibited",
-	.result = REJECT,
-},
-{
-	"map element value illegal alu op, 3",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
-	BPF_ALU64_IMM(BPF_DIV, BPF_REG_0, 42),
-	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 22),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_48b = { 3 },
-	.errstr = "R0 pointer arithmetic with /= operator",
-	.result = REJECT,
-},
-{
-	"map element value illegal alu op, 4",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
-	BPF_ENDIAN(BPF_FROM_BE, BPF_REG_0, 64),
-	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 22),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_48b = { 3 },
-	.errstr_unpriv = "R0 pointer arithmetic prohibited",
-	.errstr = "invalid mem access 'scalar'",
-	.result = REJECT,
-	.result_unpriv = REJECT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"map element value illegal alu op, 5",
-	.insns = {
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
-	BPF_MOV64_IMM(BPF_REG_3, 4096),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 0),
-	BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_2, BPF_REG_3, 0),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 0),
-	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 22),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_48b = { 3 },
-	.errstr_unpriv = "leaking pointer from stack off -8",
-	.errstr = "R0 invalid mem access 'scalar'",
-	.result = REJECT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-- 
2.40.0



* [PATCH bpf-next 24/24] selftests/bpf: verifier/value_ptr_arith converted to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (22 preceding siblings ...)
  2023-04-21 17:42 ` [PATCH bpf-next 23/24] selftests/bpf: verifier/value_illegal_alu " Eduard Zingerman
@ 2023-04-21 17:42 ` Eduard Zingerman
  2023-04-21 19:40 ` [PATCH bpf-next 00/24] Second set of verifier/*.c migrated " patchwork-bot+netdevbpf
  2023-04-21 19:48 ` Alexei Starovoitov
  25 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 17:42 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Test verifier/value_ptr_arith automatically converted to use inline assembly.

Test cases "sanitation: alu with different scalars 2" and
"sanitation: alu with different scalars 3" are updated to
avoid -ENOENT as the return value, because the __retval()
annotation only supports numeric literals; a rough sketch of the
annotation style follows below.
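
The annotation style looks roughly like this (illustrative sketch only:
the program body and the literal value are placeholders, the real
converted tests are in the diff below):

    SEC("socket")
    __description("sanitation: alu with different scalars 2")
    /* __retval() takes a numeric literal, so the test now returns
     * one directly instead of propagating -ENOENT from a helper.
     */
    __success __retval(0)
    __naked void alu_with_different_scalars_2(void)
    {
    	asm volatile ("r0 = 0; exit;" ::: __clobber_all);
    }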

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |   34 +-
 .../bpf/progs/verifier_value_ptr_arith.c      | 1423 +++++++++++++++++
 .../selftests/bpf/verifier/value_ptr_arith.c  | 1140 -------------
 3 files changed, 1451 insertions(+), 1146 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_value_ptr_arith.c
 delete mode 100644 tools/testing/selftests/bpf/verifier/value_ptr_arith.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 56b9248a15c0..bcb955adb447 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -62,6 +62,7 @@
 #include "verifier_value.skel.h"
 #include "verifier_value_illegal_alu.skel.h"
 #include "verifier_value_or_null.skel.h"
+#include "verifier_value_ptr_arith.skel.h"
 #include "verifier_var_off.skel.h"
 #include "verifier_xadd.skel.h"
 #include "verifier_xdp.skel.h"
@@ -164,29 +165,50 @@ void test_verifier_xadd(void)                 { RUN(verifier_xadd); }
 void test_verifier_xdp(void)                  { RUN(verifier_xdp); }
 void test_verifier_xdp_direct_packet_access(void) { RUN(verifier_xdp_direct_packet_access); }
 
-static int init_array_access_maps(struct bpf_object *obj)
+static int init_test_val_map(struct bpf_object *obj, char *map_name)
 {
-	struct bpf_map *array_ro;
 	struct test_val value = {
 		.index = (6 + 1) * sizeof(int),
 		.foo[6] = 0xabcdef12,
 	};
+	struct bpf_map *map;
 	int err, key = 0;
 
-	array_ro = bpf_object__find_map_by_name(obj, "map_array_ro");
-	if (!ASSERT_OK_PTR(array_ro, "lookup map_array_ro"))
+	map = bpf_object__find_map_by_name(obj, map_name);
+	if (!map) {
+		PRINT_FAIL("Can't find map '%s'\n", map_name);
 		return -EINVAL;
+	}
 
-	err = bpf_map_update_elem(bpf_map__fd(array_ro), &key, &value, 0);
-	if (!ASSERT_OK(err, "map_array_ro update"))
+	err = bpf_map_update_elem(bpf_map__fd(map), &key, &value, 0);
+	if (err) {
+		PRINT_FAIL("Error while updating map '%s': %d\n", map_name, err);
 		return err;
+	}
 
 	return 0;
 }
 
+static int init_array_access_maps(struct bpf_object *obj)
+{
+	return init_test_val_map(obj, "map_array_ro");
+}
+
 void test_verifier_array_access(void)
 {
 	run_tests_aux("verifier_array_access",
 		      verifier_array_access__elf_bytes,
 		      init_array_access_maps);
 }
+
+static int init_value_ptr_arith_maps(struct bpf_object *obj)
+{
+	return init_test_val_map(obj, "map_array_48b");
+}
+
+void test_verifier_value_ptr_arith(void)
+{
+	run_tests_aux("verifier_value_ptr_arith",
+		      verifier_value_ptr_arith__elf_bytes,
+		      init_value_ptr_arith_maps);
+}
diff --git a/tools/testing/selftests/bpf/progs/verifier_value_ptr_arith.c b/tools/testing/selftests/bpf/progs/verifier_value_ptr_arith.c
new file mode 100644
index 000000000000..5ba6e53571c8
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_value_ptr_arith.c
@@ -0,0 +1,1423 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Converted from tools/testing/selftests/bpf/verifier/value_ptr_arith.c */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <errno.h>
+#include "bpf_misc.h"
+
+#define MAX_ENTRIES 11
+
+struct test_val {
+	unsigned int index;
+	int foo[MAX_ENTRIES];
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, int);
+	__type(value, struct test_val);
+} map_array_48b SEC(".maps");
+
+struct other_val {
+	long long foo;
+	long long bar;
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 1);
+	__type(key, long long);
+	__type(value, struct other_val);
+} map_hash_16b SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 1);
+	__type(key, long long);
+	__type(value, struct test_val);
+} map_hash_48b SEC(".maps");
+
+SEC("socket")
+__description("map access: known scalar += value_ptr unknown vs const")
+__success __failure_unpriv
+__msg_unpriv("R1 tried to add from different maps, paths or scalars")
+__retval(1)
+__naked void value_ptr_unknown_vs_const(void)
+{
+	asm volatile ("					\
+	r0 = *(u32*)(r1 + %[__sk_buff_len]);		\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	if r0 == 1 goto l0_%=;				\
+	r1 = %[map_hash_16b] ll;			\
+	if r0 != 1 goto l1_%=;				\
+l0_%=:	r1 = %[map_array_48b] ll;			\
+l1_%=:	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l2_%=;				\
+	r4 = *(u8*)(r0 + 0);				\
+	if r4 == 1 goto l3_%=;				\
+	r1 = 6;						\
+	r1 = -r1;					\
+	r1 &= 0x7;					\
+	goto l4_%=;					\
+l3_%=:	r1 = 3;						\
+l4_%=:	r1 += r0;					\
+	r0 = *(u8*)(r1 + 0);				\
+l2_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b),
+	  __imm_addr(map_hash_16b),
+	  __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: known scalar += value_ptr const vs unknown")
+__success __failure_unpriv
+__msg_unpriv("R1 tried to add from different maps, paths or scalars")
+__retval(1)
+__naked void value_ptr_const_vs_unknown(void)
+{
+	asm volatile ("					\
+	r0 = *(u32*)(r1 + %[__sk_buff_len]);		\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	if r0 == 1 goto l0_%=;				\
+	r1 = %[map_hash_16b] ll;			\
+	if r0 != 1 goto l1_%=;				\
+l0_%=:	r1 = %[map_array_48b] ll;			\
+l1_%=:	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l2_%=;				\
+	r4 = *(u8*)(r0 + 0);				\
+	if r4 == 1 goto l3_%=;				\
+	r1 = 3;						\
+	goto l4_%=;					\
+l3_%=:	r1 = 6;						\
+	r1 = -r1;					\
+	r1 &= 0x7;					\
+l4_%=:	r1 += r0;					\
+	r0 = *(u8*)(r1 + 0);				\
+l2_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b),
+	  __imm_addr(map_hash_16b),
+	  __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: known scalar += value_ptr const vs const (ne)")
+__success __failure_unpriv
+__msg_unpriv("R1 tried to add from different maps, paths or scalars")
+__retval(1)
+__naked void ptr_const_vs_const_ne(void)
+{
+	asm volatile ("					\
+	r0 = *(u32*)(r1 + %[__sk_buff_len]);		\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	if r0 == 1 goto l0_%=;				\
+	r1 = %[map_hash_16b] ll;			\
+	if r0 != 1 goto l1_%=;				\
+l0_%=:	r1 = %[map_array_48b] ll;			\
+l1_%=:	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l2_%=;				\
+	r4 = *(u8*)(r0 + 0);				\
+	if r4 == 1 goto l3_%=;				\
+	r1 = 3;						\
+	goto l4_%=;					\
+l3_%=:	r1 = 5;						\
+l4_%=:	r1 += r0;					\
+	r0 = *(u8*)(r1 + 0);				\
+l2_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b),
+	  __imm_addr(map_hash_16b),
+	  __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: known scalar += value_ptr const vs const (eq)")
+__success __success_unpriv __retval(1)
+__naked void ptr_const_vs_const_eq(void)
+{
+	asm volatile ("					\
+	r0 = *(u32*)(r1 + %[__sk_buff_len]);		\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	if r0 == 1 goto l0_%=;				\
+	r1 = %[map_hash_16b] ll;			\
+	if r0 != 1 goto l1_%=;				\
+l0_%=:	r1 = %[map_array_48b] ll;			\
+l1_%=:	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l2_%=;				\
+	r4 = *(u8*)(r0 + 0);				\
+	if r4 == 1 goto l3_%=;				\
+	r1 = 5;						\
+	goto l4_%=;					\
+l3_%=:	r1 = 5;						\
+l4_%=:	r1 += r0;					\
+	r0 = *(u8*)(r1 + 0);				\
+l2_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b),
+	  __imm_addr(map_hash_16b),
+	  __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: known scalar += value_ptr unknown vs unknown (eq)")
+__success __success_unpriv __retval(1)
+__naked void ptr_unknown_vs_unknown_eq(void)
+{
+	asm volatile ("					\
+	r0 = *(u32*)(r1 + %[__sk_buff_len]);		\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	if r0 == 1 goto l0_%=;				\
+	r1 = %[map_hash_16b] ll;			\
+	if r0 != 1 goto l1_%=;				\
+l0_%=:	r1 = %[map_array_48b] ll;			\
+l1_%=:	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l2_%=;				\
+	r4 = *(u8*)(r0 + 0);				\
+	if r4 == 1 goto l3_%=;				\
+	r1 = 6;						\
+	r1 = -r1;					\
+	r1 &= 0x7;					\
+	goto l4_%=;					\
+l3_%=:	r1 = 6;						\
+	r1 = -r1;					\
+	r1 &= 0x7;					\
+l4_%=:	r1 += r0;					\
+	r0 = *(u8*)(r1 + 0);				\
+l2_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b),
+	  __imm_addr(map_hash_16b),
+	  __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: known scalar += value_ptr unknown vs unknown (lt)")
+__success __failure_unpriv
+__msg_unpriv("R1 tried to add from different maps, paths or scalars")
+__retval(1)
+__naked void ptr_unknown_vs_unknown_lt(void)
+{
+	asm volatile ("					\
+	r0 = *(u32*)(r1 + %[__sk_buff_len]);		\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	if r0 == 1 goto l0_%=;				\
+	r1 = %[map_hash_16b] ll;			\
+	if r0 != 1 goto l1_%=;				\
+l0_%=:	r1 = %[map_array_48b] ll;			\
+l1_%=:	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l2_%=;				\
+	r4 = *(u8*)(r0 + 0);				\
+	if r4 == 1 goto l3_%=;				\
+	r1 = 6;						\
+	r1 = -r1;					\
+	r1 &= 0x3;					\
+	goto l4_%=;					\
+l3_%=:	r1 = 6;						\
+	r1 = -r1;					\
+	r1 &= 0x7;					\
+l4_%=:	r1 += r0;					\
+	r0 = *(u8*)(r1 + 0);				\
+l2_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b),
+	  __imm_addr(map_hash_16b),
+	  __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: known scalar += value_ptr unknown vs unknown (gt)")
+__success __failure_unpriv
+__msg_unpriv("R1 tried to add from different maps, paths or scalars")
+__retval(1)
+__naked void ptr_unknown_vs_unknown_gt(void)
+{
+	asm volatile ("					\
+	r0 = *(u32*)(r1 + %[__sk_buff_len]);		\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	if r0 == 1 goto l0_%=;				\
+	r1 = %[map_hash_16b] ll;			\
+	if r0 != 1 goto l1_%=;				\
+l0_%=:	r1 = %[map_array_48b] ll;			\
+l1_%=:	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l2_%=;				\
+	r4 = *(u8*)(r0 + 0);				\
+	if r4 == 1 goto l3_%=;				\
+	r1 = 6;						\
+	r1 = -r1;					\
+	r1 &= 0x7;					\
+	goto l4_%=;					\
+l3_%=:	r1 = 6;						\
+	r1 = -r1;					\
+	r1 &= 0x3;					\
+l4_%=:	r1 += r0;					\
+	r0 = *(u8*)(r1 + 0);				\
+l2_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b),
+	  __imm_addr(map_hash_16b),
+	  __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: known scalar += value_ptr from different maps")
+__success __success_unpriv __retval(1)
+__naked void value_ptr_from_different_maps(void)
+{
+	asm volatile ("					\
+	r0 = *(u32*)(r1 + %[__sk_buff_len]);		\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	if r0 == 1 goto l0_%=;				\
+	r1 = %[map_hash_16b] ll;			\
+	if r0 != 1 goto l1_%=;				\
+l0_%=:	r1 = %[map_array_48b] ll;			\
+l1_%=:	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l2_%=;				\
+	r1 = 4;						\
+	r1 += r0;					\
+	r0 = *(u8*)(r1 + 0);				\
+l2_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b),
+	  __imm_addr(map_hash_16b),
+	  __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr -= known scalar from different maps")
+__success __failure_unpriv
+__msg_unpriv("R0 min value is outside of the allowed memory range")
+__retval(1)
+__naked void known_scalar_from_different_maps(void)
+{
+	asm volatile ("					\
+	r0 = *(u32*)(r1 + %[__sk_buff_len]);		\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	if r0 == 1 goto l0_%=;				\
+	r1 = %[map_hash_16b] ll;			\
+	if r0 != 1 goto l1_%=;				\
+l0_%=:	r1 = %[map_array_48b] ll;			\
+l1_%=:	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l2_%=;				\
+	r1 = 4;						\
+	r0 -= r1;					\
+	r0 += r1;					\
+	r0 = *(u8*)(r0 + 0);				\
+l2_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b),
+	  __imm_addr(map_hash_16b),
+	  __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: known scalar += value_ptr from different maps, but same value properties")
+__success __success_unpriv __retval(1)
+__naked void maps_but_same_value_properties(void)
+{
+	asm volatile ("					\
+	r0 = *(u32*)(r1 + %[__sk_buff_len]);		\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	if r0 == 1 goto l0_%=;				\
+	r1 = %[map_hash_48b] ll;			\
+	if r0 != 1 goto l1_%=;				\
+l0_%=:	r1 = %[map_array_48b] ll;			\
+l1_%=:	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l2_%=;				\
+	r1 = 4;						\
+	r1 += r0;					\
+	r0 = *(u8*)(r1 + 0);				\
+l2_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b),
+	  __imm_addr(map_hash_48b),
+	  __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: mixing value pointer and scalar, 1")
+__success __failure_unpriv __msg_unpriv("R2 pointer comparison prohibited")
+__retval(0)
+__naked void value_pointer_and_scalar_1(void)
+{
+	asm volatile ("					\
+	/* load map value pointer into r0 and r2 */	\
+	r0 = 1;						\
+	r1 = %[map_array_48b] ll;			\
+	r2 = r10;					\
+	r2 += -16;					\
+	r6 = 0;						\
+	*(u64*)(r10 - 16) = r6;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	/* load some number from the map into r1 */	\
+	r1 = *(u8*)(r0 + 0);				\
+	/* depending on r1, branch: */			\
+	if r1 != 0 goto l1_%=;				\
+	/* branch A */					\
+	r2 = r0;					\
+	r3 = 0;						\
+	goto l2_%=;					\
+l1_%=:	/* branch B */					\
+	r2 = 0;						\
+	r3 = 0x100000;					\
+l2_%=:	/* common instruction */			\
+	r2 += r3;					\
+	/* depending on r1, branch: */			\
+	if r1 != 0 goto l3_%=;				\
+	/* branch A */					\
+	goto l4_%=;					\
+l3_%=:	/* branch B */					\
+	r0 = 0x13371337;				\
+	/* verifier follows fall-through */		\
+	if r2 != 0x100000 goto l4_%=;			\
+	r0 = 0;						\
+	exit;						\
+l4_%=:	/* fake-dead code; targeted from branch A to	\
+	 * prevent dead code sanitization		\
+	 */						\
+	r0 = *(u8*)(r0 + 0);				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: mixing value pointer and scalar, 2")
+__success __failure_unpriv __msg_unpriv("R0 invalid mem access 'scalar'")
+__retval(0)
+__naked void value_pointer_and_scalar_2(void)
+{
+	asm volatile ("					\
+	/* load map value pointer into r0 and r2 */	\
+	r0 = 1;						\
+	r1 = %[map_array_48b] ll;			\
+	r2 = r10;					\
+	r2 += -16;					\
+	r6 = 0;						\
+	*(u64*)(r10 - 16) = r6;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	/* load some number from the map into r1 */	\
+	r1 = *(u8*)(r0 + 0);				\
+	/* depending on r1, branch: */			\
+	if r1 == 0 goto l1_%=;				\
+	/* branch A */					\
+	r2 = 0;						\
+	r3 = 0x100000;					\
+	goto l2_%=;					\
+l1_%=:	/* branch B */					\
+	r2 = r0;					\
+	r3 = 0;						\
+l2_%=:	/* common instruction */			\
+	r2 += r3;					\
+	/* depending on r1, branch: */			\
+	if r1 != 0 goto l3_%=;				\
+	/* branch A */					\
+	goto l4_%=;					\
+l3_%=:	/* branch B */					\
+	r0 = 0x13371337;				\
+	/* verifier follows fall-through */		\
+	if r2 != 0x100000 goto l4_%=;			\
+	r0 = 0;						\
+	exit;						\
+l4_%=:	/* fake-dead code; targeted from branch A to	\
+	 * prevent dead code sanitization, rejected	\
+	 * via branch B however				\
+	 */						\
+	r0 = *(u8*)(r0 + 0);				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("sanitation: alu with different scalars 1")
+__success __success_unpriv __retval(0x100000)
+__naked void alu_with_different_scalars_1(void)
+{
+	asm volatile ("					\
+	r0 = 1;						\
+	r1 = %[map_array_48b] ll;			\
+	r2 = r10;					\
+	r2 += -16;					\
+	r6 = 0;						\
+	*(u64*)(r10 - 16) = r6;				\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r1 = *(u32*)(r0 + 0);				\
+	if r1 == 0 goto l1_%=;				\
+	r2 = 0;						\
+	r3 = 0x100000;					\
+	goto l2_%=;					\
+l1_%=:	r2 = 42;					\
+	r3 = 0x100001;					\
+l2_%=:	r2 += r3;					\
+	r0 = r2;					\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("sanitation: alu with different scalars 2")
+__success __success_unpriv __retval(0)
+__naked void alu_with_different_scalars_2(void)
+{
+	asm volatile ("					\
+	r0 = 1;						\
+	r1 = %[map_array_48b] ll;			\
+	r6 = r1;					\
+	r2 = r10;					\
+	r2 += -16;					\
+	r7 = 0;						\
+	*(u64*)(r10 - 16) = r7;				\
+	call %[bpf_map_delete_elem];			\
+	r7 = r0;					\
+	r1 = r6;					\
+	r2 = r10;					\
+	r2 += -16;					\
+	call %[bpf_map_delete_elem];			\
+	r6 = r0;					\
+	r8 = r6;					\
+	r8 += r7;					\
+	r0 = r8;					\
+	r0 += %[einval];				\
+	r0 += %[einval];				\
+	exit;						\
+"	:
+	: __imm(bpf_map_delete_elem),
+	  __imm_addr(map_array_48b),
+	  __imm_const(einval, EINVAL)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("sanitation: alu with different scalars 3")
+__success __success_unpriv __retval(0)
+__naked void alu_with_different_scalars_3(void)
+{
+	asm volatile ("					\
+	r0 = %[einval];					\
+	r0 *= -1;					\
+	r7 = r0;					\
+	r0 = %[einval];					\
+	r0 *= -1;					\
+	r6 = r0;					\
+	r8 = r6;					\
+	r8 += r7;					\
+	r0 = r8;					\
+	r0 += %[einval];				\
+	r0 += %[einval];				\
+	exit;						\
+"	:
+	: __imm_const(einval, EINVAL)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr += known scalar, upper oob arith, test 1")
+__success __failure_unpriv
+__msg_unpriv("R0 pointer arithmetic of map value goes out of range")
+__retval(1)
+__naked void upper_oob_arith_test_1(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = 48;					\
+	r0 += r1;					\
+	r0 -= r1;					\
+	r0 = *(u8*)(r0 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr += known scalar, upper oob arith, test 2")
+__success __failure_unpriv
+__msg_unpriv("R0 pointer arithmetic of map value goes out of range")
+__retval(1)
+__naked void upper_oob_arith_test_2(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = 49;					\
+	r0 += r1;					\
+	r0 -= r1;					\
+	r0 = *(u8*)(r0 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr += known scalar, upper oob arith, test 3")
+__success __success_unpriv __retval(1)
+__naked void upper_oob_arith_test_3(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = 47;					\
+	r0 += r1;					\
+	r0 -= r1;					\
+	r0 = *(u8*)(r0 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr -= known scalar, lower oob arith, test 1")
+__failure __msg("R0 min value is outside of the allowed memory range")
+__failure_unpriv
+__msg_unpriv("R0 pointer arithmetic of map value goes out of range")
+__naked void lower_oob_arith_test_1(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = 47;					\
+	r0 += r1;					\
+	r1 = 48;					\
+	r0 -= r1;					\
+	r0 = *(u8*)(r0 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr -= known scalar, lower oob arith, test 2")
+__success __failure_unpriv
+__msg_unpriv("R0 pointer arithmetic of map value goes out of range")
+__retval(1)
+__naked void lower_oob_arith_test_2(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = 47;					\
+	r0 += r1;					\
+	r1 = 48;					\
+	r0 -= r1;					\
+	r1 = 1;						\
+	r0 += r1;					\
+	r0 = *(u8*)(r0 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr -= known scalar, lower oob arith, test 3")
+__success __success_unpriv __retval(1)
+__naked void lower_oob_arith_test_3(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = 47;					\
+	r0 += r1;					\
+	r1 = 47;					\
+	r0 -= r1;					\
+	r0 = *(u8*)(r0 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: known scalar += value_ptr")
+__success __success_unpriv __retval(1)
+__naked void access_known_scalar_value_ptr_1(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = 4;						\
+	r1 += r0;					\
+	r0 = *(u8*)(r1 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr += known scalar, 1")
+__success __success_unpriv __retval(1)
+__naked void value_ptr_known_scalar_1(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = 4;						\
+	r0 += r1;					\
+	r1 = *(u8*)(r0 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr += known scalar, 2")
+__failure __msg("invalid access to map value")
+__failure_unpriv
+__naked void value_ptr_known_scalar_2_1(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = 49;					\
+	r0 += r1;					\
+	r1 = *(u8*)(r0 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr += known scalar, 3")
+__failure __msg("invalid access to map value")
+__failure_unpriv
+__naked void value_ptr_known_scalar_3(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = -1;					\
+	r0 += r1;					\
+	r1 = *(u8*)(r0 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr += known scalar, 4")
+__success __success_unpriv __retval(1)
+__naked void value_ptr_known_scalar_4(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = 5;						\
+	r0 += r1;					\
+	r1 = -2;					\
+	r0 += r1;					\
+	r1 = -1;					\
+	r0 += r1;					\
+	r1 = *(u8*)(r0 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr += known scalar, 5")
+__success __success_unpriv __retval(0xabcdef12)
+__naked void value_ptr_known_scalar_5(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = %[__imm_0];				\
+	r1 += r0;					\
+	r0 = *(u32*)(r1 + 0);				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b),
+	  __imm_const(__imm_0, (6 + 1) * sizeof(int))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr += known scalar, 6")
+__success __success_unpriv __retval(0xabcdef12)
+__naked void value_ptr_known_scalar_6(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = %[__imm_0];				\
+	r0 += r1;					\
+	r1 = %[__imm_1];				\
+	r0 += r1;					\
+	r0 = *(u32*)(r0 + 0);				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b),
+	  __imm_const(__imm_0, (3 + 1) * sizeof(int)),
+	  __imm_const(__imm_1, 3 * sizeof(int))
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr += N, value_ptr -= N known scalar")
+__success __success_unpriv __retval(0x12345678)
+__naked void value_ptr_n_known_scalar(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	w1 = 0x12345678;				\
+	*(u32*)(r0 + 0) = r1;				\
+	r0 += 2;					\
+	r1 = 2;						\
+	r0 -= r1;					\
+	r0 = *(u32*)(r0 + 0);				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: unknown scalar += value_ptr, 1")
+__success __success_unpriv __retval(1)
+__naked void unknown_scalar_value_ptr_1(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = *(u8*)(r0 + 0);				\
+	r1 &= 0xf;					\
+	r1 += r0;					\
+	r0 = *(u8*)(r1 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: unknown scalar += value_ptr, 2")
+__success __success_unpriv __retval(0xabcdef12) __flag(BPF_F_ANY_ALIGNMENT)
+__naked void unknown_scalar_value_ptr_2(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = *(u32*)(r0 + 0);				\
+	r1 &= 31;					\
+	r1 += r0;					\
+	r0 = *(u32*)(r1 + 0);				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: unknown scalar += value_ptr, 3")
+__success __failure_unpriv
+__msg_unpriv("R0 pointer arithmetic of map value goes out of range")
+__retval(0xabcdef12) __flag(BPF_F_ANY_ALIGNMENT)
+__naked void unknown_scalar_value_ptr_3(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = -1;					\
+	r0 += r1;					\
+	r1 = 1;						\
+	r0 += r1;					\
+	r1 = *(u32*)(r0 + 0);				\
+	r1 &= 31;					\
+	r1 += r0;					\
+	r0 = *(u32*)(r1 + 0);				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: unknown scalar += value_ptr, 4")
+__failure __msg("R1 max value is outside of the allowed memory range")
+__msg_unpriv("R1 pointer arithmetic of map value goes out of range")
+__flag(BPF_F_ANY_ALIGNMENT)
+__naked void unknown_scalar_value_ptr_4(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = 19;					\
+	r0 += r1;					\
+	r1 = *(u32*)(r0 + 0);				\
+	r1 &= 31;					\
+	r1 += r0;					\
+	r0 = *(u32*)(r1 + 0);				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr += unknown scalar, 1")
+__success __success_unpriv __retval(1)
+__naked void value_ptr_unknown_scalar_1(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = *(u8*)(r0 + 0);				\
+	r1 &= 0xf;					\
+	r0 += r1;					\
+	r1 = *(u8*)(r0 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr += unknown scalar, 2")
+__success __success_unpriv __retval(0xabcdef12) __flag(BPF_F_ANY_ALIGNMENT)
+__naked void value_ptr_unknown_scalar_2_1(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = *(u32*)(r0 + 0);				\
+	r1 &= 31;					\
+	r0 += r1;					\
+	r0 = *(u32*)(r0 + 0);				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr += unknown scalar, 3")
+__success __success_unpriv __retval(1)
+__naked void value_ptr_unknown_scalar_3(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = *(u64*)(r0 + 0);				\
+	r2 = *(u64*)(r0 + 8);				\
+	r3 = *(u64*)(r0 + 16);				\
+	r1 &= 0xf;					\
+	r3 &= 1;					\
+	r3 |= 1;					\
+	if r2 > r3 goto l0_%=;				\
+	r0 += r3;					\
+	r0 = *(u8*)(r0 + 0);				\
+	r0 = 1;						\
+l1_%=:	exit;						\
+l0_%=:	r0 = 2;						\
+	goto l1_%=;					\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr += value_ptr")
+__failure __msg("R0 pointer += pointer prohibited")
+__failure_unpriv
+__naked void access_value_ptr_value_ptr_1(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r0 += r0;					\
+	r1 = *(u8*)(r0 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: known scalar -= value_ptr")
+__failure __msg("R1 tried to subtract pointer from scalar")
+__failure_unpriv
+__naked void access_known_scalar_value_ptr_2(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = 4;						\
+	r1 -= r0;					\
+	r0 = *(u8*)(r1 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr -= known scalar")
+__failure __msg("R0 min value is outside of the allowed memory range")
+__failure_unpriv
+__naked void access_value_ptr_known_scalar(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = 4;						\
+	r0 -= r1;					\
+	r1 = *(u8*)(r0 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr -= known scalar, 2")
+__success __success_unpriv __retval(1)
+__naked void value_ptr_known_scalar_2_2(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = 6;						\
+	r2 = 4;						\
+	r0 += r1;					\
+	r0 -= r2;					\
+	r1 = *(u8*)(r0 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: unknown scalar -= value_ptr")
+__failure __msg("R1 tried to subtract pointer from scalar")
+__failure_unpriv
+__naked void access_unknown_scalar_value_ptr(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = *(u8*)(r0 + 0);				\
+	r1 &= 0xf;					\
+	r1 -= r0;					\
+	r0 = *(u8*)(r1 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr -= unknown scalar")
+__failure __msg("R0 min value is negative")
+__failure_unpriv
+__naked void access_value_ptr_unknown_scalar(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = *(u8*)(r0 + 0);				\
+	r1 &= 0xf;					\
+	r0 -= r1;					\
+	r1 = *(u8*)(r0 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr -= unknown scalar, 2")
+__success __failure_unpriv
+__msg_unpriv("R0 pointer arithmetic of map value goes out of range")
+__retval(1)
+__naked void value_ptr_unknown_scalar_2_2(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r1 = *(u8*)(r0 + 0);				\
+	r1 &= 0xf;					\
+	r1 |= 0x7;					\
+	r0 += r1;					\
+	r1 = *(u8*)(r0 + 0);				\
+	r1 &= 0x7;					\
+	r0 -= r1;					\
+	r1 = *(u8*)(r0 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: value_ptr -= value_ptr")
+__failure __msg("R0 invalid mem access 'scalar'")
+__msg_unpriv("R0 pointer -= pointer prohibited")
+__naked void access_value_ptr_value_ptr_2(void)
+{
+	asm volatile ("					\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 == 0 goto l0_%=;				\
+	r0 -= r0;					\
+	r1 = *(u8*)(r0 + 0);				\
+l0_%=:	r0 = 1;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("map access: trying to leak tainted dst reg")
+__failure __msg("math between map_value pointer and 4294967295 is not allowed")
+__failure_unpriv
+__naked void to_leak_tainted_dst_reg(void)
+{
+	asm volatile ("					\
+	r0 = 0;						\
+	r1 = 0;						\
+	*(u64*)(r10 - 8) = r1;				\
+	r2 = r10;					\
+	r2 += -8;					\
+	r1 = %[map_array_48b] ll;			\
+	call %[bpf_map_lookup_elem];			\
+	if r0 != 0 goto l0_%=;				\
+	exit;						\
+l0_%=:	r2 = r0;					\
+	w1 = 0xFFFFFFFF;				\
+	w1 = w1;					\
+	r2 -= r1;					\
+	*(u64*)(r0 + 0) = r2;				\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_map_lookup_elem),
+	  __imm_addr(map_array_48b)
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("32bit pkt_ptr -= scalar")
+__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT)
+__naked void _32bit_pkt_ptr_scalar(void)
+{
+	asm volatile ("					\
+	r8 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r7 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r6 = r7;					\
+	r6 += 40;					\
+	if r6 > r8 goto l0_%=;				\
+	w4 = w7;					\
+	w6 -= w4;					\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+SEC("tc")
+__description("32bit scalar -= pkt_ptr")
+__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT)
+__naked void _32bit_scalar_pkt_ptr(void)
+{
+	asm volatile ("					\
+	r8 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
+	r7 = *(u32*)(r1 + %[__sk_buff_data]);		\
+	r6 = r7;					\
+	r6 += 40;					\
+	if r6 > r8 goto l0_%=;				\
+	w4 = w6;					\
+	w4 -= w7;					\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
deleted file mode 100644
index 249187d3c530..000000000000
--- a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
+++ /dev/null
@@ -1,1140 +0,0 @@
-{
-	"map access: known scalar += value_ptr unknown vs const",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, len)),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
-	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 4),
-	BPF_MOV64_IMM(BPF_REG_1, 6),
-	BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
-	BPF_MOV64_IMM(BPF_REG_1, 3),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_16b = { 5 },
-	.fixup_map_array_48b = { 8 },
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R1 tried to add from different maps, paths or scalars",
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"map access: known scalar += value_ptr const vs unknown",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, len)),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
-	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 2),
-	BPF_MOV64_IMM(BPF_REG_1, 3),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_1, 6),
-	BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_16b = { 5 },
-	.fixup_map_array_48b = { 8 },
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R1 tried to add from different maps, paths or scalars",
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"map access: known scalar += value_ptr const vs const (ne)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, len)),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
-	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 2),
-	BPF_MOV64_IMM(BPF_REG_1, 3),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
-	BPF_MOV64_IMM(BPF_REG_1, 5),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_16b = { 5 },
-	.fixup_map_array_48b = { 8 },
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R1 tried to add from different maps, paths or scalars",
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"map access: known scalar += value_ptr const vs const (eq)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, len)),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
-	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 2),
-	BPF_MOV64_IMM(BPF_REG_1, 5),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
-	BPF_MOV64_IMM(BPF_REG_1, 5),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_16b = { 5 },
-	.fixup_map_array_48b = { 8 },
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"map access: known scalar += value_ptr unknown vs unknown (eq)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, len)),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
-	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 4),
-	BPF_MOV64_IMM(BPF_REG_1, 6),
-	BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_1, 6),
-	BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_16b = { 5 },
-	.fixup_map_array_48b = { 8 },
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"map access: known scalar += value_ptr unknown vs unknown (lt)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, len)),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
-	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 4),
-	BPF_MOV64_IMM(BPF_REG_1, 6),
-	BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x3),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_1, 6),
-	BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_16b = { 5 },
-	.fixup_map_array_48b = { 8 },
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R1 tried to add from different maps, paths or scalars",
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"map access: known scalar += value_ptr unknown vs unknown (gt)",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, len)),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
-	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 4),
-	BPF_MOV64_IMM(BPF_REG_1, 6),
-	BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_1, 6),
-	BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x3),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_16b = { 5 },
-	.fixup_map_array_48b = { 8 },
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R1 tried to add from different maps, paths or scalars",
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"map access: known scalar += value_ptr from different maps",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, len)),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_1, 4),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_16b = { 5 },
-	.fixup_map_array_48b = { 8 },
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"map access: value_ptr -= known scalar from different maps",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, len)),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
-	BPF_MOV64_IMM(BPF_REG_1, 4),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_16b = { 5 },
-	.fixup_map_array_48b = { 8 },
-	.result = ACCEPT,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R0 min value is outside of the allowed memory range",
-	.retval = 1,
-},
-{
-	"map access: known scalar += value_ptr from different maps, but same value properties",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
-		    offsetof(struct __sk_buff, len)),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_1, 4),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_hash_48b = { 5 },
-	.fixup_map_array_48b = { 8 },
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"map access: mixing value pointer and scalar, 1",
-	.insns = {
-	// load map value pointer into r0 and r2
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_LD_MAP_FD(BPF_REG_ARG1, 0),
-	BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_FP),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -16),
-	BPF_ST_MEM(BPF_DW, BPF_REG_FP, -16, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	// load some number from the map into r1
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	// depending on r1, branch:
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 3),
-	// branch A
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_JMP_A(2),
-	// branch B
-	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_3, 0x100000),
-	// common instruction
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3),
-	// depending on r1, branch:
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
-	// branch A
-	BPF_JMP_A(4),
-	// branch B
-	BPF_MOV64_IMM(BPF_REG_0, 0x13371337),
-	// verifier follows fall-through
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 0x100000, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	// fake-dead code; targeted from branch A to
-	// prevent dead code sanitization
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 1 },
-	.result = ACCEPT,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R2 pointer comparison prohibited",
-	.retval = 0,
-},
-{
-	"map access: mixing value pointer and scalar, 2",
-	.insns = {
-	// load map value pointer into r0 and r2
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_LD_MAP_FD(BPF_REG_ARG1, 0),
-	BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_FP),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -16),
-	BPF_ST_MEM(BPF_DW, BPF_REG_FP, -16, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	// load some number from the map into r1
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	// depending on r1, branch:
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
-	// branch A
-	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_3, 0x100000),
-	BPF_JMP_A(2),
-	// branch B
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_MOV64_IMM(BPF_REG_3, 0),
-	// common instruction
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3),
-	// depending on r1, branch:
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
-	// branch A
-	BPF_JMP_A(4),
-	// branch B
-	BPF_MOV64_IMM(BPF_REG_0, 0x13371337),
-	// verifier follows fall-through
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 0x100000, 2),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	// fake-dead code; targeted from branch A to
-	// prevent dead code sanitization, rejected
-	// via branch B however
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 1 },
-	.result = ACCEPT,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R0 invalid mem access 'scalar'",
-	.retval = 0,
-},
-{
-	"sanitation: alu with different scalars 1",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_LD_MAP_FD(BPF_REG_ARG1, 0),
-	BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_FP),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -16),
-	BPF_ST_MEM(BPF_DW, BPF_REG_FP, -16, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_3, 0x100000),
-	BPF_JMP_A(2),
-	BPF_MOV64_IMM(BPF_REG_2, 42),
-	BPF_MOV64_IMM(BPF_REG_3, 0x100001),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 1 },
-	.result = ACCEPT,
-	.retval = 0x100000,
-},
-{
-	"sanitation: alu with different scalars 2",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_FP),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
-	BPF_ST_MEM(BPF_DW, BPF_REG_FP, -16, 0),
-	BPF_EMIT_CALL(BPF_FUNC_map_delete_elem),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_FP),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
-	BPF_EMIT_CALL(BPF_FUNC_map_delete_elem),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_8, BPF_REG_6),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_8, BPF_REG_7),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_8),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 1 },
-	.result = ACCEPT,
-	.retval = -EINVAL * 2,
-},
-{
-	"sanitation: alu with different scalars 3",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, EINVAL),
-	BPF_ALU64_IMM(BPF_MUL, BPF_REG_0, -1),
-	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
-	BPF_MOV64_IMM(BPF_REG_0, EINVAL),
-	BPF_ALU64_IMM(BPF_MUL, BPF_REG_0, -1),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_MOV64_REG(BPF_REG_8, BPF_REG_6),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_8, BPF_REG_7),
-	BPF_MOV64_REG(BPF_REG_0, BPF_REG_8),
-	BPF_EXIT_INSN(),
-	},
-	.result = ACCEPT,
-	.retval = -EINVAL * 2,
-},
-{
-	"map access: value_ptr += known scalar, upper oob arith, test 1",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
-	BPF_MOV64_IMM(BPF_REG_1, 48),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = ACCEPT,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R0 pointer arithmetic of map value goes out of range",
-	.retval = 1,
-},
-{
-	"map access: value_ptr += known scalar, upper oob arith, test 2",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
-	BPF_MOV64_IMM(BPF_REG_1, 49),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = ACCEPT,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R0 pointer arithmetic of map value goes out of range",
-	.retval = 1,
-},
-{
-	"map access: value_ptr += known scalar, upper oob arith, test 3",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
-	BPF_MOV64_IMM(BPF_REG_1, 47),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"map access: value_ptr -= known scalar, lower oob arith, test 1",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
-	BPF_MOV64_IMM(BPF_REG_1, 47),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_MOV64_IMM(BPF_REG_1, 48),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = REJECT,
-	.errstr = "R0 min value is outside of the allowed memory range",
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R0 pointer arithmetic of map value goes out of range",
-},
-{
-	"map access: value_ptr -= known scalar, lower oob arith, test 2",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
-	BPF_MOV64_IMM(BPF_REG_1, 47),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_MOV64_IMM(BPF_REG_1, 48),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
-	BPF_MOV64_IMM(BPF_REG_1, 1),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = ACCEPT,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R0 pointer arithmetic of map value goes out of range",
-	.retval = 1,
-},
-{
-	"map access: value_ptr -= known scalar, lower oob arith, test 3",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
-	BPF_MOV64_IMM(BPF_REG_1, 47),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_MOV64_IMM(BPF_REG_1, 47),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"map access: known scalar += value_ptr",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_1, 4),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"map access: value_ptr += known scalar, 1",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_1, 4),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"map access: value_ptr += known scalar, 2",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_1, 49),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = REJECT,
-	.errstr = "invalid access to map value",
-},
-{
-	"map access: value_ptr += known scalar, 3",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_1, -1),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = REJECT,
-	.errstr = "invalid access to map value",
-},
-{
-	"map access: value_ptr += known scalar, 4",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
-	BPF_MOV64_IMM(BPF_REG_1, 5),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_MOV64_IMM(BPF_REG_1, -2),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_MOV64_IMM(BPF_REG_1, -1),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"map access: value_ptr += known scalar, 5",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_1, (6 + 1) * sizeof(int)),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = ACCEPT,
-	.retval = 0xabcdef12,
-},
-{
-	"map access: value_ptr += known scalar, 6",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
-	BPF_MOV64_IMM(BPF_REG_1, (3 + 1) * sizeof(int)),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_MOV64_IMM(BPF_REG_1, 3 * sizeof(int)),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = ACCEPT,
-	.retval = 0xabcdef12,
-},
-{
-	"map access: value_ptr += N, value_ptr -= N known scalar",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
-	BPF_MOV32_IMM(BPF_REG_1, 0x12345678),
-	BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 2),
-	BPF_MOV64_IMM(BPF_REG_1, 2),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = ACCEPT,
-	.retval = 0x12345678,
-},
-{
-	"map access: unknown scalar += value_ptr, 1",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"map access: unknown scalar += value_ptr, 2",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
-	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 31),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = ACCEPT,
-	.retval = 0xabcdef12,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"map access: unknown scalar += value_ptr, 3",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
-	BPF_MOV64_IMM(BPF_REG_1, -1),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_MOV64_IMM(BPF_REG_1, 1),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 31),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = ACCEPT,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R0 pointer arithmetic of map value goes out of range",
-	.retval = 0xabcdef12,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"map access: unknown scalar += value_ptr, 4",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
-	BPF_MOV64_IMM(BPF_REG_1, 19),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 31),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = REJECT,
-	.errstr = "R1 max value is outside of the allowed memory range",
-	.errstr_unpriv = "R1 pointer arithmetic of map value goes out of range",
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"map access: value_ptr += unknown scalar, 1",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"map access: value_ptr += unknown scalar, 2",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
-	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 31),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = ACCEPT,
-	.retval = 0xabcdef12,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"map access: value_ptr += unknown scalar, 3",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 8),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 16),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_3, 1),
-	BPF_ALU64_IMM(BPF_OR, BPF_REG_3, 1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_3, 4),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_3),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_IMM(BPF_REG_0, 2),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -3),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"map access: value_ptr += value_ptr",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_0),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = REJECT,
-	.errstr = "R0 pointer += pointer prohibited",
-},
-{
-	"map access: known scalar -= value_ptr",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_1, 4),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = REJECT,
-	.errstr = "R1 tried to subtract pointer from scalar",
-},
-{
-	"map access: value_ptr -= known scalar",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
-	BPF_MOV64_IMM(BPF_REG_1, 4),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = REJECT,
-	.errstr = "R0 min value is outside of the allowed memory range",
-},
-{
-	"map access: value_ptr -= known scalar, 2",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
-	BPF_MOV64_IMM(BPF_REG_1, 6),
-	BPF_MOV64_IMM(BPF_REG_2, 4),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_2),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = ACCEPT,
-	.retval = 1,
-},
-{
-	"map access: unknown scalar -= value_ptr",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = REJECT,
-	.errstr = "R1 tried to subtract pointer from scalar",
-},
-{
-	"map access: value_ptr -= unknown scalar",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = REJECT,
-	.errstr = "R0 min value is negative",
-},
-{
-	"map access: value_ptr -= unknown scalar, 2",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf),
-	BPF_ALU64_IMM(BPF_OR, BPF_REG_1, 0x7),
-	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = ACCEPT,
-	.result_unpriv = REJECT,
-	.errstr_unpriv = "R0 pointer arithmetic of map value goes out of range",
-	.retval = 1,
-},
-{
-	"map access: value_ptr -= value_ptr",
-	.insns = {
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_0),
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 3 },
-	.result = REJECT,
-	.errstr = "R0 invalid mem access 'scalar'",
-	.errstr_unpriv = "R0 pointer -= pointer prohibited",
-},
-{
-	"map access: trying to leak tainted dst reg",
-	.insns = {
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_EXIT_INSN(),
-	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_MOV32_IMM(BPF_REG_1, 0xFFFFFFFF),
-	BPF_MOV32_REG(BPF_REG_1, BPF_REG_1),
-	BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1),
-	BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 0),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.fixup_map_array_48b = { 4 },
-	.result = REJECT,
-	.errstr = "math between map_value pointer and 4294967295 is not allowed",
-},
-{
-	"32bit pkt_ptr -= scalar",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_7),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 40),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_8, 2),
-	BPF_ALU32_REG(BPF_MOV, BPF_REG_4, BPF_REG_7),
-	BPF_ALU32_REG(BPF_SUB, BPF_REG_6, BPF_REG_4),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-{
-	"32bit scalar -= pkt_ptr",
-	.insns = {
-	BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_1,
-		    offsetof(struct __sk_buff, data_end)),
-	BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1,
-		    offsetof(struct __sk_buff, data)),
-	BPF_MOV64_REG(BPF_REG_6, BPF_REG_7),
-	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 40),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_8, 2),
-	BPF_ALU32_REG(BPF_MOV, BPF_REG_4, BPF_REG_6),
-	BPF_ALU32_REG(BPF_SUB, BPF_REG_4, BPF_REG_7),
-	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_EXIT_INSN(),
-	},
-	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-	.result = ACCEPT,
-	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
-},
-- 
2.40.0



* Re: [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (23 preceding siblings ...)
  2023-04-21 17:42 ` [PATCH bpf-next 24/24] selftests/bpf: verifier/value_ptr_arith " Eduard Zingerman
@ 2023-04-21 19:40 ` patchwork-bot+netdevbpf
  2023-04-21 19:49   ` Eduard Zingerman
  2023-04-21 19:48 ` Alexei Starovoitov
  25 siblings, 1 reply; 30+ messages in thread
From: patchwork-bot+netdevbpf @ 2023-04-21 19:40 UTC (permalink / raw)
  To: Eduard Zingerman; +Cc: bpf, ast, andrii, daniel, martin.lau, kernel-team, yhs

Hello:

This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Fri, 21 Apr 2023 20:42:10 +0300 you wrote:
> This is a follow up for RFC [1]. It migrates a second batch of 23
> verifier/*.c tests to inline assembly and use of ./test_progs for
> actual execution. Link to the first batch is [2].
> 
> The migration is done by a python script (see [3]) with minimal manual
> adjustments.
> 
> [...]

Here is the summary with links:
  - [bpf-next,01/24] selftests/bpf: Add notion of auxiliary programs for test_loader
    https://git.kernel.org/bpf/bpf-next/c/63bb645b9da3
  - [bpf-next,02/24] selftests/bpf: verifier/bounds converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/c92336559ac0
  - [bpf-next,03/24] selftests/bpf: verifier/bpf_get_stack converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/965a3f913e72
  - [bpf-next,04/24] selftests/bpf: verifier/btf_ctx_access converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/37467c79e16a
  - [bpf-next,05/24] selftests/bpf: verifier/ctx converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/fcd36964f22b
  - [bpf-next,06/24] selftests/bpf: verifier/d_path converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/608028024384
  - [bpf-next,07/24] selftests/bpf: verifier/direct_packet_access converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/0a372c9c0812
  - [bpf-next,08/24] selftests/bpf: verifier/jeq_infer_not_null converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/a5828e3154d1
  - [bpf-next,09/24] selftests/bpf: verifier/loops1 converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/a6fc14dc5e8d
  - [bpf-next,10/24] selftests/bpf: verifier/lwt converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/b427ca576f83
  - [bpf-next,11/24] selftests/bpf: verifier/map_in_map converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/4a400ef9ba41
  - [bpf-next,12/24] selftests/bpf: verifier/map_ptr_mixing converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/aee1779f0dec
  - [bpf-next,13/24] selftests/bpf: verifier/precise converted to inline assembly
    (no matching commit)
  - [bpf-next,14/24] selftests/bpf: verifier/prevent_map_lookup converted to inline assembly
    (no matching commit)
  - [bpf-next,15/24] selftests/bpf: verifier/ref_tracking converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/8be632795996
  - [bpf-next,16/24] selftests/bpf: verifier/regalloc converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/16a42573c253
  - [bpf-next,17/24] selftests/bpf: verifier/runtime_jit converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/65222842ca04
  - [bpf-next,18/24] selftests/bpf: verifier/search_pruning converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/034d9ad25db3
  - [bpf-next,19/24] selftests/bpf: verifier/sock converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/426fc0e3fce2
  - [bpf-next,20/24] selftests/bpf: verifier/spin_lock converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/f323a81806bd
  - [bpf-next,21/24] selftests/bpf: verifier/subreg converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/81d1d6dd4037
  - [bpf-next,22/24] selftests/bpf: verifier/unpriv converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/82887c2568e4
  - [bpf-next,23/24] selftests/bpf: verifier/value_illegal_alu converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/efe25a330b10
  - [bpf-next,24/24] selftests/bpf: verifier/value_ptr_arith converted to inline assembly
    https://git.kernel.org/bpf/bpf-next/c/4db10a8243df

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




* Re: [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly
  2023-04-21 17:42 [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly Eduard Zingerman
                   ` (24 preceding siblings ...)
  2023-04-21 19:40 ` [PATCH bpf-next 00/24] Second set of verifier/*.c migrated " patchwork-bot+netdevbpf
@ 2023-04-21 19:48 ` Alexei Starovoitov
  2023-04-21 20:00   ` Eduard Zingerman
  25 siblings, 1 reply; 30+ messages in thread
From: Alexei Starovoitov @ 2023-04-21 19:48 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Kernel Team, Yonghong Song

On Fri, Apr 21, 2023 at 10:42 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> This is a follow up for RFC [1]. It migrates a second batch of 23
> verifier/*.c tests to inline assembly and use of ./test_progs for
> actual execution. Link to the first batch is [2].
>
> The migration is done by a python script (see [3]) with minimal manual
> adjustments.

All makes sense to me.
Took 22 out of 24 patches.
Patches 13 and 14 had conflicts.
Also, there is a precision fix in the bpf tree.
So we're going to wait for bpf/bpf-next to converge during
the merge window next week, then add another precision test as asm
and then regenerate conversion of the precision tests.
Not sure why patch 14 was conflicting.


* Re: [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly
  2023-04-21 19:40 ` [PATCH bpf-next 00/24] Second set of verifier/*.c migrated " patchwork-bot+netdevbpf
@ 2023-04-21 19:49   ` Eduard Zingerman
  2023-04-21 19:53     ` Alexei Starovoitov
  0 siblings, 1 reply; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 19:49 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: bpf, andrii, daniel, martin.lau, kernel-team, yhs

On Fri, 2023-04-21 at 19:40 +0000, patchwork-bot+netdevbpf@kernel.org wrote:
> Hello:
> 
> This series was applied to bpf/bpf-next.git (master)
> by Alexei Starovoitov <ast@kernel.org>:

Hi Alexei,

Thank you for merging these changes!

I've noticed that the email from the bot does not list
commit hashes for patches #13 and #14 (precise, prevent_map_lookup),
and these commits are indeed not in the git tree [1].
Is this intentional?

Thanks,
Eduard

[1] https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/log/



* Re: [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly
  2023-04-21 19:49   ` Eduard Zingerman
@ 2023-04-21 19:53     ` Alexei Starovoitov
  0 siblings, 0 replies; 30+ messages in thread
From: Alexei Starovoitov @ 2023-04-21 19:53 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: Alexei Starovoitov, bpf, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Kernel Team, Yonghong Song

On Fri, Apr 21, 2023 at 12:49 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Fri, 2023-04-21 at 19:40 +0000, patchwork-bot+netdevbpf@kernel.org wrote:
> > Hello:
> >
> > This series was applied to bpf/bpf-next.git (master)
> > by Alexei Starovoitov <ast@kernel.org>:
>
> Hi Alexei,
>
> Thank you for merging these changes!
>
> I've noticed that email from the bot does not list
> commit hashes for patches #13,14 (precise, prevent_map_lookup).
> And these commits are indeed not in git [1].
> Is this intentional?

Yes. See other reply.


* Re: [PATCH bpf-next 00/24] Second set of verifier/*.c migrated to inline assembly
  2023-04-21 19:48 ` Alexei Starovoitov
@ 2023-04-21 20:00   ` Eduard Zingerman
  0 siblings, 0 replies; 30+ messages in thread
From: Eduard Zingerman @ 2023-04-21 20:00 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Kernel Team, Yonghong Song

On Fri, 2023-04-21 at 12:48 -0700, Alexei Starovoitov wrote:
> On Fri, Apr 21, 2023 at 10:42 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > 
> > This is a follow up for RFC [1]. It migrates a second batch of 23
> > verifier/*.c tests to inline assembly and use of ./test_progs for
> > actual execution. Link to the first batch is [2].
> > 
> > The migration is done by a python script (see [3]) with minimal manual
> > adjustments.
> 
> All makes sense to me.
> Took 22 out of 24 patches.
> The 13 and 14 had conflicts.
> Also there is a precision fix in bpf tree.
> So we're going to wait for bpf/bpf-next to converge during
> the merge window next week, then add another precision test as asm
> and then regenerate conversion of the precision tests.
> Not sure why 14 was conflicting.

Oh, understood, thank you.
Patch #14 does not apply because it also starts with 'p',
so the context of the hunk below does not match:

--- tools/testing/selftests/bpf/prog_tests/verifier.c
+++ tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -41,6 +41,7 @@
 #include "verifier_masking.skel.h"
 #include "verifier_meta_access.skel.h"
 #include "verifier_precise.skel.h"
+#include "verifier_prevent_map_lookup.skel.h"
 #include "verifier_raw_stack.skel.h"
 #include "verifier_raw_tp_writable.skel.h"
 #include "verifier_reg_equal.skel.h"

I will resend it shortly.


