* [PATCH bpf-next v5 0/4]  verify scalar ids mapping in regsafe()
@ 2023-06-12 16:07 Eduard Zingerman
  2023-06-12 16:07 ` [PATCH bpf-next v5 1/4] bpf: use scalar ids in mark_chain_precision() Eduard Zingerman
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Eduard Zingerman @ 2023-06-12 16:07 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Update regsafe() to use check_ids() for scalar values.
Otherwise the following unsafe pattern is accepted by the verifier:

  1: r9 = ... some pointer with range X ...
  2: r6 = ... unbound scalar ID=a ...
  3: r7 = ... unbound scalar ID=b ...
  4: if (r6 > r7) goto +1
  5: r6 = r7
  6: if (r6 > X) goto ...
  --- checkpoint ---
  7: r9 += r7
  8: *(u64 *)r9 = Y

This example is unsafe because not all execution paths verify r7's range.
Because of the jump at (4) the verifier arrives at (6) in two states:
I.  r6{.id=b}, r7{.id=b} via path 1-6;
II. r6{.id=a}, r7{.id=b} via path 1-4, 6.

Currently regsafe() does not call check_ids() for scalar registers,
thus from the point of view of regsafe() states (I) and (II) are identical.
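
For illustration, here is a minimal standalone sketch of the ID mapping
idea (simplified; this is not the kernel implementation, all names and
sizes below are made up):

  #include <stdbool.h>
  #include <stdio.h>

  #define SKETCH_MAP_SIZE 16

  struct sketch_id_pair { unsigned int old, cur; };

  /* Return true if the pairing (old_id, cur_id) is consistent with the
   * pairs recorded so far in map[] (empty slots have old == 0).
   */
  static bool check_ids_sketch(unsigned int old_id, unsigned int cur_id,
                               struct sketch_id_pair *map)
  {
          int i;

          for (i = 0; i < SKETCH_MAP_SIZE; i++) {
                  if (!map[i].old) {
                          /* first time this old_id is seen, record the pair */
                          map[i].old = old_id;
                          map[i].cur = cur_id;
                          return true;
                  }
                  if (map[i].old == old_id)
                          return map[i].cur == cur_id;
          }
          return false; /* ran out of map slots */
  }

  int main(void)
  {
          struct sketch_id_pair map[SKETCH_MAP_SIZE] = {};
          unsigned int a = 1, b = 2;

          /* cached state (I):   r6.id = b, r7.id = b
           * current state (II): r6.id = a, r7.id = b
           */
          /* r6: returns 1 (true), records b -> a */
          printf("r6: %d\n", check_ids_sketch(b, a, map));
          /* r7: returns 0 (false), b is already mapped to a */
          printf("r7: %d\n", check_ids_sketch(b, b, map));
          return 0;
  }

With such a check states (I) and (II) are no longer considered
equivalent, so the path [1-4, 6] is verified on its own and the access
through the unbounded r7 is rejected.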

The change is split in two parts:
- patches #1,2: update mark_chain_precision() to propagate
  precision marks through scalar IDs;
- patches #3,4: update regsafe() to use a special version of
  check_ids() for precise scalar values.

Changelog:
- V4 -> V5 (all changes are based on feedback for V4 from Andrii):
  - mark_precise_scalar_ids() error code is updated to EFAULT;
  - bpf_verifier_env::idmap_scratch field type is changed to struct
    bpf_idmap to encapsulate temporary ID generation counter;
  - regsafe() is updated to call scalar_regs_exact() only for
    env->explore_alu_limits case (this had no measurable impact on
    verification duration when tested using veristat).
- V3 -> V4:
  - check_ids() in regsafe() is replaced by check_scalar_ids(),
    as discussed with Andrii in [3].
    Note: I did not transfer Andrii's ack for patch #3 from V3 because
          of the changes to the algorithm.
  - reg_id_scratch is renamed to idset_scratch;
  - mark_precise_scalar_ids() is modified to propagate error from
    idset_push();
  - test cases adjusted according to feedback from Andrii for V3.
- V2 -> V3:
  - u32_hashset for IDs used for range transfer is removed;
  - mark_chain_precision() is updated as discussed with Andrii in [2].
- V1 -> V2:
  - 'rold->precise' and 'rold->id' checks are dropped as unsafe
    (thanks to discussion with Yonghong);
  - patches #3,4 add tracking of IDs used for range transfer in
    order to mitigate performance impact.
- RFC -> V1:
  - Function verifier.c:mark_equal_scalars_as_read() is dropped,
    as it was an incorrect fix for a problem solved by commit [3].
  - check_ids() is called only for precise scalar values.
  - Test case updated to use inline assembly.

[V1]  https://lore.kernel.org/bpf/20230526184126.3104040-1-eddyz87@gmail.com/
[V2]  https://lore.kernel.org/bpf/20230530172739.447290-1-eddyz87@gmail.com/T/
[V3]  https://lore.kernel.org/bpf/20230606222411.1820404-1-eddyz87@gmail.com/T/
[V4]  https://lore.kernel.org/bpf/20230609210143.2625430-1-eddyz87@gmail.com/
[RFC] https://lore.kernel.org/bpf/20221128163442.280187-1-eddyz87@gmail.com/
[1]   https://gist.github.com/eddyz87/a32ea7e62a27d3c201117c9a39ab4286
[2]   https://lore.kernel.org/bpf/20230530172739.447290-1-eddyz87@gmail.com/T/#mc21009dcd8574b195c1860a98014bb037f16f450
[3]   https://lore.kernel.org/bpf/20230606222411.1820404-1-eddyz87@gmail.com/T/#m89da8eeb2fa8c9ca1202c5d0b6660e1f72e45e04

Eduard Zingerman (4):
  bpf: use scalar ids in mark_chain_precision()
  selftests/bpf: check if mark_chain_precision() follows scalar ids
  bpf: verify scalar ids mapping in regsafe() using check_ids()
  selftests/bpf: verify that check_ids() is used for scalars in
    regsafe()

 include/linux/bpf_verifier.h                  |  25 +-
 kernel/bpf/verifier.c                         | 223 +++++-
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../selftests/bpf/progs/verifier_scalar_ids.c | 657 ++++++++++++++++++
 .../testing/selftests/bpf/verifier/precise.c  |   8 +-
 5 files changed, 883 insertions(+), 32 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_scalar_ids.c

-- 
2.40.1



* [PATCH bpf-next v5 1/4] bpf: use scalar ids in mark_chain_precision()
  2023-06-12 16:07 [PATCH bpf-next v5 0/4] verify scalar ids mapping in regsafe() Eduard Zingerman
@ 2023-06-12 16:07 ` Eduard Zingerman
  2023-06-12 19:56   ` Andrii Nakryiko
  2023-06-12 16:07 ` [PATCH bpf-next v5 2/4] selftests/bpf: check if mark_chain_precision() follows scalar ids Eduard Zingerman
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 13+ messages in thread
From: Eduard Zingerman @ 2023-06-12 16:07 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Change mark_chain_precision() to track precision in situations
like below:

    r2 = unknown value
    ...
  --- state #0 ---
    ...
    r1 = r2                 // r1 and r2 now share the same ID
    ...
  --- state #1 {r1.id = A, r2.id = A} ---
    ...
    if (r2 > 10) goto exit; // find_equal_scalars() assigns range to r1
    ...
  --- state #2 {r1.id = A, r2.id = A} ---
    r3 = r10
    r3 += r1                // need to mark both r1 and r2

At the beginning of the processing of each state, ensure that if a
register with a scalar ID is marked as precise, all registers sharing
this ID are also marked as precise.

This property would be used by a follow-up change in regsafe().

Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 include/linux/bpf_verifier.h                  |  10 +-
 kernel/bpf/verifier.c                         | 115 ++++++++++++++++++
 .../testing/selftests/bpf/verifier/precise.c  |   8 +-
 3 files changed, 128 insertions(+), 5 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 5fe589e11ac8..73a98f6240fd 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -559,6 +559,11 @@ struct backtrack_state {
 	u64 stack_masks[MAX_CALL_FRAMES];
 };
 
+struct bpf_idset {
+	u32 count;
+	u32 ids[BPF_ID_MAP_SIZE];
+};
+
 /* single container for all structs
  * one verifier_env per bpf_check() call
  */
@@ -590,7 +595,10 @@ struct bpf_verifier_env {
 	const struct bpf_line_info *prev_linfo;
 	struct bpf_verifier_log log;
 	struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS + 1];
-	struct bpf_id_pair idmap_scratch[BPF_ID_MAP_SIZE];
+	union {
+		struct bpf_id_pair idmap_scratch[BPF_ID_MAP_SIZE];
+		struct bpf_idset idset_scratch;
+	};
 	struct {
 		int *insn_state;
 		int *insn_stack;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index ed79a93398f8..9b5f2433194f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3787,6 +3787,96 @@ static void mark_all_scalars_imprecise(struct bpf_verifier_env *env, struct bpf_
 	}
 }
 
+static bool idset_contains(struct bpf_idset *s, u32 id)
+{
+	u32 i;
+
+	for (i = 0; i < s->count; ++i)
+		if (s->ids[i] == id)
+			return true;
+
+	return false;
+}
+
+static int idset_push(struct bpf_idset *s, u32 id)
+{
+	if (WARN_ON_ONCE(s->count >= ARRAY_SIZE(s->ids)))
+		return -1;
+	s->ids[s->count++] = id;
+	return 0;
+}
+
+static void idset_reset(struct bpf_idset *s)
+{
+	s->count = 0;
+}
+
+/* Collect a set of IDs for all registers currently marked as precise in env->bt.
+ * Mark all registers with these IDs as precise.
+ */
+static int mark_precise_scalar_ids(struct bpf_verifier_env *env, struct bpf_verifier_state *st)
+{
+	struct bpf_idset *precise_ids = &env->idset_scratch;
+	struct backtrack_state *bt = &env->bt;
+	struct bpf_func_state *func;
+	struct bpf_reg_state *reg;
+	DECLARE_BITMAP(mask, 64);
+	int i, fr;
+
+	idset_reset(precise_ids);
+
+	for (fr = bt->frame; fr >= 0; fr--) {
+		func = st->frame[fr];
+
+		bitmap_from_u64(mask, bt_frame_reg_mask(bt, fr));
+		for_each_set_bit(i, mask, 32) {
+			reg = &func->regs[i];
+			if (!reg->id || reg->type != SCALAR_VALUE)
+				continue;
+			if (idset_push(precise_ids, reg->id))
+				return -EFAULT;
+		}
+
+		bitmap_from_u64(mask, bt_frame_stack_mask(bt, fr));
+		for_each_set_bit(i, mask, 64) {
+			if (i >= func->allocated_stack / BPF_REG_SIZE)
+				break;
+			if (!is_spilled_scalar_reg(&func->stack[i]))
+				continue;
+			reg = &func->stack[i].spilled_ptr;
+			if (!reg->id)
+				continue;
+			if (idset_push(precise_ids, reg->id))
+				return -EFAULT;
+		}
+	}
+
+	for (fr = 0; fr <= st->curframe; ++fr) {
+		func = st->frame[fr];
+
+		for (i = BPF_REG_0; i < BPF_REG_10; ++i) {
+			reg = &func->regs[i];
+			if (!reg->id)
+				continue;
+			if (!idset_contains(precise_ids, reg->id))
+				continue;
+			bt_set_frame_reg(bt, fr, i);
+		}
+		for (i = 0; i < func->allocated_stack / BPF_REG_SIZE; ++i) {
+			if (!is_spilled_scalar_reg(&func->stack[i]))
+				continue;
+			reg = &func->stack[i].spilled_ptr;
+			if (!reg->id)
+				continue;
+			if (!idset_contains(precise_ids, reg->id))
+				continue;
+			bt_set_frame_slot(bt, fr, i);
+		}
+	}
+
+	return 0;
+}
+
 /*
  * __mark_chain_precision() backtracks BPF program instruction sequence and
  * chain of verifier states making sure that register *regno* (if regno >= 0)
@@ -3918,6 +4008,31 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno)
 				bt->frame, last_idx, first_idx, subseq_idx);
 		}
 
+		/* If some register with scalar ID is marked as precise,
+		 * make sure that all registers sharing this ID are also precise.
+		 * This is needed to estimate effect of find_equal_scalars().
+		 * Do this at the last instruction of each state,
+		 * bpf_reg_state::id fields are valid for these instructions.
+		 *
+		 * Allows to track precision in situation like below:
+		 *
+		 *     r2 = unknown value
+		 *     ...
+		 *   --- state #0 ---
+		 *     ...
+		 *     r1 = r2                 // r1 and r2 now share the same ID
+		 *     ...
+		 *   --- state #1 {r1.id = A, r2.id = A} ---
+		 *     ...
+		 *     if (r2 > 10) goto exit; // find_equal_scalars() assigns range to r1
+		 *     ...
+		 *   --- state #2 {r1.id = A, r2.id = A} ---
+		 *     r3 = r10
+		 *     r3 += r1                // need to mark both r1 and r2
+		 */
+		if (mark_precise_scalar_ids(env, st))
+			return -EFAULT;
+
 		if (last_idx < 0) {
 			/* we are at the entry into subprog, which
 			 * is expected for global funcs, but only if
diff --git a/tools/testing/selftests/bpf/verifier/precise.c b/tools/testing/selftests/bpf/verifier/precise.c
index b8c0aae8e7ec..99272bb890da 100644
--- a/tools/testing/selftests/bpf/verifier/precise.c
+++ b/tools/testing/selftests/bpf/verifier/precise.c
@@ -46,7 +46,7 @@
 	mark_precise: frame0: regs=r2 stack= before 20\
 	mark_precise: frame0: parent state regs=r2 stack=:\
 	mark_precise: frame0: last_idx 19 first_idx 10\
-	mark_precise: frame0: regs=r2 stack= before 19\
+	mark_precise: frame0: regs=r2,r9 stack= before 19\
 	mark_precise: frame0: regs=r9 stack= before 18\
 	mark_precise: frame0: regs=r8,r9 stack= before 17\
 	mark_precise: frame0: regs=r0,r9 stack= before 15\
@@ -106,10 +106,10 @@
 	mark_precise: frame0: regs=r2 stack= before 22\
 	mark_precise: frame0: parent state regs=r2 stack=:\
 	mark_precise: frame0: last_idx 20 first_idx 20\
-	mark_precise: frame0: regs=r2 stack= before 20\
-	mark_precise: frame0: parent state regs=r2 stack=:\
+	mark_precise: frame0: regs=r2,r9 stack= before 20\
+	mark_precise: frame0: parent state regs=r2,r9 stack=:\
 	mark_precise: frame0: last_idx 19 first_idx 17\
-	mark_precise: frame0: regs=r2 stack= before 19\
+	mark_precise: frame0: regs=r2,r9 stack= before 19\
 	mark_precise: frame0: regs=r9 stack= before 18\
 	mark_precise: frame0: regs=r8,r9 stack= before 17\
 	mark_precise: frame0: parent state regs= stack=:",
-- 
2.40.1



* [PATCH bpf-next v5 2/4] selftests/bpf: check if mark_chain_precision() follows scalar ids
  2023-06-12 16:07 [PATCH bpf-next v5 0/4] verify scalar ids mapping in regsafe() Eduard Zingerman
  2023-06-12 16:07 ` [PATCH bpf-next v5 1/4] bpf: use scalar ids in mark_chain_precision() Eduard Zingerman
@ 2023-06-12 16:07 ` Eduard Zingerman
  2023-06-12 16:08 ` [PATCH bpf-next v5 3/4] bpf: verify scalar ids mapping in regsafe() using check_ids() Eduard Zingerman
  2023-06-12 16:08 ` [PATCH bpf-next v5 4/4] selftests/bpf: verify that check_ids() is used for scalars in regsafe() Eduard Zingerman
  3 siblings, 0 replies; 13+ messages in thread
From: Eduard Zingerman @ 2023-06-12 16:07 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Check __mark_chain_precision() log to verify that scalars with the same
IDs are marked as precise. Use several scenarios to test that
precision marks are propagated through:
- registers of scalar type with the same ID within one state;
- registers of scalar type with the same ID across several states;
- registers of scalar type with the same ID across several stack frames;
- stack slots of scalar type with the same ID;
- multiple scalar IDs are tracked independently.

Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../selftests/bpf/progs/verifier_scalar_ids.c | 344 ++++++++++++++++++
 2 files changed, 346 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_scalar_ids.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 531621adef42..070a13833c3f 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -50,6 +50,7 @@
 #include "verifier_regalloc.skel.h"
 #include "verifier_ringbuf.skel.h"
 #include "verifier_runtime_jit.skel.h"
+#include "verifier_scalar_ids.skel.h"
 #include "verifier_search_pruning.skel.h"
 #include "verifier_sock.skel.h"
 #include "verifier_spill_fill.skel.h"
@@ -150,6 +151,7 @@ void test_verifier_ref_tracking(void)         { RUN(verifier_ref_tracking); }
 void test_verifier_regalloc(void)             { RUN(verifier_regalloc); }
 void test_verifier_ringbuf(void)              { RUN(verifier_ringbuf); }
 void test_verifier_runtime_jit(void)          { RUN(verifier_runtime_jit); }
+void test_verifier_scalar_ids(void)           { RUN(verifier_scalar_ids); }
 void test_verifier_search_pruning(void)       { RUN(verifier_search_pruning); }
 void test_verifier_sock(void)                 { RUN(verifier_sock); }
 void test_verifier_spill_fill(void)           { RUN(verifier_spill_fill); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
new file mode 100644
index 000000000000..8a5203fb14ca
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
@@ -0,0 +1,344 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+/* Check that precision marks propagate through scalar IDs.
+ * Registers r{0,1,2} have the same scalar ID at the moment when r0 is
+ * marked to be precise, this mark is immediately propagated to r{1,2}.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("frame0: regs=r0,r1,r2 stack= before 4: (bf) r3 = r10")
+__msg("frame0: regs=r0,r1,r2 stack= before 3: (bf) r2 = r0")
+__msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0")
+__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255")
+__msg("frame0: regs=r0 stack= before 0: (85) call bpf_ktime_get_ns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void precision_same_state(void)
+{
+	asm volatile (
+	/* r0 = random number up to 0xff */
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	/* tie r0.id == r1.id == r2.id */
+	"r1 = r0;"
+	"r2 = r0;"
+	/* force r0 to be precise, this immediately marks r1 and r2 as
+	 * precise as well because of shared IDs
+	 */
+	"r3 = r10;"
+	"r3 += r0;"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Same as precision_same_state, but mark propagates through state /
+ * parent state boundary.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("frame0: last_idx 6 first_idx 5 subseq_idx -1")
+__msg("frame0: regs=r0,r1,r2 stack= before 5: (bf) r3 = r10")
+__msg("frame0: parent state regs=r0,r1,r2 stack=:")
+__msg("frame0: regs=r0,r1,r2 stack= before 4: (05) goto pc+0")
+__msg("frame0: regs=r0,r1,r2 stack= before 3: (bf) r2 = r0")
+__msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0")
+__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255")
+__msg("frame0: parent state regs=r0 stack=:")
+__msg("frame0: regs=r0 stack= before 0: (85) call bpf_ktime_get_ns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void precision_cross_state(void)
+{
+	asm volatile (
+	/* r0 = random number up to 0xff */
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	/* tie r0.id == r1.id == r2.id */
+	"r1 = r0;"
+	"r2 = r0;"
+	/* force checkpoint */
+	"goto +0;"
+	/* force r0 to be precise, this immediately marks r1 and r2 as
+	 * precise as well because of shared IDs
+	 */
+	"r3 = r10;"
+	"r3 += r0;"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Same as precision_same_state, but break one of the
+ * links, note that r1 is absent from regs=... in __msg below.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("frame0: regs=r0,r2 stack= before 5: (bf) r3 = r10")
+__msg("frame0: regs=r0,r2 stack= before 4: (b7) r1 = 0")
+__msg("frame0: regs=r0,r2 stack= before 3: (bf) r2 = r0")
+__msg("frame0: regs=r0 stack= before 2: (bf) r1 = r0")
+__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255")
+__msg("frame0: regs=r0 stack= before 0: (85) call bpf_ktime_get_ns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void precision_same_state_broken_link(void)
+{
+	asm volatile (
+	/* r0 = random number up to 0xff */
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	/* tie r0.id == r1.id == r2.id */
+	"r1 = r0;"
+	"r2 = r0;"
+	/* break link for r1, this is the only line that differs
+	 * compared to the previous test
+	 */
+	"r1 = 0;"
+	/* force r0 to be precise, this immediately marks r1 and r2 as
+	 * precise as well because of shared IDs
+	 */
+	"r3 = r10;"
+	"r3 += r0;"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Same as precision_same_state_broken_link, but with state /
+ * parent state boundary.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("frame0: regs=r0,r2 stack= before 6: (bf) r3 = r10")
+__msg("frame0: regs=r0,r2 stack= before 5: (b7) r1 = 0")
+__msg("frame0: parent state regs=r0,r2 stack=:")
+__msg("frame0: regs=r0,r1,r2 stack= before 4: (05) goto pc+0")
+__msg("frame0: regs=r0,r1,r2 stack= before 3: (bf) r2 = r0")
+__msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0")
+__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255")
+__msg("frame0: parent state regs=r0 stack=:")
+__msg("frame0: regs=r0 stack= before 0: (85) call bpf_ktime_get_ns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void precision_cross_state_broken_link(void)
+{
+	asm volatile (
+	/* r0 = random number up to 0xff */
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	/* tie r0.id == r1.id == r2.id */
+	"r1 = r0;"
+	"r2 = r0;"
+	/* force checkpoint, although link between r1 and r{0,2} is
+	 * broken by the next statement current precision tracking
+	 * algorithm can't react to it and propagates mark for r1 to
+	 * the parent state.
+	 */
+	"goto +0;"
+	/* break link for r1, this is the only line that differs
+	 * compared to precision_cross_state()
+	 */
+	"r1 = 0;"
+	/* force r0 to be precise, this immediately marks r1 and r2 as
+	 * precise as well because of shared IDs
+	 */
+	"r3 = r10;"
+	"r3 += r0;"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Check that precision marks propagate through scalar IDs.
+ * Use the same scalar ID in multiple stack frames, check that
+ * precision information is propagated up the call stack.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("11: (0f) r2 += r1")
+/* Current state */
+__msg("frame2: last_idx 11 first_idx 10 subseq_idx -1")
+__msg("frame2: regs=r1 stack= before 10: (bf) r2 = r10")
+__msg("frame2: parent state regs=r1 stack=")
+/* frame1.r{6,7} are marked because mark_precise_scalar_ids()
+ * looks for all registers with frame2.r1.id in the current state
+ */
+__msg("frame1: parent state regs=r6,r7 stack=")
+__msg("frame0: parent state regs=r6 stack=")
+/* Parent state */
+__msg("frame2: last_idx 8 first_idx 8 subseq_idx 10")
+__msg("frame2: regs=r1 stack= before 8: (85) call pc+1")
+/* frame1.r1 is marked because of backtracking of call instruction */
+__msg("frame1: parent state regs=r1,r6,r7 stack=")
+__msg("frame0: parent state regs=r6 stack=")
+/* Parent state */
+__msg("frame1: last_idx 7 first_idx 6 subseq_idx 8")
+__msg("frame1: regs=r1,r6,r7 stack= before 7: (bf) r7 = r1")
+__msg("frame1: regs=r1,r6 stack= before 6: (bf) r6 = r1")
+__msg("frame1: parent state regs=r1 stack=")
+__msg("frame0: parent state regs=r6 stack=")
+/* Parent state */
+__msg("frame1: last_idx 4 first_idx 4 subseq_idx 6")
+__msg("frame1: regs=r1 stack= before 4: (85) call pc+1")
+__msg("frame0: parent state regs=r1,r6 stack=")
+/* Parent state */
+__msg("frame0: last_idx 3 first_idx 1 subseq_idx 4")
+__msg("frame0: regs=r0,r1,r6 stack= before 3: (bf) r6 = r0")
+__msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0")
+__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void precision_many_frames(void)
+{
+	asm volatile (
+	/* r0 = random number up to 0xff */
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	/* tie r0.id == r1.id == r6.id */
+	"r1 = r0;"
+	"r6 = r0;"
+	"call precision_many_frames__foo;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+static __naked __noinline __used
+void precision_many_frames__foo(void)
+{
+	asm volatile (
+	/* conflate one of the register numbers (r6) with outer frame,
+	 * to verify that those are tracked independently
+	 */
+	"r6 = r1;"
+	"r7 = r1;"
+	"call precision_many_frames__bar;"
+	"exit"
+	::: __clobber_all);
+}
+
+static __naked __noinline __used
+void precision_many_frames__bar(void)
+{
+	asm volatile (
+	/* force r1 to be precise, this immediately marks:
+	 * - bar frame r1
+	 * - foo frame r{1,6,7}
+	 * - main frame r{1,6}
+	 */
+	"r2 = r10;"
+	"r2 += r1;"
+	"r0 = 0;"
+	"exit;"
+	::: __clobber_all);
+}
+
+/* Check that scalars with the same IDs are marked precise on stack as
+ * well as in registers.
+ */
+SEC("socket")
+__success __log_level(2)
+/* foo frame */
+__msg("frame1: regs=r1 stack=-8,-16 before 9: (bf) r2 = r10")
+__msg("frame1: regs=r1 stack=-8,-16 before 8: (7b) *(u64 *)(r10 -16) = r1")
+__msg("frame1: regs=r1 stack=-8 before 7: (7b) *(u64 *)(r10 -8) = r1")
+__msg("frame1: regs=r1 stack= before 4: (85) call pc+2")
+/* main frame */
+__msg("frame0: regs=r0,r1 stack=-8 before 3: (7b) *(u64 *)(r10 -8) = r1")
+__msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0")
+__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void precision_stack(void)
+{
+	asm volatile (
+	/* r0 = random number up to 0xff */
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	/* tie r0.id == r1.id == fp[-8].id */
+	"r1 = r0;"
+	"*(u64*)(r10 - 8) = r1;"
+	"call precision_stack__foo;"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+static __naked __noinline __used
+void precision_stack__foo(void)
+{
+	asm volatile (
+	/* conflate one of the stack slots (fp-8) with the outer frame,
+	 * to verify that those are tracked independently
+	 */
+	"*(u64*)(r10 - 8) = r1;"
+	"*(u64*)(r10 - 16) = r1;"
+	/* force r1 to be precise, this immediately marks:
+	 * - foo frame r1,fp{-8,-16}
+	 * - main frame r1,fp{-8}
+	 */
+	"r2 = r10;"
+	"r2 += r1;"
+	"exit"
+	::: __clobber_all);
+}
+
+/* Use two separate scalar IDs to check that these are propagated
+ * independently.
+ */
+SEC("socket")
+__success __log_level(2)
+/* r{6,7} */
+__msg("11: (0f) r3 += r7")
+__msg("frame0: regs=r6,r7 stack= before 10: (bf) r3 = r10")
+/* ... skip some insns ... */
+__msg("frame0: regs=r6,r7 stack= before 3: (bf) r7 = r0")
+__msg("frame0: regs=r0,r6 stack= before 2: (bf) r6 = r0")
+/* r{8,9} */
+__msg("12: (0f) r3 += r9")
+__msg("frame0: regs=r8,r9 stack= before 11: (0f) r3 += r7")
+/* ... skip some insns ... */
+__msg("frame0: regs=r8,r9 stack= before 7: (bf) r9 = r0")
+__msg("frame0: regs=r0,r8 stack= before 6: (bf) r8 = r0")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void precision_two_ids(void)
+{
+	asm volatile (
+	/* r6 = random number up to 0xff
+	 * r6.id == r7.id
+	 */
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	"r6 = r0;"
+	"r7 = r0;"
+	/* same, but for r{8,9} */
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	"r8 = r0;"
+	"r9 = r0;"
+	/* clear r0 id */
+	"r0 = 0;"
+	/* force checkpoint */
+	"goto +0;"
+	"r3 = r10;"
+	/* force r7 to be precise, this also marks r6 */
+	"r3 += r7;"
+	/* force r9 to be precise, this also marks r8 */
+	"r3 += r9;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.40.1



* [PATCH bpf-next v5 3/4] bpf: verify scalar ids mapping in regsafe() using check_ids()
  2023-06-12 16:07 [PATCH bpf-next v5 0/4] verify scalar ids mapping in regsafe() Eduard Zingerman
  2023-06-12 16:07 ` [PATCH bpf-next v5 1/4] bpf: use scalar ids in mark_chain_precision() Eduard Zingerman
  2023-06-12 16:07 ` [PATCH bpf-next v5 2/4] selftests/bpf: check if mark_chain_precision() follows scalar ids Eduard Zingerman
@ 2023-06-12 16:08 ` Eduard Zingerman
  2023-06-12 19:56   ` Andrii Nakryiko
  2023-06-13  8:02   ` kernel test robot
  2023-06-12 16:08 ` [PATCH bpf-next v5 4/4] selftests/bpf: verify that check_ids() is used for scalars in regsafe() Eduard Zingerman
  3 siblings, 2 replies; 13+ messages in thread
From: Eduard Zingerman @ 2023-06-12 16:08 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Make sure that the following unsafe example is rejected by the verifier:

1: r9 = ... some pointer with range X ...
2: r6 = ... unbound scalar ID=a ...
3: r7 = ... unbound scalar ID=b ...
4: if (r6 > r7) goto +1
5: r6 = r7
6: if (r6 > X) goto ...
--- checkpoint ---
7: r9 += r7
8: *(u64 *)r9 = Y

This example is unsafe because not all execution paths verify r7's range.
Because of the jump at (4) the verifier arrives at (6) in two states:
I.  r6{.id=b}, r7{.id=b} via path 1-6;
II. r6{.id=a}, r7{.id=b} via path 1-4, 6.

Currently regsafe() does not call check_ids() for scalar registers,
thus from the point of view of regsafe() states (I) and (II) are
identical. If path 1-6 is taken by the verifier first and a checkpoint
is created at (6), the path [1-4, 6] would be considered safe.

Changes in this commit:
- Function check_scalar_ids() is added; it differs from check_ids() in
  the following aspects (see the illustration after this list):
  - treats ID zero as a unique scalar ID;
  - disallows mapping the same 'cur_id' to multiple 'old_id'.
- check_scalar_ids() needs to generate temporary unique IDs; the field
  'tmp_id_gen' is added to bpf_verifier_env::idmap_scratch to
  facilitate this.
- Function scalar_regs_exact() is added; it differs from regs_exact()
  in the following aspects:
  - uses check_scalar_ids() for ID comparison;
  - does not check ref_obj_id, as this field is not used for scalar
    values.
- regsafe() is updated to:
  - use check_scalar_ids() for precise scalar registers;
  - use scalar_regs_exact() for scalar registers, but only for the
    explore_alu_limits branch. This simplifies control flow for the
    scalar case and has no measurable performance impact.
- check_alu_op() is updated to avoid generating bpf_reg_state::id for
  constant scalar values when processing BPF_MOV. An ID is needed to
  propagate range information for identical values, but there is
  nothing to propagate for constants.
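
As a rough illustration of the check_scalar_ids() rules above
(hypothetical standalone calls against a zeroed 'struct bpf_idmap';
this snippet is not part of the patch):

  check_scalar_ids(0, 5, idmap); /* old reg has no ID, cur reg has ID 5:
                                  * zero is replaced by a fresh temporary
                                  * ID, the pair tmp->5 is recorded,
                                  * returns true */
  check_scalar_ids(7, 5, idmap); /* cur ID 5 is already mapped to a
                                  * different old ID, returns false */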

Fixes: 75748837b7e5 ("bpf: Propagate scalar ranges through register assignments.")
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 include/linux/bpf_verifier.h |  17 ++++--
 kernel/bpf/verifier.c        | 108 ++++++++++++++++++++++++++++-------
 2 files changed, 97 insertions(+), 28 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 73a98f6240fd..042b76fe8e29 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -313,11 +313,6 @@ struct bpf_idx_pair {
 	u32 idx;
 };
 
-struct bpf_id_pair {
-	u32 old;
-	u32 cur;
-};
-
 #define MAX_CALL_FRAMES 8
 /* Maximum number of register states that can exist at once */
 #define BPF_ID_MAP_SIZE ((MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE) * MAX_CALL_FRAMES)
@@ -559,6 +554,16 @@ struct backtrack_state {
 	u64 stack_masks[MAX_CALL_FRAMES];
 };
 
+struct bpf_id_pair {
+	u32 old;
+	u32 cur;
+};
+
+struct bpf_idmap {
+	u32 tmp_id_gen;
+	struct bpf_id_pair map[BPF_ID_MAP_SIZE];
+};
+
 struct bpf_idset {
 	u32 count;
 	u32 ids[BPF_ID_MAP_SIZE];
@@ -596,7 +601,7 @@ struct bpf_verifier_env {
 	struct bpf_verifier_log log;
 	struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS + 1];
 	union {
-		struct bpf_id_pair idmap_scratch[BPF_ID_MAP_SIZE];
+		struct bpf_idmap idmap_scratch;
 		struct bpf_idset idset_scratch;
 	};
 	struct {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9b5f2433194f..b15ebfed207a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12942,12 +12942,14 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 		if (BPF_SRC(insn->code) == BPF_X) {
 			struct bpf_reg_state *src_reg = regs + insn->src_reg;
 			struct bpf_reg_state *dst_reg = regs + insn->dst_reg;
+			bool need_id = src_reg->type == SCALAR_VALUE && !src_reg->id &&
+				       !tnum_is_const(src_reg->var_off);
 
 			if (BPF_CLASS(insn->code) == BPF_ALU64) {
 				/* case: R1 = R2
 				 * copy register state to dest reg
 				 */
-				if (src_reg->type == SCALAR_VALUE && !src_reg->id)
+				if (need_id)
 					/* Assign src and dst registers the same ID
 					 * that will be used by find_equal_scalars()
 					 * to propagate min/max range.
@@ -12966,7 +12968,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 				} else if (src_reg->type == SCALAR_VALUE) {
 					bool is_src_reg_u32 = src_reg->umax_value <= U32_MAX;
 
-					if (is_src_reg_u32 && !src_reg->id)
+					if (is_src_reg_u32 && need_id)
 						src_reg->id = ++env->id_gen;
 					copy_register_state(dst_reg, src_reg);
 					/* Make sure ID is cleared if src_reg is not in u32 range otherwise
@@ -15122,8 +15124,9 @@ static bool range_within(struct bpf_reg_state *old,
  * So we look through our idmap to see if this old id has been seen before.  If
  * so, we require the new id to match; otherwise, we add the id pair to the map.
  */
-static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
+static bool check_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
 {
+	struct bpf_id_pair *map = idmap->map;
 	unsigned int i;
 
 	/* either both IDs should be set or both should be zero */
@@ -15134,14 +15137,44 @@ static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
 		return true;
 
 	for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
-		if (!idmap[i].old) {
+		if (!map[i].old) {
 			/* Reached an empty slot; haven't seen this id before */
-			idmap[i].old = old_id;
-			idmap[i].cur = cur_id;
+			map[i].old = old_id;
+			map[i].cur = cur_id;
 			return true;
 		}
-		if (idmap[i].old == old_id)
-			return idmap[i].cur == cur_id;
+		if (map[i].old == old_id)
+			return map[i].cur == cur_id;
+	}
+	/* We ran out of idmap slots, which should be impossible */
+	WARN_ON_ONCE(1);
+	return false;
+}
+
+/* Similar to check_ids(), but:
+ * - disallow mapping of different 'old_id' values to same 'cur_id' value;
+ * - for zero 'old_id' or 'cur_id' allocate a unique temporary ID
+ *   to allow pairs like '0 vs unique ID', 'unique ID vs 0'.
+ */
+static bool check_scalar_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
+{
+	struct bpf_id_pair *map = idmap->map;
+	unsigned int i;
+
+	old_id = old_id ? old_id : ++idmap->tmp_id_gen;
+	cur_id = cur_id ? cur_id : ++idmap->tmp_id_gen;
+
+	for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
+		if (!map[i].old) {
+			/* Reached an empty slot; haven't seen this id before */
+			map[i].old = old_id;
+			map[i].cur = cur_id;
+			return true;
+		}
+		if (map[i].old == old_id)
+			return map[i].cur == cur_id;
+		if (map[i].cur == cur_id)
+			return false;
 	}
 	/* We ran out of idmap slots, which should be impossible */
 	WARN_ON_ONCE(1);
@@ -15246,16 +15279,24 @@ static void clean_live_states(struct bpf_verifier_env *env, int insn,
 
 static bool regs_exact(const struct bpf_reg_state *rold,
 		       const struct bpf_reg_state *rcur,
-		       struct bpf_id_pair *idmap)
+		       struct bpf_idmap *idmap)
 {
 	return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
 	       check_ids(rold->id, rcur->id, idmap) &&
 	       check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
 }
 
+static bool scalar_regs_exact(const struct bpf_reg_state *rold,
+			      const struct bpf_reg_state *rcur,
+			      struct bpf_idmap *idmap)
+{
+	return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
+	       check_scalar_ids(rold->id, rcur->id, idmap);
+}
+
 /* Returns true if (rold safe implies rcur safe) */
 static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
-		    struct bpf_reg_state *rcur, struct bpf_id_pair *idmap)
+		    struct bpf_reg_state *rcur, struct bpf_idmap *idmap)
 {
 	if (!(rold->live & REG_LIVE_READ))
 		/* explored state didn't use this */
@@ -15292,15 +15333,37 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
 
 	switch (base_type(rold->type)) {
 	case SCALAR_VALUE:
-		if (regs_exact(rold, rcur, idmap))
-			return true;
 		if (env->explore_alu_limits)
-			return false;
+			return scalar_regs_exact(rold, rcur, idmap);
 		if (!rold->precise)
 			return true;
-		/* new val must satisfy old val knowledge */
+		/* Why check_ids() for scalar registers?
+		 *
+		 * Consider the following BPF code:
+		 *   1: r6 = ... unbound scalar, ID=a ...
+		 *   2: r7 = ... unbound scalar, ID=b ...
+		 *   3: if (r6 > r7) goto +1
+		 *   4: r6 = r7
+		 *   5: if (r6 > X) goto ...
+		 *   6: ... memory operation using r7 ...
+		 *
+		 * First verification path is [1-6]:
+		 * - at (4) same bpf_reg_state::id (b) would be assigned to r6 and r7;
+		 * - at (5) r6 would be marked <= X, find_equal_scalars() would also mark
+		 *   r7 <= X, because r6 and r7 share same id.
+		 * Next verification path is [1-4, 6].
+		 *
+		 * Instruction (6) would be reached in two states:
+		 *   I.  r6{.id=b}, r7{.id=b} via path 1-6;
+		 *   II. r6{.id=a}, r7{.id=b} via path 1-4, 6.
+		 *
+		 * Use check_ids() to distinguish these states.
+		 * ---
+		 * Also verify that new value satisfies old value range knowledge.
+		 */
 		return range_within(rold, rcur) &&
-		       tnum_in(rold->var_off, rcur->var_off);
+		       tnum_in(rold->var_off, rcur->var_off) &&
+		       check_scalar_ids(rold->id, rcur->id, idmap);
 	case PTR_TO_MAP_KEY:
 	case PTR_TO_MAP_VALUE:
 	case PTR_TO_MEM:
@@ -15346,7 +15409,7 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
 }
 
 static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
-		      struct bpf_func_state *cur, struct bpf_id_pair *idmap)
+		      struct bpf_func_state *cur, struct bpf_idmap *idmap)
 {
 	int i, spi;
 
@@ -15449,7 +15512,7 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
 }
 
 static bool refsafe(struct bpf_func_state *old, struct bpf_func_state *cur,
-		    struct bpf_id_pair *idmap)
+		    struct bpf_idmap *idmap)
 {
 	int i;
 
@@ -15497,13 +15560,13 @@ static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_stat
 
 	for (i = 0; i < MAX_BPF_REG; i++)
 		if (!regsafe(env, &old->regs[i], &cur->regs[i],
-			     env->idmap_scratch))
+			     &env->idmap_scratch))
 			return false;
 
-	if (!stacksafe(env, old, cur, env->idmap_scratch))
+	if (!stacksafe(env, old, cur, &env->idmap_scratch))
 		return false;
 
-	if (!refsafe(old, cur, env->idmap_scratch))
+	if (!refsafe(old, cur, &env->idmap_scratch))
 		return false;
 
 	return true;
@@ -15518,7 +15581,8 @@ static bool states_equal(struct bpf_verifier_env *env,
 	if (old->curframe != cur->curframe)
 		return false;
 
-	memset(env->idmap_scratch, 0, sizeof(env->idmap_scratch));
+	env->idmap_scratch.tmp_id_gen = env->id_gen;
+	memset(&env->idmap_scratch.map, 0, sizeof(env->idmap_scratch));
 
 	/* Verification state from speculative execution simulation
 	 * must never prune a non-speculative execution one.
@@ -15536,7 +15600,7 @@ static bool states_equal(struct bpf_verifier_env *env,
 		return false;
 
 	if (old->active_lock.id &&
-	    !check_ids(old->active_lock.id, cur->active_lock.id, env->idmap_scratch))
+	    !check_ids(old->active_lock.id, cur->active_lock.id, &env->idmap_scratch))
 		return false;
 
 	if (old->active_rcu_lock != cur->active_rcu_lock)
-- 
2.40.1



* [PATCH bpf-next v5 4/4] selftests/bpf: verify that check_ids() is used for scalars in regsafe()
  2023-06-12 16:07 [PATCH bpf-next v5 0/4] verify scalar ids mapping in regsafe() Eduard Zingerman
                   ` (2 preceding siblings ...)
  2023-06-12 16:08 ` [PATCH bpf-next v5 3/4] bpf: verify scalar ids mapping in regsafe() using check_ids() Eduard Zingerman
@ 2023-06-12 16:08 ` Eduard Zingerman
  2023-06-12 17:40   ` Maxim Mikityanskiy
  3 siblings, 1 reply; 13+ messages in thread
From: Eduard Zingerman @ 2023-06-12 16:08 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, martin.lau, kernel-team, yhs, Eduard Zingerman

Verify that the following example is rejected by verifier:

  r9 = ... some pointer with range X ...
  r6 = ... unbound scalar ID=a ...
  r7 = ... unbound scalar ID=b ...
  if (r6 > r7) goto +1
  r7 = r6
  if (r7 > X) goto exit
  r9 += r6
  *(u64 *)r9 = Y

Also add test cases to:
- check that check_alu_op() for BPF_MOV instruction does not allocate
  scalar ID if source register is a constant;
- check that unique scalar IDs are ignored when new verifier state is
  compared to cached verifier state;
- check that two different scalar IDs in a verified state can't be
  mapped to the same scalar ID in current state.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/progs/verifier_scalar_ids.c | 313 ++++++++++++++++++
 1 file changed, 313 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
index 8a5203fb14ca..5d56e764fe43 100644
--- a/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
+++ b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
@@ -341,4 +341,317 @@ __naked void precision_two_ids(void)
 	: __clobber_all);
 }
 
+/* Verify that check_ids() is used by regsafe() for scalars.
+ *
+ * r9 = ... some pointer with range X ...
+ * r6 = ... unbound scalar ID=a ...
+ * r7 = ... unbound scalar ID=b ...
+ * if (r6 > r7) goto +1
+ * r6 = r7
+ * if (r6 > X) goto exit
+ * r9 += r7
+ * *(u8 *)r9 = Y
+ *
+ * The memory access is safe only if r7 is bounded,
+ * which is true for one branch and not true for another.
+ */
+SEC("socket")
+__failure __msg("register with unbounded min value")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void check_ids_in_regsafe(void)
+{
+	asm volatile (
+	/* Bump allocated stack */
+	"r1 = 0;"
+	"*(u64*)(r10 - 8) = r1;"
+	/* r9 = pointer to stack */
+	"r9 = r10;"
+	"r9 += -8;"
+	/* r7 = ktime_get_ns() */
+	"call %[bpf_ktime_get_ns];"
+	"r7 = r0;"
+	/* r6 = ktime_get_ns() */
+	"call %[bpf_ktime_get_ns];"
+	"r6 = r0;"
+	/* if r6 > r7 is an unpredictable jump */
+	"if r6 > r7 goto l1_%=;"
+	"r7 = r6;"
+"l1_%=:"
+	/* if r6 > 4 exit(0) */
+	"if r7 > 4 goto l2_%=;"
+	/* Access memory at r9[r7] */
+	"r9 += r6;"
+	"r0 = *(u8*)(r9 + 0);"
+"l2_%=:"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Similar to check_ids_in_regsafe.
+ * The l0 could be reached in two states:
+ *
+ *   (1) r6{.id=A}, r7{.id=A}, r8{.id=B}
+ *   (2) r6{.id=B}, r7{.id=A}, r8{.id=B}
+ *
+ * Where (2) is not safe, as "r7 > 4" check won't propagate range for it.
+ * This example would be considered safe without changes to
+ * mark_chain_precision() to track scalar values with equal IDs.
+ */
+SEC("socket")
+__failure __msg("register with unbounded min value")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void check_ids_in_regsafe_2(void)
+{
+	asm volatile (
+	/* Bump allocated stack */
+	"r1 = 0;"
+	"*(u64*)(r10 - 8) = r1;"
+	/* r9 = pointer to stack */
+	"r9 = r10;"
+	"r9 += -8;"
+	/* r8 = ktime_get_ns() */
+	"call %[bpf_ktime_get_ns];"
+	"r8 = r0;"
+	/* r7 = ktime_get_ns() */
+	"call %[bpf_ktime_get_ns];"
+	"r7 = r0;"
+	/* r6 = ktime_get_ns() */
+	"call %[bpf_ktime_get_ns];"
+	"r6 = r0;"
+	/* scratch .id from r0 */
+	"r0 = 0;"
+	/* if r6 > r7 is an unpredictable jump */
+	"if r6 > r7 goto l1_%=;"
+	/* tie r6 and r7 .id */
+	"r6 = r7;"
+"l0_%=:"
+	/* if r7 > 4 exit(0) */
+	"if r7 > 4 goto l2_%=;"
+	/* Access memory at r9[r7] */
+	"r9 += r6;"
+	"r0 = *(u8*)(r9 + 0);"
+"l2_%=:"
+	"r0 = 0;"
+	"exit;"
+"l1_%=:"
+	/* tie r6 and r8 .id */
+	"r6 = r8;"
+	"goto l0_%=;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Check that scalar IDs *are not* generated on register to register
+ * assignments if source register is a constant.
+ *
+ * If such IDs *are* generated the 'l1' below would be reached in
+ * two states:
+ *
+ *   (1) r1{.id=A}, r2{.id=A}
+ *   (2) r1{.id=C}, r2{.id=C}
+ *
+ * Thus forcing 'if r1 == r2' verification twice.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("11: (1d) if r3 == r4 goto pc+0")
+__msg("frame 0: propagating r3,r4")
+__msg("11: safe")
+__msg("processed 15 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void no_scalar_id_for_const(void)
+{
+	asm volatile (
+	"call %[bpf_ktime_get_ns];"
+	/* unpredictable jump */
+	"if r0 > 7 goto l0_%=;"
+	/* possibly generate same scalar ids for r3 and r4 */
+	"r1 = 0;"
+	"r1 = r1;"
+	"r3 = r1;"
+	"r4 = r1;"
+	"goto l1_%=;"
+"l0_%=:"
+	/* possibly generate different scalar ids for r3 and r4 */
+	"r1 = 0;"
+	"r2 = 0;"
+	"r3 = r1;"
+	"r4 = r2;"
+"l1_%=:"
+	/* predictable jump, marks r3 and r4 precise */
+	"if r3 == r4 goto +0;"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Same as no_scalar_id_for_const() but for 32-bit values */
+SEC("socket")
+__success __log_level(2)
+__msg("11: (1e) if w3 == w4 goto pc+0")
+__msg("frame 0: propagating r3,r4")
+__msg("11: safe")
+__msg("processed 15 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void no_scalar_id_for_const32(void)
+{
+	asm volatile (
+	"call %[bpf_ktime_get_ns];"
+	/* unpredictable jump */
+	"if r0 > 7 goto l0_%=;"
+	/* possibly generate same scalar ids for r3 and r4 */
+	"w1 = 0;"
+	"w1 = w1;"
+	"w3 = w1;"
+	"w4 = w1;"
+	"goto l1_%=;"
+"l0_%=:"
+	/* possibly generate different scalar ids for r3 and r4 */
+	"w1 = 0;"
+	"w2 = 0;"
+	"w3 = w1;"
+	"w4 = w2;"
+"l1_%=:"
+	/* predictable jump, marks r1 and r2 precise */
+	"if w3 == w4 goto +0;"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Check that unique scalar IDs are ignored when new verifier state is
+ * compared to cached verifier state. For this test:
+ * - cached state has no id on r1
+ * - new state has a unique id on r1
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("6: (25) if r6 > 0x7 goto pc+1")
+__msg("7: (57) r1 &= 255")
+__msg("8: (bf) r2 = r10")
+__msg("from 6 to 8: safe")
+__msg("processed 12 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void ignore_unique_scalar_ids_cur(void)
+{
+	asm volatile (
+	"call %[bpf_ktime_get_ns];"
+	"r6 = r0;"
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	/* r1.id == r0.id */
+	"r1 = r0;"
+	/* make r1.id unique */
+	"r0 = 0;"
+	"if r6 > 7 goto l0_%=;"
+	/* clear r1 id, but keep the range compatible */
+	"r1 &= 0xff;"
+"l0_%=:"
+	/* get here in two states:
+	 * - first: r1 has no id (cached state)
+	 * - second: r1 has a unique id (should be considered equivalent)
+	 */
+	"r2 = r10;"
+	"r2 += r1;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Check that unique scalar IDs are ignored when new verifier state is
+ * compared to cached verifier state. For this test:
+ * - cached state has a unique id on r1
+ * - new state has no id on r1
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("6: (25) if r6 > 0x7 goto pc+1")
+__msg("7: (05) goto pc+1")
+__msg("9: (bf) r2 = r10")
+__msg("9: safe")
+__msg("processed 13 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void ignore_unique_scalar_ids_old(void)
+{
+	asm volatile (
+	"call %[bpf_ktime_get_ns];"
+	"r6 = r0;"
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	/* r1.id == r0.id */
+	"r1 = r0;"
+	/* make r1.id unique */
+	"r0 = 0;"
+	"if r6 > 7 goto l1_%=;"
+	"goto l0_%=;"
+"l1_%=:"
+	/* clear r1 id, but keep the range compatible */
+	"r1 &= 0xff;"
+"l0_%=:"
+	/* get here in two states:
+	 * - first: r1 has a unique id (cached state)
+	 * - second: r1 has no id (should be considered equivalent)
+	 */
+	"r2 = r10;"
+	"r2 += r1;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Check that two different scalar IDs in a verified state can't be
+ * mapped to the same scalar ID in current state.
+ */
+SEC("socket")
+__success __log_level(2)
+/* The exit instruction should be reachable from two states,
+ * use two matches and "processed .. insns" to ensure this.
+ */
+__msg("13: (95) exit")
+__msg("13: (95) exit")
+__msg("processed 18 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void two_old_ids_one_cur_id(void)
+{
+	asm volatile (
+	/* Give unique scalar IDs to r{6,7} */
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	"r6 = r0;"
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	"r7 = r0;"
+	"r0 = 0;"
+	/* Maybe make r{6,7} IDs identical */
+	"if r6 > r7 goto l0_%=;"
+	"goto l1_%=;"
+"l0_%=:"
+	"r6 = r7;"
+"l1_%=:"
+	/* Mark r{6,7} precise.
+	 * Get here in two states:
+	 * - first:  r6{.id=A}, r7{.id=B} (cached state)
+	 * - second: r6{.id=A}, r7{.id=A}
+	 * Currently we don't want to consider such states equivalent.
+	 * Thus, marker instruction "r0 = r0;" would be verified twice.
+	 */
+	"r2 = r10;"
+	"r2 += r6;"
+	"r2 += r7;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";
-- 
2.40.1



* Re: [PATCH bpf-next v5 4/4] selftests/bpf: verify that check_ids() is used for scalars in regsafe()
  2023-06-12 16:08 ` [PATCH bpf-next v5 4/4] selftests/bpf: verify that check_ids() is used for scalars in regsafe() Eduard Zingerman
@ 2023-06-12 17:40   ` Maxim Mikityanskiy
  2023-06-12 19:42     ` Eduard Zingerman
  0 siblings, 1 reply; 13+ messages in thread
From: Maxim Mikityanskiy @ 2023-06-12 17:40 UTC (permalink / raw)
  To: Eduard Zingerman; +Cc: bpf, ast, andrii, daniel, martin.lau, kernel-team, yhs

On Mon, 12 Jun 2023 at 19:08:01 +0300, Eduard Zingerman wrote:
> Verify that the following example is rejected by verifier:
> 
>   r9 = ... some pointer with range X ...
>   r6 = ... unbound scalar ID=a ...
>   r7 = ... unbound scalar ID=b ...
>   if (r6 > r7) goto +1
>   r7 = r6
>   if (r7 > X) goto exit
>   r9 += r6
>   *(u64 *)r9 = Y
> 
> Also add test cases to:
> - check that check_alu_op() for BPF_MOV instruction does not allocate
>   scalar ID if source register is a constant;
> - check that unique scalar IDs are ignored when new verifier state is
>   compared to cached verifier state;
> - check that two different scalar IDs in a verified state can't be
>   mapped to the same scalar ID in current state.
> 
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
>  .../selftests/bpf/progs/verifier_scalar_ids.c | 313 ++++++++++++++++++
>  1 file changed, 313 insertions(+)
> 
> diff --git a/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
> index 8a5203fb14ca..5d56e764fe43 100644
> --- a/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
> +++ b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
> @@ -341,4 +341,317 @@ __naked void precision_two_ids(void)
>  	: __clobber_all);
>  }
>  
> +/* Verify that check_ids() is used by regsafe() for scalars.
> + *
> + * r9 = ... some pointer with range X ...
> + * r6 = ... unbound scalar ID=a ...
> + * r7 = ... unbound scalar ID=b ...
> + * if (r6 > r7) goto +1
> + * r6 = r7
> + * if (r6 > X) goto exit
> + * r9 += r7
> + * *(u8 *)r9 = Y
> + *
> + * The memory access is safe only if r7 is bounded,
> + * which is true for one branch and not true for another.
> + */
> +SEC("socket")
> +__failure __msg("register with unbounded min value")
> +__flag(BPF_F_TEST_STATE_FREQ)
> +__naked void check_ids_in_regsafe(void)
> +{
> +	asm volatile (
> +	/* Bump allocated stack */
> +	"r1 = 0;"
> +	"*(u64*)(r10 - 8) = r1;"
> +	/* r9 = pointer to stack */
> +	"r9 = r10;"
> +	"r9 += -8;"
> +	/* r7 = ktime_get_ns() */
> +	"call %[bpf_ktime_get_ns];"
> +	"r7 = r0;"
> +	/* r6 = ktime_get_ns() */
> +	"call %[bpf_ktime_get_ns];"
> +	"r6 = r0;"
> +	/* if r6 > r7 is an unpredictable jump */
> +	"if r6 > r7 goto l1_%=;"
> +	"r7 = r6;"
> +"l1_%=:"
> +	/* if r6 > 4 exit(0) */
> +	"if r7 > 4 goto l2_%=;"
> +	/* Access memory at r9[r7] */
> +	"r9 += r6;"

Sorry if I'm missing some context, but there seem to be discrepancies
between the code of this test, the comments right here, the comment
above the test and the commit message. r6 vs r7 don't match in a few
places.

The code matches the commit message and looks correct (unsafe). The code
sample in the comments, however, is different and looks safe to me
(r7 <= r6 <= X, accessing r9[r7]).

> +	"r0 = *(u8*)(r9 + 0);"
> +"l2_%=:"
> +	"r0 = 0;"
> +	"exit;"
> +	:
> +	: __imm(bpf_ktime_get_ns)
> +	: __clobber_all);
> +}
> +
> +/* Similar to check_ids_in_regsafe.
> + * The l0 could be reached in two states:
> + *
> + *   (1) r6{.id=A}, r7{.id=A}, r8{.id=B}
> + *   (2) r6{.id=B}, r7{.id=A}, r8{.id=B}
> + *
> + * Where (2) is not safe, as "r7 > 4" check won't propagate range for it.
> + * This example would be considered safe without changes to
> + * mark_chain_precision() to track scalar values with equal IDs.
> + */
> +SEC("socket")
> +__failure __msg("register with unbounded min value")
> +__flag(BPF_F_TEST_STATE_FREQ)
> +__naked void check_ids_in_regsafe_2(void)
> +{
> +	asm volatile (
> +	/* Bump allocated stack */
> +	"r1 = 0;"
> +	"*(u64*)(r10 - 8) = r1;"
> +	/* r9 = pointer to stack */
> +	"r9 = r10;"
> +	"r9 += -8;"
> +	/* r8 = ktime_get_ns() */
> +	"call %[bpf_ktime_get_ns];"
> +	"r8 = r0;"
> +	/* r7 = ktime_get_ns() */
> +	"call %[bpf_ktime_get_ns];"
> +	"r7 = r0;"
> +	/* r6 = ktime_get_ns() */
> +	"call %[bpf_ktime_get_ns];"
> +	"r6 = r0;"
> +	/* scratch .id from r0 */
> +	"r0 = 0;"
> +	/* if r6 > r7 is an unpredictable jump */
> +	"if r6 > r7 goto l1_%=;"
> +	/* tie r6 and r7 .id */
> +	"r6 = r7;"
> +"l0_%=:"
> +	/* if r7 > 4 exit(0) */
> +	"if r7 > 4 goto l2_%=;"
> +	/* Access memory at r9[r7] */
> +	"r9 += r6;"
> +	"r0 = *(u8*)(r9 + 0);"
> +"l2_%=:"
> +	"r0 = 0;"
> +	"exit;"
> +"l1_%=:"
> +	/* tie r6 and r8 .id */
> +	"r6 = r8;"
> +	"goto l0_%=;"
> +	:
> +	: __imm(bpf_ktime_get_ns)
> +	: __clobber_all);
> +}
> +
> +/* Check that scalar IDs *are not* generated on register to register
> + * assignments if source register is a constant.
> + *
> + * If such IDs *are* generated the 'l1' below would be reached in
> + * two states:
> + *
> + *   (1) r1{.id=A}, r2{.id=A}
> + *   (2) r1{.id=C}, r2{.id=C}
> + *
> + * Thus forcing 'if r1 == r2' verification twice.
> + */
> +SEC("socket")
> +__success __log_level(2)
> +__msg("11: (1d) if r3 == r4 goto pc+0")
> +__msg("frame 0: propagating r3,r4")
> +__msg("11: safe")
> +__msg("processed 15 insns")
> +__flag(BPF_F_TEST_STATE_FREQ)
> +__naked void no_scalar_id_for_const(void)
> +{
> +	asm volatile (
> +	"call %[bpf_ktime_get_ns];"
> +	/* unpredictable jump */
> +	"if r0 > 7 goto l0_%=;"
> +	/* possibly generate same scalar ids for r3 and r4 */
> +	"r1 = 0;"
> +	"r1 = r1;"
> +	"r3 = r1;"
> +	"r4 = r1;"
> +	"goto l1_%=;"
> +"l0_%=:"
> +	/* possibly generate different scalar ids for r3 and r4 */
> +	"r1 = 0;"
> +	"r2 = 0;"
> +	"r3 = r1;"
> +	"r4 = r2;"
> +"l1_%=:"
> +	/* predictable jump, marks r3 and r4 precise */
> +	"if r3 == r4 goto +0;"
> +	"r0 = 0;"
> +	"exit;"
> +	:
> +	: __imm(bpf_ktime_get_ns)
> +	: __clobber_all);
> +}
> +
> +/* Same as no_scalar_id_for_const() but for 32-bit values */
> +SEC("socket")
> +__success __log_level(2)
> +__msg("11: (1e) if w3 == w4 goto pc+0")
> +__msg("frame 0: propagating r3,r4")
> +__msg("11: safe")
> +__msg("processed 15 insns")
> +__flag(BPF_F_TEST_STATE_FREQ)
> +__naked void no_scalar_id_for_const32(void)
> +{
> +	asm volatile (
> +	"call %[bpf_ktime_get_ns];"
> +	/* unpredictable jump */
> +	"if r0 > 7 goto l0_%=;"
> +	/* possibly generate same scalar ids for r3 and r4 */
> +	"w1 = 0;"
> +	"w1 = w1;"
> +	"w3 = w1;"
> +	"w4 = w1;"
> +	"goto l1_%=;"
> +"l0_%=:"
> +	/* possibly generate different scalar ids for r3 and r4 */
> +	"w1 = 0;"
> +	"w2 = 0;"
> +	"w3 = w1;"
> +	"w4 = w2;"
> +"l1_%=:"
> +	/* predictable jump, marks r1 and r2 precise */
> +	"if w3 == w4 goto +0;"
> +	"r0 = 0;"
> +	"exit;"
> +	:
> +	: __imm(bpf_ktime_get_ns)
> +	: __clobber_all);
> +}
> +
> +/* Check that unique scalar IDs are ignored when new verifier state is
> + * compared to cached verifier state. For this test:
> + * - cached state has no id on r1
> + * - new state has a unique id on r1
> + */
> +SEC("socket")
> +__success __log_level(2)
> +__msg("6: (25) if r6 > 0x7 goto pc+1")
> +__msg("7: (57) r1 &= 255")
> +__msg("8: (bf) r2 = r10")
> +__msg("from 6 to 8: safe")
> +__msg("processed 12 insns")
> +__flag(BPF_F_TEST_STATE_FREQ)
> +__naked void ignore_unique_scalar_ids_cur(void)
> +{
> +	asm volatile (
> +	"call %[bpf_ktime_get_ns];"
> +	"r6 = r0;"
> +	"call %[bpf_ktime_get_ns];"
> +	"r0 &= 0xff;"
> +	/* r1.id == r0.id */
> +	"r1 = r0;"
> +	/* make r1.id unique */
> +	"r0 = 0;"
> +	"if r6 > 7 goto l0_%=;"
> +	/* clear r1 id, but keep the range compatible */
> +	"r1 &= 0xff;"
> +"l0_%=:"
> +	/* get here in two states:
> +	 * - first: r1 has no id (cached state)
> +	 * - second: r1 has a unique id (should be considered equivalent)
> +	 */
> +	"r2 = r10;"
> +	"r2 += r1;"
> +	"exit;"
> +	:
> +	: __imm(bpf_ktime_get_ns)
> +	: __clobber_all);
> +}
> +
> +/* Check that unique scalar IDs are ignored when new verifier state is
> + * compared to cached verifier state. For this test:
> + * - cached state has a unique id on r1
> + * - new state has no id on r1
> + */
> +SEC("socket")
> +__success __log_level(2)
> +__msg("6: (25) if r6 > 0x7 goto pc+1")
> +__msg("7: (05) goto pc+1")
> +__msg("9: (bf) r2 = r10")
> +__msg("9: safe")
> +__msg("processed 13 insns")
> +__flag(BPF_F_TEST_STATE_FREQ)
> +__naked void ignore_unique_scalar_ids_old(void)
> +{
> +	asm volatile (
> +	"call %[bpf_ktime_get_ns];"
> +	"r6 = r0;"
> +	"call %[bpf_ktime_get_ns];"
> +	"r0 &= 0xff;"
> +	/* r1.id == r0.id */
> +	"r1 = r0;"
> +	/* make r1.id unique */
> +	"r0 = 0;"
> +	"if r6 > 7 goto l1_%=;"
> +	"goto l0_%=;"
> +"l1_%=:"
> +	/* clear r1 id, but keep the range compatible */
> +	"r1 &= 0xff;"
> +"l0_%=:"
> +	/* get here in two states:
> +	 * - first: r1 has a unique id (cached state)
> +	 * - second: r1 has no id (should be considered equivalent)
> +	 */
> +	"r2 = r10;"
> +	"r2 += r1;"
> +	"exit;"
> +	:
> +	: __imm(bpf_ktime_get_ns)
> +	: __clobber_all);
> +}
> +
> +/* Check that two different scalar IDs in a verified state can't be
> + * mapped to the same scalar ID in current state.
> + */
> +SEC("socket")
> +__success __log_level(2)
> +/* The exit instruction should be reachable from two states,
> + * use two matches and "processed .. insns" to ensure this.
> + */
> +__msg("13: (95) exit")
> +__msg("13: (95) exit")
> +__msg("processed 18 insns")
> +__flag(BPF_F_TEST_STATE_FREQ)
> +__naked void two_old_ids_one_cur_id(void)
> +{
> +	asm volatile (
> +	/* Give unique scalar IDs to r{6,7} */
> +	"call %[bpf_ktime_get_ns];"
> +	"r0 &= 0xff;"
> +	"r6 = r0;"
> +	"call %[bpf_ktime_get_ns];"
> +	"r0 &= 0xff;"
> +	"r7 = r0;"
> +	"r0 = 0;"
> +	/* Maybe make r{6,7} IDs identical */
> +	"if r6 > r7 goto l0_%=;"
> +	"goto l1_%=;"
> +"l0_%=:"
> +	"r6 = r7;"
> +"l1_%=:"
> +	/* Mark r{6,7} precise.
> +	 * Get here in two states:
> +	 * - first:  r6{.id=A}, r7{.id=B} (cached state)
> +	 * - second: r6{.id=A}, r7{.id=A}
> +	 * Currently we don't want to consider such states equivalent.
> +	 * Thus, the 'exit' instruction below would be verified twice.
> +	 */
> +	"r2 = r10;"
> +	"r2 += r6;"
> +	"r2 += r7;"
> +	"exit;"
> +	:
> +	: __imm(bpf_ktime_get_ns)
> +	: __clobber_all);
> +}
> +
>  char _license[] SEC("license") = "GPL";
> -- 
> 2.40.1
> 
> 

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH bpf-next v5 4/4] selftests/bpf: verify that check_ids() is used for scalars in regsafe()
  2023-06-12 17:40   ` Maxim Mikityanskiy
@ 2023-06-12 19:42     ` Eduard Zingerman
  0 siblings, 0 replies; 13+ messages in thread
From: Eduard Zingerman @ 2023-06-12 19:42 UTC (permalink / raw)
  To: Maxim Mikityanskiy; +Cc: bpf, ast, andrii, daniel, martin.lau, kernel-team, yhs

On Mon, 2023-06-12 at 20:40 +0300, Maxim Mikityanskiy wrote:
> On Mon, 12 Jun 2023 at 19:08:01 +0300, Eduard Zingerman wrote:
> > Verify that the following example is rejected by verifier:
> > 
> >   r9 = ... some pointer with range X ...
> >   r6 = ... unbound scalar ID=a ...
> >   r7 = ... unbound scalar ID=b ...
> >   if (r6 > r7) goto +1
> >   r7 = r6
> >   if (r7 > X) goto exit
> >   r9 += r6
> >   *(u64 *)r9 = Y
> > 
> > Also add test cases to:
> > - check that check_alu_op() for BPF_MOV instruction does not allocate
> >   scalar ID if source register is a constant;
> > - check that unique scalar IDs are ignored when new verifier state is
> >   compared to cached verifier state;
> > - check that two different scalar IDs in a verified state can't be
> >   mapped to the same scalar ID in current state.
> > 
> > Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> > ---
> >  .../selftests/bpf/progs/verifier_scalar_ids.c | 313 ++++++++++++++++++
> >  1 file changed, 313 insertions(+)
> > 
> > diff --git a/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
> > index 8a5203fb14ca..5d56e764fe43 100644
> > --- a/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
> > +++ b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
> > @@ -341,4 +341,317 @@ __naked void precision_two_ids(void)
> >  	: __clobber_all);
> >  }
> >  
> > +/* Verify that check_ids() is used by regsafe() for scalars.
> > + *
> > + * r9 = ... some pointer with range X ...
> > + * r6 = ... unbound scalar ID=a ...
> > + * r7 = ... unbound scalar ID=b ...
> > + * if (r6 > r7) goto +1
> > + * r6 = r7
> > + * if (r6 > X) goto exit
> > + * r9 += r7
> > + * *(u8 *)r9 = Y
> > + *
> > + * The memory access is safe only if r7 is bounded,
> > + * which is true for one branch and not true for another.
> > + */
> > +SEC("socket")
> > +__failure __msg("register with unbounded min value")
> > +__flag(BPF_F_TEST_STATE_FREQ)
> > +__naked void check_ids_in_regsafe(void)
> > +{
> > +	asm volatile (
> > +	/* Bump allocated stack */
> > +	"r1 = 0;"
> > +	"*(u64*)(r10 - 8) = r1;"
> > +	/* r9 = pointer to stack */
> > +	"r9 = r10;"
> > +	"r9 += -8;"
> > +	/* r7 = ktime_get_ns() */
> > +	"call %[bpf_ktime_get_ns];"
> > +	"r7 = r0;"
> > +	/* r6 = ktime_get_ns() */
> > +	"call %[bpf_ktime_get_ns];"
> > +	"r6 = r0;"
> > +	/* if r6 > r7 is an unpredictable jump */
> > +	"if r6 > r7 goto l1_%=;"
> > +	"r7 = r6;"
> > +"l1_%=:"
> > +	/* if r6 > 4 exit(0) */
> > +	"if r7 > 4 goto l2_%=;"
> > +	/* Access memory at r9[r7] */
> > +	"r9 += r6;"
> 
> Sorry if I'm missing some context, but there seem to be discrepancies
> between the code of this test, the comments right here, the comment
> above the test and the commit message. r6 vs r7 don't match in a few
> places.
> 
> The code matches the commit message and looks correct (unsafe). The code
> sample in the comments, however, is different and looks safe to me
> (r7 <= r6 <= X, accessing r9[r7]).

Yep, thank you for catching this. I need to update the comments.
Will wait a couple of hours for other comments and re-send v6 with a fix.

> 
> > +	"r0 = *(u8*)(r9 + 0);"
> > +"l2_%=:"
> > +	"r0 = 0;"
> > +	"exit;"
> > +	:
> > +	: __imm(bpf_ktime_get_ns)
> > +	: __clobber_all);
> > +}
> > +
> > +/* Similar to check_ids_in_regsafe.
> > + * The l0 could be reached in two states:
> > + *
> > + *   (1) r6{.id=A}, r7{.id=A}, r8{.id=B}
> > + *   (2) r6{.id=B}, r7{.id=A}, r8{.id=B}
> > + *
> > + * Where (2) is not safe, as "r7 > 4" check won't propagate range for it.
> > + * This example would be considered safe without changes to
> > + * mark_chain_precision() to track scalar values with equal IDs.
> > + */
> > +SEC("socket")
> > +__failure __msg("register with unbounded min value")
> > +__flag(BPF_F_TEST_STATE_FREQ)
> > +__naked void check_ids_in_regsafe_2(void)
> > +{
> > +	asm volatile (
> > +	/* Bump allocated stack */
> > +	"r1 = 0;"
> > +	"*(u64*)(r10 - 8) = r1;"
> > +	/* r9 = pointer to stack */
> > +	"r9 = r10;"
> > +	"r9 += -8;"
> > +	/* r8 = ktime_get_ns() */
> > +	"call %[bpf_ktime_get_ns];"
> > +	"r8 = r0;"
> > +	/* r7 = ktime_get_ns() */
> > +	"call %[bpf_ktime_get_ns];"
> > +	"r7 = r0;"
> > +	/* r6 = ktime_get_ns() */
> > +	"call %[bpf_ktime_get_ns];"
> > +	"r6 = r0;"
> > +	/* scratch .id from r0 */
> > +	"r0 = 0;"
> > +	/* if r6 > r7 is an unpredictable jump */
> > +	"if r6 > r7 goto l1_%=;"
> > +	/* tie r6 and r7 .id */
> > +	"r6 = r7;"
> > +"l0_%=:"
> > +	/* if r7 > 4 exit(0) */
> > +	"if r7 > 4 goto l2_%=;"
> > +	/* Access memory at r9[r7] */
> > +	"r9 += r6;"
> > +	"r0 = *(u8*)(r9 + 0);"
> > +"l2_%=:"
> > +	"r0 = 0;"
> > +	"exit;"
> > +"l1_%=:"
> > +	/* tie r6 and r8 .id */
> > +	"r6 = r8;"
> > +	"goto l0_%=;"
> > +	:
> > +	: __imm(bpf_ktime_get_ns)
> > +	: __clobber_all);
> > +}
> > +
> > +/* Check that scalar IDs *are not* generated on register to register
> > + * assignments if source register is a constant.
> > + *
> > + * If such IDs *are* generated the 'l1' below would be reached in
> > + * two states:
> > + *
> > + *   (1) r1{.id=A}, r2{.id=A}
> > + *   (2) r1{.id=C}, r2{.id=C}
> > + *
> > + * Thus forcing 'if r1 == r2' verification twice.
> > + */
> > +SEC("socket")
> > +__success __log_level(2)
> > +__msg("11: (1d) if r3 == r4 goto pc+0")
> > +__msg("frame 0: propagating r3,r4")
> > +__msg("11: safe")
> > +__msg("processed 15 insns")
> > +__flag(BPF_F_TEST_STATE_FREQ)
> > +__naked void no_scalar_id_for_const(void)
> > +{
> > +	asm volatile (
> > +	"call %[bpf_ktime_get_ns];"
> > +	/* unpredictable jump */
> > +	"if r0 > 7 goto l0_%=;"
> > +	/* possibly generate same scalar ids for r3 and r4 */
> > +	"r1 = 0;"
> > +	"r1 = r1;"
> > +	"r3 = r1;"
> > +	"r4 = r1;"
> > +	"goto l1_%=;"
> > +"l0_%=:"
> > +	/* possibly generate different scalar ids for r3 and r4 */
> > +	"r1 = 0;"
> > +	"r2 = 0;"
> > +	"r3 = r1;"
> > +	"r4 = r2;"
> > +"l1_%=:"
> > +	/* predictable jump, marks r3 and r4 precise */
> > +	"if r3 == r4 goto +0;"
> > +	"r0 = 0;"
> > +	"exit;"
> > +	:
> > +	: __imm(bpf_ktime_get_ns)
> > +	: __clobber_all);
> > +}
> > +
> > +/* Same as no_scalar_id_for_const() but for 32-bit values */
> > +SEC("socket")
> > +__success __log_level(2)
> > +__msg("11: (1e) if w3 == w4 goto pc+0")
> > +__msg("frame 0: propagating r3,r4")
> > +__msg("11: safe")
> > +__msg("processed 15 insns")
> > +__flag(BPF_F_TEST_STATE_FREQ)
> > +__naked void no_scalar_id_for_const32(void)
> > +{
> > +	asm volatile (
> > +	"call %[bpf_ktime_get_ns];"
> > +	/* unpredictable jump */
> > +	"if r0 > 7 goto l0_%=;"
> > +	/* possibly generate same scalar ids for r3 and r4 */
> > +	"w1 = 0;"
> > +	"w1 = w1;"
> > +	"w3 = w1;"
> > +	"w4 = w1;"
> > +	"goto l1_%=;"
> > +"l0_%=:"
> > +	/* possibly generate different scalar ids for r3 and r4 */
> > +	"w1 = 0;"
> > +	"w2 = 0;"
> > +	"w3 = w1;"
> > +	"w4 = w2;"
> > +"l1_%=:"
> > +	/* predictable jump, marks r3 and r4 precise */
> > +	"if w3 == w4 goto +0;"
> > +	"r0 = 0;"
> > +	"exit;"
> > +	:
> > +	: __imm(bpf_ktime_get_ns)
> > +	: __clobber_all);
> > +}
> > +
> > +/* Check that unique scalar IDs are ignored when new verifier state is
> > + * compared to cached verifier state. For this test:
> > + * - cached state has no id on r1
> > + * - new state has a unique id on r1
> > + */
> > +SEC("socket")
> > +__success __log_level(2)
> > +__msg("6: (25) if r6 > 0x7 goto pc+1")
> > +__msg("7: (57) r1 &= 255")
> > +__msg("8: (bf) r2 = r10")
> > +__msg("from 6 to 8: safe")
> > +__msg("processed 12 insns")
> > +__flag(BPF_F_TEST_STATE_FREQ)
> > +__naked void ignore_unique_scalar_ids_cur(void)
> > +{
> > +	asm volatile (
> > +	"call %[bpf_ktime_get_ns];"
> > +	"r6 = r0;"
> > +	"call %[bpf_ktime_get_ns];"
> > +	"r0 &= 0xff;"
> > +	/* r1.id == r0.id */
> > +	"r1 = r0;"
> > +	/* make r1.id unique */
> > +	"r0 = 0;"
> > +	"if r6 > 7 goto l0_%=;"
> > +	/* clear r1 id, but keep the range compatible */
> > +	"r1 &= 0xff;"
> > +"l0_%=:"
> > +	/* get here in two states:
> > +	 * - first: r1 has no id (cached state)
> > +	 * - second: r1 has a unique id (should be considered equivalent)
> > +	 */
> > +	"r2 = r10;"
> > +	"r2 += r1;"
> > +	"exit;"
> > +	:
> > +	: __imm(bpf_ktime_get_ns)
> > +	: __clobber_all);
> > +}
> > +
> > +/* Check that unique scalar IDs are ignored when new verifier state is
> > + * compared to cached verifier state. For this test:
> > + * - cached state has a unique id on r1
> > + * - new state has no id on r1
> > + */
> > +SEC("socket")
> > +__success __log_level(2)
> > +__msg("6: (25) if r6 > 0x7 goto pc+1")
> > +__msg("7: (05) goto pc+1")
> > +__msg("9: (bf) r2 = r10")
> > +__msg("9: safe")
> > +__msg("processed 13 insns")
> > +__flag(BPF_F_TEST_STATE_FREQ)
> > +__naked void ignore_unique_scalar_ids_old(void)
> > +{
> > +	asm volatile (
> > +	"call %[bpf_ktime_get_ns];"
> > +	"r6 = r0;"
> > +	"call %[bpf_ktime_get_ns];"
> > +	"r0 &= 0xff;"
> > +	/* r1.id == r0.id */
> > +	"r1 = r0;"
> > +	/* make r1.id unique */
> > +	"r0 = 0;"
> > +	"if r6 > 7 goto l1_%=;"
> > +	"goto l0_%=;"
> > +"l1_%=:"
> > +	/* clear r1 id, but keep the range compatible */
> > +	"r1 &= 0xff;"
> > +"l0_%=:"
> > +	/* get here in two states:
> > +	 * - first: r1 has a unique id (cached state)
> > +	 * - second: r1 has no id (should be considered equivalent)
> > +	 */
> > +	"r2 = r10;"
> > +	"r2 += r1;"
> > +	"exit;"
> > +	:
> > +	: __imm(bpf_ktime_get_ns)
> > +	: __clobber_all);
> > +}
> > +
> > +/* Check that two different scalar IDs in a verified state can't be
> > + * mapped to the same scalar ID in current state.
> > + */
> > +SEC("socket")
> > +__success __log_level(2)
> > +/* The exit instruction should be reachable from two states,
> > + * use two matches and "processed .. insns" to ensure this.
> > + */
> > +__msg("13: (95) exit")
> > +__msg("13: (95) exit")
> > +__msg("processed 18 insns")
> > +__flag(BPF_F_TEST_STATE_FREQ)
> > +__naked void two_old_ids_one_cur_id(void)
> > +{
> > +	asm volatile (
> > +	/* Give unique scalar IDs to r{6,7} */
> > +	"call %[bpf_ktime_get_ns];"
> > +	"r0 &= 0xff;"
> > +	"r6 = r0;"
> > +	"call %[bpf_ktime_get_ns];"
> > +	"r0 &= 0xff;"
> > +	"r7 = r0;"
> > +	"r0 = 0;"
> > +	/* Maybe make r{6,7} IDs identical */
> > +	"if r6 > r7 goto l0_%=;"
> > +	"goto l1_%=;"
> > +"l0_%=:"
> > +	"r6 = r7;"
> > +"l1_%=:"
> > +	/* Mark r{6,7} precise.
> > +	 * Get here in two states:
> > +	 * - first:  r6{.id=A}, r7{.id=B} (cached state)
> > +	 * - second: r6{.id=A}, r7{.id=A}
> > +	 * Currently we don't want to consider such states equivalent.
> > +	 * Thus, the 'exit' instruction below would be verified twice.
> > +	 */
> > +	"r2 = r10;"
> > +	"r2 += r6;"
> > +	"r2 += r7;"
> > +	"exit;"
> > +	:
> > +	: __imm(bpf_ktime_get_ns)
> > +	: __clobber_all);
> > +}
> > +
> >  char _license[] SEC("license") = "GPL";
> > -- 
> > 2.40.1
> > 
> > 


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH bpf-next v5 3/4] bpf: verify scalar ids mapping in regsafe() using check_ids()
  2023-06-12 16:08 ` [PATCH bpf-next v5 3/4] bpf: verify scalar ids mapping in regsafe() using check_ids() Eduard Zingerman
@ 2023-06-12 19:56   ` Andrii Nakryiko
  2023-06-12 21:01     ` Eduard Zingerman
  2023-06-13  8:02   ` kernel test robot
  1 sibling, 1 reply; 13+ messages in thread
From: Andrii Nakryiko @ 2023-06-12 19:56 UTC (permalink / raw)
  To: Eduard Zingerman; +Cc: bpf, ast, andrii, daniel, martin.lau, kernel-team, yhs

On Mon, Jun 12, 2023 at 9:08 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> Make sure that the following unsafe example is rejected by verifier:
>
> 1: r9 = ... some pointer with range X ...
> 2: r6 = ... unbound scalar ID=a ...
> 3: r7 = ... unbound scalar ID=b ...
> 4: if (r6 > r7) goto +1
> 5: r6 = r7
> 6: if (r6 > X) goto ...
> --- checkpoint ---
> 7: r9 += r7
> 8: *(u64 *)r9 = Y
>
> This example is unsafe because not all execution paths verify r7 range.
> Because of the jump at (4) the verifier would arrive at (6) in two states:
> I.  r6{.id=b}, r7{.id=b} via path 1-6;
> II. r6{.id=a}, r7{.id=b} via path 1-4, 6.
>
> Currently regsafe() does not call check_ids() for scalar registers,
> thus from POV of regsafe() states (I) and (II) are identical. If the
> path 1-6 is taken by verifier first, and checkpoint is created at (6)
> the path [1-4, 6] would be considered safe.
>
> Changes in this commit:
> - Function check_scalar_ids() is added, it differs from check_ids() in
>   the following aspects:
>   - treats ID zero as a unique scalar ID;
>   - disallows mapping same 'cur_id' to multiple 'old_id'.
> - check_scalar_ids() needs to generate temporary unique IDs, field
>   'tmp_id_gen' is added to bpf_verifier_env::idmap_scratch to
>   facilitate this.
> - Function scalar_regs_exact() is added, it differs from regs_exact()
>   in the following aspects:
>   - uses check_scalar_ids() for ID comparison;
>   - does not check ref_obj_id as this field is not used for scalar
>     values.
> - regsafe() is updated to:
>   - use check_scalar_ids() for precise scalar registers.
>   - use scalar_regs_exact() for scalar registers, but only for
>     explore_alu_limits branch. This simplifies control flow for scalar
>     case, and has no measurable performance impact.
> - check_alu_op() is updated to avoid generating bpf_reg_state::id for
>   constant scalar values when processing BPF_MOV. ID is needed to
>   propagate range information for identical values, but there is
>   nothing to propagate for constants.
>
> Fixes: 75748837b7e5 ("bpf: Propagate scalar ranges through register assignments.")
> Acked-by: Andrii Nakryiko <andrii@kernel.org>
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
>  include/linux/bpf_verifier.h |  17 ++++--
>  kernel/bpf/verifier.c        | 108 ++++++++++++++++++++++++++++-------
>  2 files changed, 97 insertions(+), 28 deletions(-)
>
> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> index 73a98f6240fd..042b76fe8e29 100644
> --- a/include/linux/bpf_verifier.h
> +++ b/include/linux/bpf_verifier.h
> @@ -313,11 +313,6 @@ struct bpf_idx_pair {
>         u32 idx;
>  };
>
> -struct bpf_id_pair {
> -       u32 old;
> -       u32 cur;
> -};
> -
>  #define MAX_CALL_FRAMES 8
>  /* Maximum number of register states that can exist at once */
>  #define BPF_ID_MAP_SIZE ((MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE) * MAX_CALL_FRAMES)
> @@ -559,6 +554,16 @@ struct backtrack_state {
>         u64 stack_masks[MAX_CALL_FRAMES];
>  };
>
> +struct bpf_id_pair {
> +       u32 old;
> +       u32 cur;
> +};
> +
> +struct bpf_idmap {
> +       u32 tmp_id_gen;
> +       struct bpf_id_pair map[BPF_ID_MAP_SIZE];
> +};
> +
>  struct bpf_idset {
>         u32 count;
>         u32 ids[BPF_ID_MAP_SIZE];
> @@ -596,7 +601,7 @@ struct bpf_verifier_env {
>         struct bpf_verifier_log log;
>         struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS + 1];
>         union {
> -               struct bpf_id_pair idmap_scratch[BPF_ID_MAP_SIZE];
> +               struct bpf_idmap idmap_scratch;
>                 struct bpf_idset idset_scratch;
>         };
>         struct {
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 9b5f2433194f..b15ebfed207a 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -12942,12 +12942,14 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
>                 if (BPF_SRC(insn->code) == BPF_X) {
>                         struct bpf_reg_state *src_reg = regs + insn->src_reg;
>                         struct bpf_reg_state *dst_reg = regs + insn->dst_reg;
> +                       bool need_id = src_reg->type == SCALAR_VALUE && !src_reg->id &&
> +                                      !tnum_is_const(src_reg->var_off);
>
>                         if (BPF_CLASS(insn->code) == BPF_ALU64) {
>                                 /* case: R1 = R2
>                                  * copy register state to dest reg
>                                  */
> -                               if (src_reg->type == SCALAR_VALUE && !src_reg->id)
> +                               if (need_id)
>                                         /* Assign src and dst registers the same ID
>                                          * that will be used by find_equal_scalars()
>                                          * to propagate min/max range.
> @@ -12966,7 +12968,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
>                                 } else if (src_reg->type == SCALAR_VALUE) {
>                                         bool is_src_reg_u32 = src_reg->umax_value <= U32_MAX;
>
> -                                       if (is_src_reg_u32 && !src_reg->id)
> +                                       if (is_src_reg_u32 && need_id)
>                                                 src_reg->id = ++env->id_gen;
>                                         copy_register_state(dst_reg, src_reg);
>                                         /* Make sure ID is cleared if src_reg is not in u32 range otherwise
> @@ -15122,8 +15124,9 @@ static bool range_within(struct bpf_reg_state *old,
>   * So we look through our idmap to see if this old id has been seen before.  If
>   * so, we require the new id to match; otherwise, we add the id pair to the map.
>   */
> -static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
> +static bool check_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
>  {
> +       struct bpf_id_pair *map = idmap->map;
>         unsigned int i;
>
>         /* either both IDs should be set or both should be zero */
> @@ -15134,14 +15137,44 @@ static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
>                 return true;
>
>         for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
> -               if (!idmap[i].old) {
> +               if (!map[i].old) {
>                         /* Reached an empty slot; haven't seen this id before */
> -                       idmap[i].old = old_id;
> -                       idmap[i].cur = cur_id;
> +                       map[i].old = old_id;
> +                       map[i].cur = cur_id;
>                         return true;
>                 }
> -               if (idmap[i].old == old_id)
> -                       return idmap[i].cur == cur_id;
> +               if (map[i].old == old_id)
> +                       return map[i].cur == cur_id;
> +       }
> +       /* We ran out of idmap slots, which should be impossible */
> +       WARN_ON_ONCE(1);
> +       return false;
> +}
> +
> +/* Similar to check_ids(), but:
> + * - disallow mapping of different 'old_id' values to same 'cur_id' value;
> + * - for zero 'old_id' or 'cur_id' allocate a unique temporary ID
> + *   to allow pairs like '0 vs unique ID', 'unique ID vs 0'.
> + */
> +static bool check_scalar_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
> +{
> +       struct bpf_id_pair *map = idmap->map;
> +       unsigned int i;
> +
> +       old_id = old_id ? old_id : ++idmap->tmp_id_gen;
> +       cur_id = cur_id ? cur_id : ++idmap->tmp_id_gen;
> +
> +       for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
> +               if (!map[i].old) {
> +                       /* Reached an empty slot; haven't seen this id before */
> +                       map[i].old = old_id;
> +                       map[i].cur = cur_id;
> +                       return true;
> +               }
> +               if (map[i].old == old_id)
> +                       return map[i].cur == cur_id;
> +               if (map[i].cur == cur_id)
> +                       return false;

We were discussing w/ Alexei (offline) making these changes as
backportable and minimal as possible, so I looked again how to
minimize all the extra code added.

I still insist that the current logic in check_ids() is invalid to
allow the same cur_id to map to two different old_ids, especially for
non-SCALAR, actually. E.g., a situation where we have a register that
is auto-invalidated. E.g., bpf_iter element (each gets an ID), or it
could be id'ed dynptr_ringbuf.

Think about the following situation:

In the old state, we could have r1.id = 1; r2.id = 2; Two registers
keep two separate pointers to ringbuf.

In the new state, we have r1.id = r2.id = 3; That is, two registers
keep the *same* pointer to ringbuf elem.

Now imagine that a BPF program has bpf_ringbuf_submit(r1) and
invalidates this register. With the old state it will invalidate r1
and will keep r2 valid. So it's safe for the BPF program to keep
working with r2 as valid ringbuf (and eventually submit it as well).

Now this is entirely unsafe for the new state. Once
bpf_ringbuf_submit(r1) happens, r2 shouldn't be accessed anymore. But
yet it will be declared safe with current check_ids() logic.

Ergo, check_ids() should make sure we do not have multi-mapping for
any of the IDs. Even if in some corner case that might be ok.
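
To make the mapping rule concrete, here is a tiny stand-alone
user-space sketch (not kernel code, just a mirror of the idmap walk
above; 'strict' stands in for the extra 'cur == cur_id' test). It
feeds in old IDs {1, 2} vs current IDs {3, 3}, i.e. the ringbuf
situation described above:

  #include <stdbool.h>
  #include <stdio.h>

  struct pair { unsigned int old, cur; };

  /* mirrors the check_ids() idmap walk; 'strict' adds the extra test */
  static bool map_ids(unsigned int old_id, unsigned int cur_id,
                      struct pair *map, int n, bool strict)
  {
          for (int i = 0; i < n; i++) {
                  if (!map[i].old) {
                          /* empty slot: record the new pairing */
                          map[i].old = old_id;
                          map[i].cur = cur_id;
                          return true;
                  }
                  if (map[i].old == old_id)
                          return map[i].cur == cur_id;
                  if (strict && map[i].cur == cur_id)
                          return false;
          }
          return false;
  }

  int main(void)
  {
          struct pair a[4] = {}, b[4] = {};

          /* old: r1.id=1, r2.id=2; cur: r1.id=r2.id=3 */
          printf("%d\n", map_ids(1, 3, a, 4, false) && map_ids(2, 3, a, 4, false)); /* 1: accepted */
          printf("%d\n", map_ids(1, 3, b, 4, true)  && map_ids(2, 3, b, 4, true));  /* 0: rejected */
          return 0;
  }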

I actually tested this change with veristat, there are no regressions
at all. I think it's both safe from a perf standpoint, and necessary
and safe from a correctness standpoint.

So all in all (I did inline scalar_regs_exact in a separate patch, up
to you), I have these changes on top and they all are good from
veristat perspective:

Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Mon Jun 12 12:53:25 2023 -0700

    bpf: inline scalar_regs_exact

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9d5fefd970a3..c5606613136d 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -15262,14 +15262,6 @@ static bool regs_exact(const struct bpf_reg_state *rold,
               check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
 }

-static bool scalar_regs_exact(const struct bpf_reg_state *rold,
-                             const struct bpf_reg_state *rcur,
-                             struct bpf_idmap *idmap)
-{
-       return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
-              check_scalar_ids(rold->id, rcur->id, idmap);
-}
-
 /* Returns true if (rold safe implies rcur safe) */
 static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
                    struct bpf_reg_state *rcur, struct bpf_idmap *idmap)
@@ -15309,8 +15301,13 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,

        switch (base_type(rold->type)) {
        case SCALAR_VALUE:
-               if (env->explore_alu_limits)
-                       return scalar_regs_exact(rold, rcur, idmap);
+               if (env->explore_alu_limits) {
+                       /* explore_alu_limits disables tnum_in() and range_within()
+                        * logic and requires everything to be strict
+                        */
+                       return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
+                              check_scalar_ids(rold->id, rcur->id, idmap);
+               }
                if (!rold->precise)
                        return true;
                /* Why check_ids() for scalar registers?

commit 57297c01fe747e423cd3c924ef492c0688d8057a
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Mon Jun 12 11:46:46 2023 -0700

    bpf: make check_ids disallow multi-mapping of IDs universally

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 3da77713d1ca..9d5fefd970a3 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -15137,6 +15137,8 @@ static bool check_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
                }
                if (map[i].old == old_id)
                        return map[i].cur == cur_id;
+               if (map[i].cur == cur_id)
+                       return false;
        }
        /* We ran out of idmap slots, which should be impossible */
        WARN_ON_ONCE(1);
@@ -15144,33 +15146,15 @@ static bool check_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
 }

 /* Similar to check_ids(), but:
- * - disallow mapping of different 'old_id' values to same 'cur_id' value;
  * - for zero 'old_id' or 'cur_id' allocate a unique temporary ID
  *   to allow pairs like '0 vs unique ID', 'unique ID vs 0'.
  */
 static bool check_scalar_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
 {
-       struct bpf_id_pair *map = idmap->map;
-       unsigned int i;
-
        old_id = old_id ? old_id : ++idmap->tmp_id_gen;
        cur_id = cur_id ? cur_id : ++idmap->tmp_id_gen;

-       for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
-               if (!map[i].old) {
-                       /* Reached an empty slot; haven't seen this id before */
-                       map[i].old = old_id;
-                       map[i].cur = cur_id;
-                       return true;
-               }
-               if (map[i].old == old_id)
-                       return map[i].cur == cur_id;
-               if (map[i].cur == cur_id)
-                       return false;
-       }
-       /* We ran out of idmap slots, which should be impossible */
-       WARN_ON_ONCE(1);
-       return false;
+       return check_ids(old_id, cur_id, idmap);
 }

 static void clean_func_state(struct bpf_verifier_env *env,



>         }
>         /* We ran out of idmap slots, which should be impossible */
>         WARN_ON_ONCE(1);
> @@ -15246,16 +15279,24 @@ static void clean_live_states(struct bpf_verifier_env *env, int insn,
>
>  static bool regs_exact(const struct bpf_reg_state *rold,
>                        const struct bpf_reg_state *rcur,
> -                      struct bpf_id_pair *idmap)
> +                      struct bpf_idmap *idmap)
>  {
>         return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
>                check_ids(rold->id, rcur->id, idmap) &&
>                check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
>  }
>

[...]

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH bpf-next v5 1/4] bpf: use scalar ids in mark_chain_precision()
  2023-06-12 16:07 ` [PATCH bpf-next v5 1/4] bpf: use scalar ids in mark_chain_precision() Eduard Zingerman
@ 2023-06-12 19:56   ` Andrii Nakryiko
  0 siblings, 0 replies; 13+ messages in thread
From: Andrii Nakryiko @ 2023-06-12 19:56 UTC (permalink / raw)
  To: Eduard Zingerman; +Cc: bpf, ast, andrii, daniel, martin.lau, kernel-team, yhs

On Mon, Jun 12, 2023 at 9:08 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> Change mark_chain_precision() to track precision in situations
> like below:
>
>     r2 = unknown value
>     ...
>   --- state #0 ---
>     ...
>     r1 = r2                 // r1 and r2 now share the same ID
>     ...
>   --- state #1 {r1.id = A, r2.id = A} ---
>     ...
>     if (r2 > 10) goto exit; // find_equal_scalars() assigns range to r1
>     ...
>   --- state #2 {r1.id = A, r2.id = A} ---
>     r3 = r10
>     r3 += r1                // need to mark both r1 and r2
>
> At the beginning of the processing of each state, ensure that if a
> register with a scalar ID is marked as precise, all registers sharing
> this ID are also marked as precise.
>
> This property would be used by a follow-up change in regsafe().
>
> Acked-by: Andrii Nakryiko <andrii@kernel.org>
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
>  include/linux/bpf_verifier.h                  |  10 +-
>  kernel/bpf/verifier.c                         | 115 ++++++++++++++++++
>  .../testing/selftests/bpf/verifier/precise.c  |   8 +-
>  3 files changed, 128 insertions(+), 5 deletions(-)
>

[...]

> +static bool idset_contains(struct bpf_idset *s, u32 id)
> +{
> +       u32 i;
> +
> +       for (i = 0; i < s->count; ++i)
> +               if (s->ids[i] == id)
> +                       return true;
> +
> +       return false;
> +}
> +
> +static int idset_push(struct bpf_idset *s, u32 id)
> +{
> +       if (WARN_ON_ONCE(s->count >= ARRAY_SIZE(s->ids)))
> +               return -1;

minor, but should be -EFAULT as well


> +       s->ids[s->count++] = id;
> +       return 0;
> +}
> +
> +static void idset_reset(struct bpf_idset *s)
> +{
> +       s->count = 0;
> +}
> +

[...]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH bpf-next v5 3/4] bpf: verify scalar ids mapping in regsafe() using check_ids()
  2023-06-12 19:56   ` Andrii Nakryiko
@ 2023-06-12 21:01     ` Eduard Zingerman
  2023-06-13  0:10       ` Eduard Zingerman
  0 siblings, 1 reply; 13+ messages in thread
From: Eduard Zingerman @ 2023-06-12 21:01 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: bpf, ast, andrii, daniel, martin.lau, kernel-team, yhs

On Mon, 2023-06-12 at 12:56 -0700, Andrii Nakryiko wrote:
> On Mon, Jun 12, 2023 at 9:08 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > 
> > Make sure that the following unsafe example is rejected by verifier:
> > 
> > 1: r9 = ... some pointer with range X ...
> > 2: r6 = ... unbound scalar ID=a ...
> > 3: r7 = ... unbound scalar ID=b ...
> > 4: if (r6 > r7) goto +1
> > 5: r6 = r7
> > 6: if (r6 > X) goto ...
> > --- checkpoint ---
> > 7: r9 += r7
> > 8: *(u64 *)r9 = Y
> > 
> > This example is unsafe because not all execution paths verify r7 range.
> > Because of the jump at (4) the verifier would arrive at (6) in two states:
> > I.  r6{.id=b}, r7{.id=b} via path 1-6;
> > II. r6{.id=a}, r7{.id=b} via path 1-4, 6.
> > 
> > Currently regsafe() does not call check_ids() for scalar registers,
> > thus from POV of regsafe() states (I) and (II) are identical. If the
> > path 1-6 is taken by verifier first, and checkpoint is created at (6)
> > the path [1-4, 6] would be considered safe.
> > 
> > Changes in this commit:
> > - Function check_scalar_ids() is added, it differs from check_ids() in
> >   the following aspects:
> >   - treats ID zero as a unique scalar ID;
> >   - disallows mapping same 'cur_id' to multiple 'old_id'.
> > - check_scalar_ids() needs to generate temporary unique IDs, field
> >   'tmp_id_gen' is added to bpf_verifier_env::idmap_scratch to
> >   facilitate this.
> > - Function scalar_regs_exact() is added, it differs from regs_exact()
> >   in the following aspects:
> >   - uses check_scalar_ids() for ID comparison;
> >   - does not check ref_obj_id as this field is not used for scalar
> >     values.
> > - regsafe() is updated to:
> >   - use check_scalar_ids() for precise scalar registers.
> >   - use scalar_regs_exact() for scalar registers, but only for
> >     explore_alu_limits branch. This simplifies control flow for scalar
> >     case, and has no measurable performance impact.
> > - check_alu_op() is updated to avoid generating bpf_reg_state::id for
> >   constant scalar values when processing BPF_MOV. ID is needed to
> >   propagate range information for identical values, but there is
> >   nothing to propagate for constants.
> > 
> > Fixes: 75748837b7e5 ("bpf: Propagate scalar ranges through register assignments.")
> > Acked-by: Andrii Nakryiko <andrii@kernel.org>
> > Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> > ---
> >  include/linux/bpf_verifier.h |  17 ++++--
> >  kernel/bpf/verifier.c        | 108 ++++++++++++++++++++++++++++-------
> >  2 files changed, 97 insertions(+), 28 deletions(-)
> > 
> > diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> > index 73a98f6240fd..042b76fe8e29 100644
> > --- a/include/linux/bpf_verifier.h
> > +++ b/include/linux/bpf_verifier.h
> > @@ -313,11 +313,6 @@ struct bpf_idx_pair {
> >         u32 idx;
> >  };
> > 
> > -struct bpf_id_pair {
> > -       u32 old;
> > -       u32 cur;
> > -};
> > -
> >  #define MAX_CALL_FRAMES 8
> >  /* Maximum number of register states that can exist at once */
> >  #define BPF_ID_MAP_SIZE ((MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE) * MAX_CALL_FRAMES)
> > @@ -559,6 +554,16 @@ struct backtrack_state {
> >         u64 stack_masks[MAX_CALL_FRAMES];
> >  };
> > 
> > +struct bpf_id_pair {
> > +       u32 old;
> > +       u32 cur;
> > +};
> > +
> > +struct bpf_idmap {
> > +       u32 tmp_id_gen;
> > +       struct bpf_id_pair map[BPF_ID_MAP_SIZE];
> > +};
> > +
> >  struct bpf_idset {
> >         u32 count;
> >         u32 ids[BPF_ID_MAP_SIZE];
> > @@ -596,7 +601,7 @@ struct bpf_verifier_env {
> >         struct bpf_verifier_log log;
> >         struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS + 1];
> >         union {
> > -               struct bpf_id_pair idmap_scratch[BPF_ID_MAP_SIZE];
> > +               struct bpf_idmap idmap_scratch;
> >                 struct bpf_idset idset_scratch;
> >         };
> >         struct {
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 9b5f2433194f..b15ebfed207a 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -12942,12 +12942,14 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
> >                 if (BPF_SRC(insn->code) == BPF_X) {
> >                         struct bpf_reg_state *src_reg = regs + insn->src_reg;
> >                         struct bpf_reg_state *dst_reg = regs + insn->dst_reg;
> > +                       bool need_id = src_reg->type == SCALAR_VALUE && !src_reg->id &&
> > +                                      !tnum_is_const(src_reg->var_off);
> > 
> >                         if (BPF_CLASS(insn->code) == BPF_ALU64) {
> >                                 /* case: R1 = R2
> >                                  * copy register state to dest reg
> >                                  */
> > -                               if (src_reg->type == SCALAR_VALUE && !src_reg->id)
> > +                               if (need_id)
> >                                         /* Assign src and dst registers the same ID
> >                                          * that will be used by find_equal_scalars()
> >                                          * to propagate min/max range.
> > @@ -12966,7 +12968,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
> >                                 } else if (src_reg->type == SCALAR_VALUE) {
> >                                         bool is_src_reg_u32 = src_reg->umax_value <= U32_MAX;
> > 
> > -                                       if (is_src_reg_u32 && !src_reg->id)
> > +                                       if (is_src_reg_u32 && need_id)
> >                                                 src_reg->id = ++env->id_gen;
> >                                         copy_register_state(dst_reg, src_reg);
> >                                         /* Make sure ID is cleared if src_reg is not in u32 range otherwise
> > @@ -15122,8 +15124,9 @@ static bool range_within(struct bpf_reg_state *old,
> >   * So we look through our idmap to see if this old id has been seen before.  If
> >   * so, we require the new id to match; otherwise, we add the id pair to the map.
> >   */
> > -static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
> > +static bool check_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
> >  {
> > +       struct bpf_id_pair *map = idmap->map;
> >         unsigned int i;
> > 
> >         /* either both IDs should be set or both should be zero */
> > @@ -15134,14 +15137,44 @@ static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
> >                 return true;
> > 
> >         for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
> > -               if (!idmap[i].old) {
> > +               if (!map[i].old) {
> >                         /* Reached an empty slot; haven't seen this id before */
> > -                       idmap[i].old = old_id;
> > -                       idmap[i].cur = cur_id;
> > +                       map[i].old = old_id;
> > +                       map[i].cur = cur_id;
> >                         return true;
> >                 }
> > -               if (idmap[i].old == old_id)
> > -                       return idmap[i].cur == cur_id;
> > +               if (map[i].old == old_id)
> > +                       return map[i].cur == cur_id;
> > +       }
> > +       /* We ran out of idmap slots, which should be impossible */
> > +       WARN_ON_ONCE(1);
> > +       return false;
> > +}
> > +
> > +/* Similar to check_ids(), but:
> > + * - disallow mapping of different 'old_id' values to same 'cur_id' value;
> > + * - for zero 'old_id' or 'cur_id' allocate a unique temporary ID
> > + *   to allow pairs like '0 vs unique ID', 'unique ID vs 0'.
> > + */
> > +static bool check_scalar_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
> > +{
> > +       struct bpf_id_pair *map = idmap->map;
> > +       unsigned int i;
> > +
> > +       old_id = old_id ? old_id : ++idmap->tmp_id_gen;
> > +       cur_id = cur_id ? cur_id : ++idmap->tmp_id_gen;
> > +
> > +       for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
> > +               if (!map[i].old) {
> > +                       /* Reached an empty slot; haven't seen this id before */
> > +                       map[i].old = old_id;
> > +                       map[i].cur = cur_id;
> > +                       return true;
> > +               }
> > +               if (map[i].old == old_id)
> > +                       return map[i].cur == cur_id;
> > +               if (map[i].cur == cur_id)
> > +                       return false;
> 
> We were discussing w/ Alexei (offline) making these changes as
> backportable and minimal as possible, so I looked again how to
> minimize all the extra code added.
> 
> I still insist that the current logic in check_ids() is invalid to
> allow the same cur_id to map to two different old_ids, especially for
> non-SCALAR, actually. E.g., a situation where we have a register that
> is auto-invalidated. E.g., bpf_iter element (each gets an ID), or it
> could be id'ed dynptr_ringbuf.
> 
> Think about the following situation:
> 
> In the old state, we could have r1.id = 1; r2.id = 2; Two registers
> keep two separate pointers to ringbuf.
> 
> In the new state, we have r1.id = r2.id = 3; That is, two registers
> keep the *same* pointer to ringbuf elem.
> 
> Now imagine that a BPF program has bpf_ringbuf_submit(r1) and
> invalidates this register. With the old state it will invalidate r1
> and will keep r2 valid. So it's safe for the BPF program to keep
> working with r2 as valid ringbuf (and eventually submit it as well).
> 
> Now this is entirely unsafe for the new state. Once
> bpf_ringbuf_submit(r1) happens, r2 shouldn't be accessed anymore. But
> yet it will be declared safe with current check_ids() logic.
> 
> Ergo, check_ids() should make sure we do not have multi-mapping for
> any of the IDs. Even if in some corner case that might be ok.
> 
> I actually tested this change with veristat, there are no regressions
> at all. I think it's both safe from a perf standpoint, and necessary
> and safe from a correctness standpoint.
> 
> So all in all (I did inline scalar_regs_exact in a separate patch, up
> to you), I have these changes on top and they all are good from
> veristat perspective:

Ok, ringbuf is a good example.
To minimize the patch-set I'll do the following:
- move your check_ids patch to the beginning of the series
- add selftest for ringbuf
- squash the scalar_regs_exact patch with current patch #3

And resubmit with EFAULT fix in patch #1.
Thank you for looking into this.

> 
> Author: Andrii Nakryiko <andrii@kernel.org>
> Date:   Mon Jun 12 12:53:25 2023 -0700
> 
>     bpf: inline scalar_regs_exact
> 
>     Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 9d5fefd970a3..c5606613136d 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -15262,14 +15262,6 @@ static bool regs_exact(const struct
> bpf_reg_state *rold,
>                check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
>  }
> 
> -static bool scalar_regs_exact(const struct bpf_reg_state *rold,
> -                             const struct bpf_reg_state *rcur,
> -                             struct bpf_idmap *idmap)
> -{
> -       return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
> -              check_scalar_ids(rold->id, rcur->id, idmap);
> -}
> -
>  /* Returns true if (rold safe implies rcur safe) */
>  static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
>                     struct bpf_reg_state *rcur, struct bpf_idmap *idmap)
> @@ -15309,8 +15301,13 @@ static bool regsafe(struct bpf_verifier_env
> *env, struct bpf_reg_state *rold,
> 
>         switch (base_type(rold->type)) {
>         case SCALAR_VALUE:
> -               if (env->explore_alu_limits)
> -                       return scalar_regs_exact(rold, rcur, idmap);
> +               if (env->explore_alu_limits) {
> +                       /* explore_alu_limits disables tnum_in() and
> range_within()
> +                        * logic and requires everything to be strict
> +                        */
> +                       return memcmp(rold, rcur, offsetof(struct
> bpf_reg_state, id)) == 0 &&
> +                              check_scalar_ids(rold->id, rcur->id, idmap);
> +               }
>                 if (!rold->precise)
>                         return true;
>                 /* Why check_ids() for scalar registers?
> 
> commit 57297c01fe747e423cd3c924ef492c0688d8057a
> Author: Andrii Nakryiko <andrii@kernel.org>
> Date:   Mon Jun 12 11:46:46 2023 -0700
> 
>     bpf: make check_ids disallow multi-mapping of IDs universally
> 
>     Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 3da77713d1ca..9d5fefd970a3 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -15137,6 +15137,8 @@ static bool check_ids(u32 old_id, u32 cur_id,
> struct bpf_idmap *idmap)
>                 }
>                 if (map[i].old == old_id)
>                         return map[i].cur == cur_id;
> +               if (map[i].cur == cur_id)
> +                       return false;
>         }
>         /* We ran out of idmap slots, which should be impossible */
>         WARN_ON_ONCE(1);
> @@ -15144,33 +15146,15 @@ static bool check_ids(u32 old_id, u32
> cur_id, struct bpf_idmap *idmap)
>  }
> 
>  /* Similar to check_ids(), but:
> - * - disallow mapping of different 'old_id' values to same 'cur_id' value;
>   * - for zero 'old_id' or 'cur_id' allocate a unique temporary ID
>   *   to allow pairs like '0 vs unique ID', 'unique ID vs 0'.
>   */
>  static bool check_scalar_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
>  {
> -       struct bpf_id_pair *map = idmap->map;
> -       unsigned int i;
> -
>         old_id = old_id ? old_id : ++idmap->tmp_id_gen;
>         cur_id = cur_id ? cur_id : ++idmap->tmp_id_gen;
> 
> -       for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
> -               if (!map[i].old) {
> -                       /* Reached an empty slot; haven't seen this id before */
> -                       map[i].old = old_id;
> -                       map[i].cur = cur_id;
> -                       return true;
> -               }
> -               if (map[i].old == old_id)
> -                       return map[i].cur == cur_id;
> -               if (map[i].cur == cur_id)
> -                       return false;
> -       }
> -       /* We ran out of idmap slots, which should be impossible */
> -       WARN_ON_ONCE(1);
> -       return false;
> +       return check_ids(old_id, cur_id, idmap);
>  }
> 
>  static void clean_func_state(struct bpf_verifier_env *env,
> 
> 
> 
> >         }
> >         /* We ran out of idmap slots, which should be impossible */
> >         WARN_ON_ONCE(1);
> > @@ -15246,16 +15279,24 @@ static void clean_live_states(struct bpf_verifier_env *env, int insn,
> > 
> >  static bool regs_exact(const struct bpf_reg_state *rold,
> >                        const struct bpf_reg_state *rcur,
> > -                      struct bpf_id_pair *idmap)
> > +                      struct bpf_idmap *idmap)
> >  {
> >         return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
> >                check_ids(rold->id, rcur->id, idmap) &&
> >                check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
> >  }
> > 
> 
> [...]


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH bpf-next v5 3/4] bpf: verify scalar ids mapping in regsafe() using check_ids()
  2023-06-12 21:01     ` Eduard Zingerman
@ 2023-06-13  0:10       ` Eduard Zingerman
  2023-06-13  0:28         ` Andrii Nakryiko
  0 siblings, 1 reply; 13+ messages in thread
From: Eduard Zingerman @ 2023-06-13  0:10 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: bpf, ast, andrii, daniel, martin.lau, kernel-team, yhs

On Tue, 2023-06-13 at 00:01 +0300, Eduard Zingerman wrote:
> On Mon, 2023-06-12 at 12:56 -0700, Andrii Nakryiko wrote:
> > On Mon, Jun 12, 2023 at 9:08 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > > 
> > > Make sure that the following unsafe example is rejected by verifier:
> > > 
> > > 1: r9 = ... some pointer with range X ...
> > > 2: r6 = ... unbound scalar ID=a ...
> > > 3: r7 = ... unbound scalar ID=b ...
> > > 4: if (r6 > r7) goto +1
> > > 5: r6 = r7
> > > 6: if (r6 > X) goto ...
> > > --- checkpoint ---
> > > 7: r9 += r7
> > > 8: *(u64 *)r9 = Y
> > > 
> > > This example is unsafe because not all execution paths verify r7 range.
> > > Because of the jump at (4) the verifier would arrive at (6) in two states:
> > > I.  r6{.id=b}, r7{.id=b} via path 1-6;
> > > II. r6{.id=a}, r7{.id=b} via path 1-4, 6.
> > > 
> > > Currently regsafe() does not call check_ids() for scalar registers,
> > > thus from POV of regsafe() states (I) and (II) are identical. If the
> > > path 1-6 is taken by verifier first, and checkpoint is created at (6)
> > > the path [1-4, 6] would be considered safe.
> > > 
> > > Changes in this commit:
> > > - Function check_scalar_ids() is added, it differs from check_ids() in
> > >   the following aspects:
> > >   - treats ID zero as a unique scalar ID;
> > >   - disallows mapping same 'cur_id' to multiple 'old_id'.
> > > - check_scalar_ids() needs to generate temporary unique IDs, field
> > >   'tmp_id_gen' is added to bpf_verifier_env::idmap_scratch to
> > >   facilitate this.
> > > - Function scalar_regs_exact() is added, it differs from regs_exact()
> > >   in the following aspects:
> > >   - uses check_scalar_ids() for ID comparison;
> > >   - does not check ref_obj_id as this field is not used for scalar
> > >     values.
> > > - regsafe() is updated to:
> > >   - use check_scalar_ids() for precise scalar registers.
> > >   - use scalar_regs_exact() for scalar registers, but only for
> > >     explore_alu_limits branch. This simplifies control flow for scalar
> > >     case, and has no measurable performance impact.
> > > - check_alu_op() is updated to avoid generating bpf_reg_state::id for
> > >   constant scalar values when processing BPF_MOV. ID is needed to
> > >   propagate range information for identical values, but there is
> > >   nothing to propagate for constants.
> > > 
> > > Fixes: 75748837b7e5 ("bpf: Propagate scalar ranges through register assignments.")
> > > Acked-by: Andrii Nakryiko <andrii@kernel.org>
> > > Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> > > ---
> > >  include/linux/bpf_verifier.h |  17 ++++--
> > >  kernel/bpf/verifier.c        | 108 ++++++++++++++++++++++++++++-------
> > >  2 files changed, 97 insertions(+), 28 deletions(-)
> > > 
> > > diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> > > index 73a98f6240fd..042b76fe8e29 100644
> > > --- a/include/linux/bpf_verifier.h
> > > +++ b/include/linux/bpf_verifier.h
> > > @@ -313,11 +313,6 @@ struct bpf_idx_pair {
> > >         u32 idx;
> > >  };
> > > 
> > > -struct bpf_id_pair {
> > > -       u32 old;
> > > -       u32 cur;
> > > -};
> > > -
> > >  #define MAX_CALL_FRAMES 8
> > >  /* Maximum number of register states that can exist at once */
> > >  #define BPF_ID_MAP_SIZE ((MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE) * MAX_CALL_FRAMES)
> > > @@ -559,6 +554,16 @@ struct backtrack_state {
> > >         u64 stack_masks[MAX_CALL_FRAMES];
> > >  };
> > > 
> > > +struct bpf_id_pair {
> > > +       u32 old;
> > > +       u32 cur;
> > > +};
> > > +
> > > +struct bpf_idmap {
> > > +       u32 tmp_id_gen;
> > > +       struct bpf_id_pair map[BPF_ID_MAP_SIZE];
> > > +};
> > > +
> > >  struct bpf_idset {
> > >         u32 count;
> > >         u32 ids[BPF_ID_MAP_SIZE];
> > > @@ -596,7 +601,7 @@ struct bpf_verifier_env {
> > >         struct bpf_verifier_log log;
> > >         struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS + 1];
> > >         union {
> > > -               struct bpf_id_pair idmap_scratch[BPF_ID_MAP_SIZE];
> > > +               struct bpf_idmap idmap_scratch;
> > >                 struct bpf_idset idset_scratch;
> > >         };
> > >         struct {
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index 9b5f2433194f..b15ebfed207a 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -12942,12 +12942,14 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
> > >                 if (BPF_SRC(insn->code) == BPF_X) {
> > >                         struct bpf_reg_state *src_reg = regs + insn->src_reg;
> > >                         struct bpf_reg_state *dst_reg = regs + insn->dst_reg;
> > > +                       bool need_id = src_reg->type == SCALAR_VALUE && !src_reg->id &&
> > > +                                      !tnum_is_const(src_reg->var_off);
> > > 
> > >                         if (BPF_CLASS(insn->code) == BPF_ALU64) {
> > >                                 /* case: R1 = R2
> > >                                  * copy register state to dest reg
> > >                                  */
> > > -                               if (src_reg->type == SCALAR_VALUE && !src_reg->id)
> > > +                               if (need_id)
> > >                                         /* Assign src and dst registers the same ID
> > >                                          * that will be used by find_equal_scalars()
> > >                                          * to propagate min/max range.
> > > @@ -12966,7 +12968,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
> > >                                 } else if (src_reg->type == SCALAR_VALUE) {
> > >                                         bool is_src_reg_u32 = src_reg->umax_value <= U32_MAX;
> > > 
> > > -                                       if (is_src_reg_u32 && !src_reg->id)
> > > +                                       if (is_src_reg_u32 && need_id)
> > >                                                 src_reg->id = ++env->id_gen;
> > >                                         copy_register_state(dst_reg, src_reg);
> > >                                         /* Make sure ID is cleared if src_reg is not in u32 range otherwise
> > > @@ -15122,8 +15124,9 @@ static bool range_within(struct bpf_reg_state *old,
> > >   * So we look through our idmap to see if this old id has been seen before.  If
> > >   * so, we require the new id to match; otherwise, we add the id pair to the map.
> > >   */
> > > -static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
> > > +static bool check_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
> > >  {
> > > +       struct bpf_id_pair *map = idmap->map;
> > >         unsigned int i;
> > > 
> > >         /* either both IDs should be set or both should be zero */
> > > @@ -15134,14 +15137,44 @@ static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
> > >                 return true;
> > > 
> > >         for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
> > > -               if (!idmap[i].old) {
> > > +               if (!map[i].old) {
> > >                         /* Reached an empty slot; haven't seen this id before */
> > > -                       idmap[i].old = old_id;
> > > -                       idmap[i].cur = cur_id;
> > > +                       map[i].old = old_id;
> > > +                       map[i].cur = cur_id;
> > >                         return true;
> > >                 }
> > > -               if (idmap[i].old == old_id)
> > > -                       return idmap[i].cur == cur_id;
> > > +               if (map[i].old == old_id)
> > > +                       return map[i].cur == cur_id;
> > > +       }
> > > +       /* We ran out of idmap slots, which should be impossible */
> > > +       WARN_ON_ONCE(1);
> > > +       return false;
> > > +}
> > > +
> > > +/* Similar to check_ids(), but:
> > > + * - disallow mapping of different 'old_id' values to same 'cur_id' value;
> > > + * - for zero 'old_id' or 'cur_id' allocate a unique temporary ID
> > > + *   to allow pairs like '0 vs unique ID', 'unique ID vs 0'.
> > > + */
> > > +static bool check_scalar_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
> > > +{
> > > +       struct bpf_id_pair *map = idmap->map;
> > > +       unsigned int i;
> > > +
> > > +       old_id = old_id ? old_id : ++idmap->tmp_id_gen;
> > > +       cur_id = cur_id ? cur_id : ++idmap->tmp_id_gen;
> > > +
> > > +       for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
> > > +               if (!map[i].old) {
> > > +                       /* Reached an empty slot; haven't seen this id before */
> > > +                       map[i].old = old_id;
> > > +                       map[i].cur = cur_id;
> > > +                       return true;
> > > +               }
> > > +               if (map[i].old == old_id)
> > > +                       return map[i].cur == cur_id;
> > > +               if (map[i].cur == cur_id)
> > > +                       return false;
> > 
> > We were discussing w/ Alexei (offline) making these changes as
> > backportable and minimal as possible, so I looked again how to
> > minimize all the extra code added.
> > 
> > I still insist that the current logic in check_ids() is invalid to
> > allow the same cur_id to map to two different old_ids, especially for
> > non-SCALAR, actually. E.g., a situation where we have a register that
> > is auto-invalidated. E.g., bpf_iter element (each gets an ID), or it
> > could be id'ed dynptr_ringbuf.
> > 
> > Think about the following situation:
> > 
> > In the old state, we could have r1.id = 1; r2.id = 2; Two registers
> > keep two separate pointers to ringbuf.
> > 
> > In the new state, we have r1.id = r2.id = 3; That is, two registers
> > keep the *same* pointer to ringbuf elem.
> > 
> > Now imagine that a BPF program has bpf_ringbuf_submit(r1) and
> > invalidates this register. With the old state it will invalidate r1
> > and will keep r2 valid. So it's safe for the BPF program to keep
> > working with r2 as valid ringbuf (and eventually submit it as well).
> > 
> > Now this is entirely unsafe for the new state. Once
> > bpf_ringbuf_submit(r1) happens, r2 shouldn't be accessed anymore. But
> > yet it will be declared safe with current check_ids() logic.
> > 
> > Ergo, check_ids() should make sure we do not have multi-mapping for
> > any of the IDs. Even if in some corner case that might be ok.
> > 
> > I actually tested this change with veristat, there are no regressions
> > at all. I think it's both safe from a perf standpoint, and necessary
> > and safe from a correctness standpoint.
> > 
> > So all in all (I did inline scalar_regs_exact in a separate patch, up
> > to you), I have these changes on top and they all are good from
> > veristat perspective:
> 
> Ok, ringbuf is a good example.
> To minimize the patch-set I'll do the following:
> - move your check_ids patch to the beginning of the series
> - add selftest for ringbuf
> - squash the scalar_regs_exact patch with current patch #3
> 
> And resubmit with EFAULT fix in patch #1.
> Thank you for looking into this.

Welp, it turns out it is not possible to create a failing example with
ringbuf. This is because ringbuf objects are tracked in
bpf_func_state::refs (of type struct bpf_reference_state).

The ID of each acquired object is stored in this array and compared in
func_states_equal() using the refsafe() function:

  static bool refsafe(struct bpf_func_state *old, struct bpf_func_state *cur,
  		    struct bpf_idmap *idmap)
  {
  	int i;
  
  	if (old->acquired_refs != cur->acquired_refs)
  		return false;
  
  	for (i = 0; i < old->acquired_refs; i++) {
  		if (!check_ids(old->refs[i].id, cur->refs[i].id, idmap))
  			return false;
  	}
  
  	return true;
  }
  
Whatever combination of old and current IDs we get in a register is
later checked against the full list of acquired IDs.
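
For context, here is a minimal sketch of the kind of program I was
experimenting with (the map definition, section name and control flow
are illustrative, not the actual selftest):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_RINGBUF);
          __uint(max_entries, 4096);
  } rb SEC(".maps");

  SEC("tc")
  int two_ringbuf_refs(void *ctx)
  {
          void *p1, *p2;

          /* "old" state: p1 and p2 hold two distinct reservations, so
           * their registers carry two different ref_obj_ids, and both
           * ids are also recorded in bpf_func_state::refs.
           */
          p1 = bpf_ringbuf_reserve(&rb, 8, 0);
          p2 = bpf_ringbuf_reserve(&rb, 8, 0);
          if (!p1 || !p2)
                  goto out;

          /* A hypothetical "new" state where p1 and p2 alias the same
           * reservation would carry one acquired ref instead of two, so
           * refsafe() already tells the states apart before the
           * check_ids() multi-mapping case can matter.
           */
          bpf_ringbuf_submit(p1, 0);
          bpf_ringbuf_submit(p2, 0);
          return 0;
  out:
          /* release whichever reservation succeeded */
          if (p1)
                  bpf_ringbuf_discard(p1, 0);
          if (p2)
                  bpf_ringbuf_discard(p2, 0);
          return 0;
  }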

So far I haven't been able to create a counter-example that shows the
following snippet is necessary in check_ids():

+               if (map[i].cur == cur_id)
+                       return false;

But this saga has to end somehow :)
I'll just squash your patches on top of patch #3, since there are no
verification performance regressions.

> 
> > 
> > Author: Andrii Nakryiko <andrii@kernel.org>
> > Date:   Mon Jun 12 12:53:25 2023 -0700
> > 
> >     bpf: inline scalar_regs_exact
> > 
> >     Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > 
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 9d5fefd970a3..c5606613136d 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -15262,14 +15262,6 @@ static bool regs_exact(const struct bpf_reg_state *rold,
> >                check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
> >  }
> > 
> > -static bool scalar_regs_exact(const struct bpf_reg_state *rold,
> > -                             const struct bpf_reg_state *rcur,
> > -                             struct bpf_idmap *idmap)
> > -{
> > -       return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
> > -              check_scalar_ids(rold->id, rcur->id, idmap);
> > -}
> > -
> >  /* Returns true if (rold safe implies rcur safe) */
> >  static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
> >                     struct bpf_reg_state *rcur, struct bpf_idmap *idmap)
> > @@ -15309,8 +15301,13 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
> > 
> >         switch (base_type(rold->type)) {
> >         case SCALAR_VALUE:
> > -               if (env->explore_alu_limits)
> > -                       return scalar_regs_exact(rold, rcur, idmap);
> > +               if (env->explore_alu_limits) {
> > +                       /* explore_alu_limits disables tnum_in() and range_within()
> > +                        * logic and requires everything to be strict
> > +                        */
> > +                       return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
> > +                              check_scalar_ids(rold->id, rcur->id, idmap);
> > +               }
> >                 if (!rold->precise)
> >                         return true;
> >                 /* Why check_ids() for scalar registers?
> > 
> > commit 57297c01fe747e423cd3c924ef492c0688d8057a
> > Author: Andrii Nakryiko <andrii@kernel.org>
> > Date:   Mon Jun 12 11:46:46 2023 -0700
> > 
> >     bpf: make check_ids disallow multi-mapping of IDs universally
> > 
> >     Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > 
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 3da77713d1ca..9d5fefd970a3 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -15137,6 +15137,8 @@ static bool check_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
> >                 }
> >                 if (map[i].old == old_id)
> >                         return map[i].cur == cur_id;
> > +               if (map[i].cur == cur_id)
> > +                       return false;
> >         }
> >         /* We ran out of idmap slots, which should be impossible */
> >         WARN_ON_ONCE(1);
> > @@ -15144,33 +15146,15 @@ static bool check_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
> >  }
> > 
> >  /* Similar to check_ids(), but:
> > - * - disallow mapping of different 'old_id' values to same 'cur_id' value;
> >   * - for zero 'old_id' or 'cur_id' allocate a unique temporary ID
> >   *   to allow pairs like '0 vs unique ID', 'unique ID vs 0'.
> >   */
> >  static bool check_scalar_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
> >  {
> > -       struct bpf_id_pair *map = idmap->map;
> > -       unsigned int i;
> > -
> >         old_id = old_id ? old_id : ++idmap->tmp_id_gen;
> >         cur_id = cur_id ? cur_id : ++idmap->tmp_id_gen;
> > 
> > -       for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
> > -               if (!map[i].old) {
> > -                       /* Reached an empty slot; haven't seen this id before */
> > -                       map[i].old = old_id;
> > -                       map[i].cur = cur_id;
> > -                       return true;
> > -               }
> > -               if (map[i].old == old_id)
> > -                       return map[i].cur == cur_id;
> > -               if (map[i].cur == cur_id)
> > -                       return false;
> > -       }
> > -       /* We ran out of idmap slots, which should be impossible */
> > -       WARN_ON_ONCE(1);
> > -       return false;
> > +       return check_ids(old_id, cur_id, idmap);
> >  }
> > 
> >  static void clean_func_state(struct bpf_verifier_env *env,
> > 
> > 
> > 
> > >         }
> > >         /* We ran out of idmap slots, which should be impossible */
> > >         WARN_ON_ONCE(1);
> > > @@ -15246,16 +15279,24 @@ static void clean_live_states(struct bpf_verifier_env *env, int insn,
> > > 
> > >  static bool regs_exact(const struct bpf_reg_state *rold,
> > >                        const struct bpf_reg_state *rcur,
> > > -                      struct bpf_id_pair *idmap)
> > > +                      struct bpf_idmap *idmap)
> > >  {
> > >         return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
> > >                check_ids(rold->id, rcur->id, idmap) &&
> > >                check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
> > >  }
> > > 
> > 
> > [...]
> 


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH bpf-next v5 3/4] bpf: verify scalar ids mapping in regsafe() using check_ids()
  2023-06-13  0:10       ` Eduard Zingerman
@ 2023-06-13  0:28         ` Andrii Nakryiko
  0 siblings, 0 replies; 13+ messages in thread
From: Andrii Nakryiko @ 2023-06-13  0:28 UTC (permalink / raw)
  To: Eduard Zingerman; +Cc: bpf, ast, andrii, daniel, martin.lau, kernel-team, yhs

On Mon, Jun 12, 2023 at 5:10 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Tue, 2023-06-13 at 00:01 +0300, Eduard Zingerman wrote:
> > On Mon, 2023-06-12 at 12:56 -0700, Andrii Nakryiko wrote:
> > > On Mon, Jun 12, 2023 at 9:08 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > > >
> > > > Make sure that the following unsafe example is rejected by verifier:
> > > >
> > > > 1: r9 = ... some pointer with range X ...
> > > > 2: r6 = ... unbound scalar ID=a ...
> > > > 3: r7 = ... unbound scalar ID=b ...
> > > > 4: if (r6 > r7) goto +1
> > > > 5: r6 = r7
> > > > 6: if (r6 > X) goto ...
> > > > --- checkpoint ---
> > > > 7: r9 += r7
> > > > 8: *(u64 *)r9 = Y
> > > >
> > > > This example is unsafe because not all execution paths verify r7 range.
> > > > Because of the jump at (4) the verifier would arrive at (6) in two states:
> > > > I.  r6{.id=b}, r7{.id=b} via path 1-6;
> > > > II. r6{.id=a}, r7{.id=b} via path 1-4, 6.
> > > >
> > > > Currently regsafe() does not call check_ids() for scalar registers,
> > > > thus from POV of regsafe() states (I) and (II) are identical. If the
> > > > path 1-6 is taken by verifier first, and checkpoint is created at (6)
> > > > the path [1-4, 6] would be considered safe.
> > > >
> > > > Changes in this commit:
> > > > - Function check_scalar_ids() is added, it differs from check_ids() in
> > > >   the following aspects:
> > > >   - treats ID zero as a unique scalar ID;
> > > >   - disallows mapping same 'cur_id' to multiple 'old_id'.
> > > > - check_scalar_ids() needs to generate temporary unique IDs, field
> > > >   'tmp_id_gen' is added to bpf_verifier_env::idmap_scratch to
> > > >   facilitate this.
> > > > - Function scalar_regs_exact() is added, it differs from regs_exact()
> > > >   in the following aspects:
> > > >   - uses check_scalar_ids() for ID comparison;
> > > >   - does not check ref_obj_id as this field is not used for scalar
> > > >     values.
> > > > - regsafe() is updated to:
> > > >   - use check_scalar_ids() for precise scalar registers.
> > > >   - use scalar_regs_exact() for scalar registers, but only for
> > > >     explore_alu_limits branch. This simplifies control flow for scalar
> > > >     case, and has no measurable performance impact.
> > > > - check_alu_op() is updated to avoid generating bpf_reg_state::id for
> > > >   constant scalar values when processing BPF_MOV. ID is needed to
> > > >   propagate range information for identical values, but there is
> > > >   nothing to propagate for constants.
> > > >
> > > > Fixes: 75748837b7e5 ("bpf: Propagate scalar ranges through register assignments.")
> > > > Acked-by: Andrii Nakryiko <andrii@kernel.org>
> > > > Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> > > > ---
> > > >  include/linux/bpf_verifier.h |  17 ++++--
> > > >  kernel/bpf/verifier.c        | 108 ++++++++++++++++++++++++++++-------
> > > >  2 files changed, 97 insertions(+), 28 deletions(-)
> > > >
> > > > diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> > > > index 73a98f6240fd..042b76fe8e29 100644
> > > > --- a/include/linux/bpf_verifier.h
> > > > +++ b/include/linux/bpf_verifier.h
> > > > @@ -313,11 +313,6 @@ struct bpf_idx_pair {
> > > >         u32 idx;
> > > >  };
> > > >
> > > > -struct bpf_id_pair {
> > > > -       u32 old;
> > > > -       u32 cur;
> > > > -};
> > > > -
> > > >  #define MAX_CALL_FRAMES 8
> > > >  /* Maximum number of register states that can exist at once */
> > > >  #define BPF_ID_MAP_SIZE ((MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE) * MAX_CALL_FRAMES)
> > > > @@ -559,6 +554,16 @@ struct backtrack_state {
> > > >         u64 stack_masks[MAX_CALL_FRAMES];
> > > >  };
> > > >
> > > > +struct bpf_id_pair {
> > > > +       u32 old;
> > > > +       u32 cur;
> > > > +};
> > > > +
> > > > +struct bpf_idmap {
> > > > +       u32 tmp_id_gen;
> > > > +       struct bpf_id_pair map[BPF_ID_MAP_SIZE];
> > > > +};
> > > > +
> > > >  struct bpf_idset {
> > > >         u32 count;
> > > >         u32 ids[BPF_ID_MAP_SIZE];
> > > > @@ -596,7 +601,7 @@ struct bpf_verifier_env {
> > > >         struct bpf_verifier_log log;
> > > >         struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS + 1];
> > > >         union {
> > > > -               struct bpf_id_pair idmap_scratch[BPF_ID_MAP_SIZE];
> > > > +               struct bpf_idmap idmap_scratch;
> > > >                 struct bpf_idset idset_scratch;
> > > >         };
> > > >         struct {
> > > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > > index 9b5f2433194f..b15ebfed207a 100644
> > > > --- a/kernel/bpf/verifier.c
> > > > +++ b/kernel/bpf/verifier.c
> > > > @@ -12942,12 +12942,14 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
> > > >                 if (BPF_SRC(insn->code) == BPF_X) {
> > > >                         struct bpf_reg_state *src_reg = regs + insn->src_reg;
> > > >                         struct bpf_reg_state *dst_reg = regs + insn->dst_reg;
> > > > +                       bool need_id = src_reg->type == SCALAR_VALUE && !src_reg->id &&
> > > > +                                      !tnum_is_const(src_reg->var_off);
> > > >
> > > >                         if (BPF_CLASS(insn->code) == BPF_ALU64) {
> > > >                                 /* case: R1 = R2
> > > >                                  * copy register state to dest reg
> > > >                                  */
> > > > -                               if (src_reg->type == SCALAR_VALUE && !src_reg->id)
> > > > +                               if (need_id)
> > > >                                         /* Assign src and dst registers the same ID
> > > >                                          * that will be used by find_equal_scalars()
> > > >                                          * to propagate min/max range.
> > > > @@ -12966,7 +12968,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
> > > >                                 } else if (src_reg->type == SCALAR_VALUE) {
> > > >                                         bool is_src_reg_u32 = src_reg->umax_value <= U32_MAX;
> > > >
> > > > -                                       if (is_src_reg_u32 && !src_reg->id)
> > > > +                                       if (is_src_reg_u32 && need_id)
> > > >                                                 src_reg->id = ++env->id_gen;
> > > >                                         copy_register_state(dst_reg, src_reg);
> > > >                                         /* Make sure ID is cleared if src_reg is not in u32 range otherwise
> > > > @@ -15122,8 +15124,9 @@ static bool range_within(struct bpf_reg_state *old,
> > > >   * So we look through our idmap to see if this old id has been seen before.  If
> > > >   * so, we require the new id to match; otherwise, we add the id pair to the map.
> > > >   */
> > > > -static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
> > > > +static bool check_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
> > > >  {
> > > > +       struct bpf_id_pair *map = idmap->map;
> > > >         unsigned int i;
> > > >
> > > >         /* either both IDs should be set or both should be zero */
> > > > @@ -15134,14 +15137,44 @@ static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
> > > >                 return true;
> > > >
> > > >         for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
> > > > -               if (!idmap[i].old) {
> > > > +               if (!map[i].old) {
> > > >                         /* Reached an empty slot; haven't seen this id before */
> > > > -                       idmap[i].old = old_id;
> > > > -                       idmap[i].cur = cur_id;
> > > > +                       map[i].old = old_id;
> > > > +                       map[i].cur = cur_id;
> > > >                         return true;
> > > >                 }
> > > > -               if (idmap[i].old == old_id)
> > > > -                       return idmap[i].cur == cur_id;
> > > > +               if (map[i].old == old_id)
> > > > +                       return map[i].cur == cur_id;
> > > > +       }
> > > > +       /* We ran out of idmap slots, which should be impossible */
> > > > +       WARN_ON_ONCE(1);
> > > > +       return false;
> > > > +}
> > > > +
> > > > +/* Similar to check_ids(), but:
> > > > + * - disallow mapping of different 'old_id' values to same 'cur_id' value;
> > > > + * - for zero 'old_id' or 'cur_id' allocate a unique temporary ID
> > > > + *   to allow pairs like '0 vs unique ID', 'unique ID vs 0'.
> > > > + */
> > > > +static bool check_scalar_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
> > > > +{
> > > > +       struct bpf_id_pair *map = idmap->map;
> > > > +       unsigned int i;
> > > > +
> > > > +       old_id = old_id ? old_id : ++idmap->tmp_id_gen;
> > > > +       cur_id = cur_id ? cur_id : ++idmap->tmp_id_gen;
> > > > +
> > > > +       for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
> > > > +               if (!map[i].old) {
> > > > +                       /* Reached an empty slot; haven't seen this id before */
> > > > +                       map[i].old = old_id;
> > > > +                       map[i].cur = cur_id;
> > > > +                       return true;
> > > > +               }
> > > > +               if (map[i].old == old_id)
> > > > +                       return map[i].cur == cur_id;
> > > > +               if (map[i].cur == cur_id)
> > > > +                       return false;
> > >
> > > We were discussing w/ Alexei (offline) making these changes as
> > > backportable and minimal as possible, so I looked again how to
> > > minimize all the extra code added.
> > >
> > > I still insist that the current check_ids() logic is invalid in that
> > > it allows the same cur_id to map to two different old_ids, especially
> > > for non-SCALAR registers. Consider a register that is auto-invalidated,
> > > e.g. a bpf_iter element (each gets an ID), or an id'ed dynptr_ringbuf.
> > >
> > > Think about the following situation:
> > >
> > > In the old state, we could have r1.id = 1; r2.id = 2; Two registers
> > > keep two separate pointers to ringbuf.
> > >
> > > In the new state, we have r1.id = r2.id = 3; That is, two registers
> > > keep the *same* pointer to ringbuf elem.
> > >
> > > Now imagine that a BPF program has bpf_ringbuf_submit(r1) and
> > > invalidates this register. With the old state it will invalidate r1
> > > and will keep r2 valid. So it's safe for the BPF program to keep
> > > working with r2 as valid ringbuf (and eventually submit it as well).
> > >
> > > Now this is entirely unsafe for the new state. Once
> > > bpf_ringbuf_submit(r1) happens, r2 shouldn't be accessed anymore. But
> > > yet it will be declared safe with current check_ids() logic.
> > >
> > > Ergo, check_ids() should make sure we do not have multi-mapping for
> > > any of the IDs. Even if in some corner case that might be ok.
> > >
> > > I actually tested this change with veristat; there are no regressions
> > > at all. I think it's safe from a perf standpoint, and both necessary
> > > and safe from a correctness standpoint.
> > >
> > > So all in all (I did inline scalar_regs_exact in a separate patch, up
> > > to you), I have these changes on top, and they all look good from a
> > > veristat perspective:
> >
> > Ok, ringbuf is a good example.
> > To minimize the patch-set I'll do the following:
> > - move your check_ids patch to the beginning of the series
> > - add selftest for ringbuf
> > - squash the scalar_regs_exact patch with current patch #3
> >
> > And resubmit with EFAULT fix in patch #1.
> > Thank you for looking into this.
>
> Welp, it turns out it is not possible to create a failing example with
> ringbuf. This is because ringbuf objects are tracked in
> bpf_func_state::refs (of type struct bpf_reference_state).
>
> The ID of each acquired object is stored in this array and compared in
> func_states_equal() using the refsafe() function:
>
>   static bool refsafe(struct bpf_func_state *old, struct bpf_func_state *cur,
>                     struct bpf_idmap *idmap)
>   {
>         int i;
>
>         if (old->acquired_refs != cur->acquired_refs)
>                 return false;
>
>         for (i = 0; i < old->acquired_refs; i++) {
>                 if (!check_ids(old->refs[i].id, cur->refs[i].id, idmap))
>                         return false;
>         }
>
>         return true;
>   }
>
> Whatever combination of old and current IDs we get in a register is
> later checked against the full list of acquired IDs.
>
> So far I haven't been able to create a counter-example that shows the
> following snippet is necessary in check_ids():
>
> +               if (map[i].cur == cur_id)
> +                       return false;
>
> But this saga has to end somehow :)

ha-ha, yep. I think we should add this check and not rely on some
side-effects of other checks for correctness.
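
To make the intended semantics concrete, here is a tiny user-space model
of the idmap logic (the struct layout mirrors the definitions from this
patch; everything else is just a sketch, not the kernel code itself):

  #include <stdbool.h>
  #include <stdio.h>

  #define BPF_ID_MAP_SIZE 64

  struct bpf_id_pair { unsigned int old, cur; };

  struct bpf_idmap {
          unsigned int tmp_id_gen;
          struct bpf_id_pair map[BPF_ID_MAP_SIZE];
  };

  static bool check_ids(unsigned int old_id, unsigned int cur_id,
                        struct bpf_idmap *idmap)
  {
          struct bpf_id_pair *map = idmap->map;
          unsigned int i;

          /* either both IDs should be set or both should be zero */
          if (!!old_id != !!cur_id)
                  return false;
          if (old_id == 0) /* both are zero */
                  return true;

          for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
                  if (!map[i].old) {
                          /* empty slot: record the new old->cur pair */
                          map[i].old = old_id;
                          map[i].cur = cur_id;
                          return true;
                  }
                  if (map[i].old == old_id)
                          return map[i].cur == cur_id;
                  if (map[i].cur == cur_id) /* the multi-mapping guard */
                          return false;
          }
          return false;
  }

  int main(void)
  {
          struct bpf_idmap idmap = {};

          /* old r1.id=1 -> cur r1.id=3 is recorded ... */
          printf("%d\n", check_ids(1, 3, &idmap)); /* prints 1 */
          /* ... so old r2.id=2 must not also map to cur id 3 */
          printf("%d\n", check_ids(2, 3, &idmap)); /* prints 0 */
          return 0;
  }

Without the 'map[i].cur == cur_id' guard the second call would also
succeed, which is exactly the old->cur aliasing we want to reject.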

> I'll just squash your patches on top of patch #3, since there are no
> verification performance regressions.

Thanks!

>
> >
> > >
> > > Author: Andrii Nakryiko <andrii@kernel.org>
> > > Date:   Mon Jun 12 12:53:25 2023 -0700
> > >
> > >     bpf: inline scalar_regs_exact
> > >
> > >     Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > >
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index 9d5fefd970a3..c5606613136d 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -15262,14 +15262,6 @@ static bool regs_exact(const struct bpf_reg_state *rold,
> > >                check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
> > >  }
> > >
> > > -static bool scalar_regs_exact(const struct bpf_reg_state *rold,
> > > -                             const struct bpf_reg_state *rcur,
> > > -                             struct bpf_idmap *idmap)
> > > -{
> > > -       return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
> > > -              check_scalar_ids(rold->id, rcur->id, idmap);
> > > -}
> > > -
> > >  /* Returns true if (rold safe implies rcur safe) */
> > >  static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
> > >                     struct bpf_reg_state *rcur, struct bpf_idmap *idmap)
> > > @@ -15309,8 +15301,13 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
> > >
> > >         switch (base_type(rold->type)) {
> > >         case SCALAR_VALUE:
> > > -               if (env->explore_alu_limits)
> > > -                       return scalar_regs_exact(rold, rcur, idmap);
> > > +               if (env->explore_alu_limits) {
> > > +                       /* explore_alu_limits disables tnum_in() and range_within()
> > > +                        * logic and requires everything to be strict
> > > +                        */
> > > +                       return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
> > > +                              check_scalar_ids(rold->id, rcur->id, idmap);
> > > +               }
> > >                 if (!rold->precise)
> > >                         return true;
> > >                 /* Why check_ids() for scalar registers?
> > >
> > > commit 57297c01fe747e423cd3c924ef492c0688d8057a
> > > Author: Andrii Nakryiko <andrii@kernel.org>
> > > Date:   Mon Jun 12 11:46:46 2023 -0700
> > >
> > >     bpf: make check_ids disallow multi-mapping of IDs universally
> > >
> > >     Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > >
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index 3da77713d1ca..9d5fefd970a3 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -15137,6 +15137,8 @@ static bool check_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
> > >                 }
> > >                 if (map[i].old == old_id)
> > >                         return map[i].cur == cur_id;
> > > +               if (map[i].cur == cur_id)
> > > +                       return false;
> > >         }
> > >         /* We ran out of idmap slots, which should be impossible */
> > >         WARN_ON_ONCE(1);
> > > @@ -15144,33 +15146,15 @@ static bool check_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
> > >  }
> > >
> > >  /* Similar to check_ids(), but:
> > > - * - disallow mapping of different 'old_id' values to same 'cur_id' value;
> > >   * - for zero 'old_id' or 'cur_id' allocate a unique temporary ID
> > >   *   to allow pairs like '0 vs unique ID', 'unique ID vs 0'.
> > >   */
> > >  static bool check_scalar_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
> > >  {
> > > -       struct bpf_id_pair *map = idmap->map;
> > > -       unsigned int i;
> > > -
> > >         old_id = old_id ? old_id : ++idmap->tmp_id_gen;
> > >         cur_id = cur_id ? cur_id : ++idmap->tmp_id_gen;
> > >
> > > -       for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
> > > -               if (!map[i].old) {
> > > -                       /* Reached an empty slot; haven't seen this id before */
> > > -                       map[i].old = old_id;
> > > -                       map[i].cur = cur_id;
> > > -                       return true;
> > > -               }
> > > -               if (map[i].old == old_id)
> > > -                       return map[i].cur == cur_id;
> > > -               if (map[i].cur == cur_id)
> > > -                       return false;
> > > -       }
> > > -       /* We ran out of idmap slots, which should be impossible */
> > > -       WARN_ON_ONCE(1);
> > > -       return false;
> > > +       return check_ids(old_id, cur_id, idmap);
> > >  }
> > >
> > >  static void clean_func_state(struct bpf_verifier_env *env,
> > >
> > >
> > >
> > > >         }
> > > >         /* We ran out of idmap slots, which should be impossible */
> > > >         WARN_ON_ONCE(1);
> > > > @@ -15246,16 +15279,24 @@ static void clean_live_states(struct bpf_verifier_env *env, int insn,
> > > >
> > > >  static bool regs_exact(const struct bpf_reg_state *rold,
> > > >                        const struct bpf_reg_state *rcur,
> > > > -                      struct bpf_id_pair *idmap)
> > > > +                      struct bpf_idmap *idmap)
> > > >  {
> > > >         return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
> > > >                check_ids(rold->id, rcur->id, idmap) &&
> > > >                check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
> > > >  }
> > > >
> > >
> > > [...]
> >
>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH bpf-next v5 3/4] bpf: verify scalar ids mapping in regsafe() using check_ids()
  2023-06-12 16:08 ` [PATCH bpf-next v5 3/4] bpf: verify scalar ids mapping in regsafe() using check_ids() Eduard Zingerman
  2023-06-12 19:56   ` Andrii Nakryiko
@ 2023-06-13  8:02   ` kernel test robot
  1 sibling, 0 replies; 13+ messages in thread
From: kernel test robot @ 2023-06-13  8:02 UTC (permalink / raw)
  To: Eduard Zingerman, bpf, ast
  Cc: llvm, oe-kbuild-all, andrii, daniel, martin.lau, kernel-team,
	yhs, Eduard Zingerman

Hi Eduard,

kernel test robot noticed the following build warnings:

[auto build test WARNING on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Eduard-Zingerman/bpf-use-scalar-ids-in-mark_chain_precision/20230613-001651
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20230612160801.2804666-4-eddyz87%40gmail.com
patch subject: [PATCH bpf-next v5 3/4] bpf: verify scalar ids mapping in regsafe() using check_ids()
config: x86_64-rhel-8.3-rust (https://download.01.org/0day-ci/archive/20230613/202306131550.U3M9AJGm-lkp@intel.com/config)
compiler: clang version 15.0.7 (https://github.com/llvm/llvm-project.git 8dfdcc7b7bf66834a761bd8de445840ef68e4d1a)
reproduce (this is a W=1 build):
        mkdir -p ~/bin
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        git remote add bpf-next https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git
        git fetch bpf-next master
        git checkout bpf-next/master
        b4 shazam https://lore.kernel.org/r/20230612160801.2804666-4-eddyz87@gmail.com
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=x86_64 olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash kernel/bpf/

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202306131550.U3M9AJGm-lkp@intel.com/

All warnings (new ones prefixed by >>):

   In file included from kernel/bpf/verifier.c:7:
   In file included from include/linux/bpf-cgroup.h:5:
   In file included from include/linux/bpf.h:10:
   In file included from include/linux/workqueue.h:9:
   In file included from include/linux/timer.h:6:
   In file included from include/linux/ktime.h:24:
   In file included from include/linux/time.h:60:
   In file included from include/linux/time32.h:13:
   In file included from include/linux/timex.h:67:
   In file included from arch/x86/include/asm/timex.h:5:
   In file included from arch/x86/include/asm/processor.h:23:
   In file included from arch/x86/include/asm/msr.h:11:
   In file included from arch/x86/include/asm/cpumask.h:5:
   In file included from include/linux/cpumask.h:12:
   In file included from include/linux/bitmap.h:11:
   In file included from include/linux/string.h:254:
>> include/linux/fortify-string.h:430:4: warning: call to __write_overflow_field declared with 'warning' attribute: detected write beyond size of field (1st parameter); maybe use struct_group()? [-Wattribute-warning]
                           __write_overflow_field(p_size_field, size);
                           ^
   1 warning generated.


vim +/warning +430 include/linux/fortify-string.h

a28a6e860c6cf2 Francis Laniel 2021-02-25  411  
28e77cc1c06866 Kees Cook      2021-06-16  412  __FORTIFY_INLINE void fortify_memset_chk(__kernel_size_t size,
28e77cc1c06866 Kees Cook      2021-06-16  413  					 const size_t p_size,
28e77cc1c06866 Kees Cook      2021-06-16  414  					 const size_t p_size_field)
a28a6e860c6cf2 Francis Laniel 2021-02-25  415  {
28e77cc1c06866 Kees Cook      2021-06-16  416  	if (__builtin_constant_p(size)) {
28e77cc1c06866 Kees Cook      2021-06-16  417  		/*
28e77cc1c06866 Kees Cook      2021-06-16  418  		 * Length argument is a constant expression, so we
28e77cc1c06866 Kees Cook      2021-06-16  419  		 * can perform compile-time bounds checking where
fa35198f39571b Kees Cook      2022-09-19  420  		 * buffer sizes are also known at compile time.
28e77cc1c06866 Kees Cook      2021-06-16  421  		 */
a28a6e860c6cf2 Francis Laniel 2021-02-25  422  
28e77cc1c06866 Kees Cook      2021-06-16  423  		/* Error when size is larger than enclosing struct. */
fa35198f39571b Kees Cook      2022-09-19  424  		if (__compiletime_lessthan(p_size_field, p_size) &&
fa35198f39571b Kees Cook      2022-09-19  425  		    __compiletime_lessthan(p_size, size))
a28a6e860c6cf2 Francis Laniel 2021-02-25  426  			__write_overflow();
28e77cc1c06866 Kees Cook      2021-06-16  427  
28e77cc1c06866 Kees Cook      2021-06-16  428  		/* Warn when write size is larger than dest field. */
fa35198f39571b Kees Cook      2022-09-19  429  		if (__compiletime_lessthan(p_size_field, size))
28e77cc1c06866 Kees Cook      2021-06-16 @430  			__write_overflow_field(p_size_field, size);
a28a6e860c6cf2 Francis Laniel 2021-02-25  431  	}
28e77cc1c06866 Kees Cook      2021-06-16  432  	/*
28e77cc1c06866 Kees Cook      2021-06-16  433  	 * At this point, length argument may not be a constant expression,
28e77cc1c06866 Kees Cook      2021-06-16  434  	 * so run-time bounds checking can be done where buffer sizes are
28e77cc1c06866 Kees Cook      2021-06-16  435  	 * known. (This is not an "else" because the above checks may only
28e77cc1c06866 Kees Cook      2021-06-16  436  	 * be compile-time warnings, and we want to still warn for run-time
28e77cc1c06866 Kees Cook      2021-06-16  437  	 * overflows.)
28e77cc1c06866 Kees Cook      2021-06-16  438  	 */
28e77cc1c06866 Kees Cook      2021-06-16  439  
28e77cc1c06866 Kees Cook      2021-06-16  440  	/*
28e77cc1c06866 Kees Cook      2021-06-16  441  	 * Always stop accesses beyond the struct that contains the
28e77cc1c06866 Kees Cook      2021-06-16  442  	 * field, when the buffer's remaining size is known.
311fb40aa0569a Kees Cook      2022-09-02  443  	 * (The SIZE_MAX test is to optimize away checks where the buffer
28e77cc1c06866 Kees Cook      2021-06-16  444  	 * lengths are unknown.)
28e77cc1c06866 Kees Cook      2021-06-16  445  	 */
311fb40aa0569a Kees Cook      2022-09-02  446  	if (p_size != SIZE_MAX && p_size < size)
28e77cc1c06866 Kees Cook      2021-06-16  447  		fortify_panic("memset");
28e77cc1c06866 Kees Cook      2021-06-16  448  }
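
For readers less familiar with CONFIG_FORTIFY_SOURCE: the warning above
fires whenever a memset() with a compile-time-constant size is known to
write past the destination field. A minimal kernel-style sketch of the
flagged pattern (purely illustrative, not the verifier code):

  #include <linux/string.h>

  struct s {
          int a;
          int b[4];
          int c;
  };

  static void clear_b_and_c(struct s *p)
  {
          /* The constant size exceeds sizeof(p->b), so
           * fortify_memset_chk() reaches __write_overflow_field();
           * wrapping 'b' and 'c' in struct_group() is the usual way to
           * tell fortify the cross-field write is intentional.
           */
          memset(p->b, 0, sizeof(p->b) + sizeof(p->c));
  }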
28e77cc1c06866 Kees Cook      2021-06-16  449  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2023-06-13  8:03 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-06-12 16:07 [PATCH bpf-next v5 0/4] verify scalar ids mapping in regsafe() Eduard Zingerman
2023-06-12 16:07 ` [PATCH bpf-next v5 1/4] bpf: use scalar ids in mark_chain_precision() Eduard Zingerman
2023-06-12 19:56   ` Andrii Nakryiko
2023-06-12 16:07 ` [PATCH bpf-next v5 2/4] selftests/bpf: check if mark_chain_precision() follows scalar ids Eduard Zingerman
2023-06-12 16:08 ` [PATCH bpf-next v5 3/4] bpf: verify scalar ids mapping in regsafe() using check_ids() Eduard Zingerman
2023-06-12 19:56   ` Andrii Nakryiko
2023-06-12 21:01     ` Eduard Zingerman
2023-06-13  0:10       ` Eduard Zingerman
2023-06-13  0:28         ` Andrii Nakryiko
2023-06-13  8:02   ` kernel test robot
2023-06-12 16:08 ` [PATCH bpf-next v5 4/4] selftests/bpf: verify that check_ids() is used for scalars in regsafe() Eduard Zingerman
2023-06-12 17:40   ` Maxim Mikityanskiy
2023-06-12 19:42     ` Eduard Zingerman
