bpf.vger.kernel.org archive mirror
* [PATCH bpf-next v1 0/8] Dynptr fixes
@ 2023-01-01  8:33 Kumar Kartikeya Dwivedi
  2023-01-01  8:33 ` [PATCH bpf-next v1 1/8] bpf: Fix state pruning for STACK_DYNPTR stack slots Kumar Kartikeya Dwivedi
                   ` (8 more replies)
  0 siblings, 9 replies; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-01  8:33 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

Happy New Year!

This is part 2 of https://lore.kernel.org/bpf/20221018135920.726360-1-memxor@gmail.com.

Changelog:
----------
Old v1 -> v1
Old v1: https://lore.kernel.org/bpf/20221018135920.726360-1-memxor@gmail.com

 * Allow overwriting dynptr stack slots from dynptr init helpers
 * Fix a bug in alignment check where reg->var_off.value was still not included
 * Address other minor nits

Kumar Kartikeya Dwivedi (8):
  bpf: Fix state pruning for STACK_DYNPTR stack slots
  bpf: Fix missing var_off check for ARG_PTR_TO_DYNPTR
  bpf: Fix partial dynptr stack slot reads/writes
  bpf: Allow reinitializing unreferenced dynptr stack slots
  selftests/bpf: Add dynptr pruning tests
  selftests/bpf: Add dynptr var_off tests
  selftests/bpf: Add dynptr partial slot overwrite tests
  selftests/bpf: Add dynptr helper tests

 kernel/bpf/verifier.c                         | 243 ++++++++++++++++--
 .../bpf/prog_tests/kfunc_dynptr_param.c       |   2 +-
 .../testing/selftests/bpf/progs/dynptr_fail.c |  68 ++++-
 tools/testing/selftests/bpf/verifier/dynptr.c | 182 +++++++++++++
 4 files changed, 464 insertions(+), 31 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/verifier/dynptr.c


base-commit: bb5747cfbc4b7fe29621ca6cd4a695d2723bf2e8
-- 
2.39.0



* [PATCH bpf-next v1 1/8] bpf: Fix state pruning for STACK_DYNPTR stack slots
  2023-01-01  8:33 [PATCH bpf-next v1 0/8] Dynptr fixes Kumar Kartikeya Dwivedi
@ 2023-01-01  8:33 ` Kumar Kartikeya Dwivedi
  2023-01-02 19:28   ` Eduard Zingerman
                     ` (2 more replies)
  2023-01-01  8:33 ` [PATCH bpf-next v1 2/8] bpf: Fix missing var_off check for ARG_PTR_TO_DYNPTR Kumar Kartikeya Dwivedi
                   ` (7 subsequent siblings)
  8 siblings, 3 replies; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-01  8:33 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

The root of the problem is missing liveness marking for STACK_DYNPTR
slots. This leads to all kinds of problems inside stacksafe.

The verifier by default inside stacksafe ignores spilled_ptr in stack
slots which do not have REG_LIVE_READ marks. Since this check happens
against the 'old' explored state, clean_live_states must have already
been done for this old bpf_func_state. Hence, it won't be receiving any
more liveness marks from insns yet to be explored (from the liveness
point of view it has received the REG_LIVE_DONE marking).

What this means is that the verifier considers it safe not to compare
a stack slot if it was never read by children states. While liveness
marks are usually propagated correctly following the parentage chain
for spilled registers (SCALAR_VALUE and PTR_* types), the same is not
the case for STACK_DYNPTR.

clean_live_states hence simply rewrites these stack slots to the type
STACK_INVALID since it sees no REG_LIVE_READ marks.

The end result is that we will never see STACK_DYNPTR slots in the
explored state. Even if the verifier were conservatively matching
!REG_LIVE_READ slots, the very next check, which continues the
stacksafe loop on seeing STACK_INVALID, would again prevent further
checks.

Now, as long as the verifier stores an explored state which we can
compare against when reaching a pruning point, we can abuse this bug to
make the verifier prune the search for obviously unsafe paths that use
STACK_DYNPTR slots, thinking they are never used and hence safe.

Doing this in unprivileged mode is a bit challenging. add_new_state is
only set when seeing BPF_F_TEST_STATE_FREQ (which requires privileges)
or when jmps_processed difference is >= 2 and insn_processed difference
is >= 8. So coming up with the unprivileged case requires a little more
work, but it is still totally possible. The test case being discussed
below triggers the heuristic even in unprivileged mode.

However, it no longer works since commit
8addbfc7b308 ("bpf: Gate dynptr API behind CAP_BPF").

Let's try to study the test step by step.

Consider the following program (C style BPF ASM):

0  r0 = 0;
1  r6 = &ringbuf_map;
3  r1 = r6;
4  r2 = 8;
5  r3 = 0;
6  r4 = r10;
7  r4 -= 16;
8  call bpf_ringbuf_reserve_dynptr;
9  if r0 == 0 goto pc+1;
10 goto pc+1;
11 *(r10 - 16) = 0xeB9F;
12 r1 = r10;
13 r1 -= 16;
14 r2 = 0;
15 call bpf_ringbuf_discard_dynptr;
16 r0 = 0;
17 exit;

We know that insn 12 will be a pruning point, hence if we force
add_new_state for it, it will first verify the following path as
safe in straight line exploration:
0 1 3 4 5 6 7 8 9 -> 10 -> (12) 13 14 15 16 17

Then, when we arrive at insn 12 from the following path:
0 1 3 4 5 6 7 8 9 -> 11 (12)

We will find a state that has already been verified as safe at insn 12.
Since the register state is the same at this point, regsafe will pass.
Next, in stacksafe, spi = 0 and spi = 1 (the location of our dynptr)
are skipped due to !REG_LIVE_READ. The rest matches, so stacksafe
returns true. Next, refsafe is also true, as the reference state is
unchanged in both states.

The states are considered equivalent and search is pruned.

Hence, we are able to construct a dynptr with arbitrary contents and use
the dynptr API to operate on this arbitrary pointer and arbitrary size +
offset.

To fix this, first define a mark_dynptr_read function that propagates
liveness marks whenever a valid initialized dynptr is accessed by dynptr
helpers. REG_LIVE_WRITTEN is marked whenever we initialize an
uninitialized dynptr. This is done in mark_stack_slots_dynptr. It allows
screening off mark_reg_read and not propagating marks upwards from that
point.

This ensures that we either set REG_LIVE_READ64 on both dynptr slots, or
none, so clean_live_states either sets both slots to STACK_INVALID or
none of them. This is the invariant the checks inside stacksafe rely on.

Next, do a complete comparison of both stack slots whenever they have
STACK_DYNPTR. Compare the dynptr type stored in the spilled_ptr, and
also whether both form the same first_slot. Only then is the later path
safe.

Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 kernel/bpf/verifier.c | 73 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 4a25375ebb0d..f7248235e119 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -781,6 +781,9 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
 		state->stack[spi - 1].spilled_ptr.ref_obj_id = id;
 	}
 
+	state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
+	state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
+
 	return 0;
 }
 
@@ -805,6 +808,26 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
 
 	__mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
 	__mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
+
+	/* Why do we need to set REG_LIVE_WRITTEN for STACK_INVALID slot?
+	 *
+	 * While we don't allow reading STACK_INVALID, it is still possible to
+	 * do <8 byte writes marking some but not all slots as STACK_MISC. Then,
+	 * helpers or insns can do partial read of that part without failing,
+	 * but check_stack_range_initialized, check_stack_read_var_off, and
+	 * check_stack_read_fixed_off will do mark_reg_read for all 8-bytes of
+	 * the slot conservatively. Hence we need to screen off those liveness
+	 * marking walks.
+	 *
+	 * This was not a problem before because STACK_INVALID is only set by
+	 * default, or in clean_live_states after REG_LIVE_DONE, not randomly
+	 * during verifier state exploration. Hence, for this case parentage
+	 * chain will still be live, while earlier reg->parent was NULL, so we
+	 * need REG_LIVE_WRITTEN to screen off read marker propagation.
+	 */
+	state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
+	state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
+
 	return 0;
 }
 
@@ -2388,6 +2411,30 @@ static int mark_reg_read(struct bpf_verifier_env *env,
 	return 0;
 }
 
+static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+	struct bpf_func_state *state = func(env, reg);
+	int spi, ret;
+
+	/* For CONST_PTR_TO_DYNPTR, it must have already been done by
+	 * check_reg_arg in check_helper_call and mark_btf_func_reg_size in
+	 * check_kfunc_call.
+	 */
+	if (reg->type == CONST_PTR_TO_DYNPTR)
+		return 0;
+	spi = get_spi(reg->off);
+	/* Caller ensures dynptr is valid and initialized, which means spi is in
+	 * bounds and spi is the first dynptr slot. Simply mark stack slot as
+	 * read.
+	 */
+	ret = mark_reg_read(env, &state->stack[spi].spilled_ptr,
+			    state->stack[spi].spilled_ptr.parent, REG_LIVE_READ64);
+	if (ret)
+		return ret;
+	return mark_reg_read(env, &state->stack[spi - 1].spilled_ptr,
+			     state->stack[spi - 1].spilled_ptr.parent, REG_LIVE_READ64);
+}
+
 /* This function is supposed to be used by the following 32-bit optimization
  * code only. It returns TRUE if the source or destination register operates
  * on 64-bit, otherwise return FALSE.
@@ -5928,6 +5975,7 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
 			enum bpf_arg_type arg_type, struct bpf_call_arg_meta *meta)
 {
 	struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
+	int err;
 
 	/* MEM_UNINIT and MEM_RDONLY are exclusive, when applied to an
 	 * ARG_PTR_TO_DYNPTR (or ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_*):
@@ -6008,6 +6056,10 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
 				err_extra, regno);
 			return -EINVAL;
 		}
+
+		err = mark_dynptr_read(env, reg);
+		if (err)
+			return err;
 	}
 	return 0;
 }
@@ -13204,6 +13256,27 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
 			 * return false to continue verification of this path
 			 */
 			return false;
+		/* Both are the same slot_type, but STACK_DYNPTR requires more
+		 * checks before it can be considered safe.
+		 */
+		if (old->stack[spi].slot_type[i % BPF_REG_SIZE] == STACK_DYNPTR) {
+			/* If both are STACK_DYNPTR, type must be same */
+			if (old->stack[spi].spilled_ptr.dynptr.type != cur->stack[spi].spilled_ptr.dynptr.type)
+				return false;
+			/* Both should also have first slot at same spi */
+			if (old->stack[spi].spilled_ptr.dynptr.first_slot != cur->stack[spi].spilled_ptr.dynptr.first_slot)
+				return false;
+			/* ids should be same */
+			if (!!old->stack[spi].spilled_ptr.ref_obj_id != !!cur->stack[spi].spilled_ptr.ref_obj_id)
+				return false;
+			if (old->stack[spi].spilled_ptr.ref_obj_id &&
+			    !check_ids(old->stack[spi].spilled_ptr.ref_obj_id,
+				       cur->stack[spi].spilled_ptr.ref_obj_id, idmap))
+				return false;
+			WARN_ON_ONCE(i % BPF_REG_SIZE);
+			i += BPF_REG_SIZE - 1;
+			continue;
+		}
 		if (i % BPF_REG_SIZE != BPF_REG_SIZE - 1)
 			continue;
 		if (!is_spilled_reg(&old->stack[spi]))
-- 
2.39.0



* [PATCH bpf-next v1 2/8] bpf: Fix missing var_off check for ARG_PTR_TO_DYNPTR
  2023-01-01  8:33 [PATCH bpf-next v1 0/8] Dynptr fixes Kumar Kartikeya Dwivedi
  2023-01-01  8:33 ` [PATCH bpf-next v1 1/8] bpf: Fix state pruning for STACK_DYNPTR stack slots Kumar Kartikeya Dwivedi
@ 2023-01-01  8:33 ` Kumar Kartikeya Dwivedi
  2023-01-04 22:32   ` Andrii Nakryiko
  2023-01-06  0:57   ` Joanne Koong
  2023-01-01  8:33 ` [PATCH bpf-next v1 3/8] bpf: Fix partial dynptr stack slot reads/writes Kumar Kartikeya Dwivedi
                   ` (6 subsequent siblings)
  8 siblings, 2 replies; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-01  8:33 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

Currently, the dynptr function is not checking the variable offset part
of PTR_TO_STACK that it needs to check. The fixed offset is considered
when computing the stack pointer index, but if the variable offset was
not a constant (such that it could not be accumulated in reg->off), we
will end up with a discrepancy where the runtime pointer does not point
to the actual stack slot we mark as STACK_DYNPTR.

It is impossible to precisely track dynptr state when variable offset is
not constant, hence, just like bpf_timer, kptr, bpf_spin_lock, etc.
simply reject the case where reg->var_off is not constant. Then,
consider both reg->off and reg->var_off.value when computing the stack
pointer index.

A new helper dynptr_get_spi is introduced to hide these details, since
the dynptr needs to be located from multiple places outside the
process_dynptr_func checks; once we know it's a PTR_TO_STACK, we need
to enforce these checks in all of those places.

Note that it is disallowed for unprivileged users to have a non-constant
var_off, so this problem should only be possible to trigger from
programs having CAP_PERFMON. However, its effects can vary.

Without the fix, it is possible to replace the contents of the dynptr
arbitrarily by making the verifier mark different stack slots than the
actual location, and then writing to the actual stack address of the
dynptr at runtime.

Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 kernel/bpf/verifier.c                         | 83 ++++++++++++++-----
 .../bpf/prog_tests/kfunc_dynptr_param.c       |  2 +-
 .../testing/selftests/bpf/progs/dynptr_fail.c |  6 +-
 3 files changed, 66 insertions(+), 25 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index f7248235e119..ca970f80e395 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -638,11 +638,34 @@ static void print_liveness(struct bpf_verifier_env *env,
 		verbose(env, "D");
 }
 
-static int get_spi(s32 off)
+static int __get_spi(s32 off)
 {
 	return (-off - 1) / BPF_REG_SIZE;
 }
 
+static int dynptr_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+	int off, spi;
+
+	if (!tnum_is_const(reg->var_off)) {
+		verbose(env, "dynptr has to be at a constant offset\n");
+		return -EINVAL;
+	}
+
+	off = reg->off + reg->var_off.value;
+	if (off % BPF_REG_SIZE) {
+		verbose(env, "cannot pass in dynptr at an offset=%d\n", off);
+		return -EINVAL;
+	}
+
+	spi = __get_spi(off);
+	if (spi < 1) {
+		verbose(env, "cannot pass in dynptr at an offset=%d\n", off);
+		return -EINVAL;
+	}
+	return spi;
+}
+
 static bool is_spi_bounds_valid(struct bpf_func_state *state, int spi, int nr_slots)
 {
 	int allocated_slots = state->allocated_stack / BPF_REG_SIZE;
@@ -754,7 +777,9 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
 	enum bpf_dynptr_type type;
 	int spi, i, id;
 
-	spi = get_spi(reg->off);
+	spi = dynptr_get_spi(env, reg);
+	if (spi < 0)
+		return spi;
 
 	if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
 		return -EINVAL;
@@ -792,7 +817,9 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
 	struct bpf_func_state *state = func(env, reg);
 	int spi, i;
 
-	spi = get_spi(reg->off);
+	spi = dynptr_get_spi(env, reg);
+	if (spi < 0)
+		return spi;
 
 	if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
 		return -EINVAL;
@@ -839,7 +866,11 @@ static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_
 	if (reg->type == CONST_PTR_TO_DYNPTR)
 		return false;
 
-	spi = get_spi(reg->off);
+	spi = dynptr_get_spi(env, reg);
+	if (spi < 0)
+		return false;
+
+	/* We will do check_mem_access to check and update stack bounds later */
 	if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
 		return true;
 
@@ -855,14 +886,15 @@ static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_
 static bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
 {
 	struct bpf_func_state *state = func(env, reg);
-	int spi;
-	int i;
+	int spi, i;
 
 	/* This already represents first slot of initialized bpf_dynptr */
 	if (reg->type == CONST_PTR_TO_DYNPTR)
 		return true;
 
-	spi = get_spi(reg->off);
+	spi = dynptr_get_spi(env, reg);
+	if (spi < 0)
+		return false;
 	if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS) ||
 	    !state->stack[spi].spilled_ptr.dynptr.first_slot)
 		return false;
@@ -891,7 +923,9 @@ static bool is_dynptr_type_expected(struct bpf_verifier_env *env, struct bpf_reg
 	if (reg->type == CONST_PTR_TO_DYNPTR) {
 		return reg->dynptr.type == dynptr_type;
 	} else {
-		spi = get_spi(reg->off);
+		spi = dynptr_get_spi(env, reg);
+		if (WARN_ON_ONCE(spi < 0))
+			return false;
 		return state->stack[spi].spilled_ptr.dynptr.type == dynptr_type;
 	}
 }
@@ -2422,7 +2456,9 @@ static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *
 	 */
 	if (reg->type == CONST_PTR_TO_DYNPTR)
 		return 0;
-	spi = get_spi(reg->off);
+	spi = dynptr_get_spi(env, reg);
+	if (WARN_ON_ONCE(spi < 0))
+		return spi;
 	/* Caller ensures dynptr is valid and initialized, which means spi is in
 	 * bounds and spi is the first dynptr slot. Simply mark stack slot as
 	 * read.
@@ -5946,6 +5982,11 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
 	return 0;
 }
 
+static bool arg_type_is_release(enum bpf_arg_type type)
+{
+	return type & OBJ_RELEASE;
+}
+
 /* There are two register types representing a bpf_dynptr, one is PTR_TO_STACK
  * which points to a stack slot, and the other is CONST_PTR_TO_DYNPTR.
  *
@@ -5986,12 +6027,14 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
 	}
 	/* CONST_PTR_TO_DYNPTR already has fixed and var_off as 0 due to
 	 * check_func_arg_reg_off's logic. We only need to check offset
-	 * alignment for PTR_TO_STACK.
+	 * and its alignment for PTR_TO_STACK.
 	 */
-	if (reg->type == PTR_TO_STACK && (reg->off % BPF_REG_SIZE)) {
-		verbose(env, "cannot pass in dynptr at an offset=%d\n", reg->off);
-		return -EINVAL;
+	if (reg->type == PTR_TO_STACK) {
+		err = dynptr_get_spi(env, reg);
+		if (err < 0)
+			return err;
 	}
+
 	/*  MEM_UNINIT - Points to memory that is an appropriate candidate for
 	 *		 constructing a mutable bpf_dynptr object.
 	 *
@@ -6070,11 +6113,6 @@ static bool arg_type_is_mem_size(enum bpf_arg_type type)
 	       type == ARG_CONST_SIZE_OR_ZERO;
 }
 
-static bool arg_type_is_release(enum bpf_arg_type type)
-{
-	return type & OBJ_RELEASE;
-}
-
 static bool arg_type_is_dynptr(enum bpf_arg_type type)
 {
 	return base_type(type) == ARG_PTR_TO_DYNPTR;
@@ -6404,8 +6442,9 @@ static u32 dynptr_ref_obj_id(struct bpf_verifier_env *env, struct bpf_reg_state
 
 	if (reg->type == CONST_PTR_TO_DYNPTR)
 		return reg->ref_obj_id;
-
-	spi = get_spi(reg->off);
+	spi = dynptr_get_spi(env, reg);
+	if (WARN_ON_ONCE(spi < 0))
+		return U32_MAX;
 	return state->stack[spi].spilled_ptr.ref_obj_id;
 }
 
@@ -6479,7 +6518,9 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 			 * PTR_TO_STACK.
 			 */
 			if (reg->type == PTR_TO_STACK) {
-				spi = get_spi(reg->off);
+				spi = dynptr_get_spi(env, reg);
+				if (spi < 0)
+					return spi;
 				if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS) ||
 				    !state->stack[spi].spilled_ptr.ref_obj_id) {
 					verbose(env, "arg %d is an unacquired reference\n", regno);
diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c b/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c
index a9229260a6ce..72800b1e8395 100644
--- a/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c
+++ b/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c
@@ -18,7 +18,7 @@ static struct {
 	const char *expected_verifier_err_msg;
 	int expected_runtime_err;
 } kfunc_dynptr_tests[] = {
-	{"not_valid_dynptr", "Expected an initialized dynptr as arg #1", 0},
+	{"not_valid_dynptr", "cannot pass in dynptr at an offset=-8", 0},
 	{"not_ptr_to_stack", "arg#0 expected pointer to stack or dynptr_ptr", 0},
 	{"dynptr_data_null", NULL, -EBADMSG},
 };
diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
index 78debc1b3820..32df3647b794 100644
--- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
@@ -382,7 +382,7 @@ int invalid_helper1(void *ctx)
 
 /* A dynptr can't be passed into a helper function at a non-zero offset */
 SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #3")
+__failure __msg("cannot pass in dynptr at an offset=-8")
 int invalid_helper2(void *ctx)
 {
 	struct bpf_dynptr ptr;
@@ -444,7 +444,7 @@ int invalid_write2(void *ctx)
  * non-const offset
  */
 SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #1")
+__failure __msg("arg 1 is an unacquired reference")
 int invalid_write3(void *ctx)
 {
 	struct bpf_dynptr ptr;
@@ -584,7 +584,7 @@ int invalid_read4(void *ctx)
 
 /* Initializing a dynptr on an offset should fail */
 SEC("?raw_tp")
-__failure __msg("invalid write to stack")
+__failure __msg("cannot pass in dynptr at an offset=0")
 int invalid_offset(void *ctx)
 {
 	struct bpf_dynptr ptr;
-- 
2.39.0



* [PATCH bpf-next v1 3/8] bpf: Fix partial dynptr stack slot reads/writes
  2023-01-01  8:33 [PATCH bpf-next v1 0/8] Dynptr fixes Kumar Kartikeya Dwivedi
  2023-01-01  8:33 ` [PATCH bpf-next v1 1/8] bpf: Fix state pruning for STACK_DYNPTR stack slots Kumar Kartikeya Dwivedi
  2023-01-01  8:33 ` [PATCH bpf-next v1 2/8] bpf: Fix missing var_off check for ARG_PTR_TO_DYNPTR Kumar Kartikeya Dwivedi
@ 2023-01-01  8:33 ` Kumar Kartikeya Dwivedi
  2023-01-04 22:42   ` Andrii Nakryiko
                     ` (2 more replies)
  2023-01-01  8:33 ` [PATCH bpf-next v1 4/8] bpf: Allow reinitializing unreferenced dynptr stack slots Kumar Kartikeya Dwivedi
                   ` (5 subsequent siblings)
  8 siblings, 3 replies; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-01  8:33 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

Currently, while reads are disallowed for dynptr stack slots, writes
are not. Reads don't work through either direct access or helpers,
while writes do work in both cases, but have the effect of overwriting
the slot_type.

While this is fine, handling for a few edge cases is missing. First, a
user can partially overwrite the stack slots of a dynptr.

Consider the following layout:
spi: [d][d][?]
      2  1  0

First slot is at spi 2, second at spi 1.
Now, do a write of 1 to 8 bytes for spi 1.

This will essentially either write STACK_MISC for all slot_types, or
STACK_MISC and STACK_ZERO (in case of a partial write of zeroes with
size < BPF_REG_SIZE). The end result is that the slot is scrubbed.

Now, the layout is:
spi: [d][m][?]
      2  1  0

Now suppose the user initializes spi = 1 as a dynptr.
We get:
spi: [d][d][d]
      2  1  0

But this time, both spi 2 and spi 1 have first_slot = true.

Now, when spi 2 is passed to a dynptr helper, it will be considered
initialized, as the helper does not check whether the second slot has
first_slot == false. And spi 1 already works as normal.

This effectively replaces the size + offset of the first dynptr,
allowing invalid OOB reads and writes.

Make a few changes to protect against this:
When writing to PTR_TO_STACK using BPF insns, when we touch spi of a
STACK_DYNPTR type, mark both first and second slot (regardless of which
slot we touch) as STACK_INVALID. Reads are already prevented.

Second, prevent writing to stack memory from helpers if the range may
contain any STACK_DYNPTR slots. Reads are already prevented.

For helpers, we cannot allow writes to destroy dynptrs because,
depending on the arguments, a helper may take uninit_mem and a dynptr
at the same time. This would mean that the helper may write to
uninit_mem before it reads the dynptr, which would be bad.

PTR_TO_MEM: [?????dd]

Depending on the code inside the helper, it may end up overwriting the
dynptr contents first and then read those as the dynptr argument.

The verifier would only simulate destruction when it does byte-by-byte
access simulation in check_helper_call for meta.access_size, and would
fail to catch this case, as it happens after argument checks.

The same would need to be done for any other non-trivial objects created
on the stack in the future, such as bpf_list_head on stack, or
bpf_rb_root on stack.

A common misunderstanding in the current code is that MEM_UNINIT means
writes, but note that writes may also be performed even without
MEM_UNINIT in the case of helpers; there, the code after the meta &&
meta->raw_mode check will complain when it sees STACK_DYNPTR. So that
invalid read case also covers writes to potential STACK_DYNPTR slots.
The only loophole was the meta->raw_mode case, which simulated writes
through instructions that could overwrite them.

A future series sequenced after this one will focus on cleaning up
helper access checks and the bugs around that.

Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 kernel/bpf/verifier.c | 73 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index ca970f80e395..b985d90505cc 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -769,6 +769,8 @@ static void mark_dynptr_cb_reg(struct bpf_reg_state *reg,
 	__mark_dynptr_reg(reg, type, true);
 }
 
+static void destroy_stack_slots_dynptr(struct bpf_verifier_env *env,
+				       struct bpf_func_state *state, int spi);
 
 static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
 				   enum bpf_arg_type arg_type, int insn_idx)
@@ -858,6 +860,44 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
 	return 0;
 }
 
+static void destroy_stack_slots_dynptr(struct bpf_verifier_env *env,
+				       struct bpf_func_state *state, int spi)
+{
+	int i;
+
+	/* We always ensure that STACK_DYNPTR is never set partially,
+	 * hence just checking for slot_type[0] is enough. This is
+	 * different for STACK_SPILL, where it may be only set for
+	 * 1 byte, so code has to use is_spilled_reg.
+	 */
+	if (state->stack[spi].slot_type[0] != STACK_DYNPTR)
+		return;
+	/* Reposition spi to first slot */
+	if (!state->stack[spi].spilled_ptr.dynptr.first_slot)
+		spi = spi + 1;
+
+	mark_stack_slot_scratched(env, spi);
+	mark_stack_slot_scratched(env, spi - 1);
+
+	/* Writing partially to one dynptr stack slot destroys both. */
+	for (i = 0; i < BPF_REG_SIZE; i++) {
+		state->stack[spi].slot_type[i] = STACK_INVALID;
+		state->stack[spi - 1].slot_type[i] = STACK_INVALID;
+	}
+
+	/* Do not release reference state, we are destroying dynptr on stack,
+	 * not using some helper to release it. Just reset register.
+	 */
+	__mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
+	__mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
+
+	/* Same reason as unmark_stack_slots_dynptr above */
+	state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
+	state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
+
+	return;
+}
+
 static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
 {
 	struct bpf_func_state *state = func(env, reg);
@@ -3384,6 +3424,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
 			env->insn_aux_data[insn_idx].sanitize_stack_spill = true;
 	}
 
+	destroy_stack_slots_dynptr(env, state, spi);
+
 	mark_stack_slot_scratched(env, spi);
 	if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
 	    !register_is_null(reg) && env->bpf_capable) {
@@ -3497,6 +3539,13 @@ static int check_stack_write_var_off(struct bpf_verifier_env *env,
 	if (err)
 		return err;
 
+	for (i = min_off; i < max_off; i++) {
+		int slot, spi;
+
+		slot = -i - 1;
+		spi = slot / BPF_REG_SIZE;
+		destroy_stack_slots_dynptr(env, state, spi);
+	}
 
 	/* Variable offset writes destroy any spilled pointers in range. */
 	for (i = min_off; i < max_off; i++) {
@@ -5524,6 +5573,30 @@ static int check_stack_range_initialized(
 	}
 
 	if (meta && meta->raw_mode) {
+		/* Ensure we won't be overwriting dynptrs when simulating byte
+		 * by byte access in check_helper_call using meta.access_size.
+		 * This would be a problem if we have a helper in the future
+		 * which takes:
+		 *
+		 *	helper(uninit_mem, len, dynptr)
+		 *
+		 * Now, uninit_mem may overlap with the dynptr pointer. Hence, it
+		 * may end up writing to dynptr itself when touching memory from
+		 * arg 1. This can be relaxed on a case by case basis for known
+		 * safe cases, but reject due to the possibility of aliasing by
+		 * default.
+		 */
+		for (i = min_off; i < max_off + access_size; i++) {
+			slot = -i - 1;
+			spi = slot / BPF_REG_SIZE;
+			/* raw_mode may write past allocated_stack */
+			if (state->allocated_stack <= slot)
+				continue;
+			if (state->stack[spi].slot_type[slot % BPF_REG_SIZE] == STACK_DYNPTR) {
+				verbose(env, "potential write to dynptr at off=%d disallowed\n", i);
+				return -EACCES;
+			}
+		}
 		meta->access_size = access_size;
 		meta->regno = regno;
 		return 0;
-- 
2.39.0



* [PATCH bpf-next v1 4/8] bpf: Allow reinitializing unreferenced dynptr stack slots
  2023-01-01  8:33 [PATCH bpf-next v1 0/8] Dynptr fixes Kumar Kartikeya Dwivedi
                   ` (2 preceding siblings ...)
  2023-01-01  8:33 ` [PATCH bpf-next v1 3/8] bpf: Fix partial dynptr stack slot reads/writes Kumar Kartikeya Dwivedi
@ 2023-01-01  8:33 ` Kumar Kartikeya Dwivedi
  2023-01-04 22:44   ` Andrii Nakryiko
  2023-01-01  8:33 ` [PATCH bpf-next v1 5/8] selftests/bpf: Add dynptr pruning tests Kumar Kartikeya Dwivedi
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-01  8:33 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

Consider a program like below:

void prog(void)
{
	{
		struct bpf_dynptr ptr;
		bpf_dynptr_from_mem(...);
	}
	...
	{
		struct bpf_dynptr ptr;
		bpf_dynptr_from_mem(...);
	}
}

Here, the C compiler, based on lifetime rules in the C standard, would
be well within its rights to share stack storage for dynptr 'ptr', as
their lifetimes do not overlap in the two distinct scopes. Currently,
such an example is rejected by the verifier, but this is too strict.
Instead, we should allow reinitializing over dynptr stack slots and
forget information about the old dynptr object.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 kernel/bpf/verifier.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b985d90505cc..e85e8c4be00d 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -786,6 +786,9 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
 	if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
 		return -EINVAL;
 
+	destroy_stack_slots_dynptr(env, state, spi);
+	destroy_stack_slots_dynptr(env, state, spi - 1);
+
 	for (i = 0; i < BPF_REG_SIZE; i++) {
 		state->stack[spi].slot_type[i] = STACK_DYNPTR;
 		state->stack[spi - 1].slot_type[i] = STACK_DYNPTR;
@@ -901,7 +904,7 @@ static void destroy_stack_slots_dynptr(struct bpf_verifier_env *env,
 static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
 {
 	struct bpf_func_state *state = func(env, reg);
-	int spi, i;
+	int spi;
 
 	if (reg->type == CONST_PTR_TO_DYNPTR)
 		return false;
@@ -914,12 +917,11 @@ static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_
 	if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
 		return true;
 
-	for (i = 0; i < BPF_REG_SIZE; i++) {
-		if (state->stack[spi].slot_type[i] == STACK_DYNPTR ||
-		    state->stack[spi - 1].slot_type[i] == STACK_DYNPTR)
-			return false;
-	}
-
+	/* We allow overwriting existing STACK_DYNPTR slots, see
+	 * mark_stack_slots_dynptr which calls destroy_stack_slots_dynptr to
+	 * ensure dynptr objects at the slots we are touching are completely
+	 * destructed before we reinitialize them for a new one.
+	 */
 	return true;
 }
 
-- 
2.39.0



* [PATCH bpf-next v1 5/8] selftests/bpf: Add dynptr pruning tests
  2023-01-01  8:33 [PATCH bpf-next v1 0/8] Dynptr fixes Kumar Kartikeya Dwivedi
                   ` (3 preceding siblings ...)
  2023-01-01  8:33 ` [PATCH bpf-next v1 4/8] bpf: Allow reinitializing unreferenced dynptr stack slots Kumar Kartikeya Dwivedi
@ 2023-01-01  8:33 ` Kumar Kartikeya Dwivedi
  2023-01-04 22:49   ` Andrii Nakryiko
  2023-01-01  8:34 ` [PATCH bpf-next v1 6/8] selftests/bpf: Add dynptr var_off tests Kumar Kartikeya Dwivedi
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-01  8:33 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

Add verifier tests that check the new pruning behavior for STACK_DYNPTR
slots, and ensure that state equivalence correctly takes into account
changes to the old and current verifier states.

Without the prior fixes, both of these bugs trigger in unprivileged
BPF mode.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 tools/testing/selftests/bpf/verifier/dynptr.c | 90 +++++++++++++++++++
 1 file changed, 90 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/verifier/dynptr.c

diff --git a/tools/testing/selftests/bpf/verifier/dynptr.c b/tools/testing/selftests/bpf/verifier/dynptr.c
new file mode 100644
index 000000000000..798f4f7e0c57
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/dynptr.c
@@ -0,0 +1,90 @@
+{
+       "dynptr: rewrite dynptr slot",
+        .insns = {
+        BPF_MOV64_IMM(BPF_REG_0, 0),
+        BPF_LD_MAP_FD(BPF_REG_6, 0),
+        BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
+        BPF_MOV64_IMM(BPF_REG_2, 8),
+        BPF_MOV64_IMM(BPF_REG_3, 0),
+        BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
+        BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -16),
+        BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve_dynptr),
+        BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+        BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+        BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0xeB9F),
+        BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+        BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -16),
+        BPF_MOV64_IMM(BPF_REG_2, 0),
+        BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_discard_dynptr),
+        BPF_MOV64_IMM(BPF_REG_0, 0),
+        BPF_EXIT_INSN(),
+        },
+	.fixup_map_ringbuf = { 1 },
+	.result_unpriv = REJECT,
+	.errstr_unpriv = "unknown func bpf_ringbuf_reserve_dynptr#198",
+	.result = REJECT,
+	.errstr = "arg 1 is an unacquired reference",
+},
+{
+       "dynptr: type confusion",
+       .insns = {
+       BPF_MOV64_IMM(BPF_REG_0, 0),
+       BPF_LD_MAP_FD(BPF_REG_6, 0),
+       BPF_LD_MAP_FD(BPF_REG_7, 0),
+       BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
+       BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+       BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+       BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+       BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
+       BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -24),
+       BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0xeB9FeB9F),
+       BPF_ST_MEM(BPF_DW, BPF_REG_10, -24, 0xeB9FeB9F),
+       BPF_MOV64_IMM(BPF_REG_4, 0),
+       BPF_MOV64_REG(BPF_REG_8, BPF_REG_2),
+       BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_update_elem),
+       BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
+       BPF_MOV64_REG(BPF_REG_2, BPF_REG_8),
+       BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+       BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+       BPF_EXIT_INSN(),
+       BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
+       BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
+       BPF_MOV64_IMM(BPF_REG_2, 8),
+       BPF_MOV64_IMM(BPF_REG_3, 0),
+       BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
+       BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -16),
+       BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+       BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve_dynptr),
+       BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+       /* pad with insns to trigger add_new_state heuristic for straight line path */
+       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
+       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
+       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
+       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
+       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
+       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
+       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
+       BPF_JMP_IMM(BPF_JA, 0, 0, 9),
+       BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+       BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0),
+       BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
+       BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+       BPF_MOV64_IMM(BPF_REG_2, 0),
+       BPF_MOV64_IMM(BPF_REG_3, 0),
+       BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
+       BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -16),
+       BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_dynptr_from_mem),
+       BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+       BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -16),
+       BPF_MOV64_IMM(BPF_REG_2, 0),
+       BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_discard_dynptr),
+       BPF_MOV64_IMM(BPF_REG_0, 0),
+       BPF_EXIT_INSN(),
+       },
+       .fixup_map_hash_16b = { 1 },
+       .fixup_map_ringbuf = { 3 },
+       .result_unpriv = REJECT,
+       .errstr_unpriv = "unknown func bpf_ringbuf_reserve_dynptr#198",
+       .result = REJECT,
+       .errstr = "arg 1 is an unacquired reference",
+},
-- 
2.39.0



* [PATCH bpf-next v1 6/8] selftests/bpf: Add dynptr var_off tests
  2023-01-01  8:33 [PATCH bpf-next v1 0/8] Dynptr fixes Kumar Kartikeya Dwivedi
                   ` (4 preceding siblings ...)
  2023-01-01  8:33 ` [PATCH bpf-next v1 5/8] selftests/bpf: Add dynptr pruning tests Kumar Kartikeya Dwivedi
@ 2023-01-01  8:34 ` Kumar Kartikeya Dwivedi
  2023-01-01  8:34 ` [PATCH bpf-next v1 7/8] selftests/bpf: Add dynptr partial slot overwrite tests Kumar Kartikeya Dwivedi
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-01  8:34 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

Ensure that a variable offset is handled correctly, and that the
verifier takes both the fixed and variable parts into account. Also
ensure that only a constant var_off is allowed.

Make sure that unprivileged BPF cannot use var_off for a dynptr.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 tools/testing/selftests/bpf/verifier/dynptr.c | 38 ++++++++++++++++++-
 1 file changed, 36 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/verifier/dynptr.c b/tools/testing/selftests/bpf/verifier/dynptr.c
index 798f4f7e0c57..1aa7241e8a9e 100644
--- a/tools/testing/selftests/bpf/verifier/dynptr.c
+++ b/tools/testing/selftests/bpf/verifier/dynptr.c
@@ -1,5 +1,5 @@
 {
-       "dynptr: rewrite dynptr slot",
+       "dynptr: rewrite dynptr slot (pruning)",
         .insns = {
         BPF_MOV64_IMM(BPF_REG_0, 0),
         BPF_LD_MAP_FD(BPF_REG_6, 0),
@@ -26,7 +26,7 @@
 	.errstr = "arg 1 is an unacquired reference",
 },
 {
-       "dynptr: type confusion",
+       "dynptr: type confusion (pruning)",
        .insns = {
        BPF_MOV64_IMM(BPF_REG_0, 0),
        BPF_LD_MAP_FD(BPF_REG_6, 0),
@@ -88,3 +88,37 @@
        .result = REJECT,
        .errstr = "arg 1 is an unacquired reference",
 },
+{
+       "dynptr: rewrite dynptr slot (var_off)",
+	.insns = {
+	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 16),
+	BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_10, -4),
+	BPF_JMP_IMM(BPF_JGE, BPF_REG_8, 0, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	BPF_JMP_IMM(BPF_JLE, BPF_REG_8, 16, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	BPF_ALU64_IMM(BPF_AND, BPF_REG_8, 16),
+	BPF_LD_MAP_FD(BPF_REG_1, 0),
+	BPF_MOV64_IMM(BPF_REG_2, 8),
+	BPF_MOV64_IMM(BPF_REG_3, 0),
+	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -32),
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_8),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve_dynptr),
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0xeB9F),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -32),
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_8),
+	BPF_MOV64_IMM(BPF_REG_2, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_discard_dynptr),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_8),
+	BPF_EXIT_INSN(),
+	},
+	.fixup_map_ringbuf = { 9 },
+	.result_unpriv = REJECT,
+	.errstr_unpriv = "R4 variable stack access prohibited for !root, var_off=(0x0; 0x10) off=-32",
+	.result = REJECT,
+	.errstr = "dynptr has to be at the constant offset",
+},
-- 
2.39.0



* [PATCH bpf-next v1 7/8] selftests/bpf: Add dynptr partial slot overwrite tests
  2023-01-01  8:33 [PATCH bpf-next v1 0/8] Dynptr fixes Kumar Kartikeya Dwivedi
                   ` (5 preceding siblings ...)
  2023-01-01  8:34 ` [PATCH bpf-next v1 6/8] selftests/bpf: Add dynptr var_off tests Kumar Kartikeya Dwivedi
@ 2023-01-01  8:34 ` Kumar Kartikeya Dwivedi
  2023-01-01  8:34 ` [PATCH bpf-next v1 8/8] selftests/bpf: Add dynptr helper tests Kumar Kartikeya Dwivedi
  2023-01-04 22:51 ` [PATCH bpf-next v1 0/8] Dynptr fixes Andrii Nakryiko
  8 siblings, 0 replies; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-01  8:34 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

Try creating a dynptr, then overwriting its second slot with the first
slot of another dynptr. The first slot of the first dynptr should then
also be invalidated, but without our fix that does not happen. As a
consequence, the unfixed case allows passing the first dynptr (as the
kernel check only looks at slot_type and then at first_slot == true).

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 tools/testing/selftests/bpf/verifier/dynptr.c | 58 +++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/tools/testing/selftests/bpf/verifier/dynptr.c b/tools/testing/selftests/bpf/verifier/dynptr.c
index 1aa7241e8a9e..8c57bc9e409f 100644
--- a/tools/testing/selftests/bpf/verifier/dynptr.c
+++ b/tools/testing/selftests/bpf/verifier/dynptr.c
@@ -122,3 +122,61 @@
 	.result = REJECT,
 	.errstr = "dynptr has to be at the constant offset",
 },
+{
+       "dynptr: partial dynptr slot invalidate",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_LD_MAP_FD(BPF_REG_6, 0),
+	BPF_LD_MAP_FD(BPF_REG_7, 0),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+	BPF_MOV64_REG(BPF_REG_3, BPF_REG_2),
+	BPF_MOV64_IMM(BPF_REG_4, 0),
+	BPF_MOV64_REG(BPF_REG_8, BPF_REG_2),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_update_elem),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_8),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
+	BPF_MOV64_IMM(BPF_REG_2, 8),
+	BPF_MOV64_IMM(BPF_REG_3, 0),
+	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -24),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve_dynptr),
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
+	BPF_MOV64_IMM(BPF_REG_2, 8),
+	BPF_MOV64_IMM(BPF_REG_3, 0),
+	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -16),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_dynptr_from_mem),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -512),
+	BPF_MOV64_IMM(BPF_REG_2, 488),
+	BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -24),
+	BPF_MOV64_IMM(BPF_REG_4, 0),
+	BPF_MOV64_IMM(BPF_REG_5, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_dynptr_read),
+	BPF_MOV64_IMM(BPF_REG_8, 1),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_MOV64_IMM(BPF_REG_8, 0),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -24),
+	BPF_MOV64_IMM(BPF_REG_2, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_discard_dynptr),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_8),
+	BPF_EXIT_INSN(),
+	},
+	.fixup_map_ringbuf = { 1 },
+	.fixup_map_hash_8b = { 3 },
+	.result_unpriv = REJECT,
+	.errstr_unpriv = "unknown func bpf_ringbuf_reserve_dynptr#198",
+	.result = REJECT,
+	.errstr = "Expected an initialized dynptr as arg #3",
+},
-- 
2.39.0



* [PATCH bpf-next v1 8/8] selftests/bpf: Add dynptr helper tests
  2023-01-01  8:33 [PATCH bpf-next v1 0/8] Dynptr fixes Kumar Kartikeya Dwivedi
                   ` (6 preceding siblings ...)
  2023-01-01  8:34 ` [PATCH bpf-next v1 7/8] selftests/bpf: Add dynptr partial slot overwrite tests Kumar Kartikeya Dwivedi
@ 2023-01-01  8:34 ` Kumar Kartikeya Dwivedi
  2023-01-04 22:51 ` [PATCH bpf-next v1 0/8] Dynptr fixes Andrii Nakryiko
  8 siblings, 0 replies; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-01  8:34 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

First, test that we allow overwriting dynptr slots and reinitializing
them. Next, test that MEM_UNINIT doesn't allow writing over dynptr
stack slots.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 .../testing/selftests/bpf/progs/dynptr_fail.c | 62 +++++++++++++++++++
 1 file changed, 62 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
index 32df3647b794..73ae93dedaba 100644
--- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
@@ -653,3 +653,65 @@ int dynptr_from_mem_invalid_api(void *ctx)
 
 	return 0;
 }
+
+SEC("?raw_tp")
+__success
+int dynptr_overwrite_unref(void *ctx)
+{
+	struct bpf_dynptr ptr;
+
+	get_map_val_dynptr(&ptr);
+	get_map_val_dynptr(&ptr);
+	get_map_val_dynptr(&ptr);
+
+	return 0;
+}
+
+SEC("?raw_tp")
+__failure __msg("Unreleased reference")
+int dynptr_overwrite_ref(void *ctx)
+{
+	struct bpf_dynptr ptr;
+
+	bpf_ringbuf_reserve_dynptr(&ringbuf, 64, 0, &ptr);
+	if (get_map_val_dynptr(&ptr))
+		bpf_ringbuf_discard_dynptr(&ptr, 0);
+	return 0;
+}
+
+/* Reject writes to dynptr slot from bpf_dynptr_read */
+SEC("?raw_tp")
+__failure __msg("potential write to dynptr at off=-16")
+int dynptr_read_into_slot(void *ctx)
+{
+	union {
+		struct {
+			char _pad[48];
+			struct bpf_dynptr ptr;
+		};
+		char buf[64];
+	} data;
+
+	bpf_ringbuf_reserve_dynptr(&ringbuf, 64, 0, &data.ptr);
+	/* this should fail */
+	bpf_dynptr_read(data.buf, sizeof(data.buf), &data.ptr, 0, 0);
+
+	return 0;
+}
+
+/* Reject writes to dynptr slot for uninit arg */
+SEC("?raw_tp")
+__failure __msg("potential write to dynptr at off=-16")
+int uninit_write_into_slot(void *ctx)
+{
+	struct {
+		char buf[64];
+		struct bpf_dynptr ptr;
+	} data;
+
+	bpf_ringbuf_reserve_dynptr(&ringbuf, 80, 0, &data.ptr);
+	/* this should fail */
+	bpf_get_current_comm(data.buf, 80);
+
+	return 0;
+}
-- 
2.39.0



* Re: [PATCH bpf-next v1 1/8] bpf: Fix state pruning for STACK_DYNPTR stack slots
  2023-01-01  8:33 ` [PATCH bpf-next v1 1/8] bpf: Fix state pruning for STACK_DYNPTR stack slots Kumar Kartikeya Dwivedi
@ 2023-01-02 19:28   ` Eduard Zingerman
  2023-01-09 10:59     ` Kumar Kartikeya Dwivedi
  2023-01-04 22:24   ` Andrii Nakryiko
  2023-01-06  0:18   ` Joanne Koong
  2 siblings, 1 reply; 38+ messages in thread
From: Eduard Zingerman @ 2023-01-02 19:28 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi, bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet

On Sun, 2023-01-01 at 14:03 +0530, Kumar Kartikeya Dwivedi wrote:
> The root of the problem is missing liveness marking for STACK_DYNPTR
> slots. This leads to all kinds of problems inside stacksafe.
> 
> The verifier by default inside stacksafe ignores spilled_ptr in stack
> slots which do not have REG_LIVE_READ marks. Since this is being checked
> in the 'old' explored state, it must have already done clean_live_states
> for this old bpf_func_state. Hence, it won't be receiving any more
> liveness marks from to be explored insns (it has received REG_LIVE_DONE
> marking from liveness point of view).
> 
> What this means is that verifier considers that it's safe to not compare
> the stack slot if was never read by children states. While liveness
> marks are usually propagated correctly following the parentage chain for
> spilled registers (SCALAR_VALUE and PTR_* types), the same is not the
> case for STACK_DYNPTR.
> 
> clean_live_states hence simply rewrites these stack slots to the type
> STACK_INVALID since it sees no REG_LIVE_READ marks.
> 
> The end result is that we will never see STACK_DYNPTR slots in explored
> state. Even if verifier was conservatively matching !REG_LIVE_READ
> slots, very next check continuing the stacksafe loop on seeing
> STACK_INVALID would again prevent further checks.
> 
> Now as long as verifier stores an explored state which we can compare to
> when reaching a pruning point, we can abuse this bug to make verifier
> prune search for obviously unsafe paths using STACK_DYNPTR slots
> thinking they are never used hence safe.
> 
> Doing this in unprivileged mode is a bit challenging. add_new_state is
> only set when seeing BPF_F_TEST_STATE_FREQ (which requires privileges)
> or when jmps_processed difference is >= 2 and insn_processed difference
> is >= 8. So coming up with the unprivileged case requires a little more
> work, but it is still totally possible. The test case being discussed
> below triggers the heuristic even in unprivileged mode.
> 
> However, it no longer works since commit
> 8addbfc7b308 ("bpf: Gate dynptr API behind CAP_BPF").
> 
> Let's try to study the test step by step.
> 
> Consider the following program (C style BPF ASM):
> 
> 0  r0 = 0;
> 1  r6 = &ringbuf_map;
> 3  r1 = r6;
> 4  r2 = 8;
> 5  r3 = 0;
> 6  r4 = r10;
> 7  r4 -= -16;
> 8  call bpf_ringbuf_reserve_dynptr;
> 9  if r0 == 0 goto pc+1;
> 10 goto pc+1;
> 11 *(r10 - 16) = 0xeB9F;
> 12 r1 = r10;
> 13 r1 -= -16;
> 14 r2 = 0;
> 15 call bpf_ringbuf_discard_dynptr;
> 16 r0 = 0;
> 17 exit;
> 
> We know that insn 12 will be a pruning point, hence if we force
> add_new_state for it, it will first verify the following path as
> safe in straight line exploration:
> 0 1 3 4 5 6 7 8 9 -> 10 -> (12) 13 14 15 16 17
> 
> Then, when we arrive at insn 12 from the following path:
> 0 1 3 4 5 6 7 8 9 -> 11 (12)
> 
> We will find a state that has been verified as safe already at insn 12.
> Since register state is same at this point, regsafe will pass. Next, in
> stacksafe, for spi = 0 and spi = 1 (location of our dynptr) is skipped
> seeing !REG_LIVE_READ. The rest matches, so stacksafe returns true.
> Next, refsafe is also true as reference state is unchanged in both
> states.
> 
> The states are considered equivalent and search is pruned.
> 
> Hence, we are able to construct a dynptr with arbitrary contents and use
> the dynptr API to operate on this arbitrary pointer and arbitrary size +
> offset.
> 
> To fix this, first define a mark_dynptr_read function that propagates
> liveness marks whenever a valid initialized dynptr is accessed by dynptr
> helpers. REG_LIVE_WRITTEN is marked whenever we initialize an
> uninitialized dynptr. This is done in mark_stack_slots_dynptr. It allows
> screening off mark_reg_read and not propagating marks upwards from that
> point.
> 
> This ensures that we either set REG_LIVE_READ64 on both dynptr slots, or
> none, so clean_live_states either sets both slots to STACK_INVALID or
> none of them. This is the invariant the checks inside stacksafe rely on.
> 
> Next, do a complete comparison of both stack slots whenever they have
> STACK_DYNPTR. Compare the dynptr type stored in the spilled_ptr, and
> also whether both form the same first_slot. Only then is the later path
> safe.
> 
> Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  kernel/bpf/verifier.c | 73 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 73 insertions(+)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 4a25375ebb0d..f7248235e119 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -781,6 +781,9 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
>  		state->stack[spi - 1].spilled_ptr.ref_obj_id = id;
>  	}
>  
> +	state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
> +	state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
> +
>  	return 0;
>  }
>  
> @@ -805,6 +808,26 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
>  
>  	__mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
>  	__mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
> +
> +	/* Why do we need to set REG_LIVE_WRITTEN for STACK_INVALID slot?
> +	 *
> +	 * While we don't allow reading STACK_INVALID, it is still possible to
> +	 * do <8 byte writes marking some but not all slots as STACK_MISC. Then,
> +	 * helpers or insns can do partial read of that part without failing,
> +	 * but check_stack_range_initialized, check_stack_read_var_off, and
> +	 * check_stack_read_fixed_off will do mark_reg_read for all 8-bytes of
> +	 * the slot conservatively. Hence we need to screen off those liveness
> +	 * marking walks.
> +	 *
> +	 * This was not a problem before because STACK_INVALID is only set by
> +	 * default, or in clean_live_states after REG_LIVE_DONE, not randomly
> +	 * during verifier state exploration. Hence, for this case parentage
> +	 * chain will still be live, while earlier reg->parent was NULL, so we
> +	 * need REG_LIVE_WRITTEN to screen off read marker propagation.
> +	 */
> +	state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
> +	state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
> +

This is purely to assist with verifier state pruning and does not
affect correctness, right?
Commenting these lines out does not seem to make any tests fail; maybe
add one that matches some "77 safe: ..." jump in the log?

>  	return 0;
>  }
>  
> @@ -2388,6 +2411,30 @@ static int mark_reg_read(struct bpf_verifier_env *env,
>  	return 0;
>  }
>  
> +static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> +{
> +	struct bpf_func_state *state = func(env, reg);
> +	int spi, ret;
> +
> +	/* For CONST_PTR_TO_DYNPTR, it must have already been done by
> +	 * check_reg_arg in check_helper_call and mark_btf_func_reg_size in
> +	 * check_kfunc_call.
> +	 */
> +	if (reg->type == CONST_PTR_TO_DYNPTR)
> +		return 0;
> +	spi = get_spi(reg->off);
> +	/* Caller ensures dynptr is valid and initialized, which means spi is in
> +	 * bounds and spi is the first dynptr slot. Simply mark stack slot as
> +	 * read.
> +	 */
> +	ret = mark_reg_read(env, &state->stack[spi].spilled_ptr,
> +			    state->stack[spi].spilled_ptr.parent, REG_LIVE_READ64);
> +	if (ret)
> +		return ret;
> +	return mark_reg_read(env, &state->stack[spi - 1].spilled_ptr,
> +			     state->stack[spi - 1].spilled_ptr.parent, REG_LIVE_READ64);
> +}
> +
>  /* This function is supposed to be used by the following 32-bit optimization
>   * code only. It returns TRUE if the source or destination register operates
>   * on 64-bit, otherwise return FALSE.
> @@ -5928,6 +5975,7 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
>  			enum bpf_arg_type arg_type, struct bpf_call_arg_meta *meta)
>  {
>  	struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
> +	int err;
>  
>  	/* MEM_UNINIT and MEM_RDONLY are exclusive, when applied to an
>  	 * ARG_PTR_TO_DYNPTR (or ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_*):
> @@ -6008,6 +6056,10 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
>  				err_extra, regno);
>  			return -EINVAL;
>  		}
> +
> +		err = mark_dynptr_read(env, reg);
> +		if (err)
> +			return err;
>  	}
>  	return 0;
>  }
> @@ -13204,6 +13256,27 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
>  			 * return false to continue verification of this path
>  			 */
>  			return false;
> +		/* Both are same slot_type, but STACK_DYNPTR requires more
> +		 * checks before it can considered safe.
> +		 */
> +		if (old->stack[spi].slot_type[i % BPF_REG_SIZE] == STACK_DYNPTR) {
> +			/* If both are STACK_DYNPTR, type must be same */
> +			if (old->stack[spi].spilled_ptr.dynptr.type != cur->stack[spi].spilled_ptr.dynptr.type)
> +				return false;
> +			/* Both should also have first slot at same spi */
> +			if (old->stack[spi].spilled_ptr.dynptr.first_slot != cur->stack[spi].spilled_ptr.dynptr.first_slot)
> +				return false;
> +			/* ids should be same */
> +			if (!!old->stack[spi].spilled_ptr.ref_obj_id != !!cur->stack[spi].spilled_ptr.ref_obj_id)
> +				return false;
> +			if (old->stack[spi].spilled_ptr.ref_obj_id &&
> +			    !check_ids(old->stack[spi].spilled_ptr.ref_obj_id,
> +				       cur->stack[spi].spilled_ptr.ref_obj_id, idmap))
> +				return false;
> +			WARN_ON_ONCE(i % BPF_REG_SIZE);
> +			i += BPF_REG_SIZE - 1;
> +			continue;
> +		}

Nitpick: maybe move the checks above inside regsafe(), since all the
conditions operate on old/cur->stack[spi].spilled_ptr?

Acked-by: Eduard Zingerman <eddyz@gmail.com>

>  		if (i % BPF_REG_SIZE != BPF_REG_SIZE - 1)
>  			continue;
>  		if (!is_spilled_reg(&old->stack[spi]))



* Re: [PATCH bpf-next v1 1/8] bpf: Fix state pruning for STACK_DYNPTR stack slots
  2023-01-01  8:33 ` [PATCH bpf-next v1 1/8] bpf: Fix state pruning for STACK_DYNPTR stack slots Kumar Kartikeya Dwivedi
  2023-01-02 19:28   ` Eduard Zingerman
@ 2023-01-04 22:24   ` Andrii Nakryiko
  2023-01-09 11:05     ` Kumar Kartikeya Dwivedi
  2023-01-06  0:18   ` Joanne Koong
  2 siblings, 1 reply; 38+ messages in thread
From: Andrii Nakryiko @ 2023-01-04 22:24 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> The root of the problem is missing liveness marking for STACK_DYNPTR
> slots. This leads to all kinds of problems inside stacksafe.
>
> The verifier by default inside stacksafe ignores spilled_ptr in stack
> slots which do not have REG_LIVE_READ marks. Since this is being checked
> in the 'old' explored state, it must have already done clean_live_states
> for this old bpf_func_state. Hence, it won't be receiving any more
> liveness marks from to be explored insns (it has received REG_LIVE_DONE
> marking from liveness point of view).
>
> What this means is that verifier considers that it's safe to not compare
> the stack slot if was never read by children states. While liveness
> marks are usually propagated correctly following the parentage chain for
> spilled registers (SCALAR_VALUE and PTR_* types), the same is not the
> case for STACK_DYNPTR.
>
> clean_live_states hence simply rewrites these stack slots to the type
> STACK_INVALID since it sees no REG_LIVE_READ marks.
>
> The end result is that we will never see STACK_DYNPTR slots in explored
> state. Even if verifier was conservatively matching !REG_LIVE_READ
> slots, very next check continuing the stacksafe loop on seeing
> STACK_INVALID would again prevent further checks.
>
> Now as long as verifier stores an explored state which we can compare to
> when reaching a pruning point, we can abuse this bug to make verifier
> prune search for obviously unsafe paths using STACK_DYNPTR slots
> thinking they are never used hence safe.
>
> Doing this in unprivileged mode is a bit challenging. add_new_state is
> only set when seeing BPF_F_TEST_STATE_FREQ (which requires privileges)
> or when jmps_processed difference is >= 2 and insn_processed difference
> is >= 8. So coming up with the unprivileged case requires a little more
> work, but it is still totally possible. The test case being discussed
> below triggers the heuristic even in unprivileged mode.
>
> However, it no longer works since commit
> 8addbfc7b308 ("bpf: Gate dynptr API behind CAP_BPF").
>
> Let's try to study the test step by step.
>
> Consider the following program (C style BPF ASM):
>
> 0  r0 = 0;
> 1  r6 = &ringbuf_map;
> 3  r1 = r6;
> 4  r2 = 8;
> 5  r3 = 0;
> 6  r4 = r10;
> 7  r4 -= -16;
> 8  call bpf_ringbuf_reserve_dynptr;
> 9  if r0 == 0 goto pc+1;
> 10 goto pc+1;
> 11 *(r10 - 16) = 0xeB9F;
> 12 r1 = r10;
> 13 r1 -= -16;
> 14 r2 = 0;
> 15 call bpf_ringbuf_discard_dynptr;
> 16 r0 = 0;
> 17 exit;
>
> We know that insn 12 will be a pruning point, hence if we force
> add_new_state for it, it will first verify the following path as
> safe in straight line exploration:
> 0 1 3 4 5 6 7 8 9 -> 10 -> (12) 13 14 15 16 17
>
> Then, when we arrive at insn 12 from the following path:
> 0 1 3 4 5 6 7 8 9 -> 11 (12)
>
> We will find a state that has been verified as safe already at insn 12.
> Since register state is same at this point, regsafe will pass. Next, in
> stacksafe, for spi = 0 and spi = 1 (location of our dynptr) is skipped
> seeing !REG_LIVE_READ. The rest matches, so stacksafe returns true.
> Next, refsafe is also true as reference state is unchanged in both
> states.
>
> The states are considered equivalent and search is pruned.
>
> Hence, we are able to construct a dynptr with arbitrary contents and use
> the dynptr API to operate on this arbitrary pointer and arbitrary size +
> offset.
>
> To fix this, first define a mark_dynptr_read function that propagates
> liveness marks whenever a valid initialized dynptr is accessed by dynptr
> helpers. REG_LIVE_WRITTEN is marked whenever we initialize an
> uninitialized dynptr. This is done in mark_stack_slots_dynptr. It allows
> screening off mark_reg_read and not propagating marks upwards from that
> point.
>
> This ensures that we either set REG_LIVE_READ64 on both dynptr slots, or
> none, so clean_live_states either sets both slots to STACK_INVALID or
> none of them. This is the invariant the checks inside stacksafe rely on.
>
> Next, do a complete comparison of both stack slots whenever they have
> STACK_DYNPTR. Compare the dynptr type stored in the spilled_ptr, and
> also whether both agree on which slot is the first_slot. Only then is
> the later path safe.
>
> Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  kernel/bpf/verifier.c | 73 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 73 insertions(+)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 4a25375ebb0d..f7248235e119 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -781,6 +781,9 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
>                 state->stack[spi - 1].spilled_ptr.ref_obj_id = id;
>         }
>
> +       state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
> +       state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
> +
>         return 0;
>  }
>
> @@ -805,6 +808,26 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
>
>         __mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
>         __mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
> +
> +       /* Why do we need to set REG_LIVE_WRITTEN for STACK_INVALID slot?
> +        *
> +        * While we don't allow reading STACK_INVALID, it is still possible to
> +        * do <8 byte writes marking some but not all slots as STACK_MISC. Then,
> +        * helpers or insns can do partial read of that part without failing,
> +        * but check_stack_range_initialized, check_stack_read_var_off, and
> +        * check_stack_read_fixed_off will do mark_reg_read for all 8-bytes of
> +        * the slot conservatively. Hence we need to screen off those liveness
> +        * marking walks.
> +        *
> +        * This was not a problem before because STACK_INVALID is only set by
> +        * default, or in clean_live_states after REG_LIVE_DONE, not randomly
> +        * during verifier state exploration. Hence, for this case parentage
> +        * chain will still be live, while earlier reg->parent was NULL, so we
> +        * need REG_LIVE_WRITTEN to screen off read marker propagation.
> +        */
> +       state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
> +       state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
> +
>         return 0;
>  }
>
> @@ -2388,6 +2411,30 @@ static int mark_reg_read(struct bpf_verifier_env *env,
>         return 0;
>  }
>
> +static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> +{
> +       struct bpf_func_state *state = func(env, reg);
> +       int spi, ret;
> +
> +       /* For CONST_PTR_TO_DYNPTR, it must have already been done by
> +        * check_reg_arg in check_helper_call and mark_btf_func_reg_size in
> +        * check_kfunc_call.
> +        */
> +       if (reg->type == CONST_PTR_TO_DYNPTR)
> +               return 0;
> +       spi = get_spi(reg->off);
> +       /* Caller ensures dynptr is valid and initialized, which means spi is in
> +        * bounds and spi is the first dynptr slot. Simply mark stack slot as
> +        * read.
> +        */
> +       ret = mark_reg_read(env, &state->stack[spi].spilled_ptr,
> +                           state->stack[spi].spilled_ptr.parent, REG_LIVE_READ64);
> +       if (ret)
> +               return ret;
> +       return mark_reg_read(env, &state->stack[spi - 1].spilled_ptr,
> +                            state->stack[spi - 1].spilled_ptr.parent, REG_LIVE_READ64);
> +}
> +
>  /* This function is supposed to be used by the following 32-bit optimization
>   * code only. It returns TRUE if the source or destination register operates
>   * on 64-bit, otherwise return FALSE.
> @@ -5928,6 +5975,7 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
>                         enum bpf_arg_type arg_type, struct bpf_call_arg_meta *meta)
>  {
>         struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
> +       int err;
>
>         /* MEM_UNINIT and MEM_RDONLY are exclusive, when applied to an
>          * ARG_PTR_TO_DYNPTR (or ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_*):
> @@ -6008,6 +6056,10 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
>                                 err_extra, regno);
>                         return -EINVAL;
>                 }
> +
> +               err = mark_dynptr_read(env, reg);
> +               if (err)
> +                       return err;
>         }
>         return 0;
>  }
> @@ -13204,6 +13256,27 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
>                          * return false to continue verification of this path
>                          */
>                         return false;
> +               /* Both are same slot_type, but STACK_DYNPTR requires more
> +                * checks before it can be considered safe.
> +                */
> +               if (old->stack[spi].slot_type[i % BPF_REG_SIZE] == STACK_DYNPTR) {

how about moving this check right after `if (i % BPF_REG_SIZE !=
BPF_REG_SIZE - 1)` ? Then we can actually generalize this to a switch
to handle STACK_SPILL and STACK_DYNPTR separately. I'm adding
STACK_ITER in my upcoming patch set, so this will have all the things
ready for that?

switch (old->stack[spi].slot_type[BPF_REG_SIZE - 1]) {
case STACK_SPILL:
  if (!regsafe(...))
     return false;
  break;
case STACK_DYNPTR:
  ...
  break;
/* and then eventually */
case STACK_ITER:
  ...

WDYT?

> +                       /* If both are STACK_DYNPTR, type must be same */
> +                       if (old->stack[spi].spilled_ptr.dynptr.type != cur->stack[spi].spilled_ptr.dynptr.type)

struct bpf_reg_state *old_reg, *cur_reg;

old_reg = &old->stack[spi].spilled_ptr;
cur_reg = &cur->stack[spi].spilled_ptr;

and then use old_reg and cur_reg in one simple if

here's how I have it locally:

                case STACK_DYNPTR:
                        old_reg = &old->stack[spi].spilled_ptr;
                        cur_reg = &cur->stack[spi].spilled_ptr;
                        if (old_reg->dynptr.type != cur_reg->dynptr.type ||
                            old_reg->dynptr.first_slot != cur_reg->dynptr.first_slot ||
                            !check_ids(old_reg->ref_obj_id, cur_reg->ref_obj_id, idmap))
                                return false;
                        break;

seems a bit cleaner?

I'm also thinking of getting rid of first_slot field and instead have
a rule that first slot has proper type set, but the next one has
BPF_DYNPTR_TYPE_INVALID as type. This should simplify things a bit, I
think. At least it seems that way for STACK_ITER state I'm adding. But
that's a separate refactoring, probably.

> +                               return false;
> +                       /* Both should also have first slot at same spi */
> +                       if (old->stack[spi].spilled_ptr.dynptr.first_slot != cur->stack[spi].spilled_ptr.dynptr.first_slot)
> +                               return false;
> +                       /* ids should be same */
> +                       if (!!old->stack[spi].spilled_ptr.ref_obj_id != !!cur->stack[spi].spilled_ptr.ref_obj_id)
> +                               return false;
> +                       if (old->stack[spi].spilled_ptr.ref_obj_id &&
> +                           !check_ids(old->stack[spi].spilled_ptr.ref_obj_id,
> +                                      cur->stack[spi].spilled_ptr.ref_obj_id, idmap))

my previous change to teach check_ids to enforce that both ids have to
be zero or non-zero at the same time has already landed, so you don't
need to check `old->stack[spi].spilled_ptr.ref_obj_id`. Even more, it
seems wrong to do the check like this, because if only cur has
ref_obj_id set we'll ignore it, right?

> +                               return false;
> +                       WARN_ON_ONCE(i % BPF_REG_SIZE);
> +                       i += BPF_REG_SIZE - 1;
> +                       continue;
> +               }
>                 if (i % BPF_REG_SIZE != BPF_REG_SIZE - 1)
>                         continue;
>                 if (!is_spilled_reg(&old->stack[spi]))
> --
> 2.39.0
>

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH bpf-next v1 2/8] bpf: Fix missing var_off check for ARG_PTR_TO_DYNPTR
  2023-01-01  8:33 ` [PATCH bpf-next v1 2/8] bpf: Fix missing var_off check for ARG_PTR_TO_DYNPTR Kumar Kartikeya Dwivedi
@ 2023-01-04 22:32   ` Andrii Nakryiko
  2023-01-09 11:18     ` Kumar Kartikeya Dwivedi
  2023-01-06  0:57   ` Joanne Koong
  1 sibling, 1 reply; 38+ messages in thread
From: Andrii Nakryiko @ 2023-01-04 22:32 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> Currently, the dynptr handling is not checking the variable offset part
> of PTR_TO_STACK that it needs to check. The fixed offset is considered
> when computing the stack pointer index, but if the variable offset was
> not a constant (such that it could not be accumulated in reg->off), we
> will end up with a discrepancy where the runtime pointer does not point
> to the actual stack slot we mark as STACK_DYNPTR.
>
> It is impossible to precisely track dynptr state when variable offset is
> not constant, hence, just like bpf_timer, kptr, bpf_spin_lock, etc.
> simply reject the case where reg->var_off is not constant. Then,
> consider both reg->off and reg->var_off.value when computing the stack
> pointer index.
>
> A new helper dynptr_get_spi is introduced to hide these details, since
> the dynptr needs to be located in multiple places outside of the
> process_dynptr_func checks; once we know it's a PTR_TO_STACK, we need
> to enforce these checks in all of those places.
>
> Note that it is disallowed for unprivileged users to have a non-constant
> var_off, so this problem should only be possible to trigger from
> programs having CAP_PERFMON. However, its effects can vary.
>
> Without the fix, it is possible to replace the contents of the dynptr
> arbitrarily by making verifier mark different stack slots than actual
> location and then doing writes to the actual stack address of dynptr at
> runtime.
>
> Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  kernel/bpf/verifier.c                         | 83 ++++++++++++++-----
>  .../bpf/prog_tests/kfunc_dynptr_param.c       |  2 +-
>  .../testing/selftests/bpf/progs/dynptr_fail.c |  6 +-
>  3 files changed, 66 insertions(+), 25 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index f7248235e119..ca970f80e395 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -638,11 +638,34 @@ static void print_liveness(struct bpf_verifier_env *env,
>                 verbose(env, "D");
>  }
>
> -static int get_spi(s32 off)
> +static int __get_spi(s32 off)
>  {
>         return (-off - 1) / BPF_REG_SIZE;
>  }
>
> +static int dynptr_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> +{
> +       int off, spi;
> +
> +       if (!tnum_is_const(reg->var_off)) {
> +               verbose(env, "dynptr has to be at the constant offset\n");
> +               return -EINVAL;
> +       }
> +
> +       off = reg->off + reg->var_off.value;
> +       if (off % BPF_REG_SIZE) {
> +               verbose(env, "cannot pass in dynptr at an offset=%d\n", reg->off);

s/reg->off/off/ ?

> +               return -EINVAL;
> +       }
> +
> +       spi = __get_spi(off);
> +       if (spi < 1) {
> +               verbose(env, "cannot pass in dynptr at an offset=%d\n", (int)(off + reg->var_off.value));

s/(int)(off + reg->var_off.value)/off/?

> +               return -EINVAL;
> +       }
> +       return spi;
> +}
> +

[...]

> @@ -2422,7 +2456,9 @@ static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *
>          */
>         if (reg->type == CONST_PTR_TO_DYNPTR)
>                 return 0;
> -       spi = get_spi(reg->off);
> +       spi = dynptr_get_spi(env, reg);
> +       if (WARN_ON_ONCE(spi < 0))
> +               return spi;
>         /* Caller ensures dynptr is valid and initialized, which means spi is in
>          * bounds and spi is the first dynptr slot. Simply mark stack slot as
>          * read.
> @@ -5946,6 +5982,11 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
>         return 0;
>  }
>
> +static bool arg_type_is_release(enum bpf_arg_type type)
> +{
> +       return type & OBJ_RELEASE;
> +}
> +

no need to move it?

>  /* There are two register types representing a bpf_dynptr, one is PTR_TO_STACK
>   * which points to a stack slot, and the other is CONST_PTR_TO_DYNPTR.
>   *
> @@ -5986,12 +6027,14 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
>         }
>         /* CONST_PTR_TO_DYNPTR already has fixed and var_off as 0 due to
>          * check_func_arg_reg_off's logic. We only need to check offset
> -        * alignment for PTR_TO_STACK.
> +        * and its alignment for PTR_TO_STACK.
>          */
> -       if (reg->type == PTR_TO_STACK && (reg->off % BPF_REG_SIZE)) {
> -               verbose(env, "cannot pass in dynptr at an offset=%d\n", reg->off);
> -               return -EINVAL;
> +       if (reg->type == PTR_TO_STACK) {
> +               err = dynptr_get_spi(env, reg);
> +               if (err < 0)
> +                       return err;
>         }
> +
>         /*  MEM_UNINIT - Points to memory that is an appropriate candidate for
>          *               constructing a mutable bpf_dynptr object.
>          *
> @@ -6070,11 +6113,6 @@ static bool arg_type_is_mem_size(enum bpf_arg_type type)
>                type == ARG_CONST_SIZE_OR_ZERO;
>  }
>
> -static bool arg_type_is_release(enum bpf_arg_type type)
> -{
> -       return type & OBJ_RELEASE;
> -}
> -
>  static bool arg_type_is_dynptr(enum bpf_arg_type type)
>  {
>         return base_type(type) == ARG_PTR_TO_DYNPTR;
> @@ -6404,8 +6442,9 @@ static u32 dynptr_ref_obj_id(struct bpf_verifier_env *env, struct bpf_reg_state

why not make dynptr_ref_obj_id return int and <0 on error? There seems
to be just one place where we call dynptr_ref_obj_id and we can check
and report error there

>
>         if (reg->type == CONST_PTR_TO_DYNPTR)
>                 return reg->ref_obj_id;
> -
> -       spi = get_spi(reg->off);
> +       spi = dynptr_get_spi(env, reg);
> +       if (WARN_ON_ONCE(spi < 0))
> +               return U32_MAX;
>         return state->stack[spi].spilled_ptr.ref_obj_id;
>  }
>

[...]

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH bpf-next v1 3/8] bpf: Fix partial dynptr stack slot reads/writes
  2023-01-01  8:33 ` [PATCH bpf-next v1 3/8] bpf: Fix partial dynptr stack slot reads/writes Kumar Kartikeya Dwivedi
@ 2023-01-04 22:42   ` Andrii Nakryiko
  2023-01-09 11:26     ` Kumar Kartikeya Dwivedi
  2023-01-05  3:06   ` Alexei Starovoitov
  2023-01-06 19:16   ` Joanne Koong
  2 siblings, 1 reply; 38+ messages in thread
From: Andrii Nakryiko @ 2023-01-04 22:42 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> Currently, while reads are disallowed for dynptr stack slots, writes are
> not. Reads don't work from both direct access and helpers, while writes
> do work in both cases, but have the effect of overwriting the slot_type.
>
> While this is fine, handling for a few edge cases is missing. Firstly,
> a user can overwrite the stack slots of dynptr partially.
>
> Consider the following layout:
> spi: [d][d][?]
>       2  1  0
>
> First slot is at spi 2, second at spi 1.
> Now, do a write of 1 to 8 bytes for spi 1.
>
> This will essentially either write STACK_MISC for all slot_types, or
> STACK_MISC and STACK_ZERO (in the case of a size < BPF_REG_SIZE partial
> write of zeroes). The end result is that the slot is scrubbed.
>
> Now, the layout is:
> spi: [d][m][?]
>       2  1  0
>
> Suppose if user initializes spi = 1 as dynptr.
> We get:
> spi: [d][d][d]
>       2  1  0
>
> But this time, both spi 2 and spi 1 have first_slot = true.
>
> Now, when passing spi 2 to a dynptr helper, the verifier will consider
> it initialized, as it does not check whether the second slot has
> first_slot == false. And spi 1 already works as normal.
>
> This effectively replaces the size + offset of the first dynptr, hence
> allowing invalid OOB reads and writes.
>
> Make a few changes to protect against this:
> When writing to PTR_TO_STACK using BPF insns, when we touch spi of a
> STACK_DYNPTR type, mark both first and second slot (regardless of which
> slot we touch) as STACK_INVALID. Reads are already prevented.
>
> Second, prevent writing to stack memory from helpers if the range may
> contain any STACK_DYNPTR slots. Reads are already prevented.
>
> For helpers, we cannot allow writes to destroy dynptrs because,
> depending on its arguments, a helper may take uninit_mem and a dynptr
> at the same time. This would mean that the helper may write to
> uninit_mem before it reads the dynptr, which would be bad.
>
> PTR_TO_MEM: [?????dd]
>
> Depending on the code inside the helper, it may end up overwriting the
> dynptr contents first and then read those as the dynptr argument.
>
> Verifier would only simulate destruction when it does byte by byte
> access simulation in check_helper_call for meta.access_size, and
> fail to catch this case, as it happens after argument checks.
>
> The same would need to be done for any other non-trivial objects created
> on the stack in the future, such as bpf_list_head on stack, or
> bpf_rb_root on stack.
>
> A common misunderstanding in the current code is that MEM_UNINIT means
> writes, but note that helpers may also perform writes without
> MEM_UNINIT; in that case the code after the meta && meta->raw_mode
> handling will complain when it sees STACK_DYNPTR. So the invalid read
> case also covers writes to potential STACK_DYNPTR slots. The only
> loophole was meta->raw_mode, which simulated writes through
> instructions that could overwrite them.
>
> A future series sequenced after this will focus on the clean up of
> helper access checks and bugs around that.
>
> Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  kernel/bpf/verifier.c | 73 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 73 insertions(+)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index ca970f80e395..b985d90505cc 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -769,6 +769,8 @@ static void mark_dynptr_cb_reg(struct bpf_reg_state *reg,
>         __mark_dynptr_reg(reg, type, true);
>  }
>
> +static void destroy_stack_slots_dynptr(struct bpf_verifier_env *env,
> +                                      struct bpf_func_state *state, int spi);
>
>  static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
>                                    enum bpf_arg_type arg_type, int insn_idx)
> @@ -858,6 +860,44 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
>         return 0;
>  }
>
> +static void destroy_stack_slots_dynptr(struct bpf_verifier_env *env,
> +                                      struct bpf_func_state *state, int spi)
> +{
> +       int i;
> +
> +       /* We always ensure that STACK_DYNPTR is never set partially,
> +        * hence just checking for slot_type[0] is enough. This is
> +        * different for STACK_SPILL, where it may be only set for
> +        * 1 byte, so code has to use is_spilled_reg.
> +        */
> +       if (state->stack[spi].slot_type[0] != STACK_DYNPTR)
> +               return;
> +       /* Reposition spi to first slot */
> +       if (!state->stack[spi].spilled_ptr.dynptr.first_slot)
> +               spi = spi + 1;
> +
> +       mark_stack_slot_scratched(env, spi);
> +       mark_stack_slot_scratched(env, spi - 1);
> +
> +       /* Writing partially to one dynptr stack slot destroys both. */
> +       for (i = 0; i < BPF_REG_SIZE; i++) {
> +               state->stack[spi].slot_type[i] = STACK_INVALID;
> +               state->stack[spi - 1].slot_type[i] = STACK_INVALID;
> +       }
> +
> +       /* Do not release reference state, we are destroying dynptr on stack,
> +        * not using some helper to release it. Just reset register.
> +        */
> +       __mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
> +       __mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
> +
> +       /* Same reason as unmark_stack_slots_dynptr above */
> +       state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
> +       state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
> +
> +       return;
> +}
> +
>  static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
>  {
>         struct bpf_func_state *state = func(env, reg);
> @@ -3384,6 +3424,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
>                         env->insn_aux_data[insn_idx].sanitize_stack_spill = true;
>         }
>
> +       destroy_stack_slots_dynptr(env, state, spi);
> +

subjective, but it feels like having an explicit slot_type !=
STACK_DYNPTR check here is better; then "destroy_stack_slots_dynptr"
actually does destruction, rather than the
"maybe_destroy_stack_slots_dynptr" you are effectively implementing here

also, shouldn't overwrite of dynptrs w/ ref_obj_id be prevented early
on with a meaningful error, instead of waiting for "unreleased
reference" error later on? for ref_obj_id dynptrs we know that you
have to call helper with OBJ_RELEASE semantics, at which point we'll
reset stack slots

am I missing something?


>         mark_stack_slot_scratched(env, spi);
>         if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
>             !register_is_null(reg) && env->bpf_capable) {
> @@ -3497,6 +3539,13 @@ static int check_stack_write_var_off(struct bpf_verifier_env *env,
>         if (err)
>                 return err;
>
> +       for (i = min_off; i < max_off; i++) {
> +               int slot, spi;
> +
> +               slot = -i - 1;
> +               spi = slot / BPF_REG_SIZE;
> +               destroy_stack_slots_dynptr(env, state, spi);
> +       }
>
>         /* Variable offset writes destroy any spilled pointers in range. */
>         for (i = min_off; i < max_off; i++) {
> @@ -5524,6 +5573,30 @@ static int check_stack_range_initialized(
>         }
>
>         if (meta && meta->raw_mode) {
> +               /* Ensure we won't be overwriting dynptrs when simulating byte
> +                * by byte access in check_helper_call using meta.access_size.
> +                * This would be a problem if we have a helper in the future
> +                * which takes:
> +                *
> +                *      helper(uninit_mem, len, dynptr)
> +                *
> +                * Now, uninit_mem may overlap with the dynptr pointer. Hence, it
> +                * may end up writing to dynptr itself when touching memory from
> +                * arg 1. This can be relaxed on a case by case basis for known
> +                * safe cases, but reject due to the possibility of aliasing by
> +                * default.
> +                */
> +               for (i = min_off; i < max_off + access_size; i++) {
> +                       slot = -i - 1;

nit: slot name is misleading, we normally call entire 8-byte slot a
"slot", while here slot is actually off, right? same above.

> +                       spi = slot / BPF_REG_SIZE;
> +                       /* raw_mode may write past allocated_stack */
> +                       if (state->allocated_stack <= slot)
> +                               continue;
> +                       if (state->stack[spi].slot_type[slot % BPF_REG_SIZE] == STACK_DYNPTR) {
> +                               verbose(env, "potential write to dynptr at off=%d disallowed\n", i);
> +                               return -EACCES;
> +                       }
> +               }
>                 meta->access_size = access_size;
>                 meta->regno = regno;
>                 return 0;
> --
> 2.39.0
>

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH bpf-next v1 4/8] bpf: Allow reinitializing unreferenced dynptr stack slots
  2023-01-01  8:33 ` [PATCH bpf-next v1 4/8] bpf: Allow reinitializing unreferenced dynptr stack slots Kumar Kartikeya Dwivedi
@ 2023-01-04 22:44   ` Andrii Nakryiko
  2023-01-06 19:33     ` Joanne Koong
  0 siblings, 1 reply; 38+ messages in thread
From: Andrii Nakryiko @ 2023-01-04 22:44 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> Consider a program like below:
>
> void prog(void)
> {
>         {
>                 struct bpf_dynptr ptr;
>                 bpf_dynptr_from_mem(...);
>         }
>         ...
>         {
>                 struct bpf_dynptr ptr;
>                 bpf_dynptr_from_mem(...);
>         }
> }
>
> Here, the C compiler, based on lifetime rules in the C standard, would
> be well within its rights to share stack storage for dynptr 'ptr', as
> the two lifetimes do not overlap in the two distinct scopes. Currently,
> such an example would be rejected by the verifier, but this is too
> strict. Instead, we should allow reinitializing over dynptr stack slots
> and forget information about the old dynptr object.
>

As mentioned in the previous patch, shouldn't we allow this only for
dynptrs that don't require OBJ_RELEASE, which would be those with
ref_obj_id == 0?


> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  kernel/bpf/verifier.c | 16 +++++++++-------
>  1 file changed, 9 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index b985d90505cc..e85e8c4be00d 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -786,6 +786,9 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
>         if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
>                 return -EINVAL;
>
> +       destroy_stack_slots_dynptr(env, state, spi);
> +       destroy_stack_slots_dynptr(env, state, spi - 1);
> +
>         for (i = 0; i < BPF_REG_SIZE; i++) {
>                 state->stack[spi].slot_type[i] = STACK_DYNPTR;
>                 state->stack[spi - 1].slot_type[i] = STACK_DYNPTR;
> @@ -901,7 +904,7 @@ static void destroy_stack_slots_dynptr(struct bpf_verifier_env *env,
>  static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
>  {
>         struct bpf_func_state *state = func(env, reg);
> -       int spi, i;
> +       int spi;
>
>         if (reg->type == CONST_PTR_TO_DYNPTR)
>                 return false;
> @@ -914,12 +917,11 @@ static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_
>         if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
>                 return true;
>
> -       for (i = 0; i < BPF_REG_SIZE; i++) {
> -               if (state->stack[spi].slot_type[i] == STACK_DYNPTR ||
> -                   state->stack[spi - 1].slot_type[i] == STACK_DYNPTR)
> -                       return false;
> -       }
> -
> +       /* We allow overwriting existing STACK_DYNPTR slots, see
> +        * mark_stack_slots_dynptr which calls destroy_stack_slots_dynptr to
> +        * ensure dynptr objects at the slots we are touching are completely
> +        * destructed before we reinitialize them for a new one.
> +        */
>         return true;
>  }
>
> --
> 2.39.0
>

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH bpf-next v1 5/8] selftests/bpf: Add dynptr pruning tests
  2023-01-01  8:33 ` [PATCH bpf-next v1 5/8] selftests/bpf: Add dynptr pruning tests Kumar Kartikeya Dwivedi
@ 2023-01-04 22:49   ` Andrii Nakryiko
  2023-01-09 11:44     ` Kumar Kartikeya Dwivedi
  0 siblings, 1 reply; 38+ messages in thread
From: Andrii Nakryiko @ 2023-01-04 22:49 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> Add verifier tests that verify the new pruning behavior for STACK_DYNPTR
> slots, and ensure that state equivalence takes into account changes to
> the old and current verifier state correctly.
>
> Without the prior fixes, both of these bugs trigger with unprivileged
> BPF mode.
>
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  tools/testing/selftests/bpf/verifier/dynptr.c | 90 +++++++++++++++++++
>  1 file changed, 90 insertions(+)
>  create mode 100644 tools/testing/selftests/bpf/verifier/dynptr.c
>
> diff --git a/tools/testing/selftests/bpf/verifier/dynptr.c b/tools/testing/selftests/bpf/verifier/dynptr.c
> new file mode 100644
> index 000000000000..798f4f7e0c57
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/verifier/dynptr.c
> @@ -0,0 +1,90 @@
> +{
> +       "dynptr: rewrite dynptr slot",
> +        .insns = {
> +        BPF_MOV64_IMM(BPF_REG_0, 0),
> +        BPF_LD_MAP_FD(BPF_REG_6, 0),
> +        BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
> +        BPF_MOV64_IMM(BPF_REG_2, 8),
> +        BPF_MOV64_IMM(BPF_REG_3, 0),
> +        BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
> +        BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -16),
> +        BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve_dynptr),
> +        BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
> +        BPF_JMP_IMM(BPF_JA, 0, 0, 1),
> +        BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0xeB9F),
> +        BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
> +        BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -16),
> +        BPF_MOV64_IMM(BPF_REG_2, 0),
> +        BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_discard_dynptr),
> +        BPF_MOV64_IMM(BPF_REG_0, 0),
> +        BPF_EXIT_INSN(),
> +        },
> +       .fixup_map_ringbuf = { 1 },
> +       .result_unpriv = REJECT,
> +       .errstr_unpriv = "unknown func bpf_ringbuf_reserve_dynptr#198",
> +       .result = REJECT,
> +       .errstr = "arg 1 is an unacquired reference",
> +},
> +{
> +       "dynptr: type confusion",
> +       .insns = {
> +       BPF_MOV64_IMM(BPF_REG_0, 0),
> +       BPF_LD_MAP_FD(BPF_REG_6, 0),
> +       BPF_LD_MAP_FD(BPF_REG_7, 0),
> +       BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
> +       BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
> +       BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
> +       BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
> +       BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
> +       BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -24),
> +       BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0xeB9FeB9F),
> +       BPF_ST_MEM(BPF_DW, BPF_REG_10, -24, 0xeB9FeB9F),
> +       BPF_MOV64_IMM(BPF_REG_4, 0),
> +       BPF_MOV64_REG(BPF_REG_8, BPF_REG_2),
> +       BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_update_elem),
> +       BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
> +       BPF_MOV64_REG(BPF_REG_2, BPF_REG_8),
> +       BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
> +       BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
> +       BPF_EXIT_INSN(),
> +       BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
> +       BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
> +       BPF_MOV64_IMM(BPF_REG_2, 8),
> +       BPF_MOV64_IMM(BPF_REG_3, 0),
> +       BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
> +       BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -16),
> +       BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
> +       BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve_dynptr),
> +       BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
> +       /* pad with insns to trigger add_new_state heuristic for straight line path */
> +       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
> +       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
> +       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
> +       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
> +       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
> +       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
> +       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
> +       BPF_JMP_IMM(BPF_JA, 0, 0, 9),
> +       BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
> +       BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0),
> +       BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
> +       BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
> +       BPF_MOV64_IMM(BPF_REG_2, 0),
> +       BPF_MOV64_IMM(BPF_REG_3, 0),
> +       BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
> +       BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -16),
> +       BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_dynptr_from_mem),
> +       BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
> +       BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -16),
> +       BPF_MOV64_IMM(BPF_REG_2, 0),
> +       BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_discard_dynptr),
> +       BPF_MOV64_IMM(BPF_REG_0, 0),
> +       BPF_EXIT_INSN(),
> +       },
> +       .fixup_map_hash_16b = { 1 },
> +       .fixup_map_ringbuf = { 3 },
> +       .result_unpriv = REJECT,
> +       .errstr_unpriv = "unknown func bpf_ringbuf_reserve_dynptr#198",
> +       .result = REJECT,
> +       .errstr = "arg 1 is an unacquired reference",
> +},

Have you tried to write these tests as embedded assembly in a .bpf.c
file, using the __attribute__((naked)) and __failure/__msg("")
infrastructure? Eduard is working towards converting test_verifier's
tests to this __naked + embedded asm approach, so we might want to
start adding new tests in that form anyway. They will also be way more
readable, and defining and passing the ringbuf map in C is much more
obvious and easy.
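
For readers following along, such a test might look roughly like the
sketch below. This is only an illustrative fragment, not part of the
series: the __naked/__failure/__msg annotations and the
__imm()/__clobber_all macros are assumed to come from the selftests'
bpf_misc.h, and the function name is made up here.

```c
/* Sketch only: assumes the selftests' bpf_misc.h macros (__naked,
 * __failure, __msg, __imm, __clobber_all); not part of this series.
 */
SEC("?raw_tp")
__failure __msg("arg 1 is an unacquired reference")
__naked void ringbuf_discard_without_reserve(void)
{
	asm volatile (
	"r1 = r10;"
	"r1 += -16;"
	"r2 = 0;"
	"call %[bpf_ringbuf_discard_dynptr];"
	"r0 = 0;"
	"exit;"
	:
	: __imm(bpf_ringbuf_discard_dynptr)
	: __clobber_all);
}
```

The ringbuf map would then be an ordinary BTF-defined map in the same
.bpf.c file instead of a fixup_map_ringbuf entry.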

> --
> 2.39.0
>

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH bpf-next v1 0/8] Dynptr fixes
  2023-01-01  8:33 [PATCH bpf-next v1 0/8] Dynptr fixes Kumar Kartikeya Dwivedi
                   ` (7 preceding siblings ...)
  2023-01-01  8:34 ` [PATCH bpf-next v1 8/8] selftests/bpf: Add dynptr helper tests Kumar Kartikeya Dwivedi
@ 2023-01-04 22:51 ` Andrii Nakryiko
  2023-01-12  1:08   ` Kumar Kartikeya Dwivedi
  8 siblings, 1 reply; 38+ messages in thread
From: Andrii Nakryiko @ 2023-01-04 22:51 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> Happy New Year!
>
> This is part 2 of https://lore.kernel.org/bpf/20221018135920.726360-1-memxor@gmail.com.
>
> Changelog:
> ----------
> Old v1 -> v1
> Old v1: https://lore.kernel.org/bpf/20221018135920.726360-1-memxor@gmail.com
>
>  * Allow overwriting dynptr stack slots from dynptr init helpers
>  * Fix a bug in alignment check where reg->var_off.value was still not included
>  * Address other minor nits
>
> Kumar Kartikeya Dwivedi (8):
>   bpf: Fix state pruning for STACK_DYNPTR stack slots
>   bpf: Fix missing var_off check for ARG_PTR_TO_DYNPTR
>   bpf: Fix partial dynptr stack slot reads/writes
>   bpf: Allow reinitializing unreferenced dynptr stack slots
>   selftests/bpf: Add dynptr pruning tests
>   selftests/bpf: Add dynptr var_off tests
>   selftests/bpf: Add dynptr partial slot overwrite tests
>   selftests/bpf: Add dynptr helper tests
>

Hey Kumar, thanks for the fixes! I left a few comments, but I was also
wondering whether you have thought about the current is_spilled_reg()
usage in the code. It makes the assumption that stack slots can be
either a scalar (MISC/ZERO/INVALID) or STACK_SPILL. With STACK_DYNPTR
that is no longer the case, so it feels like we need to audit all the
places where we assume a stack spill and see if anything should be
fixed. Were you already looking at this?

>  kernel/bpf/verifier.c                         | 243 ++++++++++++++++--
>  .../bpf/prog_tests/kfunc_dynptr_param.c       |   2 +-
>  .../testing/selftests/bpf/progs/dynptr_fail.c |  68 ++++-
>  tools/testing/selftests/bpf/verifier/dynptr.c | 182 +++++++++++++
>  4 files changed, 464 insertions(+), 31 deletions(-)
>  create mode 100644 tools/testing/selftests/bpf/verifier/dynptr.c
>
>
> base-commit: bb5747cfbc4b7fe29621ca6cd4a695d2723bf2e8
> --
> 2.39.0
>


* Re: [PATCH bpf-next v1 3/8] bpf: Fix partial dynptr stack slot reads/writes
  2023-01-01  8:33 ` [PATCH bpf-next v1 3/8] bpf: Fix partial dynptr stack slot reads/writes Kumar Kartikeya Dwivedi
  2023-01-04 22:42   ` Andrii Nakryiko
@ 2023-01-05  3:06   ` Alexei Starovoitov
  2023-01-09 11:52     ` Kumar Kartikeya Dwivedi
  2023-01-06 19:16   ` Joanne Koong
  2 siblings, 1 reply; 38+ messages in thread
From: Alexei Starovoitov @ 2023-01-05  3:06 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

On Sun, Jan 01, 2023 at 02:03:57PM +0530, Kumar Kartikeya Dwivedi wrote:
> Currently, while reads are disallowed for dynptr stack slots, writes are
> not. Reads don't work from both direct access and helpers, while writes
> do work in both cases, but have the effect of overwriting the slot_type.

Unrelated to this patch set, but disallowing reads from dynptr slots
seems like an unnecessary restriction.
We allow reads from spilled slots, and conceptually dynptr slots should
fall into the is_spilled_reg() category in check_stack_read_*().

We already can do:
d = bpf_rdonly_cast(dynptr, bpf_core_type_id_kernel(struct bpf_dynptr_kern))
d->size;
and there is really no need to add bpf_dynptr* accessors either as helpers or as kfuncs.
All accessors can simply be 'static inline' pure BPF functions in bpf_helpers.h:
automatic inlining and zero kernel-side maintenance.

With the verifier allowing reads into dynptrs, we can also enable
bpf_cast_to_kern_ctx() to convert struct bpf_dynptr to struct
bpf_dynptr_kern and allow even faster reads.
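
A quick user-space mock of that idea (the struct layouts below are
simplified assumptions for illustration, not the real kernel
definitions):

```c
#include <stdint.h>

/* Opaque user-visible dynptr, as in the uapi header: sized storage only. */
struct bpf_dynptr {
	uint64_t _opaque[2];
};

/* Simplified stand-in for the kernel-side struct bpf_dynptr_kern layout;
 * the real field order and packing may differ.
 */
struct bpf_dynptr_kern {
	void *data;
	uint32_t size;
	uint32_t offset;
};

/* The kind of 'static inline' accessor described above: once the verifier
 * permits reads of dynptr slots, fetching the size is just a cast plus a
 * load, with no kernel-side helper to maintain.
 */
static inline uint32_t bpf_dynptr_get_size(const struct bpf_dynptr *ptr)
{
	return ((const struct bpf_dynptr_kern *)ptr)->size;
}
```

In a real program the cast would go through bpf_rdonly_cast() as shown
above, so the verifier can type the resulting pointer.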


* Re: [PATCH bpf-next v1 1/8] bpf: Fix state pruning for STACK_DYNPTR stack slots
  2023-01-01  8:33 ` [PATCH bpf-next v1 1/8] bpf: Fix state pruning for STACK_DYNPTR stack slots Kumar Kartikeya Dwivedi
  2023-01-02 19:28   ` Eduard Zingerman
  2023-01-04 22:24   ` Andrii Nakryiko
@ 2023-01-06  0:18   ` Joanne Koong
  2023-01-09 11:17     ` Kumar Kartikeya Dwivedi
  2 siblings, 1 reply; 38+ messages in thread
From: Joanne Koong @ 2023-01-06  0:18 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, David Vernet, Eduard Zingerman

On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> The root of the problem is missing liveness marking for STACK_DYNPTR
> slots. This leads to all kinds of problems inside stacksafe.
>
> The verifier by default inside stacksafe ignores spilled_ptr in stack
> slots which do not have REG_LIVE_READ marks. Since this is being checked
> in the 'old' explored state, it must have already done clean_live_states
> for this old bpf_func_state. Hence, it won't be receiving any more
> liveness marks from to be explored insns (it has received REG_LIVE_DONE
> marking from liveness point of view).
>
> What this means is that verifier considers that it's safe to not compare
> the stack slot if it was never read by children states. While liveness
> marks are usually propagated correctly following the parentage chain for
> spilled registers (SCALAR_VALUE and PTR_* types), the same is not the
> case for STACK_DYNPTR.
>
> clean_live_states hence simply rewrites these stack slots to the type
> STACK_INVALID since it sees no REG_LIVE_READ marks.
>
> The end result is that we will never see STACK_DYNPTR slots in explored
> state. Even if verifier was conservatively matching !REG_LIVE_READ
> slots, very next check continuing the stacksafe loop on seeing
> STACK_INVALID would again prevent further checks.
>
> Now as long as verifier stores an explored state which we can compare to
> when reaching a pruning point, we can abuse this bug to make verifier
> prune search for obviously unsafe paths using STACK_DYNPTR slots
> thinking they are never used hence safe.
>
> Doing this in unprivileged mode is a bit challenging. add_new_state is
> only set when seeing BPF_F_TEST_STATE_FREQ (which requires privileges)
> or when jmps_processed difference is >= 2 and insn_processed difference
> is >= 8. So coming up with the unprivileged case requires a little more
> work, but it is still totally possible. The test case being discussed
> below triggers the heuristic even in unprivileged mode.
>
> However, it no longer works since commit
> 8addbfc7b308 ("bpf: Gate dynptr API behind CAP_BPF").
>
> Let's try to study the test step by step.
>
> Consider the following program (C style BPF ASM):
>
> 0  r0 = 0;
> 1  r6 = &ringbuf_map;
> 3  r1 = r6;
> 4  r2 = 8;
> 5  r3 = 0;
> 6  r4 = r10;
> 7  r4 += -16;
> 8  call bpf_ringbuf_reserve_dynptr;
> 9  if r0 == 0 goto pc+1;
> 10 goto pc+1;
> 11 *(r10 - 16) = 0xeB9F;
> 12 r1 = r10;
> 13 r1 += -16;
> 14 r2 = 0;
> 15 call bpf_ringbuf_discard_dynptr;
> 16 r0 = 0;
> 17 exit;
>
> We know that insn 12 will be a pruning point, hence if we force
> add_new_state for it, it will first verify the following path as
> safe in straight line exploration:
> 0 1 3 4 5 6 7 8 9 -> 10 -> (12) 13 14 15 16 17
>
> Then, when we arrive at insn 12 from the following path:
> 0 1 3 4 5 6 7 8 9 -> 11 (12)
>
> We will find a state that has been verified as safe already at insn 12.
> Since register state is same at this point, regsafe will pass. Next, in
> stacksafe, for spi = 0 and spi = 1 (location of our dynptr) is skipped
> seeing !REG_LIVE_READ. The rest matches, so stacksafe returns true.
> Next, refsafe is also true as reference state is unchanged in both
> states.
>
> The states are considered equivalent and search is pruned.
>
> Hence, we are able to construct a dynptr with arbitrary contents and use
> the dynptr API to operate on this arbitrary pointer and arbitrary size +
> offset.
>
> To fix this, first define a mark_dynptr_read function that propagates
> liveness marks whenever a valid initialized dynptr is accessed by dynptr
> helpers. REG_LIVE_WRITTEN is marked whenever we initialize an
> uninitialized dynptr. This is done in mark_stack_slots_dynptr. It allows
> screening off mark_reg_read and not propagating marks upwards from that
> point.
>
> This ensures that we either set REG_LIVE_READ64 on both dynptr slots, or
> none, so clean_live_states either sets both slots to STACK_INVALID or
> none of them. This is the invariant the checks inside stacksafe rely on.
>
> Next, do a complete comparison of both stack slots whenever they have
> STACK_DYNPTR. Compare the dynptr type stored in the spilled_ptr, and
> also whether both form the same first_slot. Only then is the later path
> safe.
>
> Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  kernel/bpf/verifier.c | 73 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 73 insertions(+)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 4a25375ebb0d..f7248235e119 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -781,6 +781,9 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
>                 state->stack[spi - 1].spilled_ptr.ref_obj_id = id;
>         }
>
> +       state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
> +       state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
> +
>         return 0;
>  }
>
> @@ -805,6 +808,26 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
>
>         __mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
>         __mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
> +
> +       /* Why do we need to set REG_LIVE_WRITTEN for STACK_INVALID slot?
> +        *
> +        * While we don't allow reading STACK_INVALID, it is still possible to
> +        * do <8 byte writes marking some but not all slots as STACK_MISC. Then,
> +        * helpers or insns can do partial read of that part without failing,
> +        * but check_stack_range_initialized, check_stack_read_var_off, and
> +        * check_stack_read_fixed_off will do mark_reg_read for all 8-bytes of
> +        * the slot conservatively. Hence we need to screen off those liveness
> +        * marking walks.
> +        *
> +        * This was not a problem before because STACK_INVALID is only set by
> +        * default, or in clean_live_states after REG_LIVE_DONE, not randomly
> +        * during verifier state exploration. Hence, for this case parentage

Where does it get set randomly during verifier state exploration for this case?

> +        * chain will still be live, while earlier reg->parent was NULL, so we

What does "live" in "parentage chain will still be live" mean here?
What does "earlier" in "earlier reg->parent" refer to here, and why
was the earlier reg->parent NULL?

> +        * need REG_LIVE_WRITTEN to screen off read marker propagation.
> +        */
> +       state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
> +       state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
> +
>         return 0;
>  }
>
> @@ -2388,6 +2411,30 @@ static int mark_reg_read(struct bpf_verifier_env *env,
>         return 0;
>  }
>
> +static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> +{
> +       struct bpf_func_state *state = func(env, reg);
> +       int spi, ret;
> +
> +       /* For CONST_PTR_TO_DYNPTR, it must have already been done by
> +        * check_reg_arg in check_helper_call and mark_btf_func_reg_size in
> +        * check_kfunc_call.
> +        */
> +       if (reg->type == CONST_PTR_TO_DYNPTR)
> +               return 0;
> +       spi = get_spi(reg->off);
> +       /* Caller ensures dynptr is valid and initialized, which means spi is in
> +        * bounds and spi is the first dynptr slot. Simply mark stack slot as
> +        * read.
> +        */
> +       ret = mark_reg_read(env, &state->stack[spi].spilled_ptr,
> +                           state->stack[spi].spilled_ptr.parent, REG_LIVE_READ64);
> +       if (ret)
> +               return ret;
> +       return mark_reg_read(env, &state->stack[spi - 1].spilled_ptr,
> +                            state->stack[spi - 1].spilled_ptr.parent, REG_LIVE_READ64);
> +}
> +
>  /* This function is supposed to be used by the following 32-bit optimization
>   * code only. It returns TRUE if the source or destination register operates
>   * on 64-bit, otherwise return FALSE.
> @@ -5928,6 +5975,7 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
>                         enum bpf_arg_type arg_type, struct bpf_call_arg_meta *meta)
>  {
>         struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
> +       int err;
>
>         /* MEM_UNINIT and MEM_RDONLY are exclusive, when applied to an
>          * ARG_PTR_TO_DYNPTR (or ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_*):
> @@ -6008,6 +6056,10 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
>                                 err_extra, regno);
>                         return -EINVAL;
>                 }
> +
> +               err = mark_dynptr_read(env, reg);
> +               if (err)
> +                       return err;
>         }
>         return 0;
>  }
> @@ -13204,6 +13256,27 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
>                          * return false to continue verification of this path
>                          */
>                         return false;
> +               /* Both are same slot_type, but STACK_DYNPTR requires more
> +                * checks before it can be considered safe.
> +                */
> +               if (old->stack[spi].slot_type[i % BPF_REG_SIZE] == STACK_DYNPTR) {
> +                       /* If both are STACK_DYNPTR, type must be same */
> +                       if (old->stack[spi].spilled_ptr.dynptr.type != cur->stack[spi].spilled_ptr.dynptr.type)
> +                               return false;
> +                       /* Both should also have first slot at same spi */
> +                       if (old->stack[spi].spilled_ptr.dynptr.first_slot != cur->stack[spi].spilled_ptr.dynptr.first_slot)
> +                               return false;
> +                       /* ids should be same */
> +                       if (!!old->stack[spi].spilled_ptr.ref_obj_id != !!cur->stack[spi].spilled_ptr.ref_obj_id)
> +                               return false;
> +                       if (old->stack[spi].spilled_ptr.ref_obj_id &&
> +                           !check_ids(old->stack[spi].spilled_ptr.ref_obj_id,
> +                                      cur->stack[spi].spilled_ptr.ref_obj_id, idmap))
> +                               return false;
> +                       WARN_ON_ONCE(i % BPF_REG_SIZE);
> +                       i += BPF_REG_SIZE - 1;
> +                       continue;
> +               }
>                 if (i % BPF_REG_SIZE != BPF_REG_SIZE - 1)
>                         continue;
>                 if (!is_spilled_reg(&old->stack[spi]))
> --
> 2.39.0
>


* Re: [PATCH bpf-next v1 2/8] bpf: Fix missing var_off check for ARG_PTR_TO_DYNPTR
  2023-01-01  8:33 ` [PATCH bpf-next v1 2/8] bpf: Fix missing var_off check for ARG_PTR_TO_DYNPTR Kumar Kartikeya Dwivedi
  2023-01-04 22:32   ` Andrii Nakryiko
@ 2023-01-06  0:57   ` Joanne Koong
  2023-01-06 17:56     ` Joanne Koong
  2023-01-09 11:21     ` Kumar Kartikeya Dwivedi
  1 sibling, 2 replies; 38+ messages in thread
From: Joanne Koong @ 2023-01-06  0:57 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, David Vernet, Eduard Zingerman

On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> Currently, the dynptr function is not checking the variable offset part
> of PTR_TO_STACK that it needs to check. The fixed offset is considered
> when computing the stack pointer index, but if the variable offset was
> not a constant (such that it could not be accumulated in reg->off), we
> will end up with a discrepancy where the runtime pointer does not point to the
> actual stack slot we mark as STACK_DYNPTR.
>
> It is impossible to precisely track dynptr state when variable offset is
> not constant, hence, just like bpf_timer, kptr, bpf_spin_lock, etc.
> simply reject the case where reg->var_off is not constant. Then,
> consider both reg->off and reg->var_off.value when computing the stack
> pointer index.
>
> A new helper dynptr_get_spi is introduced to hide over these details
> since the dynptr needs to be located in multiple places outside the
> process_dynptr_func checks, hence once we know it's a PTR_TO_STACK, we
> need to enforce these checks in all places.
>
> Note that it is disallowed for unprivileged users to have a non-constant
> var_off, so this problem should only be possible to trigger from
> programs having CAP_PERFMON. However, its effects can vary.
>
> Without the fix, it is possible to replace the contents of the dynptr
> arbitrarily by making verifier mark different stack slots than actual
> location and then doing writes to the actual stack address of dynptr at
> runtime.
>
> Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  kernel/bpf/verifier.c                         | 83 ++++++++++++++-----
>  .../bpf/prog_tests/kfunc_dynptr_param.c       |  2 +-
>  .../testing/selftests/bpf/progs/dynptr_fail.c |  6 +-
>  3 files changed, 66 insertions(+), 25 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index f7248235e119..ca970f80e395 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -638,11 +638,34 @@ static void print_liveness(struct bpf_verifier_env *env,
>                 verbose(env, "D");
>  }
>
> -static int get_spi(s32 off)
> +static int __get_spi(s32 off)
>  {
>         return (-off - 1) / BPF_REG_SIZE;
>  }
>
> +static int dynptr_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> +{
> +       int off, spi;
> +
> +       if (!tnum_is_const(reg->var_off)) {
> +               verbose(env, "dynptr has to be at the constant offset\n");
> +               return -EINVAL;
> +       }
> +
> +       off = reg->off + reg->var_off.value;
> +       if (off % BPF_REG_SIZE) {
> +               verbose(env, "cannot pass in dynptr at an offset=%d\n", reg->off);

I think you meant off instead of reg->off?

> +               return -EINVAL;
> +       }
> +
> +       spi = __get_spi(off);
> +       if (spi < 1) {
> +               verbose(env, "cannot pass in dynptr at an offset=%d\n", (int)(off + reg->var_off.value));

I think you meant off instead of off + reg->var_off.value

> +               return -EINVAL;
> +       }

I think this if (spi < 1) check should have the same logic
is_spi_bounds_valid() does (e.g. checking against the total allocated
slots as well). I think we can combine is_spi_bounds_valid() with this
function, and then drop the separate check at every place we currently
call is_spi_bounds_valid().
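
To make the slot arithmetic being discussed concrete, here is a small
standalone sketch; get_spi() mirrors the quoted __get_spi(), while
dynptr_spi_ok() is a hypothetical condensation of the alignment and
spi >= 1 checks (the allocated-stack bound raised above is deliberately
left out):

```c
#include <stdbool.h>

#define BPF_REG_SIZE 8

/* Mirrors __get_spi() from the quoted hunk: stack offsets are negative,
 * so off = -8 maps to slot index 0, off = -16 to slot index 1, etc.
 */
static int get_spi(int off)
{
	return (-off - 1) / BPF_REG_SIZE;
}

/* A dynptr occupies two slots, spi and spi - 1, which is why the quoted
 * code rejects spi < 1: at off = -8 there is no room for the second slot.
 */
static bool dynptr_spi_ok(int off)
{
	return (off % BPF_REG_SIZE) == 0 && get_spi(off) >= 1;
}
```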

> +       return spi;
> +}
> +
>  static bool is_spi_bounds_valid(struct bpf_func_state *state, int spi, int nr_slots)
>  {
>         int allocated_slots = state->allocated_stack / BPF_REG_SIZE;
> @@ -754,7 +777,9 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
>         enum bpf_dynptr_type type;
>         int spi, i, id;
>
> -       spi = get_spi(reg->off);
> +       spi = dynptr_get_spi(env, reg);
> +       if (spi < 0)
> +               return spi;
>
>         if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
>                 return -EINVAL;
> @@ -792,7 +817,9 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
>         struct bpf_func_state *state = func(env, reg);
>         int spi, i;
>
> -       spi = get_spi(reg->off);
> +       spi = dynptr_get_spi(env, reg);
> +       if (spi < 0)
> +               return spi;
>
>         if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
>                 return -EINVAL;
> @@ -839,7 +866,11 @@ static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_
>         if (reg->type == CONST_PTR_TO_DYNPTR)
>                 return false;
>
> -       spi = get_spi(reg->off);
> +       spi = dynptr_get_spi(env, reg);
> +       if (spi < 0)
> +               return spi;
> +
> +       /* We will do check_mem_access to check and update stack bounds later */
>         if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
>                 return true;
>
> @@ -855,14 +886,15 @@ static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_
>  static bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
>  {
>         struct bpf_func_state *state = func(env, reg);
> -       int spi;
> -       int i;
> +       int spi, i;
>
>         /* This already represents first slot of initialized bpf_dynptr */
>         if (reg->type == CONST_PTR_TO_DYNPTR)
>                 return true;
>
> -       spi = get_spi(reg->off);
> +       spi = dynptr_get_spi(env, reg);
> +       if (spi < 0)
> +               return false;
>         if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS) ||
>             !state->stack[spi].spilled_ptr.dynptr.first_slot)
>                 return false;
> @@ -891,7 +923,9 @@ static bool is_dynptr_type_expected(struct bpf_verifier_env *env, struct bpf_reg
>         if (reg->type == CONST_PTR_TO_DYNPTR) {
>                 return reg->dynptr.type == dynptr_type;
>         } else {
> -               spi = get_spi(reg->off);
> +               spi = dynptr_get_spi(env, reg);
> +               if (WARN_ON_ONCE(spi < 0))
> +                       return false;
>                 return state->stack[spi].spilled_ptr.dynptr.type == dynptr_type;
>         }
>  }
> @@ -2422,7 +2456,9 @@ static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *
>          */
>         if (reg->type == CONST_PTR_TO_DYNPTR)
>                 return 0;
> -       spi = get_spi(reg->off);
> +       spi = dynptr_get_spi(env, reg);
> +       if (WARN_ON_ONCE(spi < 0))
> +               return spi;
>         /* Caller ensures dynptr is valid and initialized, which means spi is in
>          * bounds and spi is the first dynptr slot. Simply mark stack slot as
>          * read.
> @@ -5946,6 +5982,11 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
>         return 0;
>  }
>
> +static bool arg_type_is_release(enum bpf_arg_type type)
> +{
> +       return type & OBJ_RELEASE;
> +}

nit: I don't think you need this arg_type_is_release() change

> +
>  /* There are two register types representing a bpf_dynptr, one is PTR_TO_STACK
>   * which points to a stack slot, and the other is CONST_PTR_TO_DYNPTR.
>   *
> @@ -5986,12 +6027,14 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
>         }
>         /* CONST_PTR_TO_DYNPTR already has fixed and var_off as 0 due to
>          * check_func_arg_reg_off's logic. We only need to check offset
> -        * alignment for PTR_TO_STACK.
> +        * and its alignment for PTR_TO_STACK.
>          */
> -       if (reg->type == PTR_TO_STACK && (reg->off % BPF_REG_SIZE)) {
> -               verbose(env, "cannot pass in dynptr at an offset=%d\n", reg->off);
> -               return -EINVAL;

> +       if (reg->type == PTR_TO_STACK) {
> +               err = dynptr_get_spi(env, reg);
> +               if (err < 0)
> +                       return err;
>         }

nit: if we do something like

If (reg->type == PTR_TO_STACK) {
    spi = dynptr_get_spi(env, reg);
    if (spi < 0)
        return spi;
} else {
    spi = __get_spi(reg->off);
}

then we can just pass in spi to is_dynptr_reg_valid_uninit() and
is_dynptr_reg_valid_init() instead of having to recompute/check them
again

> +
>         /*  MEM_UNINIT - Points to memory that is an appropriate candidate for
>          *               constructing a mutable bpf_dynptr object.
>          *
> @@ -6070,11 +6113,6 @@ static bool arg_type_is_mem_size(enum bpf_arg_type type)
>                type == ARG_CONST_SIZE_OR_ZERO;
>  }
>
> -static bool arg_type_is_release(enum bpf_arg_type type)
> -{
> -       return type & OBJ_RELEASE;
> -}
> -
>  static bool arg_type_is_dynptr(enum bpf_arg_type type)
>  {
>         return base_type(type) == ARG_PTR_TO_DYNPTR;
> @@ -6404,8 +6442,9 @@ static u32 dynptr_ref_obj_id(struct bpf_verifier_env *env, struct bpf_reg_state
>
>         if (reg->type == CONST_PTR_TO_DYNPTR)
>                 return reg->ref_obj_id;
> -
> -       spi = get_spi(reg->off);
> +       spi = dynptr_get_spi(env, reg);
> +       if (WARN_ON_ONCE(spi < 0))
> +               return U32_MAX;
>         return state->stack[spi].spilled_ptr.ref_obj_id;
>  }
>
> @@ -6479,7 +6518,9 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
>                          * PTR_TO_STACK.
>                          */
>                         if (reg->type == PTR_TO_STACK) {
> -                               spi = get_spi(reg->off);
> +                               spi = dynptr_get_spi(env, reg);
> +                               if (spi < 0)
> +                                       return spi;
>                                 if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS) ||
>                                     !state->stack[spi].spilled_ptr.ref_obj_id) {
>                                         verbose(env, "arg %d is an unacquired reference\n", regno);
> diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c b/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c
> index a9229260a6ce..72800b1e8395 100644
> --- a/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c
> +++ b/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c
> @@ -18,7 +18,7 @@ static struct {
>         const char *expected_verifier_err_msg;
>         int expected_runtime_err;
>  } kfunc_dynptr_tests[] = {
> -       {"not_valid_dynptr", "Expected an initialized dynptr as arg #1", 0},
> +       {"not_valid_dynptr", "cannot pass in dynptr at an offset=-8", 0},
>         {"not_ptr_to_stack", "arg#0 expected pointer to stack or dynptr_ptr", 0},
>         {"dynptr_data_null", NULL, -EBADMSG},
>  };
> diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
> index 78debc1b3820..32df3647b794 100644
> --- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
> +++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
> @@ -382,7 +382,7 @@ int invalid_helper1(void *ctx)
>
>  /* A dynptr can't be passed into a helper function at a non-zero offset */
>  SEC("?raw_tp")
> -__failure __msg("Expected an initialized dynptr as arg #3")
> +__failure __msg("cannot pass in dynptr at an offset=-8")
>  int invalid_helper2(void *ctx)
>  {
>         struct bpf_dynptr ptr;
> @@ -444,7 +444,7 @@ int invalid_write2(void *ctx)
>   * non-const offset
>   */
>  SEC("?raw_tp")
> -__failure __msg("Expected an initialized dynptr as arg #1")
> +__failure __msg("arg 1 is an unacquired reference")
>  int invalid_write3(void *ctx)
>  {
>         struct bpf_dynptr ptr;
> @@ -584,7 +584,7 @@ int invalid_read4(void *ctx)
>
>  /* Initializing a dynptr on an offset should fail */
>  SEC("?raw_tp")
> -__failure __msg("invalid write to stack")
> +__failure __msg("cannot pass in dynptr at an offset=0")
>  int invalid_offset(void *ctx)
>  {
>         struct bpf_dynptr ptr;
> --
> 2.39.0
>


* Re: [PATCH bpf-next v1 2/8] bpf: Fix missing var_off check for ARG_PTR_TO_DYNPTR
  2023-01-06  0:57   ` Joanne Koong
@ 2023-01-06 17:56     ` Joanne Koong
  2023-01-09 11:21     ` Kumar Kartikeya Dwivedi
  1 sibling, 0 replies; 38+ messages in thread
From: Joanne Koong @ 2023-01-06 17:56 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, David Vernet, Eduard Zingerman

On Thu, Jan 5, 2023 at 4:57 PM Joanne Koong <joannelkoong@gmail.com> wrote:
>
> On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> >
> > Currently, the dynptr function is not checking the variable offset part
> > of PTR_TO_STACK that it needs to check. The fixed offset is considered
> > when computing the stack pointer index, but if the variable offset was
> > not a constant (such that it could not be accumulated in reg->off), we
> > will end up with a discrepancy where the runtime pointer does not point to the
> > actual stack slot we mark as STACK_DYNPTR.
> >
> > It is impossible to precisely track dynptr state when variable offset is
> > not constant, hence, just like bpf_timer, kptr, bpf_spin_lock, etc.
> > simply reject the case where reg->var_off is not constant. Then,
> > consider both reg->off and reg->var_off.value when computing the stack
> > pointer index.
> >
> > A new helper dynptr_get_spi is introduced to hide over these details
> > since the dynptr needs to be located in multiple places outside the
> > process_dynptr_func checks, hence once we know it's a PTR_TO_STACK, we
> > need to enforce these checks in all places.
> >
> > Note that it is disallowed for unprivileged users to have a non-constant
> > var_off, so this problem should only be possible to trigger from
> > programs having CAP_PERFMON. However, its effects can vary.
> >
> > Without the fix, it is possible to replace the contents of the dynptr
> > arbitrarily by making verifier mark different stack slots than actual
> > location and then doing writes to the actual stack address of dynptr at
> > runtime.
> >
> > Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
> > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > ---
> >  kernel/bpf/verifier.c                         | 83 ++++++++++++++-----
> >  .../bpf/prog_tests/kfunc_dynptr_param.c       |  2 +-
> >  .../testing/selftests/bpf/progs/dynptr_fail.c |  6 +-
> >  3 files changed, 66 insertions(+), 25 deletions(-)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index f7248235e119..ca970f80e395 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -638,11 +638,34 @@ static void print_liveness(struct bpf_verifier_env *env,
> >                 verbose(env, "D");
> >  }
> >
> > -static int get_spi(s32 off)
> > +static int __get_spi(s32 off)
> >  {
> >         return (-off - 1) / BPF_REG_SIZE;
> >  }
> >
> > +static int dynptr_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> > +{
> > +       int off, spi;
> > +
> > +       if (!tnum_is_const(reg->var_off)) {
> > +               verbose(env, "dynptr has to be at the constant offset\n");
> > +               return -EINVAL;
> > +       }
> > +
> > +       off = reg->off + reg->var_off.value;
> > +       if (off % BPF_REG_SIZE) {
> > +               verbose(env, "cannot pass in dynptr at an offset=%d\n", reg->off);
>
> I think you meant off instead of reg->off?
>
> > +               return -EINVAL;
> > +       }
> > +
> > +       spi = __get_spi(off);
> > +       if (spi < 1) {
> > +               verbose(env, "cannot pass in dynptr at an offset=%d\n", (int)(off + reg->var_off.value));
>
> I think you meant off instead of off + reg->var_off.value
>
> > +               return -EINVAL;
> > +       }
>
> I think this if (spi < 1) check should have the same logic
> is_spi_bounds_valid() does (eg checking against total allocated slots
> as well). I think we can combine is_spi_bounds_valid() with this
> function and then every place we call is_spi_bounds_valid()
>
missing a word: and then remove* every place we call is_spi_bounds_valid()

> > +       return spi;
> > +}
> > +
> >  static bool is_spi_bounds_valid(struct bpf_func_state *state, int spi, int nr_slots)
> >  {
> >         int allocated_slots = state->allocated_stack / BPF_REG_SIZE;
> > @@ -754,7 +777,9 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
> >         enum bpf_dynptr_type type;
> >         int spi, i, id;
> >
> > -       spi = get_spi(reg->off);
> > +       spi = dynptr_get_spi(env, reg);
> > +       if (spi < 0)
> > +               return spi;
> >
> >         if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
> >                 return -EINVAL;
> > @@ -792,7 +817,9 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
> >         struct bpf_func_state *state = func(env, reg);
> >         int spi, i;
> >
> > -       spi = get_spi(reg->off);
> > +       spi = dynptr_get_spi(env, reg);
> > +       if (spi < 0)
> > +               return spi;
> >
> >         if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
> >                 return -EINVAL;
> > @@ -839,7 +866,11 @@ static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_
> >         if (reg->type == CONST_PTR_TO_DYNPTR)
> >                 return false;
> >
> > -       spi = get_spi(reg->off);
> > +       spi = dynptr_get_spi(env, reg);
> > +       if (spi < 0)
> > +               return spi;
> > +
> > +       /* We will do check_mem_access to check and update stack bounds later */
> >         if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
> >                 return true;
> >
> > @@ -855,14 +886,15 @@ static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_
> >  static bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> >  {
> >         struct bpf_func_state *state = func(env, reg);
> > -       int spi;
> > -       int i;
> > +       int spi, i;
> >
> >         /* This already represents first slot of initialized bpf_dynptr */
> >         if (reg->type == CONST_PTR_TO_DYNPTR)
> >                 return true;
> >
> > -       spi = get_spi(reg->off);
> > +       spi = dynptr_get_spi(env, reg);
> > +       if (spi < 0)
> > +               return false;
> >         if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS) ||
> >             !state->stack[spi].spilled_ptr.dynptr.first_slot)
> >                 return false;
> > @@ -891,7 +923,9 @@ static bool is_dynptr_type_expected(struct bpf_verifier_env *env, struct bpf_reg
> >         if (reg->type == CONST_PTR_TO_DYNPTR) {
> >                 return reg->dynptr.type == dynptr_type;
> >         } else {
> > -               spi = get_spi(reg->off);
> > +               spi = dynptr_get_spi(env, reg);
> > +               if (WARN_ON_ONCE(spi < 0))
> > +                       return false;
> >                 return state->stack[spi].spilled_ptr.dynptr.type == dynptr_type;
> >         }
> >  }
> > @@ -2422,7 +2456,9 @@ static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *
> >          */
> >         if (reg->type == CONST_PTR_TO_DYNPTR)
> >                 return 0;
> > -       spi = get_spi(reg->off);
> > +       spi = dynptr_get_spi(env, reg);
> > +       if (WARN_ON_ONCE(spi < 0))
> > +               return spi;
> >         /* Caller ensures dynptr is valid and initialized, which means spi is in
> >          * bounds and spi is the first dynptr slot. Simply mark stack slot as
> >          * read.
> > @@ -5946,6 +5982,11 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
> >         return 0;
> >  }
> >
> > +static bool arg_type_is_release(enum bpf_arg_type type)
> > +{
> > +       return type & OBJ_RELEASE;
> > +}
>
> nit: I dont think you need this arg_type_is_release() change
>
> > +
> >  /* There are two register types representing a bpf_dynptr, one is PTR_TO_STACK
> >   * which points to a stack slot, and the other is CONST_PTR_TO_DYNPTR.
> >   *
> > @@ -5986,12 +6027,14 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
> >         }
> >         /* CONST_PTR_TO_DYNPTR already has fixed and var_off as 0 due to
> >          * check_func_arg_reg_off's logic. We only need to check offset
> > -        * alignment for PTR_TO_STACK.
> > +        * and its alignment for PTR_TO_STACK.
> >          */
> > -       if (reg->type == PTR_TO_STACK && (reg->off % BPF_REG_SIZE)) {
> > -               verbose(env, "cannot pass in dynptr at an offset=%d\n", reg->off);
> > -               return -EINVAL;
>
> > +       if (reg->type == PTR_TO_STACK) {
> > +               err = dynptr_get_spi(env, reg);
> > +               if (err < 0)
> > +                       return err;
> >         }
>
> nit: if we do something like
>
> If (reg->type == PTR_TO_STACK) {
>     spi = dynptr_get_spi(env, reg);
>     if (spi < 0)
>         return spi;
> } else {
>     spi = __get_spi(reg->off);
> }
>
> then we can just pass in spi to is_dynptr_reg_valid_uninit() and
> is_dynptr_reg_valid_init() instead of having to recompute/check them
> again
>
> > +
> >         /*  MEM_UNINIT - Points to memory that is an appropriate candidate for
> >          *               constructing a mutable bpf_dynptr object.
> >          *
> > @@ -6070,11 +6113,6 @@ static bool arg_type_is_mem_size(enum bpf_arg_type type)
> >                type == ARG_CONST_SIZE_OR_ZERO;
> >  }
> >
> > -static bool arg_type_is_release(enum bpf_arg_type type)
> > -{
> > -       return type & OBJ_RELEASE;
> > -}
> > -
> >  static bool arg_type_is_dynptr(enum bpf_arg_type type)
> >  {
> >         return base_type(type) == ARG_PTR_TO_DYNPTR;
> > @@ -6404,8 +6442,9 @@ static u32 dynptr_ref_obj_id(struct bpf_verifier_env *env, struct bpf_reg_state
> >
> >         if (reg->type == CONST_PTR_TO_DYNPTR)
> >                 return reg->ref_obj_id;
> > -
> > -       spi = get_spi(reg->off);
> > +       spi = dynptr_get_spi(env, reg);
> > +       if (WARN_ON_ONCE(spi < 0))
> > +               return U32_MAX;
> >         return state->stack[spi].spilled_ptr.ref_obj_id;
> >  }
> >
> > @@ -6479,7 +6518,9 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
> >                          * PTR_TO_STACK.
> >                          */
> >                         if (reg->type == PTR_TO_STACK) {
> > -                               spi = get_spi(reg->off);
> > +                               spi = dynptr_get_spi(env, reg);
> > +                               if (spi < 0)
> > +                                       return spi;
> >                                 if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS) ||
> >                                     !state->stack[spi].spilled_ptr.ref_obj_id) {
> >                                         verbose(env, "arg %d is an unacquired reference\n", regno);
> > diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c b/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c
> > index a9229260a6ce..72800b1e8395 100644
> > --- a/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c
> > +++ b/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c
> > @@ -18,7 +18,7 @@ static struct {
> >         const char *expected_verifier_err_msg;
> >         int expected_runtime_err;
> >  } kfunc_dynptr_tests[] = {
> > -       {"not_valid_dynptr", "Expected an initialized dynptr as arg #1", 0},
> > +       {"not_valid_dynptr", "cannot pass in dynptr at an offset=-8", 0},
> >         {"not_ptr_to_stack", "arg#0 expected pointer to stack or dynptr_ptr", 0},
> >         {"dynptr_data_null", NULL, -EBADMSG},
> >  };
> > diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
> > index 78debc1b3820..32df3647b794 100644
> > --- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
> > +++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
> > @@ -382,7 +382,7 @@ int invalid_helper1(void *ctx)
> >
> >  /* A dynptr can't be passed into a helper function at a non-zero offset */
> >  SEC("?raw_tp")
> > -__failure __msg("Expected an initialized dynptr as arg #3")
> > +__failure __msg("cannot pass in dynptr at an offset=-8")
> >  int invalid_helper2(void *ctx)
> >  {
> >         struct bpf_dynptr ptr;
> > @@ -444,7 +444,7 @@ int invalid_write2(void *ctx)
> >   * non-const offset
> >   */
> >  SEC("?raw_tp")
> > -__failure __msg("Expected an initialized dynptr as arg #1")
> > +__failure __msg("arg 1 is an unacquired reference")
> >  int invalid_write3(void *ctx)
> >  {
> >         struct bpf_dynptr ptr;
> > @@ -584,7 +584,7 @@ int invalid_read4(void *ctx)
> >
> >  /* Initializing a dynptr on an offset should fail */
> >  SEC("?raw_tp")
> > -__failure __msg("invalid write to stack")
> > +__failure __msg("cannot pass in dynptr at an offset=0")
> >  int invalid_offset(void *ctx)
> >  {
> >         struct bpf_dynptr ptr;
> > --
> > 2.39.0
> >

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH bpf-next v1 3/8] bpf: Fix partial dynptr stack slot reads/writes
  2023-01-01  8:33 ` [PATCH bpf-next v1 3/8] bpf: Fix partial dynptr stack slot reads/writes Kumar Kartikeya Dwivedi
  2023-01-04 22:42   ` Andrii Nakryiko
  2023-01-05  3:06   ` Alexei Starovoitov
@ 2023-01-06 19:16   ` Joanne Koong
  2023-01-06 19:31     ` Joanne Koong
  2023-01-09 11:30     ` Kumar Kartikeya Dwivedi
  2 siblings, 2 replies; 38+ messages in thread
From: Joanne Koong @ 2023-01-06 19:16 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, David Vernet, Eduard Zingerman

On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> Currently, while reads are disallowed for dynptr stack slots, writes are
> not. Reads don't work through either direct access or helpers, while
> writes do work in both cases, but have the effect of overwriting the
> slot_type.
>
> While this is fine, handling for a few edge cases is missing. Firstly,
> a user can overwrite the stack slots of dynptr partially.
>
> Consider the following layout:
> spi: [d][d][?]
>       2  1  0
>
> First slot is at spi 2, second at spi 1.
> Now, do a write of 1 to 8 bytes for spi 1.
>
> This will essentially either write STACK_MISC for all slot_types or
> STACK_MISC and STACK_ZERO (in case of size < BPF_REG_SIZE partial write
> of zeroes). The end result is that the slot is scrubbed.
>
> Now, the layout is:
> spi: [d][m][?]
>       2  1  0
>
> Suppose the user then initializes spi = 1 as a dynptr.
> We get:
> spi: [d][d][d]
>       2  1  0
>
> But this time, both spi 2 and spi 1 have first_slot = true.
>
> Now, when passing spi 2 to a dynptr helper, it will be considered
> initialized, as the check does not verify whether the second slot has
> first_slot == false. And spi 1 should already work as normal.
>
> This effectively replaces the size + offset of the first dynptr, hence
> allowing invalid OOB reads and writes.
>
> Make a few changes to protect against this:
> When writing to PTR_TO_STACK using BPF insns, when we touch spi of a
> STACK_DYNPTR type, mark both first and second slot (regardless of which
> slot we touch) as STACK_INVALID. Reads are already prevented.
>
> Second, prevent writing to stack memory from helpers if the range may
> contain any STACK_DYNPTR slots. Reads are already prevented.
>
> For helpers, we cannot allow them to destroy dynptrs via such writes,
> as depending on the arguments, a helper may take uninit_mem and a
> dynptr at the same time. This would mean that the helper may write to
> uninit_mem before it reads the dynptr, which would be bad.
>
> PTR_TO_MEM: [?????dd]
>
> Depending on the code inside the helper, it may end up overwriting the
> dynptr contents first and then read those as the dynptr argument.
>
> The verifier would only simulate destruction when it does byte-by-byte
> access simulation in check_helper_call for meta.access_size, and would
> fail to catch this case, as it happens after argument checks.
>
> The same would need to be done for any other non-trivial objects created
> on the stack in the future, such as bpf_list_head on stack, or
> bpf_rb_root on stack.
>
> A common misunderstanding in the current code is that MEM_UNINIT means
> writes, but note that writes may also be performed even without
> MEM_UNINIT in case of helpers; in that case, the code after the meta &&
> meta->raw_mode handling will complain when it sees STACK_DYNPTR. So the
> invalid read case also covers writes to potential STACK_DYNPTR slots.
> The only loophole was meta->raw_mode, which simulated writes through
> instructions that could overwrite them.
>
> A future series sequenced after this will focus on the clean up of
> helper access checks and bugs around that.
>
> Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  kernel/bpf/verifier.c | 73 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 73 insertions(+)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index ca970f80e395..b985d90505cc 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -769,6 +769,8 @@ static void mark_dynptr_cb_reg(struct bpf_reg_state *reg,
>         __mark_dynptr_reg(reg, type, true);
>  }
>
> +static void destroy_stack_slots_dynptr(struct bpf_verifier_env *env,
> +                                      struct bpf_func_state *state, int spi);
>
>  static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
>                                    enum bpf_arg_type arg_type, int insn_idx)
> @@ -858,6 +860,44 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
>         return 0;
>  }
>
> +static void destroy_stack_slots_dynptr(struct bpf_verifier_env *env,
> +                                      struct bpf_func_state *state, int spi)
> +{
> +       int i;
> +
> +       /* We always ensure that STACK_DYNPTR is never set partially,
> +        * hence just checking for slot_type[0] is enough. This is
> +        * different for STACK_SPILL, where it may be only set for
> +        * 1 byte, so code has to use is_spilled_reg.
> +        */
> +       if (state->stack[spi].slot_type[0] != STACK_DYNPTR)
> +               return;

nit: an empty line here helps readability

> +       /* Reposition spi to first slot */
> +       if (!state->stack[spi].spilled_ptr.dynptr.first_slot)
> +               spi = spi + 1;
> +
> +       mark_stack_slot_scratched(env, spi);
> +       mark_stack_slot_scratched(env, spi - 1);
> +
> +       /* Writing partially to one dynptr stack slot destroys both. */
> +       for (i = 0; i < BPF_REG_SIZE; i++) {
> +               state->stack[spi].slot_type[i] = STACK_INVALID;
> +               state->stack[spi - 1].slot_type[i] = STACK_INVALID;
> +       }
> +
> +       /* Do not release reference state, we are destroying dynptr on stack,
> +        * not using some helper to release it. Just reset register.
> +        */

I agree with Andrii's point - I think it'd be more helpful if we error
out here if the dynptr is refcounted. It'd be easy to check too, we
already have dynptr_type_refcounted().

> +       __mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
> +       __mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
> +
> +       /* Same reason as unmark_stack_slots_dynptr above */
> +       state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
> +       state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
> +
> +       return;

I think we should also invalidate any data slices associated with the
dynptrs? It seems natural that once a dynptr is invalidated, none of
its data slices should be usable.

> +}
> +
>  static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
>  {
>         struct bpf_func_state *state = func(env, reg);
> @@ -3384,6 +3424,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
>                         env->insn_aux_data[insn_idx].sanitize_stack_spill = true;
>         }
>
> +       destroy_stack_slots_dynptr(env, state, spi);
> +
>         mark_stack_slot_scratched(env, spi);
>         if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
>             !register_is_null(reg) && env->bpf_capable) {
> @@ -3497,6 +3539,13 @@ static int check_stack_write_var_off(struct bpf_verifier_env *env,
>         if (err)
>                 return err;
>
> +       for (i = min_off; i < max_off; i++) {
> +               int slot, spi;
> +
> +               slot = -i - 1;
> +               spi = slot / BPF_REG_SIZE;

I think you can just use __get_spi() here

> +               destroy_stack_slots_dynptr(env, state, spi);

I think here too,

if (state->stack[spi].slot_type[0] == STACK_DYNPTR)
    destroy_stack_slots_dynptr(env, state, spi)

makes it more readable.

And if it is a STACK_DYNPTR, we can also fast-forward i.

> +       }
>
>         /* Variable offset writes destroy any spilled pointers in range. */
>         for (i = min_off; i < max_off; i++) {
> @@ -5524,6 +5573,30 @@ static int check_stack_range_initialized(
>         }
>
>         if (meta && meta->raw_mode) {
> +               /* Ensure we won't be overwriting dynptrs when simulating byte
> +                * by byte access in check_helper_call using meta.access_size.
> +                * This would be a problem if we have a helper in the future
> +                * which takes:
> +                *
> +                *      helper(uninit_mem, len, dynptr)
> +                *
> > +                * Now, uninit_mem may overlap with dynptr pointer. Hence, it
> > +                * may end up writing to dynptr itself when touching memory from
> > +                * arg 1. This can be relaxed on a case by case basis for known
> > +                * safe cases, but reject due to the possibility of aliasing by
> +                * default.
> +                */
> +               for (i = min_off; i < max_off + access_size; i++) {
> +                       slot = -i - 1;
> +                       spi = slot / BPF_REG_SIZE;

nit: here too, we can use __get_spi()

> +                       /* raw_mode may write past allocated_stack */
> +                       if (state->allocated_stack <= slot)
> +                               continue;
> +                       if (state->stack[spi].slot_type[slot % BPF_REG_SIZE] == STACK_DYNPTR) {
> +                               verbose(env, "potential write to dynptr at off=%d disallowed\n", i);
> +                               return -EACCES;
> +                       }
> +               }
>                 meta->access_size = access_size;
>                 meta->regno = regno;
>                 return 0;
> --
> 2.39.0
>

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH bpf-next v1 3/8] bpf: Fix partial dynptr stack slot reads/writes
  2023-01-06 19:16   ` Joanne Koong
@ 2023-01-06 19:31     ` Joanne Koong
  2023-01-09 11:30     ` Kumar Kartikeya Dwivedi
  1 sibling, 0 replies; 38+ messages in thread
From: Joanne Koong @ 2023-01-06 19:31 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, David Vernet, Eduard Zingerman

On Fri, Jan 6, 2023 at 11:16 AM Joanne Koong <joannelkoong@gmail.com> wrote:
>
> On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> >
> > [...]
> > +       /* Do not release reference state, we are destroying dynptr on stack,
> > +        * not using some helper to release it. Just reset register.
> > +        */
>
> I agree with Andrii's point - I think it'd be more helpful if we error
> out here if the dynptr is refcounted. It'd be easy to check too, we
> already have dynptr_type_refcounted().

Actually, since __mark_reg_unknown sets reg->ref_obj_id to 0, it's
imperative that we error out here if the dynptr is refcounted

>
> > [...]

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH bpf-next v1 4/8] bpf: Allow reinitializing unreferenced dynptr stack slots
  2023-01-04 22:44   ` Andrii Nakryiko
@ 2023-01-06 19:33     ` Joanne Koong
  2023-01-09 11:40       ` Kumar Kartikeya Dwivedi
  0 siblings, 1 reply; 38+ messages in thread
From: Joanne Koong @ 2023-01-06 19:33 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Kumar Kartikeya Dwivedi, bpf, Alexei Starovoitov,
	Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, David Vernet,
	Eduard Zingerman

On Wed, Jan 4, 2023 at 2:44 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> >
> > Consider a program like below:
> >
> > void prog(void)
> > {
> >         {
> >                 struct bpf_dynptr ptr;
> >                 bpf_dynptr_from_mem(...);
> >         }
> >         ...
> >         {
> >                 struct bpf_dynptr ptr;
> >                 bpf_dynptr_from_mem(...);
> >         }
> > }
> >
> > Here, based on lifetime rules in the C standard, the compiler would be
> > well within its rights to share stack storage for the two 'ptr' dynptrs,
> > as their lifetimes do not overlap in the two distinct scopes. Currently,
> > such an example would be rejected by the verifier, but this is too
> > strict. Instead, we should allow reinitializing over dynptr stack slots
> > and forget information about the old dynptr object.
> >
>
> As mentioned in the previous patch, shouldn't we allow this only for
> dynptrs that don't require OBJ_RELEASE, which would be those with
> ref_obj_id == 0?
>

+1

>
> > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > ---
> >  kernel/bpf/verifier.c | 16 +++++++++-------
> >  1 file changed, 9 insertions(+), 7 deletions(-)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index b985d90505cc..e85e8c4be00d 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -786,6 +786,9 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
> >         if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
> >                 return -EINVAL;
> >
> > +       destroy_stack_slots_dynptr(env, state, spi);
> > +       destroy_stack_slots_dynptr(env, state, spi - 1);

We don't need the 2nd call since destroy_stack_slots_dynptr() destroys both slots

> > +
> >         for (i = 0; i < BPF_REG_SIZE; i++) {
> >                 state->stack[spi].slot_type[i] = STACK_DYNPTR;
> >                 state->stack[spi - 1].slot_type[i] = STACK_DYNPTR;
> > @@ -901,7 +904,7 @@ static void destroy_stack_slots_dynptr(struct bpf_verifier_env *env,
> >  static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> >  {
> >         struct bpf_func_state *state = func(env, reg);
> > -       int spi, i;
> > +       int spi;
> >
> >         if (reg->type == CONST_PTR_TO_DYNPTR)
> >                 return false;
> > @@ -914,12 +917,11 @@ static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_
> >         if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
> >                 return true;
> >
> > -       for (i = 0; i < BPF_REG_SIZE; i++) {
> > -               if (state->stack[spi].slot_type[i] == STACK_DYNPTR ||
> > -                   state->stack[spi - 1].slot_type[i] == STACK_DYNPTR)
> > -                       return false;
> > -       }
> > -
> > +       /* We allow overwriting existing STACK_DYNPTR slots, see
> > +        * mark_stack_slots_dynptr which calls destroy_stack_slots_dynptr to
> > +        * ensure dynptr objects at the slots we are touching are completely
> > +        * destructed before we reinitialize them for a new one.
> > +        */
> >         return true;
> >  }
> >
> > --
> > 2.39.0
> >

^ permalink raw reply	[flat|nested] 38+ messages in thread
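The rule Andrii and Joanne converge on above (overwriting is fine only for dynptrs that don't require OBJ_RELEASE) can be sketched as a userspace toy model. All names here are invented for illustration; the real logic lives in mark_stack_slots_dynptr/destroy_stack_slots_dynptr in kernel/bpf/verifier.c.

```c
#include <assert.h>

/* Toy model: a dynptr occupies two spill slots, spi (first slot) and
 * spi - 1. Reinitializing over an old dynptr is allowed only when the
 * old one held no reference; otherwise we would leak it.
 */
enum slot_type { SLOT_INVALID, SLOT_DYNPTR };

struct toy_stack {
	enum slot_type type[8];	/* one entry per spi, simplified granularity */
	int ref_obj_id[8];	/* nonzero => dynptr owns a reference */
};

static int destroy_if_unref(struct toy_stack *st, int spi)
{
	if (st->type[spi] != SLOT_DYNPTR)
		return 0;
	if (st->ref_obj_id[spi])
		return -1;	/* would leak a reference: reject */
	st->type[spi] = SLOT_INVALID;
	st->ref_obj_id[spi] = 0;
	return 0;
}

static int mark_dynptr(struct toy_stack *st, int spi, int ref_obj_id)
{
	/* Destroy any unreferenced dynptr occupying the slots we touch. */
	if (destroy_if_unref(st, spi) || destroy_if_unref(st, spi - 1))
		return -1;
	st->type[spi] = st->type[spi - 1] = SLOT_DYNPTR;
	st->ref_obj_id[spi] = st->ref_obj_id[spi - 1] = ref_obj_id;
	return 0;
}
```

Overwriting an unreferenced dynptr in place succeeds; overwriting one with a nonzero ref_obj_id fails, mirroring the review feedback.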

* Re: [PATCH bpf-next v1 1/8] bpf: Fix state pruning for STACK_DYNPTR stack slots
  2023-01-02 19:28   ` Eduard Zingerman
@ 2023-01-09 10:59     ` Kumar Kartikeya Dwivedi
  0 siblings, 0 replies; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-09 10:59 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet

On Tue, Jan 03, 2023 at 12:58:40AM IST, Eduard Zingerman wrote:
> On Sun, 2023-01-01 at 14:03 +0530, Kumar Kartikeya Dwivedi wrote:
> > The root of the problem is missing liveness marking for STACK_DYNPTR
> > slots. This leads to all kinds of problems inside stacksafe.
> >
> > The verifier by default inside stacksafe ignores spilled_ptr in stack
> > slots which do not have REG_LIVE_READ marks. Since this is being checked
> > in the 'old' explored state, it must have already done clean_live_states
> > for this old bpf_func_state. Hence, it won't be receiving any more
> > liveness marks from to-be-explored insns (it has received REG_LIVE_DONE
> > marking from liveness point of view).
> >
> > What this means is that the verifier considers it safe to not compare
> > the stack slot if it was never read by children states. While liveness
> > marks are usually propagated correctly following the parentage chain for
> > spilled registers (SCALAR_VALUE and PTR_* types), the same is not the
> > case for STACK_DYNPTR.
> >
> > clean_live_states hence simply rewrites these stack slots to the type
> > STACK_INVALID since it sees no REG_LIVE_READ marks.
> >
> > The end result is that we will never see STACK_DYNPTR slots in explored
> > state. Even if the verifier were conservatively matching !REG_LIVE_READ
> > slots, the very next check, which continues the stacksafe loop on seeing
> > STACK_INVALID, would again prevent further checks.
> >
> > Now as long as verifier stores an explored state which we can compare to
> > when reaching a pruning point, we can abuse this bug to make verifier
> > prune search for obviously unsafe paths using STACK_DYNPTR slots
> > thinking they are never used hence safe.
> >
> > Doing this in unprivileged mode is a bit challenging. add_new_state is
> > only set when seeing BPF_F_TEST_STATE_FREQ (which requires privileges)
> > or when jmps_processed difference is >= 2 and insn_processed difference
> > is >= 8. So coming up with the unprivileged case requires a little more
> > work, but it is still totally possible. The test case being discussed
> > below triggers the heuristic even in unprivileged mode.
> >
> > However, it no longer works since commit
> > 8addbfc7b308 ("bpf: Gate dynptr API behind CAP_BPF").
> >
> > Let's try to study the test step by step.
> >
> > Consider the following program (C style BPF ASM):
> >
> > 0  r0 = 0;
> > 1  r6 = &ringbuf_map;
> > 3  r1 = r6;
> > 4  r2 = 8;
> > 5  r3 = 0;
> > 6  r4 = r10;
> > 7  r4 -= -16;
> > 8  call bpf_ringbuf_reserve_dynptr;
> > 9  if r0 == 0 goto pc+1;
> > 10 goto pc+1;
> > 11 *(r10 - 16) = 0xeB9F;
> > 12 r1 = r10;
> > 13 r1 -= -16;
> > 14 r2 = 0;
> > 15 call bpf_ringbuf_discard_dynptr;
> > 16 r0 = 0;
> > 17 exit;
> >
> > We know that insn 12 will be a pruning point, hence if we force
> > add_new_state for it, it will first verify the following path as
> > safe in straight line exploration:
> > 0 1 3 4 5 6 7 8 9 -> 10 -> (12) 13 14 15 16 17
> >
> > Then, when we arrive at insn 12 from the following path:
> > 0 1 3 4 5 6 7 8 9 -> 11 (12)
> >
> > We will find a state that has been verified as safe already at insn 12.
> > Since register state is same at this point, regsafe will pass. Next, in
> > stacksafe, spi = 0 and spi = 1 (the location of our dynptr) are skipped
> > on seeing !REG_LIVE_READ. The rest matches, so stacksafe returns true.
> > Next, refsafe is also true as reference state is unchanged in both
> > states.
> >
> > The states are considered equivalent and search is pruned.
> >
> > Hence, we are able to construct a dynptr with arbitrary contents and use
> > the dynptr API to operate on this arbitrary pointer and arbitrary size +
> > offset.
> >
> > To fix this, first define a mark_dynptr_read function that propagates
> > liveness marks whenever a valid initialized dynptr is accessed by dynptr
> > helpers. REG_LIVE_WRITTEN is marked whenever we initialize an
> > uninitialized dynptr. This is done in mark_stack_slots_dynptr. It allows
> > screening off mark_reg_read and not propagating marks upwards from that
> > point.
> >
> > This ensures that we either set REG_LIVE_READ64 on both dynptr slots, or
> > none, so clean_live_states either sets both slots to STACK_INVALID or
> > none of them. This is the invariant the checks inside stacksafe rely on.
> >
> > Next, do a complete comparison of both stack slots whenever they have
> > STACK_DYNPTR. Compare the dynptr type stored in the spilled_ptr, and
> > also whether both form the same first_slot. Only then is the later path
> > safe.
> >
> > Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
> > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > ---
> >  kernel/bpf/verifier.c | 73 +++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 73 insertions(+)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 4a25375ebb0d..f7248235e119 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -781,6 +781,9 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
> >  		state->stack[spi - 1].spilled_ptr.ref_obj_id = id;
> >  	}
> >
> > +	state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
> > +	state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
> > +
> >  	return 0;
> >  }
> >
> > @@ -805,6 +808,26 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
> >
> >  	__mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
> >  	__mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
> > +
> > +	/* Why do we need to set REG_LIVE_WRITTEN for STACK_INVALID slot?
> > +	 *
> > +	 * While we don't allow reading STACK_INVALID, it is still possible to
> > +	 * do <8 byte writes marking some but not all slots as STACK_MISC. Then,
> > +	 * helpers or insns can do partial read of that part without failing,
> > +	 * but check_stack_range_initialized, check_stack_read_var_off, and
> > +	 * check_stack_read_fixed_off will do mark_reg_read for all 8-bytes of
> > +	 * the slot conservatively. Hence we need to screen off those liveness
> > +	 * marking walks.
> > +	 *
> > +	 * This was not a problem before because STACK_INVALID is only set by
> > +	 * default, or in clean_live_states after REG_LIVE_DONE, not randomly
> > +	 * during verifier state exploration. Hence, for this case parentage
> > +	 * chain will still be live, while earlier reg->parent was NULL, so we
> > +	 * need REG_LIVE_WRITTEN to screen off read marker propagation.
> > +	 */
> > +	state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
> > +	state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
> > +
>
> This is purely to assist with verifier state pruning and does not
> affect correctness, right?

Yes, it should not affect correctness (to the best of my knowledge).

> Commenting out these lines does not seem to fail any tests; maybe add one
> that matches some "77 safe: ..." jump in the log?
>

I will, thanks.

> >  	return 0;
> >  }
> >
> > @@ -2388,6 +2411,30 @@ static int mark_reg_read(struct bpf_verifier_env *env,
> >  	return 0;
> >  }
> >
> > +static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> > +{
> > +	struct bpf_func_state *state = func(env, reg);
> > +	int spi, ret;
> > +
> > +	/* For CONST_PTR_TO_DYNPTR, it must have already been done by
> > +	 * check_reg_arg in check_helper_call and mark_btf_func_reg_size in
> > +	 * check_kfunc_call.
> > +	 */
> > +	if (reg->type == CONST_PTR_TO_DYNPTR)
> > +		return 0;
> > +	spi = get_spi(reg->off);
> > +	/* Caller ensures dynptr is valid and initialized, which means spi is in
> > +	 * bounds and spi is the first dynptr slot. Simply mark stack slot as
> > +	 * read.
> > +	 */
> > +	ret = mark_reg_read(env, &state->stack[spi].spilled_ptr,
> > +			    state->stack[spi].spilled_ptr.parent, REG_LIVE_READ64);
> > +	if (ret)
> > +		return ret;
> > +	return mark_reg_read(env, &state->stack[spi - 1].spilled_ptr,
> > +			     state->stack[spi - 1].spilled_ptr.parent, REG_LIVE_READ64);
> > +}
> > +
> >  /* This function is supposed to be used by the following 32-bit optimization
> >   * code only. It returns TRUE if the source or destination register operates
> >   * on 64-bit, otherwise return FALSE.
> > @@ -5928,6 +5975,7 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
> >  			enum bpf_arg_type arg_type, struct bpf_call_arg_meta *meta)
> >  {
> >  	struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
> > +	int err;
> >
> >  	/* MEM_UNINIT and MEM_RDONLY are exclusive, when applied to an
> >  	 * ARG_PTR_TO_DYNPTR (or ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_*):
> > @@ -6008,6 +6056,10 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
> >  				err_extra, regno);
> >  			return -EINVAL;
> >  		}
> > +
> > +		err = mark_dynptr_read(env, reg);
> > +		if (err)
> > +			return err;
> >  	}
> >  	return 0;
> >  }
> > @@ -13204,6 +13256,27 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> >  			 * return false to continue verification of this path
> >  			 */
> >  			return false;
> > +		/* Both are the same slot_type, but STACK_DYNPTR requires more
> > +		 * checks before it can be considered safe.
> > +		 */
> > +		if (old->stack[spi].slot_type[i % BPF_REG_SIZE] == STACK_DYNPTR) {
> > +			/* If both are STACK_DYNPTR, type must be same */
> > +			if (old->stack[spi].spilled_ptr.dynptr.type != cur->stack[spi].spilled_ptr.dynptr.type)
> > +				return false;
> > +			/* Both should also have first slot at same spi */
> > +			if (old->stack[spi].spilled_ptr.dynptr.first_slot != cur->stack[spi].spilled_ptr.dynptr.first_slot)
> > +				return false;
> > +			/* ids should be same */
> > +			if (!!old->stack[spi].spilled_ptr.ref_obj_id != !!cur->stack[spi].spilled_ptr.ref_obj_id)
> > +				return false;
> > +			if (old->stack[spi].spilled_ptr.ref_obj_id &&
> > +			    !check_ids(old->stack[spi].spilled_ptr.ref_obj_id,
> > +				       cur->stack[spi].spilled_ptr.ref_obj_id, idmap))
> > +				return false;
> > +			WARN_ON_ONCE(i % BPF_REG_SIZE);
> > +			i += BPF_REG_SIZE - 1;
> > +			continue;
> > +		}
>
> Nitpick: maybe move the checks above inside regsafe() as all
> conditions operate on old/cur->stack[spi].spilled_ptr ?

Good suggestion, but I may need to tweak the condition that falls through to
regsafe for is_spilled_reg, and include STACK_DYNPTR there. I'll check Andrii's
comments as well and see how the end result looks.

>
> Acked-by: Eduard Zingerman <eddyz@gmail.com>
>
> >  		if (i % BPF_REG_SIZE != BPF_REG_SIZE - 1)
> >  			continue;
> >  		if (!is_spilled_reg(&old->stack[spi]))
>

^ permalink raw reply	[flat|nested] 38+ messages in thread
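The "screening off" behavior discussed above can be illustrated with a small userspace model of the parentage chain. This is a hypothetical sketch with invented structs; the kernel's mark_reg_read in kernel/bpf/verifier.c is considerably more involved (32/64-bit marks, subreg handling).

```c
#include <assert.h>
#include <stddef.h>

#define REG_LIVE_READ    1
#define REG_LIVE_WRITTEN 2

struct reg_state {
	int live;
	struct reg_state *parent;
};

/* Propagate a read mark up the parentage chain. A state that fully
 * (re)wrote the slot screens off the read: its ancestors' old contents
 * were never observed, so they receive no REG_LIVE_READ mark.
 */
static void mark_reg_read(struct reg_state *reg)
{
	struct reg_state *state = reg, *parent = reg->parent;

	while (parent) {
		if (state->live & REG_LIVE_WRITTEN)
			break;
		parent->live |= REG_LIVE_READ;
		state = parent;
		parent = state->parent;
	}
}
```

Without a WRITTEN mark the read propagates all the way to the root; with WRITTEN set on an intermediate state, ancestors above it stay unmarked, which is exactly what lets clean_live_states later rewrite their slots.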

* Re: [PATCH bpf-next v1 1/8] bpf: Fix state pruning for STACK_DYNPTR stack slots
  2023-01-04 22:24   ` Andrii Nakryiko
@ 2023-01-09 11:05     ` Kumar Kartikeya Dwivedi
  2023-01-12  0:47       ` Andrii Nakryiko
  0 siblings, 1 reply; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-09 11:05 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

On Thu, Jan 05, 2023 at 03:54:03AM IST, Andrii Nakryiko wrote:
> On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> >
> > [...]
> > @@ -13204,6 +13256,27 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> >                          * return false to continue verification of this path
> >                          */
> >                         return false;
> > +                /* Both are the same slot_type, but STACK_DYNPTR requires more
> > +                 * checks before it can be considered safe.
> > +                */
> > +               if (old->stack[spi].slot_type[i % BPF_REG_SIZE] == STACK_DYNPTR) {
>
> how about moving this check right after `if (i % BPF_REG_SIZE !=
> BPF_REG_SIZE - 1)` ? Then we can actually generalize this to a switch
> to handle STACK_SPILL and STACK_DYNPTR separately. I'm adding
> STACK_ITER in my upcoming patch set, so this will have all the things
> ready for that?
>
> switch (old->stack[spi].slot_type[BPF_REG_SIZE - 1]) {
> case STACK_SPILL:
>   if (!regsafe(...))
>      return false;
>   break;
> case STACK_DYNPTR:
>   ...
>   break;
> /* and then eventually */
> case STACK_ITER:
>   ...
>
> WDYT?
>

I can do this, it certainly makes sense with your upcoming changes, and it does
look cleaner.

> > +                       /* If both are STACK_DYNPTR, type must be same */
> > +                       if (old->stack[spi].spilled_ptr.dynptr.type != cur->stack[spi].spilled_ptr.dynptr.type)
>
> struct bpf_reg_state *old_reg, *cur_reg;
>
> old_reg = &old->stack[spi].spilled_ptr;
> cur_reg = &cur->stack[spi].spilled_ptr;
>
> and then use old_reg and cur_reg in one simple if
>
> here's how I have it locally:
>
>                 case STACK_DYNPTR:
>                         old_reg = &old->stack[spi].spilled_ptr;
>                         cur_reg = &cur->stack[spi].spilled_ptr;
>                         if (old_reg->dynptr.type != cur_reg->dynptr.type ||
>                             old_reg->dynptr.first_slot !=
> cur_reg->dynptr.first_slot ||
>                             !check_ids(old_reg->ref_obj_id,
> cur_reg->ref_obj_id, idmap))
>                                 return false;
>                         break;
>
> seems a bit cleaner?
>

Yep.

> I'm also thinking of getting rid of first_slot field and instead have
> a rule that first slot has proper type set, but the next one has
> BPF_DYNPTR_TYPE_INVALID as type. This should simplify things a bit, I
> think. At least it seems that way for STACK_ITER state I'm adding. But
> that's a separate refactoring, probably.
>

Yeah, I'd rather not mix that into this set. Let me know if you think that's
better and I can follow up after the next iteration with that change.

> > +                               return false;
> > +                       /* Both should also have first slot at same spi */
> > +                       if (old->stack[spi].spilled_ptr.dynptr.first_slot != cur->stack[spi].spilled_ptr.dynptr.first_slot)
> > +                               return false;
> > +                       /* ids should be same */
> > +                       if (!!old->stack[spi].spilled_ptr.ref_obj_id != !!cur->stack[spi].spilled_ptr.ref_obj_id)
> > +                               return false;
> > +                       if (old->stack[spi].spilled_ptr.ref_obj_id &&
> > +                           !check_ids(old->stack[spi].spilled_ptr.ref_obj_id,
> > +                                      cur->stack[spi].spilled_ptr.ref_obj_id, idmap))
>
> my previous change to teach check_ids to enforce that both ids have to
> be zero or non-zero at the same time already landed, so you don't
> need to check `old->stack[spi].spilled_ptr.ref_obj_id`. Even more, it
> seems wrong to do this check like this, because if cur has ref_obj_id
> set we'll ignore it, right?

The check before that ensures either both are set or both are unset. If there is
a mismatch we return false. I see that check_ids does it now, so yes it wouldn't
be needed anymore. I am not sure about the last part, I don't think it will be
ignored?

^ permalink raw reply	[flat|nested] 38+ messages in thread
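The per-slot comparison Andrii proposes folding into a switch boils down to the following userspace sketch. The names are hypothetical, and this toy check_ids only models the old<->cur id pairing; the real one in the verifier also covers scalar and pointer ids.

```c
#include <assert.h>
#include <stdbool.h>

struct dynptr_state { int type; bool first_slot; int ref_obj_id; };

struct id_pair { int old_id, cur_id; };

/* Either both ids are zero, or they form a consistent old<->cur pairing
 * recorded in idmap (so consistently renamed ids still match).
 */
static bool check_ids(int old_id, int cur_id, struct id_pair *idmap, int n)
{
	if (!old_id && !cur_id)
		return true;
	if (!old_id || !cur_id)
		return false;
	for (int i = 0; i < n; i++)
		if (idmap[i].old_id == old_id)
			return idmap[i].cur_id == cur_id;
	for (int i = 0; i < n; i++) {
		if (!idmap[i].old_id) {
			idmap[i].old_id = old_id;	/* record new pairing */
			idmap[i].cur_id = cur_id;
			return true;
		}
	}
	return false;	/* idmap full */
}

/* The STACK_DYNPTR case of the suggested switch in stacksafe(). */
static bool dynptr_slots_safe(const struct dynptr_state *old,
			      const struct dynptr_state *cur,
			      struct id_pair *idmap, int n)
{
	return old->type == cur->type &&
	       old->first_slot == cur->first_slot &&
	       check_ids(old->ref_obj_id, cur->ref_obj_id, idmap, n);
}
```

Note how the idmap makes the second comparison of old id 7 against a different cur id fail, which is the behavior the `!!old ... != !!cur ...` check in the patch was approximating by hand.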

* Re: [PATCH bpf-next v1 1/8] bpf: Fix state pruning for STACK_DYNPTR stack slots
  2023-01-06  0:18   ` Joanne Koong
@ 2023-01-09 11:17     ` Kumar Kartikeya Dwivedi
  0 siblings, 0 replies; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-09 11:17 UTC (permalink / raw)
  To: Joanne Koong
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, David Vernet, Eduard Zingerman

On Fri, Jan 06, 2023 at 05:48:06AM IST, Joanne Koong wrote:
> On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> >
> > [...]
> > Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
> > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > ---
> >  kernel/bpf/verifier.c | 73 +++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 73 insertions(+)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 4a25375ebb0d..f7248235e119 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -781,6 +781,9 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
> >                 state->stack[spi - 1].spilled_ptr.ref_obj_id = id;
> >         }
> >
> > +       state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
> > +       state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
> > +
> >         return 0;
> >  }
> >
> > @@ -805,6 +808,26 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
> >
> >         __mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
> >         __mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
> > +
> > +       /* Why do we need to set REG_LIVE_WRITTEN for STACK_INVALID slot?
> > +        *
> > +        * While we don't allow reading STACK_INVALID, it is still possible to
> > +        * do <8 byte writes marking some but not all slots as STACK_MISC. Then,
> > +        * helpers or insns can do partial read of that part without failing,
> > +        * but check_stack_range_initialized, check_stack_read_var_off, and
> > +        * check_stack_read_fixed_off will do mark_reg_read for all 8-bytes of
> > +        * the slot conservatively. Hence we need to screen off those liveness
> > +        * marking walks.
> > +        *
> > +        * This was not a problem before because STACK_INVALID is only set by
> > +        * default, or in clean_live_states after REG_LIVE_DONE, not randomly
> > +        * during verifier state exploration. Hence, for this case parentage
>
> Where does it get set randomly during verifier state exploration for this case?
>

When unmarking dynptr slots, we set STACK_INVALID. There are no other instances
where a slot is marked STACK_INVALID while the verifier is doing symbolic
execution. Otherwise it is only set by default, or in clean_live_states once a
checkpointed state will no longer be receiving liveness marks.

> > +        * chain will still be live, while earlier reg->parent was NULL, so we
>
> What does "live" in  "parentage chain will still be live" here mean?

It just means it will probably have a non-NULL reg->parent (which mark_reg_read
will follow). By default, when a STACK_INVALID slot is written to, its
spilled_ptr will have reg->parent as NULL.

> what does "earlier" in "earlier reg->parent" refer to here, and why
> was the earlier reg->parent NULL?
>

"Earlier" refers to the default case, when STACK_INVALID was not being set
while unmarking dynptr slots. Back then, any mark_reg_read on the spilled_ptr
didn't propagate any read marks. Now it can happen, so to keep state pruning
from becoming too conservative we need to be more careful about the places
where REG_LIVE_WRITTEN must be set to stop those register parent chain walks
in mark_reg_read.

> > [...]

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH bpf-next v1 2/8] bpf: Fix missing var_off check for ARG_PTR_TO_DYNPTR
  2023-01-04 22:32   ` Andrii Nakryiko
@ 2023-01-09 11:18     ` Kumar Kartikeya Dwivedi
  0 siblings, 0 replies; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-09 11:18 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

On Thu, Jan 05, 2023 at 04:02:11AM IST, Andrii Nakryiko wrote:
> On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> >
> > Currently, the dynptr function is not checking the variable offset part
> > of PTR_TO_STACK that it needs to check. The fixed offset is considered
> > when computing the stack pointer index, but if the variable offset was
> > not a constant (such that it could not be accumulated in reg->off), we
> > will end up with a discrepancy where the runtime pointer does not point to the
> > actual stack slot we mark as STACK_DYNPTR.
> >
> > It is impossible to precisely track dynptr state when variable offset is
> > not constant, hence, just like bpf_timer, kptr, bpf_spin_lock, etc.
> > simply reject the case where reg->var_off is not constant. Then,
> > consider both reg->off and reg->var_off.value when computing the stack
> > pointer index.
> >
> > A new helper dynptr_get_spi is introduced to hide these details
> > since the dynptr needs to be located in multiple places outside the
> > process_dynptr_func checks, hence once we know it's a PTR_TO_STACK, we
> > need to enforce these checks in all places.
> >
> > Note that it is disallowed for unprivileged users to have a non-constant
> > var_off, so this problem should only be possible to trigger from
> > programs having CAP_PERFMON. However, its effects can vary.
> >
> > Without the fix, it is possible to replace the contents of the dynptr
> > arbitrarily by making verifier mark different stack slots than actual
> > location and then doing writes to the actual stack address of dynptr at
> > runtime.
> >
> > Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
> > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > ---
> >  kernel/bpf/verifier.c                         | 83 ++++++++++++++-----
> >  .../bpf/prog_tests/kfunc_dynptr_param.c       |  2 +-
> >  .../testing/selftests/bpf/progs/dynptr_fail.c |  6 +-
> >  3 files changed, 66 insertions(+), 25 deletions(-)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index f7248235e119..ca970f80e395 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -638,11 +638,34 @@ static void print_liveness(struct bpf_verifier_env *env,
> >                 verbose(env, "D");
> >  }
> >
> > -static int get_spi(s32 off)
> > +static int __get_spi(s32 off)
> >  {
> >         return (-off - 1) / BPF_REG_SIZE;
> >  }
> >
> > +static int dynptr_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> > +{
> > +       int off, spi;
> > +
> > +       if (!tnum_is_const(reg->var_off)) {
> > +               verbose(env, "dynptr has to be at the constant offset\n");
> > +               return -EINVAL;
> > +       }
> > +
> > +       off = reg->off + reg->var_off.value;
> > +       if (off % BPF_REG_SIZE) {
> > +               verbose(env, "cannot pass in dynptr at an offset=%d\n", reg->off);
>
> s/reg->off/off/ ?
>

Yep, thanks for catching.

> > +               return -EINVAL;
> > +       }
> > +
> > +       spi = __get_spi(off);
> > +       if (spi < 1) {
> > +               verbose(env, "cannot pass in dynptr at an offset=%d\n", (int)(off + reg->var_off.value));
>
> s/(int)(off + reg->var_off.value)/off/?
>

Same, yes.

> > +               return -EINVAL;
> > +       }
> > +       return spi;
> > +}
> > +
>
> [...]
>
> > @@ -2422,7 +2456,9 @@ static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *
> >          */
> >         if (reg->type == CONST_PTR_TO_DYNPTR)
> >                 return 0;
> > -       spi = get_spi(reg->off);
> > +       spi = dynptr_get_spi(env, reg);
> > +       if (WARN_ON_ONCE(spi < 0))
> > +               return spi;
> >         /* Caller ensures dynptr is valid and initialized, which means spi is in
> >          * bounds and spi is the first dynptr slot. Simply mark stack slot as
> >          * read.
> > @@ -5946,6 +5982,11 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
> >         return 0;
> >  }
> >
> > +static bool arg_type_is_release(enum bpf_arg_type type)
> > +{
> > +       return type & OBJ_RELEASE;
> > +}
> > +
>
> no need to move it?
>

Yeah, will fix.

> >  /* There are two register types representing a bpf_dynptr, one is PTR_TO_STACK
> >   * which points to a stack slot, and the other is CONST_PTR_TO_DYNPTR.
> >   *
> > @@ -5986,12 +6027,14 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
> >         }
> >         /* CONST_PTR_TO_DYNPTR already has fixed and var_off as 0 due to
> >          * check_func_arg_reg_off's logic. We only need to check offset
> > -        * alignment for PTR_TO_STACK.
> > +        * and its alignment for PTR_TO_STACK.
> >          */
> > -       if (reg->type == PTR_TO_STACK && (reg->off % BPF_REG_SIZE)) {
> > -               verbose(env, "cannot pass in dynptr at an offset=%d\n", reg->off);
> > -               return -EINVAL;
> > +       if (reg->type == PTR_TO_STACK) {
> > +               err = dynptr_get_spi(env, reg);
> > +               if (err < 0)
> > +                       return err;
> >         }
> > +
> >         /*  MEM_UNINIT - Points to memory that is an appropriate candidate for
> >          *               constructing a mutable bpf_dynptr object.
> >          *
> > @@ -6070,11 +6113,6 @@ static bool arg_type_is_mem_size(enum bpf_arg_type type)
> >                type == ARG_CONST_SIZE_OR_ZERO;
> >  }
> >
> > -static bool arg_type_is_release(enum bpf_arg_type type)
> > -{
> > -       return type & OBJ_RELEASE;
> > -}
> > -
> >  static bool arg_type_is_dynptr(enum bpf_arg_type type)
> >  {
> >         return base_type(type) == ARG_PTR_TO_DYNPTR;
> > @@ -6404,8 +6442,9 @@ static u32 dynptr_ref_obj_id(struct bpf_verifier_env *env, struct bpf_reg_state
>
> why not make dynptr_ref_obj_id return int and <0 on error? There seems
> to be just one place where we call dynptr_ref_obj_id and we can check
> and report error there
>

Good suggestion, I'll make that change.

> >
> >         if (reg->type == CONST_PTR_TO_DYNPTR)
> >                 return reg->ref_obj_id;
> > -
> > -       spi = get_spi(reg->off);
> > +       spi = dynptr_get_spi(env, reg);
> > +       if (WARN_ON_ONCE(spi < 0))
> > +               return U32_MAX;
> >         return state->stack[spi].spilled_ptr.ref_obj_id;
> >  }
> >
>
> [...]


* Re: [PATCH bpf-next v1 2/8] bpf: Fix missing var_off check for ARG_PTR_TO_DYNPTR
  2023-01-06  0:57   ` Joanne Koong
  2023-01-06 17:56     ` Joanne Koong
@ 2023-01-09 11:21     ` Kumar Kartikeya Dwivedi
  1 sibling, 0 replies; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-09 11:21 UTC (permalink / raw)
  To: Joanne Koong
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, David Vernet, Eduard Zingerman

On Fri, Jan 06, 2023 at 06:27:06AM IST, Joanne Koong wrote:
> On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> >
> > Currently, the dynptr function is not checking the variable offset part
> > of PTR_TO_STACK that it needs to check. The fixed offset is considered
> > when computing the stack pointer index, but if the variable offset was
> > not a constant (such that it could not be accumulated in reg->off), we
> > will end up with a discrepancy where the runtime pointer does not point to the
> > actual stack slot we mark as STACK_DYNPTR.
> >
> > It is impossible to precisely track dynptr state when variable offset is
> > not constant, hence, just like bpf_timer, kptr, bpf_spin_lock, etc.
> > simply reject the case where reg->var_off is not constant. Then,
> > consider both reg->off and reg->var_off.value when computing the stack
> > pointer index.
> >
> > A new helper dynptr_get_spi is introduced to hide these details
> > since the dynptr needs to be located in multiple places outside the
> > process_dynptr_func checks, hence once we know it's a PTR_TO_STACK, we
> > need to enforce these checks in all places.
> >
> > Note that it is disallowed for unprivileged users to have a non-constant
> > var_off, so this problem should only be possible to trigger from
> > programs having CAP_PERFMON. However, its effects can vary.
> >
> > Without the fix, it is possible to replace the contents of the dynptr
> > arbitrarily by making verifier mark different stack slots than actual
> > location and then doing writes to the actual stack address of dynptr at
> > runtime.
> >
> > Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
> > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > ---
> >  kernel/bpf/verifier.c                         | 83 ++++++++++++++-----
> >  .../bpf/prog_tests/kfunc_dynptr_param.c       |  2 +-
> >  .../testing/selftests/bpf/progs/dynptr_fail.c |  6 +-
> >  3 files changed, 66 insertions(+), 25 deletions(-)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index f7248235e119..ca970f80e395 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -638,11 +638,34 @@ static void print_liveness(struct bpf_verifier_env *env,
> >                 verbose(env, "D");
> >  }
> >
> > -static int get_spi(s32 off)
> > +static int __get_spi(s32 off)
> >  {
> >         return (-off - 1) / BPF_REG_SIZE;
> >  }
> >
> > +static int dynptr_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> > +{
> > +       int off, spi;
> > +
> > +       if (!tnum_is_const(reg->var_off)) {
> > +               verbose(env, "dynptr has to be at the constant offset\n");
> > +               return -EINVAL;
> > +       }
> > +
> > +       off = reg->off + reg->var_off.value;
> > +       if (off % BPF_REG_SIZE) {
> > +               verbose(env, "cannot pass in dynptr at an offset=%d\n", reg->off);
>
> I think you meant off instead of reg->off?
>

Ack.

> > +               return -EINVAL;
> > +       }
> > +
> > +       spi = __get_spi(off);
> > +       if (spi < 1) {
> > +               verbose(env, "cannot pass in dynptr at an offset=%d\n", (int)(off + reg->var_off.value));
>
> I think you meant off instead of off + reg->var_off.value
>

Ack.

> > +               return -EINVAL;
> > +       }
>
> I think this if (spi < 1) check should have the same logic
> is_spi_bounds_valid() does (eg checking against total allocated slots
> as well). I think we can combine is_spi_bounds_valid() with this
> function and then every place we call is_spi_bounds_valid()
>

Ok, I'll combine both.

> > +       return spi;
> > +}
> > +
> >  static bool is_spi_bounds_valid(struct bpf_func_state *state, int spi, int nr_slots)
> >  {
> >         int allocated_slots = state->allocated_stack / BPF_REG_SIZE;
> > @@ -754,7 +777,9 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
> >         enum bpf_dynptr_type type;
> >         int spi, i, id;
> >
> > -       spi = get_spi(reg->off);
> > +       spi = dynptr_get_spi(env, reg);
> > +       if (spi < 0)
> > +               return spi;
> >
> >         if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
> >                 return -EINVAL;
> > @@ -792,7 +817,9 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
> >         struct bpf_func_state *state = func(env, reg);
> >         int spi, i;
> >
> > -       spi = get_spi(reg->off);
> > +       spi = dynptr_get_spi(env, reg);
> > +       if (spi < 0)
> > +               return spi;
> >
> >         if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
> >                 return -EINVAL;
> > @@ -839,7 +866,11 @@ static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_
> >         if (reg->type == CONST_PTR_TO_DYNPTR)
> >                 return false;
> >
> > -       spi = get_spi(reg->off);
> > +       spi = dynptr_get_spi(env, reg);
> > +       if (spi < 0)
> > +               return spi;
> > +
> > +       /* We will do check_mem_access to check and update stack bounds later */
> >         if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
> >                 return true;
> >
> > @@ -855,14 +886,15 @@ static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_
> >  static bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> >  {
> >         struct bpf_func_state *state = func(env, reg);
> > -       int spi;
> > -       int i;
> > +       int spi, i;
> >
> >         /* This already represents first slot of initialized bpf_dynptr */
> >         if (reg->type == CONST_PTR_TO_DYNPTR)
> >                 return true;
> >
> > -       spi = get_spi(reg->off);
> > +       spi = dynptr_get_spi(env, reg);
> > +       if (spi < 0)
> > +               return false;
> >         if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS) ||
> >             !state->stack[spi].spilled_ptr.dynptr.first_slot)
> >                 return false;
> > @@ -891,7 +923,9 @@ static bool is_dynptr_type_expected(struct bpf_verifier_env *env, struct bpf_reg
> >         if (reg->type == CONST_PTR_TO_DYNPTR) {
> >                 return reg->dynptr.type == dynptr_type;
> >         } else {
> > -               spi = get_spi(reg->off);
> > +               spi = dynptr_get_spi(env, reg);
> > +               if (WARN_ON_ONCE(spi < 0))
> > +                       return false;
> >                 return state->stack[spi].spilled_ptr.dynptr.type == dynptr_type;
> >         }
> >  }
> > @@ -2422,7 +2456,9 @@ static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *
> >          */
> >         if (reg->type == CONST_PTR_TO_DYNPTR)
> >                 return 0;
> > -       spi = get_spi(reg->off);
> > +       spi = dynptr_get_spi(env, reg);
> > +       if (WARN_ON_ONCE(spi < 0))
> > +               return spi;
> >         /* Caller ensures dynptr is valid and initialized, which means spi is in
> >          * bounds and spi is the first dynptr slot. Simply mark stack slot as
> >          * read.
> > @@ -5946,6 +5982,11 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
> >         return 0;
> >  }
> >
> > +static bool arg_type_is_release(enum bpf_arg_type type)
> > +{
> > +       return type & OBJ_RELEASE;
> > +}
>
> nit: I dont think you need this arg_type_is_release() change
>

Ack.

> > +
> >  /* There are two register types representing a bpf_dynptr, one is PTR_TO_STACK
> >   * which points to a stack slot, and the other is CONST_PTR_TO_DYNPTR.
> >   *
> > @@ -5986,12 +6027,14 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
> >         }
> >         /* CONST_PTR_TO_DYNPTR already has fixed and var_off as 0 due to
> >          * check_func_arg_reg_off's logic. We only need to check offset
> > -        * alignment for PTR_TO_STACK.
> > +        * and its alignment for PTR_TO_STACK.
> >          */
> > -       if (reg->type == PTR_TO_STACK && (reg->off % BPF_REG_SIZE)) {
> > -               verbose(env, "cannot pass in dynptr at an offset=%d\n", reg->off);
> > -               return -EINVAL;
>
> > +       if (reg->type == PTR_TO_STACK) {
> > +               err = dynptr_get_spi(env, reg);
> > +               if (err < 0)
> > +                       return err;
> >         }
>
> nit: if we do something like
>
> If (reg->type == PTR_TO_STACK) {
>     spi = dynptr_get_spi(env, reg);
>     if (spi < 0)
>         return spi;
> } else {
>     spi = __get_spi(reg->off);
> }
>
> then we can just pass in spi to is_dynptr_reg_valid_uninit() and
> is_dynptr_reg_valid_init() instead of having to recompute/check them
> again
>

It seems a little misleading to set it to something in the else branch (where
the stack pointer index has no meaning), but I do see your point; I guess it
can be ignored for the other case and set to 0 by default.

> [...]


* Re: [PATCH bpf-next v1 3/8] bpf: Fix partial dynptr stack slot reads/writes
  2023-01-04 22:42   ` Andrii Nakryiko
@ 2023-01-09 11:26     ` Kumar Kartikeya Dwivedi
  0 siblings, 0 replies; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-09 11:26 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

On Thu, Jan 05, 2023 at 04:12:52AM IST, Andrii Nakryiko wrote:
> On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> >
> > Currently, while reads are disallowed for dynptr stack slots, writes are
> > not. Reads don't work from both direct access and helpers, while writes
> > do work in both cases, but have the effect of overwriting the slot_type.
> >
> > While this is fine, handling for a few edge cases is missing. Firstly,
> > a user can overwrite the stack slots of dynptr partially.
> >
> > Consider the following layout:
> > spi: [d][d][?]
> >       2  1  0
> >
> > First slot is at spi 2, second at spi 1.
> > Now, do a write of 1 to 8 bytes for spi 1.
> >
> > This will essentially either write STACK_MISC for all slot_types or
> > STACK_MISC and STACK_ZERO (in case of size < BPF_REG_SIZE partial write
> > of zeroes). The end result is that slot is scrubbed.
> >
> > Now, the layout is:
> > spi: [d][m][?]
> >       2  1  0
> >
> > Suppose if user initializes spi = 1 as dynptr.
> > We get:
> > spi: [d][d][d]
> >       2  1  0
> >
> > But this time, both spi 2 and spi 1 have first_slot = true.
> >
> > Now, when passing spi 2 to dynptr helper, it will consider it as
> > initialized as it does not check whether second slot has first_slot ==
> > false. And spi 1 should already work as normal.
> >
> > This effectively replaces the size + offset of the first dynptr, hence allowing
> > invalid OOB reads and writes.
> >
> > Make a few changes to protect against this:
> > When writing to PTR_TO_STACK using BPF insns, when we touch spi of a
> > STACK_DYNPTR type, mark both first and second slot (regardless of which
> > slot we touch) as STACK_INVALID. Reads are already prevented.
> >
> > Second, prevent writing to stack memory from helpers if the range may
> > contain any STACK_DYNPTR slots. Reads are already prevented.
> >
> > For helpers, we cannot allow it to destroy dynptrs from the writes as
> > depending on arguments, helper may take uninit_mem and dynptr both at
> > the same time. This would mean that helper may write to uninit_mem
> > before it reads the dynptr, which would be bad.
> >
> > PTR_TO_MEM: [?????dd]
> >
> > Depending on the code inside the helper, it may end up overwriting the
> > dynptr contents first and then read those as the dynptr argument.
> >
> > Verifier would only simulate destruction when it does byte by byte
> > access simulation in check_helper_call for meta.access_size, and
> > fail to catch this case, as it happens after argument checks.
> >
> > The same would need to be done for any other non-trivial objects created
> > on the stack in the future, such as bpf_list_head on stack, or
> > bpf_rb_root on stack.
> >
> > A common misunderstanding in the current code is that MEM_UNINIT means
> > writes, but note that writes may also be performed even without
> > MEM_UNINIT in case of helpers, in that case the code after handling meta
> > && meta->raw_mode will complain when it sees STACK_DYNPTR. So that
> > invalid read case also covers writes to potential STACK_DYNPTR slots.
> > The only loophole was in case of meta->raw_mode which simulated writes
> > through instructions which could overwrite them.
> >
> > A future series sequenced after this will focus on the clean up of
> > helper access checks and bugs around that.
> >
> > Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
> > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > ---
> >  kernel/bpf/verifier.c | 73 +++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 73 insertions(+)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index ca970f80e395..b985d90505cc 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -769,6 +769,8 @@ static void mark_dynptr_cb_reg(struct bpf_reg_state *reg,
> >         __mark_dynptr_reg(reg, type, true);
> >  }
> >
> > +static void destroy_stack_slots_dynptr(struct bpf_verifier_env *env,
> > +                                      struct bpf_func_state *state, int spi);
> >
> >  static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
> >                                    enum bpf_arg_type arg_type, int insn_idx)
> > @@ -858,6 +860,44 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
> >         return 0;
> >  }
> >
> > +static void destroy_stack_slots_dynptr(struct bpf_verifier_env *env,
> > +                                      struct bpf_func_state *state, int spi)
> > +{
> > +       int i;
> > +
> > +       /* We always ensure that STACK_DYNPTR is never set partially,
> > +        * hence just checking for slot_type[0] is enough. This is
> > +        * different for STACK_SPILL, where it may be only set for
> > +        * 1 byte, so code has to use is_spilled_reg.
> > +        */
> > +       if (state->stack[spi].slot_type[0] != STACK_DYNPTR)
> > +               return;
> > +       /* Reposition spi to first slot */
> > +       if (!state->stack[spi].spilled_ptr.dynptr.first_slot)
> > +               spi = spi + 1;
> > +
> > +       mark_stack_slot_scratched(env, spi);
> > +       mark_stack_slot_scratched(env, spi - 1);
> > +
> > +       /* Writing partially to one dynptr stack slot destroys both. */
> > +       for (i = 0; i < BPF_REG_SIZE; i++) {
> > +               state->stack[spi].slot_type[i] = STACK_INVALID;
> > +               state->stack[spi - 1].slot_type[i] = STACK_INVALID;
> > +       }
> > +
> > +       /* Do not release reference state, we are destroying dynptr on stack,
> > +        * not using some helper to release it. Just reset register.
> > +        */
> > +       __mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
> > +       __mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
> > +
> > +       /* Same reason as unmark_stack_slots_dynptr above */
> > +       state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
> > +       state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
> > +
> > +       return;
> > +}
> > +
> >  static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> >  {
> >         struct bpf_func_state *state = func(env, reg);
> > @@ -3384,6 +3424,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> >                         env->insn_aux_data[insn_idx].sanitize_stack_spill = true;
> >         }
> >
> > +       destroy_stack_slots_dynptr(env, state, spi);
> > +
>
> subjective, but it feels like having an explicit slot_type !=
> STACK_DYNPTR here is better, then "destroy_stack_slots_dynptr"
> actually is doing destruction, not "maybe_destroy_stack_slots_dynptr",
> which you effectively are implementing here
>

The intent of the function is to destroy any dynptr that the spi belongs to.
If there is none, it just returns. I don't mind pulling the check out, but
since it would have to be done before each call to this function, it felt
better to keep it inside and make the non-STACK_DYNPTR case a no-op.

> also, shouldn't overwrite of dynptrs w/ ref_obj_id be prevented early
> on with a meaningful error, instead of waiting for "unreleased
> reference" error later on? for ref_obj_id dynptrs we know that you
> have to call helper with OBJ_RELEASE semantics, at which point we'll
> reset stack slots
>
> am I missing something?
>

Yes, I can make that change.

>
> >         mark_stack_slot_scratched(env, spi);
> >         if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
> >             !register_is_null(reg) && env->bpf_capable) {
> > @@ -3497,6 +3539,13 @@ static int check_stack_write_var_off(struct bpf_verifier_env *env,
> >         if (err)
> >                 return err;
> >
> > +       for (i = min_off; i < max_off; i++) {
> > +               int slot, spi;
> > +
> > +               slot = -i - 1;
> > +               spi = slot / BPF_REG_SIZE;
> > +               destroy_stack_slots_dynptr(env, state, spi);
> > +       }
> >
> >         /* Variable offset writes destroy any spilled pointers in range. */
> >         for (i = min_off; i < max_off; i++) {
> > @@ -5524,6 +5573,30 @@ static int check_stack_range_initialized(
> >         }
> >
> >         if (meta && meta->raw_mode) {
> > +               /* Ensure we won't be overwriting dynptrs when simulating byte
> > +                * by byte access in check_helper_call using meta.access_size.
> > +                * This would be a problem if we have a helper in the future
> > +                * which takes:
> > +                *
> > +                *      helper(uninit_mem, len, dynptr)
> > +                *
> > +                * Now, uninit_mem may overlap with dynptr pointer. Hence, it
> > +                * may end up writing to dynptr itself when touching memory from
> > +                * arg 1. This can be relaxed on a case by case basis for known
> > +                * safe cases, but reject due to the possibility of aliasing by
> > +                * default.
> > +                */
> > +               for (i = min_off; i < max_off + access_size; i++) {
> > +                       slot = -i - 1;
>
> nit: slot name is misleading, we normally call entire 8-byte slot a
> "slot", while here slot is actually off, right? same above.
>

The same naming has been used in multiple places, probably because these
functions also get an off parameter passed in from the caller. I guess
stack_off sounds better?

> > +                       spi = slot / BPF_REG_SIZE;
> > +                       /* raw_mode may write past allocated_stack */
> > +                       if (state->allocated_stack <= slot)
> > +                               continue;
> > +                       if (state->stack[spi].slot_type[slot % BPF_REG_SIZE] == STACK_DYNPTR) {
> > +                               verbose(env, "potential write to dynptr at off=%d disallowed\n", i);
> > +                               return -EACCES;
> > +                       }
> > +               }
> >                 meta->access_size = access_size;
> >                 meta->regno = regno;
> >                 return 0;
> > --
> > 2.39.0
> >


* Re: [PATCH bpf-next v1 3/8] bpf: Fix partial dynptr stack slot reads/writes
  2023-01-06 19:16   ` Joanne Koong
  2023-01-06 19:31     ` Joanne Koong
@ 2023-01-09 11:30     ` Kumar Kartikeya Dwivedi
  2023-01-12 18:51       ` Joanne Koong
  1 sibling, 1 reply; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-09 11:30 UTC (permalink / raw)
  To: Joanne Koong
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, David Vernet, Eduard Zingerman

On Sat, Jan 07, 2023 at 12:46:23AM IST, Joanne Koong wrote:
> On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> >
> > Currently, while reads are disallowed for dynptr stack slots, writes are
> > not. Reads don't work from both direct access and helpers, while writes
> > do work in both cases, but have the effect of overwriting the slot_type.
> >
> > While this is fine, handling for a few edge cases is missing. Firstly,
> > a user can overwrite the stack slots of dynptr partially.
> >
> > Consider the following layout:
> > spi: [d][d][?]
> >       2  1  0
> >
> > First slot is at spi 2, second at spi 1.
> > Now, do a write of 1 to 8 bytes for spi 1.
> >
> > This will essentially either write STACK_MISC for all slot_types or
> > STACK_MISC and STACK_ZERO (in case of size < BPF_REG_SIZE partial write
> > of zeroes). The end result is that slot is scrubbed.
> >
> > Now, the layout is:
> > spi: [d][m][?]
> >       2  1  0
> >
> > Suppose the user initializes spi = 1 as a dynptr.
> > We get:
> > spi: [d][d][d]
> >       2  1  0
> >
> > But this time, both spi 2 and spi 1 have first_slot = true.
> >
> > Now, when passing spi 2 to dynptr helper, it will consider it as
> > initialized as it does not check whether second slot has first_slot ==
> > false. And spi 1 should already work as normal.
> >
> > This effectively replaces the size + offset of the first dynptr, hence
> > allowing invalid OOB reads and writes.
> >
> > Make a few changes to protect against this:
> > When writing to PTR_TO_STACK using BPF insns, when we touch spi of a
> > STACK_DYNPTR type, mark both first and second slot (regardless of which
> > slot we touch) as STACK_INVALID. Reads are already prevented.
> >
> > Second, prevent writing to stack memory from helpers if the range may
> > contain any STACK_DYNPTR slots. Reads are already prevented.
> >
> > For helpers, we cannot allow writes to destroy dynptrs because,
> > depending on the arguments, a helper may take uninit_mem and a dynptr
> > at the same time. This would mean the helper may write to uninit_mem
> > before it reads the dynptr, which would be bad.
> >
> > PTR_TO_MEM: [?????dd]
> >
> > Depending on the code inside the helper, it may end up overwriting the
> > dynptr contents first and then read those as the dynptr argument.
> >
> > The verifier would only simulate destruction when it does byte-by-byte
> > access simulation in check_helper_call for meta.access_size, and would
> > fail to catch this case, as it happens after argument checks.
> >
> > The same would need to be done for any other non-trivial objects created
> > on the stack in the future, such as bpf_list_head on stack, or
> > bpf_rb_root on stack.
> >
> > A common misunderstanding in the current code is that MEM_UNINIT means
> > writes, but note that helpers may also perform writes even without
> > MEM_UNINIT; in that case, the code after handling meta &&
> > meta->raw_mode will complain when it sees STACK_DYNPTR. So that
> > invalid read case also covers writes to potential STACK_DYNPTR slots.
> > The only loophole was meta->raw_mode, which simulated writes through
> > instructions that could overwrite these slots.
> >
> > A future series sequenced after this will focus on the clean up of
> > helper access checks and bugs around that.
> >
> > Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
> > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > ---
> >  kernel/bpf/verifier.c | 73 +++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 73 insertions(+)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index ca970f80e395..b985d90505cc 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -769,6 +769,8 @@ static void mark_dynptr_cb_reg(struct bpf_reg_state *reg,
> >         __mark_dynptr_reg(reg, type, true);
> >  }
> >
> > +static void destroy_stack_slots_dynptr(struct bpf_verifier_env *env,
> > +                                      struct bpf_func_state *state, int spi);
> >
> >  static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
> >                                    enum bpf_arg_type arg_type, int insn_idx)
> > @@ -858,6 +860,44 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
> >         return 0;
> >  }
> >
> > +static void destroy_stack_slots_dynptr(struct bpf_verifier_env *env,
> > +                                      struct bpf_func_state *state, int spi)
> > +{
> > +       int i;
> > +
> > +       /* We always ensure that STACK_DYNPTR is never set partially,
> > +        * hence just checking for slot_type[0] is enough. This is
> > +        * different for STACK_SPILL, where it may be only set for
> > +        * 1 byte, so code has to use is_spilled_reg.
> > +        */
> > +       if (state->stack[spi].slot_type[0] != STACK_DYNPTR)
> > +               return;
>
> nit: an empty line here helps readability
>

Ok.

> > +       /* Reposition spi to first slot */
> > +       if (!state->stack[spi].spilled_ptr.dynptr.first_slot)
> > +               spi = spi + 1;
> > +
> > +       mark_stack_slot_scratched(env, spi);
> > +       mark_stack_slot_scratched(env, spi - 1);
> > +
> > +       /* Writing partially to one dynptr stack slot destroys both. */
> > +       for (i = 0; i < BPF_REG_SIZE; i++) {
> > +               state->stack[spi].slot_type[i] = STACK_INVALID;
> > +               state->stack[spi - 1].slot_type[i] = STACK_INVALID;
> > +       }
> > +
> > +       /* Do not release reference state, we are destroying dynptr on stack,
> > +        * not using some helper to release it. Just reset register.
> > +        */
>
> I agree with Andrii's point - I think it'd be more helpful if we error
> out here if the dynptr is refcounted. It'd be easy to check too, we
> already have dynptr_type_refcounted().
>

Ack, I'll change it to return an error early.

> > +       __mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
> > +       __mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
> > +
> > +       /* Same reason as unmark_stack_slots_dynptr above */
> > +       state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
> > +       state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
> > +
> > +       return;
>
> I think we should also invalidate any data slices associated with the
> dynptrs? It seems natural that once a dynptr is invalidated, none of
> its data slices should be usable.
>

Great catch, will fix.

> > +}
> > +
> >  static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> >  {
> >         struct bpf_func_state *state = func(env, reg);
> > @@ -3384,6 +3424,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> >                         env->insn_aux_data[insn_idx].sanitize_stack_spill = true;
> >         }
> >
> > +       destroy_stack_slots_dynptr(env, state, spi);
> > +
> >         mark_stack_slot_scratched(env, spi);
> >         if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
> >             !register_is_null(reg) && env->bpf_capable) {
> > @@ -3497,6 +3539,13 @@ static int check_stack_write_var_off(struct bpf_verifier_env *env,
> >         if (err)
> >                 return err;
> >
> > +       for (i = min_off; i < max_off; i++) {
> > +               int slot, spi;
> > +
> > +               slot = -i - 1;
> > +               spi = slot / BPF_REG_SIZE;
>
> I think you can just use __get_spi() here
>

Ack.

> > +               destroy_stack_slots_dynptr(env, state, spi);
>
> I think here too,
>
> if (state->stack[spi].slot_type[0] == STACK_DYNPTR)
>     destroy_stack_slots_dynptr(env, state, spi)
>
> makes it more readable.
>
> And if it is a STACK_DYNPTR, we can also fast-forward i.
>

No issues with such a change, but it's going to precede almost every call to
this function. I don't have a strong preference, but we could also rename it to
destroy_if_dynptr_stack_slot to make it clearer that the destruction is
conditional, and move the check inside the function.

> [...]

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH bpf-next v1 4/8] bpf: Allow reinitializing unreferenced dynptr stack slots
  2023-01-06 19:33     ` Joanne Koong
@ 2023-01-09 11:40       ` Kumar Kartikeya Dwivedi
  0 siblings, 0 replies; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-09 11:40 UTC (permalink / raw)
  To: Joanne Koong
  Cc: Andrii Nakryiko, bpf, Alexei Starovoitov, Andrii Nakryiko,
	Daniel Borkmann, Martin KaFai Lau, David Vernet,
	Eduard Zingerman

On Sat, Jan 07, 2023 at 01:03:24AM IST, Joanne Koong wrote:
> On Wed, Jan 4, 2023 at 2:44 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
> > <memxor@gmail.com> wrote:
> > >
> > > Consider a program like below:
> > >
> > > void prog(void)
> > > {
> > >         {
> > >                 struct bpf_dynptr ptr;
> > >                 bpf_dynptr_from_mem(...);
> > >         }
> > >         ...
> > >         {
> > >                 struct bpf_dynptr ptr;
> > >                 bpf_dynptr_from_mem(...);
> > >         }
> > > }
> > >
> > > Here, the C compiler based on lifetime rules in the C standard would be
> > > well within its rights to share stack storage for dynptr 'ptr', as
> > > their lifetimes do not overlap in the two distinct scopes. Currently,
> > > such an example would be rejected by the verifier, but this is too
> > > strict. Instead, we should allow reinitializing over dynptr stack slots
> > > and forget information about the old dynptr object.
> > >
> >
> > As mentioned in the previous patch, shouldn't we allow this only for
> > dynptrs that don't require OBJ_RELEASE, which would be those with
> > ref_obj_id == 0?
> >
>
> +1
>

Ack, I'll make this change.

> >
> > > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > > ---
> > >  kernel/bpf/verifier.c | 16 +++++++++-------
> > >  1 file changed, 9 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index b985d90505cc..e85e8c4be00d 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -786,6 +786,9 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
> > >         if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
> > >                 return -EINVAL;
> > >
> > > +       destroy_stack_slots_dynptr(env, state, spi);
> > > +       destroy_stack_slots_dynptr(env, state, spi - 1);
>
> We don't need the 2nd call since destroy_slots_dynptr() destroys both slots
>

We do, I'll add a comment explaining this.
There are two cases.

    [d1][d1][d2][d2]
spi   3   2   1   0

If we initialize at spi = 3, the first call destroys d1, and the call for
spi - 1 sees no STACK_DYNPTR, so it is simply a no-op.
But if we initialize at spi = 2, the first call destroys d1 but not d2; only
the second call destroys the dynptr at d2.

So when initialization overlaps the slots of two adjacent dynptrs, we'd miss
the second dynptr without making the 2nd call.

The call simply means 'destroy any dynptr which spi belongs to'. So it needs to
be made for both, as both spi and spi-1 may belong to different dynptrs.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH bpf-next v1 5/8] selftests/bpf: Add dynptr pruning tests
  2023-01-04 22:49   ` Andrii Nakryiko
@ 2023-01-09 11:44     ` Kumar Kartikeya Dwivedi
  0 siblings, 0 replies; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-09 11:44 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

On Thu, Jan 05, 2023 at 04:19:30AM IST, Andrii Nakryiko wrote:
> On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> >
> > Add verifier tests that verify the new pruning behavior for STACK_DYNPTR
> > slots, and ensure that state equivalence takes into account changes to
> > the old and current verifier state correctly.
> >
> > Without the prior fixes, both of these bugs trigger with unprivileged
> > BPF mode.
> >
> > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > ---
> >  tools/testing/selftests/bpf/verifier/dynptr.c | 90 +++++++++++++++++++
> >  1 file changed, 90 insertions(+)
> >  create mode 100644 tools/testing/selftests/bpf/verifier/dynptr.c
> >
> > diff --git a/tools/testing/selftests/bpf/verifier/dynptr.c b/tools/testing/selftests/bpf/verifier/dynptr.c
> > new file mode 100644
> > index 000000000000..798f4f7e0c57
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/verifier/dynptr.c
> > @@ -0,0 +1,90 @@
> > +{
> > +       "dynptr: rewrite dynptr slot",
> > +        .insns = {
> > +        BPF_MOV64_IMM(BPF_REG_0, 0),
> > +        BPF_LD_MAP_FD(BPF_REG_6, 0),
> > +        BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
> > +        BPF_MOV64_IMM(BPF_REG_2, 8),
> > +        BPF_MOV64_IMM(BPF_REG_3, 0),
> > +        BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
> > +        BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -16),
> > +        BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve_dynptr),
> > +        BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
> > +        BPF_JMP_IMM(BPF_JA, 0, 0, 1),
> > +        BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0xeB9F),
> > +        BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
> > +        BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -16),
> > +        BPF_MOV64_IMM(BPF_REG_2, 0),
> > +        BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_discard_dynptr),
> > +        BPF_MOV64_IMM(BPF_REG_0, 0),
> > +        BPF_EXIT_INSN(),
> > +        },
> > +       .fixup_map_ringbuf = { 1 },
> > +       .result_unpriv = REJECT,
> > +       .errstr_unpriv = "unknown func bpf_ringbuf_reserve_dynptr#198",
> > +       .result = REJECT,
> > +       .errstr = "arg 1 is an unacquired reference",
> > +},
> > +{
> > +       "dynptr: type confusion",
> > +       .insns = {
> > +       BPF_MOV64_IMM(BPF_REG_0, 0),
> > +       BPF_LD_MAP_FD(BPF_REG_6, 0),
> > +       BPF_LD_MAP_FD(BPF_REG_7, 0),
> > +       BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
> > +       BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
> > +       BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
> > +       BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
> > +       BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
> > +       BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -24),
> > +       BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0xeB9FeB9F),
> > +       BPF_ST_MEM(BPF_DW, BPF_REG_10, -24, 0xeB9FeB9F),
> > +       BPF_MOV64_IMM(BPF_REG_4, 0),
> > +       BPF_MOV64_REG(BPF_REG_8, BPF_REG_2),
> > +       BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_update_elem),
> > +       BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
> > +       BPF_MOV64_REG(BPF_REG_2, BPF_REG_8),
> > +       BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
> > +       BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
> > +       BPF_EXIT_INSN(),
> > +       BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
> > +       BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
> > +       BPF_MOV64_IMM(BPF_REG_2, 8),
> > +       BPF_MOV64_IMM(BPF_REG_3, 0),
> > +       BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
> > +       BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -16),
> > +       BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
> > +       BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve_dynptr),
> > +       BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
> > +       /* pad with insns to trigger add_new_state heuristic for straight line path */
> > +       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
> > +       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
> > +       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
> > +       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
> > +       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
> > +       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
> > +       BPF_MOV64_REG(BPF_REG_8, BPF_REG_8),
> > +       BPF_JMP_IMM(BPF_JA, 0, 0, 9),
> > +       BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
> > +       BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0),
> > +       BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
> > +       BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
> > +       BPF_MOV64_IMM(BPF_REG_2, 0),
> > +       BPF_MOV64_IMM(BPF_REG_3, 0),
> > +       BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
> > +       BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -16),
> > +       BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_dynptr_from_mem),
> > +       BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
> > +       BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -16),
> > +       BPF_MOV64_IMM(BPF_REG_2, 0),
> > +       BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_discard_dynptr),
> > +       BPF_MOV64_IMM(BPF_REG_0, 0),
> > +       BPF_EXIT_INSN(),
> > +       },
> > +       .fixup_map_hash_16b = { 1 },
> > +       .fixup_map_ringbuf = { 3 },
> > +       .result_unpriv = REJECT,
> > +       .errstr_unpriv = "unknown func bpf_ringbuf_reserve_dynptr#198",
> > +       .result = REJECT,
> > +       .errstr = "arg 1 is an unacquired reference",
> > +},
>
> have you tried to write these tests as embedded assembly in .bpf.c,
> using __attribute__((naked)) and __failure and __msg("")
> infrastructure? Eduard is working towards converting test_verifier's
> test to this __naked + embed asm approach, so we might want to start
> adding new tests in such form anyways? And they will be way more
> readable. Defining and passing ringbuf map in C is also much more
> obvious and easy.
>

I have been away for a while and missed that discussion; I just saw it. I'll try
writing the tests like that. It does look much better. Thanks for the suggestion!

> > --
> > 2.39.0
> >

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH bpf-next v1 3/8] bpf: Fix partial dynptr stack slot reads/writes
  2023-01-05  3:06   ` Alexei Starovoitov
@ 2023-01-09 11:52     ` Kumar Kartikeya Dwivedi
  2023-01-10  2:19       ` Alexei Starovoitov
  0 siblings, 1 reply; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-09 11:52 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

On Thu, Jan 05, 2023 at 08:36:07AM IST, Alexei Starovoitov wrote:
> On Sun, Jan 01, 2023 at 02:03:57PM +0530, Kumar Kartikeya Dwivedi wrote:
> > Currently, while reads are disallowed for dynptr stack slots, writes are
> > not. Reads don't work via either direct access or helpers, while writes
> > do work in both cases, but have the effect of overwriting the slot_type.
>
> Unrelated to this patch set, but disallowing reads from dynptr slots
> seems like unnecessary restriction.
> We allow reads from spilled slots and conceptually dynptr slots should
> fall in is_spilled_reg() category in check_stack_read_*().
>
> We already can do:
> d = bpf_rdonly_cast(dynptr, bpf_core_type_id_kernel(struct bpf_dynptr_kern))
> d->size;

Not sure this cast is required; these can just be reads from the stack, and
clang will generate CO-RE relocatable accesses when the pointer is cast to the
right struct with the preserve_access_index attribute set. Or did I miss
something?
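
For reference, the kind of direct read being discussed would look roughly like
this in a BPF program: a local shadow struct marked with preserve_access_index,
so clang emits CO-RE relocations for the field offsets instead of needing
bpf_rdonly_cast(). This is only an illustrative fragment, not part of the patch
set: the shadow field names follow struct bpf_dynptr_kern, the ringbuf map and
section name are assumptions, and, as this thread notes, the verifier would
currently still reject the plain stack read.

```c
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* Illustrative sketch only: the ___shadow suffix makes libbpf match this
 * against the kernel's struct bpf_dynptr_kern by name, and
 * preserve_access_index makes the field accesses CO-RE relocatable.
 */
struct bpf_dynptr_kern___shadow {
	void *data;
	__u32 size;
	__u32 offset;
} __attribute__((preserve_access_index));

struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 4096);
} ringbuf SEC(".maps");

SEC("tc")
int read_dynptr_size(struct __sk_buff *ctx)
{
	struct bpf_dynptr dptr;
	__u32 size;

	bpf_ringbuf_reserve_dynptr(&ringbuf, 8, 0, &dptr);
	/* A plain stack read of the dynptr internals; today the verifier
	 * rejects this, which is what the discussion is about relaxing.
	 */
	size = ((struct bpf_dynptr_kern___shadow *)&dptr)->size;
	bpf_ringbuf_discard_dynptr(&dptr, 0);
	return size;
}

char _license[] SEC("license") = "GPL";
```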

> and there is really no need to add bpf_dynptr* accessors either as helpers or as kfuncs.
> All accessors can simply be 'static inline' pure bpf functions in bpf_helpers.h.
> Automatic inlining and zero kernel side maintenance.

Yeah, it could be made to work (perhaps even portably, using CO-RE and
relocatable enum values that check for set bits, etc.). But in the end, how do
you define such an interface: will it be UAPI like xdp_md or __sk_buff,
unstable like kfuncs, or just best-effort stable as long as the user makes use
of CO-RE relocs?

>
> With verifier allowing reads into dynptr we can also enable bpf_cast_to_kern_ctx()
> to convert struct bpf_dynptr to struct bpf_dynptr_kern and enable
> even faster reads.

I think rdonly_cast is unnecessary; just enabling reads into the stack will be
enough to enable this.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH bpf-next v1 3/8] bpf: Fix partial dynptr stack slot reads/writes
  2023-01-09 11:52     ` Kumar Kartikeya Dwivedi
@ 2023-01-10  2:19       ` Alexei Starovoitov
  0 siblings, 0 replies; 38+ messages in thread
From: Alexei Starovoitov @ 2023-01-10  2:19 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

On Mon, Jan 9, 2023 at 3:52 AM Kumar Kartikeya Dwivedi <memxor@gmail.com> wrote:
>
> On Thu, Jan 05, 2023 at 08:36:07AM IST, Alexei Starovoitov wrote:
> > On Sun, Jan 01, 2023 at 02:03:57PM +0530, Kumar Kartikeya Dwivedi wrote:
> > > Currently, while reads are disallowed for dynptr stack slots, writes are
> > > not. Reads don't work via either direct access or helpers, while writes
> > > do work in both cases, but have the effect of overwriting the slot_type.
> >
> > Unrelated to this patch set, but disallowing reads from dynptr slots
> > seems like unnecessary restriction.
> > We allow reads from spilled slots and conceptually dynptr slots should
> > fall in is_spilled_reg() category in check_stack_read_*().
> >
> > We already can do:
> > d = bpf_rdonly_cast(dynptr, bpf_core_type_id_kernel(struct bpf_dynptr_kern))
> > d->size;
>
> Not sure this cast is required; these can just be reads from the stack, and
> clang will generate CO-RE relocatable accesses when the pointer is cast to the
> right struct with the preserve_access_index attribute set. Or did I miss
> something?

rdonly_cast is required today, because the verifier rejects raw reads.

> >
> > With verifier allowing reads into dynptr we can also enable bpf_cast_to_kern_ctx()
> > to convert struct bpf_dynptr to struct bpf_dynptr_kern and enable
> > even faster reads.
>
> I think rdonly_cast is unnecessary; just enabling reads into the stack will be
> enough to enable this.

Right. I was thinking that bpf_cast_to_kern_ctx will make things
more robust, but plain vanilla cast to struct bpf_dynptr_kern *
and using CO-RE will work when the verifier enables reads.

> relocatable enum values that check for set bits, etc.). But in the end, how do
> you define such an interface: will it be UAPI like xdp_md or __sk_buff,
> unstable like kfuncs, or just best-effort stable as long as the user makes use
> of CO-RE relocs?

We can start best-effort with CORE.
All our fixed uapi things like __sk_buff and xdp_md have ongoing
usability issues. People always need something new and we keep
adding to those structs every now and then.
struct __sk_buff is quite scary. Its vlan_* fields were a bit of a pain
to adjust when corresponding vlan changes were made in the sk_buff
and networking core.
Eventually we still had to do:
        /* Simulate the following kernel macro:
         *   #define skb_shinfo(SKB) ((struct skb_shared_info
*)(skb_end_pointer(SKB)))
         */
        shared_info = bpf_rdonly_cast(kskb->head + kskb->end,
                bpf_core_type_id_kernel(struct skb_shared_info));
I'm arguing that everything we expose to bpf progs should
be unstable in the beginning.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH bpf-next v1 1/8] bpf: Fix state pruning for STACK_DYNPTR stack slots
  2023-01-09 11:05     ` Kumar Kartikeya Dwivedi
@ 2023-01-12  0:47       ` Andrii Nakryiko
  0 siblings, 0 replies; 38+ messages in thread
From: Andrii Nakryiko @ 2023-01-12  0:47 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

On Mon, Jan 9, 2023 at 3:05 AM Kumar Kartikeya Dwivedi <memxor@gmail.com> wrote:
>
> On Thu, Jan 05, 2023 at 03:54:03AM IST, Andrii Nakryiko wrote:
> > On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
> > <memxor@gmail.com> wrote:
> > >
> > > The root of the problem is missing liveness marking for STACK_DYNPTR
> > > slots. This leads to all kinds of problems inside stacksafe.
> > >
> > > The verifier by default inside stacksafe ignores spilled_ptr in stack
> > > slots which do not have REG_LIVE_READ marks. Since this is being checked
> > > in the 'old' explored state, it must have already done clean_live_states
> > > for this old bpf_func_state. Hence, it won't be receiving any more
> > > liveness marks from to be explored insns (it has received REG_LIVE_DONE
> > > marking from liveness point of view).
> > >
> > > What this means is that the verifier considers it safe to not compare
> > > the stack slot if it was never read by children states. While liveness
> > > marks are usually propagated correctly following the parentage chain for
> > > spilled registers (SCALAR_VALUE and PTR_* types), the same is not the
> > > case for STACK_DYNPTR.
> > >
> > > clean_live_states hence simply rewrites these stack slots to the type
> > > STACK_INVALID since it sees no REG_LIVE_READ marks.
> > >
> > > The end result is that we will never see STACK_DYNPTR slots in explored
> > > state. Even if verifier was conservatively matching !REG_LIVE_READ
> > > slots, the very next check continuing the stacksafe loop on seeing
> > > STACK_INVALID would again prevent further checks.
> > >
> > > Now as long as verifier stores an explored state which we can compare to
> > > when reaching a pruning point, we can abuse this bug to make verifier
> > > prune search for obviously unsafe paths using STACK_DYNPTR slots
> > > thinking they are never used hence safe.
> > >
> > > Doing this in unprivileged mode is a bit challenging. add_new_state is
> > > only set when seeing BPF_F_TEST_STATE_FREQ (which requires privileges)
> > > or when jmps_processed difference is >= 2 and insn_processed difference
> > > is >= 8. So coming up with the unprivileged case requires a little more
> > > work, but it is still totally possible. The test case being discussed
> > > below triggers the heuristic even in unprivileged mode.
> > >
> > > However, it no longer works since commit
> > > 8addbfc7b308 ("bpf: Gate dynptr API behind CAP_BPF").
> > >
> > > Let's try to study the test step by step.
> > >
> > > Consider the following program (C style BPF ASM):
> > >
> > > 0  r0 = 0;
> > > 1  r6 = &ringbuf_map;
> > > 3  r1 = r6;
> > > 4  r2 = 8;
> > > 5  r3 = 0;
> > > 6  r4 = r10;
> > > 7  r4 += -16;
> > > 8  call bpf_ringbuf_reserve_dynptr;
> > > 9  if r0 == 0 goto pc+1;
> > > 10 goto pc+1;
> > > 11 *(r10 - 16) = 0xeB9F;
> > > 12 r1 = r10;
> > > 13 r1 -= -16;
> > > 14 r2 = 0;
> > > 15 call bpf_ringbuf_discard_dynptr;
> > > 16 r0 = 0;
> > > 17 exit;
> > >
> > > We know that insn 12 will be a pruning point, hence if we force
> > > add_new_state for it, it will first verify the following path as
> > > safe in straight line exploration:
> > > 0 1 3 4 5 6 7 8 9 -> 10 -> (12) 13 14 15 16 17
> > >
> > > Then, when we arrive at insn 12 from the following path:
> > > 0 1 3 4 5 6 7 8 9 -> 11 (12)
> > >
> > > We will find a state that has been verified as safe already at insn 12.
> > > Since register state is same at this point, regsafe will pass. Next, in
> > > stacksafe, for spi = 0 and spi = 1 (location of our dynptr) is skipped
> > > seeing !REG_LIVE_READ. The rest matches, so stacksafe returns true.
> > > Next, refsafe is also true as reference state is unchanged in both
> > > states.
> > >
> > > The states are considered equivalent and search is pruned.
> > >
> > > Hence, we are able to construct a dynptr with arbitrary contents and use
> > > the dynptr API to operate on this arbitrary pointer and arbitrary size +
> > > offset.
> > >
> > > To fix this, first define a mark_dynptr_read function that propagates
> > > liveness marks whenever a valid initialized dynptr is accessed by dynptr
> > > helpers. REG_LIVE_WRITTEN is marked whenever we initialize an
> > > uninitialized dynptr. This is done in mark_stack_slots_dynptr. It allows
> > > screening off mark_reg_read and not propagating marks upwards from that
> > > point.
> > >
> > > This ensures that we either set REG_LIVE_READ64 on both dynptr slots, or
> > > none, so clean_live_states either sets both slots to STACK_INVALID or
> > > none of them. This is the invariant the checks inside stacksafe rely on.
> > >
> > > Next, do a complete comparison of both stack slots whenever they have
> > > STACK_DYNPTR. Compare the dynptr type stored in the spilled_ptr, and
> > > also whether both form the same first_slot. Only then is the later path
> > > safe.
> > >
> > > Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
> > > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > > ---
> > >  kernel/bpf/verifier.c | 73 +++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 73 insertions(+)
> > >
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index 4a25375ebb0d..f7248235e119 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -781,6 +781,9 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
> > >                 state->stack[spi - 1].spilled_ptr.ref_obj_id = id;
> > >         }
> > >
> > > +       state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
> > > +       state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
> > > +
> > >         return 0;
> > >  }
> > >
> > > @@ -805,6 +808,26 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
> > >
> > >         __mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
> > >         __mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
> > > +
> > > +       /* Why do we need to set REG_LIVE_WRITTEN for STACK_INVALID slot?
> > > +        *
> > > +        * While we don't allow reading STACK_INVALID, it is still possible to
> > > +        * do <8 byte writes marking some but not all slots as STACK_MISC. Then,
> > > +        * helpers or insns can do partial read of that part without failing,
> > > +        * but check_stack_range_initialized, check_stack_read_var_off, and
> > > +        * check_stack_read_fixed_off will do mark_reg_read for all 8-bytes of
> > > +        * the slot conservatively. Hence we need to screen off those liveness
> > > +        * marking walks.
> > > +        *
> > > +        * This was not a problem before because STACK_INVALID is only set by
> > > +        * default, or in clean_live_states after REG_LIVE_DONE, not randomly
> > > +        * during verifier state exploration. Hence, for this case parentage
> > > +        * chain will still be live, while earlier reg->parent was NULL, so we
> > > +        * need REG_LIVE_WRITTEN to screen off read marker propagation.
> > > +        */
> > > +       state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
> > > +       state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
> > > +
> > >         return 0;
> > >  }
> > >
> > > @@ -2388,6 +2411,30 @@ static int mark_reg_read(struct bpf_verifier_env *env,
> > >         return 0;
> > >  }
> > >
> > > +static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> > > +{
> > > +       struct bpf_func_state *state = func(env, reg);
> > > +       int spi, ret;
> > > +
> > > +       /* For CONST_PTR_TO_DYNPTR, it must have already been done by
> > > +        * check_reg_arg in check_helper_call and mark_btf_func_reg_size in
> > > +        * check_kfunc_call.
> > > +        */
> > > +       if (reg->type == CONST_PTR_TO_DYNPTR)
> > > +               return 0;
> > > +       spi = get_spi(reg->off);
> > > +       /* Caller ensures dynptr is valid and initialized, which means spi is in
> > > +        * bounds and spi is the first dynptr slot. Simply mark stack slot as
> > > +        * read.
> > > +        */
> > > +       ret = mark_reg_read(env, &state->stack[spi].spilled_ptr,
> > > +                           state->stack[spi].spilled_ptr.parent, REG_LIVE_READ64);
> > > +       if (ret)
> > > +               return ret;
> > > +       return mark_reg_read(env, &state->stack[spi - 1].spilled_ptr,
> > > +                            state->stack[spi - 1].spilled_ptr.parent, REG_LIVE_READ64);
> > > +}
> > > +
> > >  /* This function is supposed to be used by the following 32-bit optimization
> > >   * code only. It returns TRUE if the source or destination register operates
> > >   * on 64-bit, otherwise return FALSE.
> > > @@ -5928,6 +5975,7 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
> > >                         enum bpf_arg_type arg_type, struct bpf_call_arg_meta *meta)
> > >  {
> > >         struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
> > > +       int err;
> > >
> > >         /* MEM_UNINIT and MEM_RDONLY are exclusive, when applied to an
> > >          * ARG_PTR_TO_DYNPTR (or ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_*):
> > > @@ -6008,6 +6056,10 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
> > >                                 err_extra, regno);
> > >                         return -EINVAL;
> > >                 }
> > > +
> > > +               err = mark_dynptr_read(env, reg);
> > > +               if (err)
> > > +                       return err;
> > >         }
> > >         return 0;
> > >  }
> > > @@ -13204,6 +13256,27 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> > >                          * return false to continue verification of this path
> > >                          */
> > >                         return false;
> > > +               /* Both are the same slot_type, but STACK_DYNPTR requires more
> > > +                * checks before it can be considered safe.
> > > +                */
> > > +               if (old->stack[spi].slot_type[i % BPF_REG_SIZE] == STACK_DYNPTR) {
> >
> > how about moving this check right after `if (i % BPF_REG_SIZE !=
> > BPF_REG_SIZE - 1)` ? Then we can actually generalize this to a switch
> > to handle STACK_SPILL and STACK_DYNPTR separately. I'm adding
> > STACK_ITER in my upcoming patch set, so this will have all the things
> > ready for that?
> >
> > switch (old->stack[spi].slot_type[BPF_REG_SIZE - 1]) {
> > case STACK_SPILL:
> >   if (!regsafe(...))
> >      return false;
> >   break;
> > case STACK_DYNPTR:
> >   ...
> >   break;
> > /* and then eventually */
> > case STACK_ITER:
> >   ...
> >
> > WDYT?
> >
>
> I can do this, it certainly makes sense with your upcoming changes, and it does
> look cleaner.
>
> > > +                       /* If both are STACK_DYNPTR, type must be same */
> > > +                       if (old->stack[spi].spilled_ptr.dynptr.type != cur->stack[spi].spilled_ptr.dynptr.type)
> >
> > struct bpf_reg_state *old_reg, *cur_reg;
> >
> > old_reg = &old->stack[spi].spilled_ptr;
> > cur_reg = &cur->stack[spi].spilled_ptr;
> >
> > and then use old_reg and cur_reg in one simple if
> >
> > here's how I have it locally:
> >
> >                 case STACK_DYNPTR:
> >                         old_reg = &old->stack[spi].spilled_ptr;
> >                         cur_reg = &cur->stack[spi].spilled_ptr;
> >                         if (old_reg->dynptr.type != cur_reg->dynptr.type ||
> >                             old_reg->dynptr.first_slot !=
> > cur_reg->dynptr.first_slot ||
> >                             !check_ids(old_reg->ref_obj_id,
> > cur_reg->ref_obj_id, idmap))
> >                                 return false;
> >                         break;
> >
> > seems a bit cleaner?
> >
>
> Yep.
>
> > I'm also thinking of getting rid of first_slot field and instead have
> > a rule that first slot has proper type set, but the next one has
> > BPF_DYNPTR_TYPE_INVALID as type. This should simplify things a bit, I
> > think. At least it seems that way for STACK_ITER state I'm adding. But
> > that's a separate refactoring, probably.
> >
>
> Yeah, I'd rather not mix that into this set. Let me know if you think that's
> better and I can follow up after the next iteration with that change.

So far it looks better for me, but we can always fix it later.

>
> > > +                               return false;
> > > +                       /* Both should also have first slot at same spi */
> > > +                       if (old->stack[spi].spilled_ptr.dynptr.first_slot != cur->stack[spi].spilled_ptr.dynptr.first_slot)
> > > +                               return false;
> > > +                       /* ids should be same */
> > > +                       if (!!old->stack[spi].spilled_ptr.ref_obj_id != !!cur->stack[spi].spilled_ptr.ref_obj_id)
> > > +                               return false;
> > > +                       if (old->stack[spi].spilled_ptr.ref_obj_id &&
> > > +                           !check_ids(old->stack[spi].spilled_ptr.ref_obj_id,
> > > +                                      cur->stack[spi].spilled_ptr.ref_obj_id, idmap))
> >
> > my previous change to teach check_ids to enforce that both ids have to
> > be zero or non-zero at the same time already landed, so you don't
> > need to check `old->stack[spi].spilled_ptr.ref_obj_id`. Even more, it
> > seems wrong to do this check like this, because if cur has ref_obj_id
> > set we'll ignore it, right?
>
> The check before that ensures either both are set or both are unset. If there is
> a mismatch we return false. I see that check_ids does it now, so yes it wouldn't
> be needed anymore. I am not sure about the last part, I don't think it will be
> ignored?

Right, I forgot about that `!!old.ref_obj_id != !!cur.ref_obj_id` check.
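For readers following along, the stacksafe() comparison Andrii sketches above can be modeled as a standalone snippet. This is a toy model, not verifier code: the struct and enum names mirror the kernel's but are redefined here, and check_ids() is simplified to the invariant discussed in this thread (both ids zero or both non-zero), ignoring the real old->cur id mapping.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative dynptr types; the kernel defines more. */
enum bpf_dynptr_type { BPF_DYNPTR_TYPE_LOCAL, BPF_DYNPTR_TYPE_RINGBUF };

/* Reduced view of the fields of bpf_reg_state that the
 * STACK_DYNPTR branch of stacksafe() compares.
 */
struct dynptr_slot {
	enum bpf_dynptr_type type;
	bool first_slot;
	int ref_obj_id;
};

/* Simplified check_ids(): either both ids are zero or both are
 * non-zero. The real helper also maintains an old->cur id mapping.
 */
static bool check_ids(int old_id, int cur_id)
{
	return !!old_id == !!cur_id;
}

/* The three conditions the patch checks: dynptr type, first-slot
 * position, and reference id compatibility.
 */
static bool dynptr_slot_safe(const struct dynptr_slot *old,
			     const struct dynptr_slot *cur)
{
	return old->type == cur->type &&
	       old->first_slot == cur->first_slot &&
	       check_ids(old->ref_obj_id, cur->ref_obj_id);
}
```

A mismatch in any of the three fields means the cached (old) state does not subsume the current one, so pruning must not happen.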

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH bpf-next v1 0/8] Dynptr fixes
  2023-01-04 22:51 ` [PATCH bpf-next v1 0/8] Dynptr fixes Andrii Nakryiko
@ 2023-01-12  1:08   ` Kumar Kartikeya Dwivedi
  2023-01-13 22:31     ` Andrii Nakryiko
  0 siblings, 1 reply; 38+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2023-01-12  1:08 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

On Thu, Jan 05, 2023 at 04:21:27AM IST, Andrii Nakryiko wrote:
> On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> >
> > Happy New Year!
> >
> > This is part 2 of https://lore.kernel.org/bpf/20221018135920.726360-1-memxor@gmail.com.
> >
> > Changelog:
> > ----------
> > Old v1 -> v1
> > Old v1: https://lore.kernel.org/bpf/20221018135920.726360-1-memxor@gmail.com
> >
> >  * Allow overwriting dynptr stack slots from dynptr init helpers
> >  * Fix a bug in alignment check where reg->var_off.value was still not included
> >  * Address other minor nits
> >
> > Kumar Kartikeya Dwivedi (8):
> >   bpf: Fix state pruning for STACK_DYNPTR stack slots
> >   bpf: Fix missing var_off check for ARG_PTR_TO_DYNPTR
> >   bpf: Fix partial dynptr stack slot reads/writes
> >   bpf: Allow reinitializing unreferenced dynptr stack slots
> >   selftests/bpf: Add dynptr pruning tests
> >   selftests/bpf: Add dynptr var_off tests
> >   selftests/bpf: Add dynptr partial slot overwrite tests
> >   selftests/bpf: Add dynptr helper tests
> >
>
> > Hey Kumar, thanks for the fixes! Left a few comments, but I was also
> wondering if you thought about current is_spilled_reg() usage in the
> code? It makes an assumption that stack slots can be either a scalar
> (MISC/ZERO/INVALID) or STACK_SPILL. With STACK_DYNPTR it's not the
> case anymore, so it feels like we need to audit all the places where
> we assume stack spill and see if anything should be fixed. Was just
> wondering if you already looked at this?
>

I did look at its usage (once again), here are some comments:

For check_stack_read_fixed_off, the else branch for !is_spilled_reg treats
anything apart from STACK_MISC and STACK_ZERO as an error, so it would include
STACK_INVALID, STACK_DYNPTR, and your upcoming STACK_ITER.
For check_stack_read_var_off and its underlying check_stack_range_initialized,
the situation is the same: anything apart from STACK_MISC, STACK_ZERO and
STACK_SPILL becomes an error.

Coming to check_stack_write_fixed_off and check_stack_write_var_off, they will
no longer see STACK_DYNPTR, as we destroy all dynptrs for the stack slots being
written to, so it falls back to the handling for the case of STACK_INVALID.
With your upcoming STACK_ITER, I assume dynptr/iter/list_head and all such
objects on the stack will follow consistent lifetime rules, so stray writes
should lead to their destruction in verifier state.

The rest of the cases to me seem to be about checking for spilled reg to be a
SCALAR_VALUE, and one case in stacksafe which looks ok as well.

Are there any particular cases that you are worried about?
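A compact way to restate the read-path analysis above, as a purely illustrative snippet (the enum values and function name are redefined here for the sketch, not taken from the kernel):

```c
#include <assert.h>
#include <stdbool.h>

/* Stack slot types as discussed in the thread; STACK_ITER is the
 * upcoming addition mentioned by Andrii.
 */
enum stack_slot_type {
	STACK_INVALID,
	STACK_SPILL,
	STACK_MISC,
	STACK_ZERO,
	STACK_DYNPTR,
	STACK_ITER,
};

/* Per the analysis: for a non-spilled slot, a direct (fixed-off or
 * var-off) stack read only tolerates STACK_MISC and STACK_ZERO;
 * STACK_INVALID, STACK_DYNPTR, and a future STACK_ITER are rejected.
 */
static bool nonspill_read_ok(enum stack_slot_type t)
{
	return t == STACK_MISC || t == STACK_ZERO;
}
```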

> >  kernel/bpf/verifier.c                         | 243 ++++++++++++++++--
> >  .../bpf/prog_tests/kfunc_dynptr_param.c       |   2 +-
> >  .../testing/selftests/bpf/progs/dynptr_fail.c |  68 ++++-
> >  tools/testing/selftests/bpf/verifier/dynptr.c | 182 +++++++++++++
> >  4 files changed, 464 insertions(+), 31 deletions(-)
> >  create mode 100644 tools/testing/selftests/bpf/verifier/dynptr.c
> >
> >
> > base-commit: bb5747cfbc4b7fe29621ca6cd4a695d2723bf2e8
> > --
> > 2.39.0
> >


* Re: [PATCH bpf-next v1 3/8] bpf: Fix partial dynptr stack slot reads/writes
  2023-01-09 11:30     ` Kumar Kartikeya Dwivedi
@ 2023-01-12 18:51       ` Joanne Koong
  0 siblings, 0 replies; 38+ messages in thread
From: Joanne Koong @ 2023-01-12 18:51 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, David Vernet, Eduard Zingerman

On Mon, Jan 9, 2023 at 3:30 AM Kumar Kartikeya Dwivedi <memxor@gmail.com> wrote:
>
> On Sat, Jan 07, 2023 at 12:46:23AM IST, Joanne Koong wrote:
> > On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
> > <memxor@gmail.com> wrote:
> > >
> > > Currently, while reads are disallowed for dynptr stack slots, writes are
> > > not. Reads don't work via either direct access or helpers, while writes
> > > do work in both cases, but have the effect of overwriting the slot_type.
> > >
> > > While this is fine, handling for a few edge cases is missing. Firstly,
> > > a user can overwrite the stack slots of dynptr partially.
> > >
> > > Consider the following layout:
> > > spi: [d][d][?]
> > >       2  1  0
> > >
> > > First slot is at spi 2, second at spi 1.
> > > Now, do a write of 1 to 8 bytes for spi 1.
> > >
> > > This will essentially either write STACK_MISC for all slot_types or
> > > STACK_MISC and STACK_ZERO (in case of size < BPF_REG_SIZE partial write
> > > of zeroes). The end result is that slot is scrubbed.
> > >
> > > Now, the layout is:
> > > spi: [d][m][?]
> > >       2  1  0
> > >
> > > Suppose the user now initializes spi = 1 as a dynptr.
> > > We get:
> > > spi: [d][d][d]
> > >       2  1  0
> > >
> > > But this time, both spi 2 and spi 1 have first_slot = true.
> > >
> > > Now, when spi 2 is passed to a dynptr helper, it will be considered
> > > initialized, as the helper does not check whether the second slot has
> > > first_slot == false. And spi 1 should already work as normal.
> > >
> > > This effectively replaced size + offset of first dynptr, hence allowing
> > > invalid OOB reads and writes.
> > >
> > > Make a few changes to protect against this:
> > > When writing to PTR_TO_STACK using BPF insns, when we touch spi of a
> > > STACK_DYNPTR type, mark both first and second slot (regardless of which
> > > slot we touch) as STACK_INVALID. Reads are already prevented.
> > >
> > > Second, prevent writing to stack memory from helpers if the range may
> > > contain any STACK_DYNPTR slots. Reads are already prevented.
> > >
> > > For helpers, we cannot allow it to destroy dynptrs from the writes as
> > > depending on arguments, helper may take uninit_mem and dynptr both at
> > > the same time. This would mean that helper may write to uninit_mem
> > > before it reads the dynptr, which would be bad.
> > >
> > > PTR_TO_MEM: [?????dd]
> > >
> > > Depending on the code inside the helper, it may end up overwriting the
> > > dynptr contents first and then read those as the dynptr argument.
> > >
> > > Verifier would only simulate destruction when it does byte by byte
> > > access simulation in check_helper_call for meta.access_size, and
> > > fail to catch this case, as it happens after argument checks.
> > >
> > > The same would need to be done for any other non-trivial objects created
> > > on the stack in the future, such as bpf_list_head on stack, or
> > > bpf_rb_root on stack.
> > >
> > > A common misunderstanding in the current code is that MEM_UNINIT means
> > > writes, but note that writes may also be performed even without
> > > MEM_UNINIT in case of helpers, in that case the code after handling meta
> > > && meta->raw_mode will complain when it sees STACK_DYNPTR. So that
> > > invalid read case also covers writes to potential STACK_DYNPTR slots.
> > > The only loophole was in case of meta->raw_mode which simulated writes
> > > through instructions which could overwrite them.
> > >
> > > A future series sequenced after this will focus on the clean up of
> > > helper access checks and bugs around that.
> > >
> > > Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
> > > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > > ---
> > >  kernel/bpf/verifier.c | 73 +++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 73 insertions(+)
> > >
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index ca970f80e395..b985d90505cc 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -769,6 +769,8 @@ static void mark_dynptr_cb_reg(struct bpf_reg_state *reg,
> > >         __mark_dynptr_reg(reg, type, true);
> > >  }
> > >
> > > +static void destroy_stack_slots_dynptr(struct bpf_verifier_env *env,
> > > +                                      struct bpf_func_state *state, int spi);
> > >
> > >  static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
> > >                                    enum bpf_arg_type arg_type, int insn_idx)
> > > @@ -858,6 +860,44 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
> > >         return 0;
> > >  }
> > >
> > > +static void destroy_stack_slots_dynptr(struct bpf_verifier_env *env,
> > > +                                      struct bpf_func_state *state, int spi)
> > > +{
> > > +       int i;
> > > +
> > > +       /* We always ensure that STACK_DYNPTR is never set partially,
> > > +        * hence just checking for slot_type[0] is enough. This is
> > > +        * different for STACK_SPILL, where it may be only set for
> > > +        * 1 byte, so code has to use is_spilled_reg.
> > > +        */
> > > +       if (state->stack[spi].slot_type[0] != STACK_DYNPTR)
> > > +               return;
> >
> > nit: an empty line here helps readability
> >
>
> Ok.
>
> > > +       /* Reposition spi to first slot */
> > > +       if (!state->stack[spi].spilled_ptr.dynptr.first_slot)
> > > +               spi = spi + 1;
> > > +
> > > +       mark_stack_slot_scratched(env, spi);
> > > +       mark_stack_slot_scratched(env, spi - 1);
> > > +
> > > +       /* Writing partially to one dynptr stack slot destroys both. */
> > > +       for (i = 0; i < BPF_REG_SIZE; i++) {
> > > +               state->stack[spi].slot_type[i] = STACK_INVALID;
> > > +               state->stack[spi - 1].slot_type[i] = STACK_INVALID;
> > > +       }
> > > +
> > > +       /* Do not release reference state, we are destroying dynptr on stack,
> > > +        * not using some helper to release it. Just reset register.
> > > +        */
> >
> > I agree with Andrii's point - I think it'd be more helpful if we error
> > out here if the dynptr is refcounted. It'd be easy to check too, we
> > already have dynptr_type_refcounted().
> >
>
> Ack, I'll change it to return an error early.
>
> > > +       __mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
> > > +       __mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
> > > +
> > > +       /* Same reason as unmark_stack_slots_dynptr above */
> > > +       state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
> > > +       state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
> > > +
> > > +       return;
> >
> > I think we should also invalidate any data slices associated with the
> > dynptrs? It seems natural that once a dynptr is invalidated, none of
> > its data slices should be usable.
> >
>
> Great catch, will fix.
>
> > > +}
> > > +
> > >  static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> > >  {
> > >         struct bpf_func_state *state = func(env, reg);
> > > @@ -3384,6 +3424,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> > >                         env->insn_aux_data[insn_idx].sanitize_stack_spill = true;
> > >         }
> > >
> > > +       destroy_stack_slots_dynptr(env, state, spi);
> > > +
> > >         mark_stack_slot_scratched(env, spi);
> > >         if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
> > >             !register_is_null(reg) && env->bpf_capable) {
> > > @@ -3497,6 +3539,13 @@ static int check_stack_write_var_off(struct bpf_verifier_env *env,
> > >         if (err)
> > >                 return err;
> > >
> > > +       for (i = min_off; i < max_off; i++) {
> > > +               int slot, spi;
> > > +
> > > +               slot = -i - 1;
> > > +               spi = slot / BPF_REG_SIZE;
> >
> > I think you can just use __get_spi() here
> >
>
> Ack.
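As a sanity check on the index math in the loop above, here is a standalone sketch. get_spi_model() is a hypothetical name for this illustration; the real __get_spi() helper wraps the same computation and additionally validates that the offset is in bounds.

```c
#include <assert.h>

#define BPF_REG_SIZE 8

/* Stack offsets are negative: off = -1 is the highest stack byte.
 * slot = -off - 1 maps an offset to a 0-based byte index, and the
 * stack slot index (spi) is that divided by the 8-byte slot size,
 * mirroring `slot = -i - 1; spi = slot / BPF_REG_SIZE` in the patch.
 */
static int get_spi_model(int off)
{
	return (-off - 1) / BPF_REG_SIZE;
}
```

So bytes -1..-8 land in spi 0, bytes -9..-16 in spi 1, and so on; a two-slot dynptr at offsets -16..-1 occupies spi 1 (first slot) and spi 0.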
>
> > > +               destroy_stack_slots_dynptr(env, state, spi);
> >
> > I think here too,
> >
> > if (state->stack[spi].slot_type[0] == STACK_DYNPTR)
> >     destroy_stack_slots_dynptr(env, state, spi)
> >
> > makes it more readable.
> >
> > And if it is a STACK_DYNPTR, we can also fast-forward i.
> >
>
> No issues with such a change, but it's going to precede almost every call to
> this function. I don't have a strong preference, but we could also call it
> destroy_if_dynptr_stack_slot to make it clearer that the destruction is
> conditional, and move the check inside the function.

I don't have a strong preference either. Calling it
destroy_if_dynptr_stack_slot() sounds good as well.

>
> > [...]
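To make the conditional-destruction semantics under discussion concrete, here is a minimal standalone model. Names and the slot representation are illustrative only; the kernel version additionally scrubs the per-byte slot_type array, resets the spilled_ptr registers, and sets REG_LIVE_WRITTEN on both slots.

```c
#include <assert.h>
#include <stdbool.h>

#define NSLOTS 4

/* Reduced slot state: whether the slot belongs to a dynptr, and
 * whether it is the first (higher-spi) slot of the two-slot pair.
 */
struct slot {
	bool is_dynptr;
	bool first_slot;
};

/* Model of destroy_stack_slots_dynptr(): a no-op on non-dynptr slots,
 * but a write touching either slot of a dynptr invalidates both.
 */
static void destroy_dynptr(struct slot *stack, int spi)
{
	if (!stack[spi].is_dynptr)
		return;
	/* Reposition spi to the first slot of the pair. */
	if (!stack[spi].first_slot)
		spi += 1;
	/* Writing partially to one dynptr stack slot destroys both. */
	stack[spi].is_dynptr = stack[spi].first_slot = false;
	stack[spi - 1].is_dynptr = stack[spi - 1].first_slot = false;
}
```

This is the behavior the rename debate is about: the destruction only happens if the touched slot actually holds a dynptr, hence destroy_if_dynptr_stack_slot().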


* Re: [PATCH bpf-next v1 0/8] Dynptr fixes
  2023-01-12  1:08   ` Kumar Kartikeya Dwivedi
@ 2023-01-13 22:31     ` Andrii Nakryiko
  0 siblings, 0 replies; 38+ messages in thread
From: Andrii Nakryiko @ 2023-01-13 22:31 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Martin KaFai Lau, Joanne Koong, David Vernet, Eduard Zingerman

On Wed, Jan 11, 2023 at 5:08 PM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> On Thu, Jan 05, 2023 at 04:21:27AM IST, Andrii Nakryiko wrote:
> > On Sun, Jan 1, 2023 at 12:34 AM Kumar Kartikeya Dwivedi
> > <memxor@gmail.com> wrote:
> > >
> > > Happy New Year!
> > >
> > > This is part 2 of https://lore.kernel.org/bpf/20221018135920.726360-1-memxor@gmail.com.
> > >
> > > Changelog:
> > > ----------
> > > Old v1 -> v1
> > > Old v1: https://lore.kernel.org/bpf/20221018135920.726360-1-memxor@gmail.com
> > >
> > >  * Allow overwriting dynptr stack slots from dynptr init helpers
> > >  * Fix a bug in alignment check where reg->var_off.value was still not included
> > >  * Address other minor nits
> > >
> > > Kumar Kartikeya Dwivedi (8):
> > >   bpf: Fix state pruning for STACK_DYNPTR stack slots
> > >   bpf: Fix missing var_off check for ARG_PTR_TO_DYNPTR
> > >   bpf: Fix partial dynptr stack slot reads/writes
> > >   bpf: Allow reinitializing unreferenced dynptr stack slots
> > >   selftests/bpf: Add dynptr pruning tests
> > >   selftests/bpf: Add dynptr var_off tests
> > >   selftests/bpf: Add dynptr partial slot overwrite tests
> > >   selftests/bpf: Add dynptr helper tests
> > >
> >
> > Hey Kumar, thanks for the fixes! Left a few comments, but I was also
> > wondering if you thought about current is_spilled_reg() usage in the
> > code? It makes an assumption that stack slots can be either a scalar
> > (MISC/ZERO/INVALID) or STACK_SPILL. With STACK_DYNPTR it's not the
> > case anymore, so it feels like we need to audit all the places where
> > we assume stack spill and see if anything should be fixed. Was just
> > wondering if you already looked at this?
> >
>
> I did look at its usage (once again), here are some comments:
>
> For check_stack_read_fixed_off, the else branch for !is_spilled_reg treats
> anything apart from STACK_MISC and STACK_ZERO as an error, so it would include
> STACK_INVALID, STACK_DYNPTR, and your upcoming STACK_ITER.
> For check_stack_read_var_off and its underlying check_stack_range_initialized,
> situation is the same, anything apart from STACK_MISC, STACK_ZERO and
> STACK_SPILL becomes an error.
>
> Coming to check_stack_write_fixed_off and check_stack_write_var_off, they will
> no longer see STACK_DYNPTR, as we destroy all dynptr for the stack slots being
> written to, so it falls back to the handling for the case of STACK_INVALID.
> With your upcoming STACK_ITER I assume dynptr/iter/list_head all such objects on
> the stack will follow consistent lifetime rules so stray writes should lead to
> their destruction in verifier state.
>
> The rest of the cases to me seem to be about checking for spilled reg to be a
> SCALAR_VALUE, and one case in stacksafe which looks ok as well.
>
> Are there any particular cases that you are worried about?

So I did a first quick pass and just changed is_spilled_reg to

+static bool is_stack_slot_special(const struct bpf_stack_state *stack)
+{
+       enum bpf_stack_slot_type type = stack->slot_type[BPF_REG_SIZE - 1];
+
+       switch (type) {
+       case STACK_SPILL:
+       case STACK_DYNPTR:
+       case STACK_ITER:
+               return true;
+       case STACK_INVALID:
+       case STACK_MISC:
+       case STACK_ZERO:
+               return false;
+       default:
+               WARN_ONCE(1, "unknown stack slot type %d\n", type);
+               return true;
+       }
+}
+

Which resulted in one or two tests failing due to the wrong verifier
message. I haven't spent much time debugging this and I still have a
few TODOs in the code to double-check everything in regards to this
change.

Thanks for the above analysis, I'll come back to it when I get to work
on my code again.


>
> > >  kernel/bpf/verifier.c                         | 243 ++++++++++++++++--
> > >  .../bpf/prog_tests/kfunc_dynptr_param.c       |   2 +-
> > >  .../testing/selftests/bpf/progs/dynptr_fail.c |  68 ++++-
> > >  tools/testing/selftests/bpf/verifier/dynptr.c | 182 +++++++++++++
> > >  4 files changed, 464 insertions(+), 31 deletions(-)
> > >  create mode 100644 tools/testing/selftests/bpf/verifier/dynptr.c
> > >
> > >
> > > base-commit: bb5747cfbc4b7fe29621ca6cd4a695d2723bf2e8
> > > --
> > > 2.39.0
> > >


Thread overview: 38+ messages
2023-01-01  8:33 [PATCH bpf-next v1 0/8] Dynptr fixes Kumar Kartikeya Dwivedi
2023-01-01  8:33 ` [PATCH bpf-next v1 1/8] bpf: Fix state pruning for STACK_DYNPTR stack slots Kumar Kartikeya Dwivedi
2023-01-02 19:28   ` Eduard Zingerman
2023-01-09 10:59     ` Kumar Kartikeya Dwivedi
2023-01-04 22:24   ` Andrii Nakryiko
2023-01-09 11:05     ` Kumar Kartikeya Dwivedi
2023-01-12  0:47       ` Andrii Nakryiko
2023-01-06  0:18   ` Joanne Koong
2023-01-09 11:17     ` Kumar Kartikeya Dwivedi
2023-01-01  8:33 ` [PATCH bpf-next v1 2/8] bpf: Fix missing var_off check for ARG_PTR_TO_DYNPTR Kumar Kartikeya Dwivedi
2023-01-04 22:32   ` Andrii Nakryiko
2023-01-09 11:18     ` Kumar Kartikeya Dwivedi
2023-01-06  0:57   ` Joanne Koong
2023-01-06 17:56     ` Joanne Koong
2023-01-09 11:21     ` Kumar Kartikeya Dwivedi
2023-01-01  8:33 ` [PATCH bpf-next v1 3/8] bpf: Fix partial dynptr stack slot reads/writes Kumar Kartikeya Dwivedi
2023-01-04 22:42   ` Andrii Nakryiko
2023-01-09 11:26     ` Kumar Kartikeya Dwivedi
2023-01-05  3:06   ` Alexei Starovoitov
2023-01-09 11:52     ` Kumar Kartikeya Dwivedi
2023-01-10  2:19       ` Alexei Starovoitov
2023-01-06 19:16   ` Joanne Koong
2023-01-06 19:31     ` Joanne Koong
2023-01-09 11:30     ` Kumar Kartikeya Dwivedi
2023-01-12 18:51       ` Joanne Koong
2023-01-01  8:33 ` [PATCH bpf-next v1 4/8] bpf: Allow reinitializing unreferenced dynptr stack slots Kumar Kartikeya Dwivedi
2023-01-04 22:44   ` Andrii Nakryiko
2023-01-06 19:33     ` Joanne Koong
2023-01-09 11:40       ` Kumar Kartikeya Dwivedi
2023-01-01  8:33 ` [PATCH bpf-next v1 5/8] selftests/bpf: Add dynptr pruning tests Kumar Kartikeya Dwivedi
2023-01-04 22:49   ` Andrii Nakryiko
2023-01-09 11:44     ` Kumar Kartikeya Dwivedi
2023-01-01  8:34 ` [PATCH bpf-next v1 6/8] selftests/bpf: Add dynptr var_off tests Kumar Kartikeya Dwivedi
2023-01-01  8:34 ` [PATCH bpf-next v1 7/8] selftests/bpf: Add dynptr partial slot overwrite tests Kumar Kartikeya Dwivedi
2023-01-01  8:34 ` [PATCH bpf-next v1 8/8] selftests/bpf: Add dynptr helper tests Kumar Kartikeya Dwivedi
2023-01-04 22:51 ` [PATCH bpf-next v1 0/8] Dynptr fixes Andrii Nakryiko
2023-01-12  1:08   ` Kumar Kartikeya Dwivedi
2023-01-13 22:31     ` Andrii Nakryiko
