* [PATCH v2 bpf-next 0/3] Refactor verifier prune and jump point handling
@ 2022-12-06 23:33 Andrii Nakryiko
  2022-12-06 23:33 ` [PATCH v2 bpf-next 1/3] bpf: decouple prune and jump points Andrii Nakryiko
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Andrii Nakryiko @ 2022-12-06 23:33 UTC (permalink / raw)
  To: bpf, ast, daniel; +Cc: andrii, kernel-team

Disentangle prune and jump points in BPF verifier code. They are conceptually
independent but currently coupled together. This small patch set refactors the
related code and makes it possible to mark an instruction as a pruning point
or a jump point independently.

Besides conceptual cleanliness, this makes it possible to remove unnecessary
jump points (potentially saving a tiny bit of performance and memory usage)
and, more importantly, to cleanly extend special pruning points, similarly to
how it's done for BPF_FUNC_timer_set_callback. This will be used by future
patches implementing open-coded BPF iterators.

v1->v2:
  - clarified patch #3 commit message and a comment in the code (John);
  - added back mark_jmp_point() for the insn right after a subprog call to
    record the non-linear implicit jump from BPF_EXIT back to the insn
    right after CALL <subprog>.

Andrii Nakryiko (3):
  bpf: decouple prune and jump points
  bpf: mostly decouple jump history management from is_state_visited()
  bpf: remove unnecessary prune and jump points

 include/linux/bpf_verifier.h |   1 +
 kernel/bpf/verifier.c        | 108 ++++++++++++++++++++---------------
 2 files changed, 64 insertions(+), 45 deletions(-)

-- 
2.30.2



* [PATCH v2 bpf-next 1/3] bpf: decouple prune and jump points
  2022-12-06 23:33 [PATCH v2 bpf-next 0/3] Refactor verifier prune and jump point handling Andrii Nakryiko
@ 2022-12-06 23:33 ` Andrii Nakryiko
  2022-12-06 23:33 ` [PATCH v2 bpf-next 2/3] bpf: mostly decouple jump history management from is_state_visited() Andrii Nakryiko
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 9+ messages in thread
From: Andrii Nakryiko @ 2022-12-06 23:33 UTC (permalink / raw)
  To: bpf, ast, daniel; +Cc: andrii, kernel-team, John Fastabend

The BPF verifier marks some instructions as prune points. Currently these
prune points serve two purposes.

A prune point is a point where the verifier tries to find a previously
verified state and checks the current state's equivalence to it, short
circuiting verification of the current code path.

But currently it's also a point where jump history, used for precision
backtracking, is updated, so that non-linear flow of execution can be
properly backtracked.
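
For reference, jump history is stored in each verifier state as an array of
(prev_idx, idx) pairs. Roughly, the relevant declarations in
include/linux/bpf_verifier.h look like this (a simplified sketch, not the
verbatim header):

  struct bpf_idx_pair {
          u32 prev_idx; /* insn we arrived from, possibly non-linearly */
          u32 idx;      /* insn at which this entry was recorded */
  };

  /* inside struct bpf_verifier_state */
  struct bpf_idx_pair *jmp_history;
  u32 jmp_history_cnt;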

Such coupling is coincidental and unnecessary. Some prune points are not
part of any non-linear jump path, so they don't need a jump history
update. Conversely, not all instructions that have to be recorded in jump
history are necessarily good prune points.

This patch splits prune and jump points into independent flags. For now
all prune points are also marked as jump points to minimize the amount of
changes in this patch, but the next patch will optimize prune vs jmp
point placement.

No functional changes are intended.

Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 include/linux/bpf_verifier.h |  1 +
 kernel/bpf/verifier.c        | 57 +++++++++++++++++++++++++++---------
 2 files changed, 44 insertions(+), 14 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index c07b351a5bc7..70d06a99f0b8 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -452,6 +452,7 @@ struct bpf_insn_aux_data {
 	/* below fields are initialized once */
 	unsigned int orig_idx; /* original instruction index */
 	bool prune_point;
+	bool jmp_point;
 };
 
 #define MAX_USED_MAPS 64 /* max number of maps accessed by one eBPF program */
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1d51bd9596da..cb72536fbf5d 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2525,6 +2525,16 @@ static int check_reg_arg(struct bpf_verifier_env *env, u32 regno,
 	return 0;
 }
 
+static void mark_jmp_point(struct bpf_verifier_env *env, int idx)
+{
+	env->insn_aux_data[idx].jmp_point = true;
+}
+
+static bool is_jmp_point(struct bpf_verifier_env *env, int insn_idx)
+{
+	return env->insn_aux_data[insn_idx].jmp_point;
+}
+
 /* for any branch, call, exit record the history of jmps in the given state */
 static int push_jmp_history(struct bpf_verifier_env *env,
 			    struct bpf_verifier_state *cur)
@@ -2533,6 +2543,9 @@ static int push_jmp_history(struct bpf_verifier_env *env,
 	struct bpf_idx_pair *p;
 	size_t alloc_size;
 
+	if (!is_jmp_point(env, env->insn_idx))
+		return 0;
+
 	cnt++;
 	alloc_size = kmalloc_size_roundup(size_mul(cnt, sizeof(*p)));
 	p = krealloc(cur->jmp_history, alloc_size, GFP_USER);
@@ -12135,11 +12148,16 @@ static struct bpf_verifier_state_list **explored_state(
 	return &env->explored_states[(idx ^ state->callsite) % state_htab_size(env)];
 }
 
-static void init_explored_state(struct bpf_verifier_env *env, int idx)
+static void mark_prune_point(struct bpf_verifier_env *env, int idx)
 {
 	env->insn_aux_data[idx].prune_point = true;
 }
 
+static bool is_prune_point(struct bpf_verifier_env *env, int insn_idx)
+{
+	return env->insn_aux_data[insn_idx].prune_point;
+}
+
 enum {
 	DONE_EXPLORING = 0,
 	KEEP_EXPLORING = 1,
@@ -12168,9 +12186,11 @@ static int push_insn(int t, int w, int e, struct bpf_verifier_env *env,
 		return -EINVAL;
 	}
 
-	if (e == BRANCH)
+	if (e == BRANCH) {
 		/* mark branch target for state pruning */
-		init_explored_state(env, w);
+		mark_prune_point(env, w);
+		mark_jmp_point(env, w);
+	}
 
 	if (insn_state[w] == 0) {
 		/* tree-edge */
@@ -12208,10 +12228,13 @@ static int visit_func_call_insn(int t, int insn_cnt,
 	if (ret)
 		return ret;
 
-	if (t + 1 < insn_cnt)
-		init_explored_state(env, t + 1);
+	if (t + 1 < insn_cnt) {
+		mark_prune_point(env, t + 1);
+		mark_jmp_point(env, t + 1);
+	}
 	if (visit_callee) {
-		init_explored_state(env, t);
+		mark_prune_point(env, t);
+		mark_jmp_point(env, t);
 		ret = push_insn(t, t + insns[t].imm + 1, BRANCH, env,
 				/* It's ok to allow recursion from CFG point of
 				 * view. __check_func_call() will do the actual
@@ -12245,13 +12268,15 @@ static int visit_insn(int t, int insn_cnt, struct bpf_verifier_env *env)
 		return DONE_EXPLORING;
 
 	case BPF_CALL:
-		if (insns[t].imm == BPF_FUNC_timer_set_callback)
+		if (insns[t].imm == BPF_FUNC_timer_set_callback) {
 			/* Mark this call insn to trigger is_state_visited() check
 			 * before call itself is processed by __check_func_call().
 			 * Otherwise new async state will be pushed for further
 			 * exploration.
 			 */
-			init_explored_state(env, t);
+			mark_prune_point(env, t);
+			mark_jmp_point(env, t);
+		}
 		return visit_func_call_insn(t, insn_cnt, insns, env,
 					    insns[t].src_reg == BPF_PSEUDO_CALL);
 
@@ -12269,18 +12294,22 @@ static int visit_insn(int t, int insn_cnt, struct bpf_verifier_env *env)
 		 * but it's marked, since backtracking needs
 		 * to record jmp history in is_state_visited().
 		 */
-		init_explored_state(env, t + insns[t].off + 1);
+		mark_prune_point(env, t + insns[t].off + 1);
+		mark_jmp_point(env, t + insns[t].off + 1);
 		/* tell verifier to check for equivalent states
 		 * after every call and jump
 		 */
-		if (t + 1 < insn_cnt)
-			init_explored_state(env, t + 1);
+		if (t + 1 < insn_cnt) {
+			mark_prune_point(env, t + 1);
+			mark_jmp_point(env, t + 1);
+		}
 
 		return ret;
 
 	default:
 		/* conditional jump with two edges */
-		init_explored_state(env, t);
+		mark_prune_point(env, t);
+		mark_jmp_point(env, t);
 		ret = push_insn(t, t + 1, FALLTHROUGH, env, true);
 		if (ret)
 			return ret;
@@ -13315,11 +13344,11 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 	bool add_new_state = env->test_state_freq ? true : false;
 
 	cur->last_insn_idx = env->prev_insn_idx;
-	if (!env->insn_aux_data[insn_idx].prune_point)
+	if (!is_prune_point(env, insn_idx))
 		/* this 'insn_idx' instruction wasn't marked, so we will not
 		 * be doing state search here
 		 */
-		return 0;
+		return push_jmp_history(env, cur);
 
 	/* bpf progs typically have pruning point every 4 instructions
 	 * http://vger.kernel.org/bpfconf2019.html#session-1
-- 
2.30.2



* [PATCH v2 bpf-next 2/3] bpf: mostly decouple jump history management from is_state_visited()
  2022-12-06 23:33 [PATCH v2 bpf-next 0/3] Refactor verifier prune and jump point handling Andrii Nakryiko
  2022-12-06 23:33 ` [PATCH v2 bpf-next 1/3] bpf: decouple prune and jump points Andrii Nakryiko
@ 2022-12-06 23:33 ` Andrii Nakryiko
  2022-12-06 23:33 ` [PATCH v2 bpf-next 3/3] bpf: remove unnecessary prune and jump points Andrii Nakryiko
  2022-12-07  3:30 ` [PATCH v2 bpf-next 0/3] Refactor verifier prune and jump point handling patchwork-bot+netdevbpf
  3 siblings, 0 replies; 9+ messages in thread
From: Andrii Nakryiko @ 2022-12-06 23:33 UTC (permalink / raw)
  To: bpf, ast, daniel; +Cc: andrii, kernel-team, John Fastabend

Jump history updating and state equivalence checks are conceptually
independent, so move push_jmp_history() out of is_state_visited(). Also
move the decision whether to perform state equivalence checks one layer
higher, into do_check(), so that is_state_visited() now unconditionally
performs state checks.

push_jmp_history() should be performed after state checks. There is just
one small non-uniformity: when is_state_visited() finds an already
validated equivalent state, it propagates precision marks to the current
state's parent chain. For this to work correctly, jump history has to be
updated first, so is_state_visited() does that internally.

But if no equivalent verified state is found, jump history has to be
updated in the newly cloned child state, so is_jmp_point() +
push_jmp_history() is performed after is_state_visited() returns zero,
which means "proceed with validation".
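
Roughly, the two jump history paths end up looking like this (a simplified
sketch; the do_check() side is in the diff below, the is_state_visited()
side is pre-existing code):

  /* in is_state_visited(), when an equivalent state is found: */
  err = propagate_liveness(env, &sl->state, cur);
  err = err ? : push_jmp_history(env, cur);     /* needed before precision */
  err = err ? : propagate_precision(env, &sl->state);
  if (err)
          return err;
  return 1;

  /* in do_check(), when verification proceeds past the prune point: */
  if (is_jmp_point(env, env->insn_idx)) {
          err = push_jmp_history(env, state);   /* record in current state */
          if (err)
                  return err;
  }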

There are no functional changes in this patch. The split is not strictly
necessary, but it feels right to decouple these two processes.

Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 kernel/bpf/verifier.c | 49 +++++++++++++++++++++++--------------------
 1 file changed, 26 insertions(+), 23 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index cb72536fbf5d..b1feb8d3c42e 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13343,13 +13343,6 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 	int i, j, err, states_cnt = 0;
 	bool add_new_state = env->test_state_freq ? true : false;
 
-	cur->last_insn_idx = env->prev_insn_idx;
-	if (!is_prune_point(env, insn_idx))
-		/* this 'insn_idx' instruction wasn't marked, so we will not
-		 * be doing state search here
-		 */
-		return push_jmp_history(env, cur);
-
 	/* bpf progs typically have pruning point every 4 instructions
 	 * http://vger.kernel.org/bpfconf2019.html#session-1
 	 * Do not add new state for future pruning if the verifier hasn't seen
@@ -13484,10 +13477,10 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 		env->max_states_per_insn = states_cnt;
 
 	if (!env->bpf_capable && states_cnt > BPF_COMPLEXITY_LIMIT_STATES)
-		return push_jmp_history(env, cur);
+		return 0;
 
 	if (!add_new_state)
-		return push_jmp_history(env, cur);
+		return 0;
 
 	/* There were no equivalent states, remember the current one.
 	 * Technically the current state is not proven to be safe yet,
@@ -13627,21 +13620,31 @@ static int do_check(struct bpf_verifier_env *env)
 			return -E2BIG;
 		}
 
-		err = is_state_visited(env, env->insn_idx);
-		if (err < 0)
-			return err;
-		if (err == 1) {
-			/* found equivalent state, can prune the search */
-			if (env->log.level & BPF_LOG_LEVEL) {
-				if (do_print_state)
-					verbose(env, "\nfrom %d to %d%s: safe\n",
-						env->prev_insn_idx, env->insn_idx,
-						env->cur_state->speculative ?
-						" (speculative execution)" : "");
-				else
-					verbose(env, "%d: safe\n", env->insn_idx);
+		state->last_insn_idx = env->prev_insn_idx;
+
+		if (is_prune_point(env, env->insn_idx)) {
+			err = is_state_visited(env, env->insn_idx);
+			if (err < 0)
+				return err;
+			if (err == 1) {
+				/* found equivalent state, can prune the search */
+				if (env->log.level & BPF_LOG_LEVEL) {
+					if (do_print_state)
+						verbose(env, "\nfrom %d to %d%s: safe\n",
+							env->prev_insn_idx, env->insn_idx,
+							env->cur_state->speculative ?
+							" (speculative execution)" : "");
+					else
+						verbose(env, "%d: safe\n", env->insn_idx);
+				}
+				goto process_bpf_exit;
 			}
-			goto process_bpf_exit;
+		}
+
+		if (is_jmp_point(env, env->insn_idx)) {
+			err = push_jmp_history(env, state);
+			if (err)
+				return err;
 		}
 
 		if (signal_pending(current))
-- 
2.30.2



* [PATCH v2 bpf-next 3/3] bpf: remove unnecessary prune and jump points
  2022-12-06 23:33 [PATCH v2 bpf-next 0/3] Refactor verifier prune and jump point handling Andrii Nakryiko
  2022-12-06 23:33 ` [PATCH v2 bpf-next 1/3] bpf: decouple prune and jump points Andrii Nakryiko
  2022-12-06 23:33 ` [PATCH v2 bpf-next 2/3] bpf: mostly decouple jump history management from is_state_visited() Andrii Nakryiko
@ 2022-12-06 23:33 ` Andrii Nakryiko
  2022-12-07  1:28   ` John Fastabend
  2022-12-07  3:17   ` Alexei Starovoitov
  2022-12-07  3:30 ` [PATCH v2 bpf-next 0/3] Refactor verifier prune and jump point handling patchwork-bot+netdevbpf
  3 siblings, 2 replies; 9+ messages in thread
From: Andrii Nakryiko @ 2022-12-06 23:33 UTC (permalink / raw)
  To: bpf, ast, daniel; +Cc: andrii, kernel-team

Don't mark some instructions as jump points when there are actually no
jumps and instructions are just processed sequentially. Such case is
handled naturally by precision backtracking logic without the need to
update jump history. See get_prev_insn_idx(). It goes back linearly by
one instruction, unless current top of jmp_history is pointing to
current instruction. In such case we use `st->jmp_history[cnt - 1].prev_idx`
to find instruction from which we jumped to the current instruction
non-linearly.
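
For context, get_prev_insn_idx() (not touched by this patch) implements that
logic roughly as follows (a simplified sketch of the existing helper):

  static int get_prev_insn_idx(struct bpf_verifier_state *st, int i,
                               u32 *history)
  {
          u32 cnt = *history;

          if (cnt && st->jmp_history[cnt - 1].idx == i) {
                  /* we arrived at insn i via a recorded non-linear jump */
                  i = st->jmp_history[cnt - 1].prev_idx;
                  (*history)--;
          } else {
                  /* otherwise the predecessor is simply the previous insn */
                  i--;
          }
          return i;
  }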

Also remove both jump and prune point marking for instruction right
after unconditional jumps, as program flow can get to the instruction
right after unconditional jump instruction only if there is a jump to
that instruction from somewhere else in the program. In such case we'll
mark such instruction as prune/jump point because it's a destination of
a jump.

This change results in no difference in the number of instructions or
states processed across Cilium and selftests programs.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 kernel/bpf/verifier.c | 34 ++++++++++------------------------
 1 file changed, 10 insertions(+), 24 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b1feb8d3c42e..4f163a28ab59 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12228,13 +12228,12 @@ static int visit_func_call_insn(int t, int insn_cnt,
 	if (ret)
 		return ret;
 
-	if (t + 1 < insn_cnt) {
-		mark_prune_point(env, t + 1);
-		mark_jmp_point(env, t + 1);
-	}
+	mark_prune_point(env, t + 1);
+	/* when we exit from subprog, we need to record non-linear history */
+	mark_jmp_point(env, t + 1);
+
 	if (visit_callee) {
 		mark_prune_point(env, t);
-		mark_jmp_point(env, t);
 		ret = push_insn(t, t + insns[t].imm + 1, BRANCH, env,
 				/* It's ok to allow recursion from CFG point of
 				 * view. __check_func_call() will do the actual
@@ -12268,15 +12267,13 @@ static int visit_insn(int t, int insn_cnt, struct bpf_verifier_env *env)
 		return DONE_EXPLORING;
 
 	case BPF_CALL:
-		if (insns[t].imm == BPF_FUNC_timer_set_callback) {
-			/* Mark this call insn to trigger is_state_visited() check
-			 * before call itself is processed by __check_func_call().
-			 * Otherwise new async state will be pushed for further
-			 * exploration.
+		if (insns[t].imm == BPF_FUNC_timer_set_callback)
+			/* Mark this call insn as a prune point to trigger
+			 * is_state_visited() check before call itself is
+			 * processed by __check_func_call(). Otherwise new
+			 * async state will be pushed for further exploration.
 			 */
 			mark_prune_point(env, t);
-			mark_jmp_point(env, t);
-		}
 		return visit_func_call_insn(t, insn_cnt, insns, env,
 					    insns[t].src_reg == BPF_PSEUDO_CALL);
 
@@ -12290,26 +12287,15 @@ static int visit_insn(int t, int insn_cnt, struct bpf_verifier_env *env)
 		if (ret)
 			return ret;
 
-		/* unconditional jmp is not a good pruning point,
-		 * but it's marked, since backtracking needs
-		 * to record jmp history in is_state_visited().
-		 */
 		mark_prune_point(env, t + insns[t].off + 1);
 		mark_jmp_point(env, t + insns[t].off + 1);
-		/* tell verifier to check for equivalent states
-		 * after every call and jump
-		 */
-		if (t + 1 < insn_cnt) {
-			mark_prune_point(env, t + 1);
-			mark_jmp_point(env, t + 1);
-		}
 
 		return ret;
 
 	default:
 		/* conditional jump with two edges */
 		mark_prune_point(env, t);
-		mark_jmp_point(env, t);
+
 		ret = push_insn(t, t + 1, FALLTHROUGH, env, true);
 		if (ret)
 			return ret;
-- 
2.30.2



* RE: [PATCH v2 bpf-next 3/3] bpf: remove unnecessary prune and jump points
  2022-12-06 23:33 ` [PATCH v2 bpf-next 3/3] bpf: remove unnecessary prune and jump points Andrii Nakryiko
@ 2022-12-07  1:28   ` John Fastabend
  2022-12-07  3:17   ` Alexei Starovoitov
  1 sibling, 0 replies; 9+ messages in thread
From: John Fastabend @ 2022-12-07  1:28 UTC (permalink / raw)
  To: Andrii Nakryiko, bpf, ast, daniel; +Cc: andrii, kernel-team

Andrii Nakryiko wrote:
> Don't mark some instructions as jump points when there are actually no
> jumps and instructions are just processed sequentially. Such case is
> handled naturally by precision backtracking logic without the need to
> update jump history. See get_prev_insn_idx(). It goes back linearly by
> one instruction, unless current top of jmp_history is pointing to
> current instruction. In such case we use `st->jmp_history[cnt - 1].prev_idx`
> to find instruction from which we jumped to the current instruction
> non-linearly.
> 
> Also remove both jump and prune point marking for instruction right
> after unconditional jumps, as program flow can get to the instruction
> right after unconditional jump instruction only if there is a jump to
> that instruction from somewhere else in the program. In such case we'll
> mark such instruction as prune/jump point because it's a destination of
> a jump.
> 
> This change results in no difference in the number of instructions or
> states processed across Cilium and selftests programs.
> 
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---

Thanks.

Acked-by: John Fastabend <john.fastabend@gmail.com>


* Re: [PATCH v2 bpf-next 3/3] bpf: remove unnecessary prune and jump points
  2022-12-06 23:33 ` [PATCH v2 bpf-next 3/3] bpf: remove unnecessary prune and jump points Andrii Nakryiko
  2022-12-07  1:28   ` John Fastabend
@ 2022-12-07  3:17   ` Alexei Starovoitov
  2022-12-07 18:36     ` Andrii Nakryiko
  1 sibling, 1 reply; 9+ messages in thread
From: Alexei Starovoitov @ 2022-12-07  3:17 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: bpf, ast, daniel, kernel-team

On Tue, Dec 06, 2022 at 03:33:45PM -0800, Andrii Nakryiko wrote:
> Don't mark some instructions as jump points when there are actually no
> jumps and instructions are just processed sequentially. Such case is
> handled naturally by precision backtracking logic without the need to
> update jump history. See get_prev_insn_idx(). It goes back linearly by
> one instruction, unless current top of jmp_history is pointing to
> current instruction. In such case we use `st->jmp_history[cnt - 1].prev_idx`
> to find instruction from which we jumped to the current instruction
> non-linearly.
> 
> Also remove both jump and prune point marking for instruction right
> after unconditional jumps, as program flow can get to the instruction
> right after unconditional jump instruction only if there is a jump to
> that instruction from somewhere else in the program. In such case we'll
> mark such instruction as prune/jump point because it's a destination of
> a jump.
> 
> This change results in no difference in the number of instructions or
> states processed across Cilium and selftests programs.
> 
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---
>  kernel/bpf/verifier.c | 34 ++++++++++------------------------
>  1 file changed, 10 insertions(+), 24 deletions(-)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index b1feb8d3c42e..4f163a28ab59 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -12228,13 +12228,12 @@ static int visit_func_call_insn(int t, int insn_cnt,
>  	if (ret)
>  		return ret;
>  
> -	if (t + 1 < insn_cnt) {
> -		mark_prune_point(env, t + 1);
> -		mark_jmp_point(env, t + 1);
> -	}
> +	mark_prune_point(env, t + 1);
> +	/* when we exit from subprog, we need to record non-linear history */
> +	mark_jmp_point(env, t + 1);
> +

With this we probably should remove 'insn_cnt' from visit_func_call_insn(),
and in turn from visit_insn() as well.
Pls consider it as a follow-up.


* Re: [PATCH v2 bpf-next 0/3] Refactor verifier prune and jump point handling
  2022-12-06 23:33 [PATCH v2 bpf-next 0/3] Refactor verifier prune and jump point handling Andrii Nakryiko
                   ` (2 preceding siblings ...)
  2022-12-06 23:33 ` [PATCH v2 bpf-next 3/3] bpf: remove unnecessary prune and jump points Andrii Nakryiko
@ 2022-12-07  3:30 ` patchwork-bot+netdevbpf
  3 siblings, 0 replies; 9+ messages in thread
From: patchwork-bot+netdevbpf @ 2022-12-07  3:30 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: bpf, ast, daniel, kernel-team

Hello:

This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Tue, 6 Dec 2022 15:33:42 -0800 you wrote:
> Disentangle prune and jump points in BPF verifier code. They are conceptually
> independent but currently coupled together. This small patch set refactors the
> related code and makes it possible to mark an instruction as a pruning point
> or a jump point independently.
> 
> Besides conceptual cleanliness, this makes it possible to remove unnecessary
> jump points (potentially saving a tiny bit of performance and memory usage)
> and, more importantly, to cleanly extend special pruning points, similarly to
> how it's done for BPF_FUNC_timer_set_callback. This will be used by future
> patches implementing open-coded BPF iterators.
> 
> [...]

Here is the summary with links:
  - [v2,bpf-next,1/3] bpf: decouple prune and jump points
    https://git.kernel.org/bpf/bpf-next/c/bffdeaa8a5af
  - [v2,bpf-next,2/3] bpf: mostly decouple jump history management from is_state_visited()
    https://git.kernel.org/bpf/bpf-next/c/a095f421057e
  - [v2,bpf-next,3/3] bpf: remove unnecessary prune and jump points
    https://git.kernel.org/bpf/bpf-next/c/618945fbed50

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




* Re: [PATCH v2 bpf-next 3/3] bpf: remove unnecessary prune and jump points
  2022-12-07  3:17   ` Alexei Starovoitov
@ 2022-12-07 18:36     ` Andrii Nakryiko
  2022-12-07 18:39       ` Alexei Starovoitov
  0 siblings, 1 reply; 9+ messages in thread
From: Andrii Nakryiko @ 2022-12-07 18:36 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: Andrii Nakryiko, bpf, ast, daniel, kernel-team

On Tue, Dec 6, 2022 at 7:17 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Tue, Dec 06, 2022 at 03:33:45PM -0800, Andrii Nakryiko wrote:
> > Don't mark some instructions as jump points when there are actually no
> > jumps and instructions are just processed sequentially. Such case is
> > handled naturally by precision backtracking logic without the need to
> > update jump history. See get_prev_insn_idx(). It goes back linearly by
> > one instruction, unless current top of jmp_history is pointing to
> > current instruction. In such case we use `st->jmp_history[cnt - 1].prev_idx`
> > to find instruction from which we jumped to the current instruction
> > non-linearly.
> >
> > Also remove both jump and prune point marking for instruction right
> > after unconditional jumps, as program flow can get to the instruction
> > right after unconditional jump instruction only if there is a jump to
> > that instruction from somewhere else in the program. In such case we'll
> > mark such instruction as prune/jump point because it's a destination of
> > a jump.
> >
> > This change results in no difference in the number of instructions or
> > states processed across Cilium and selftests programs.
> >
> > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > ---
> >  kernel/bpf/verifier.c | 34 ++++++++++------------------------
> >  1 file changed, 10 insertions(+), 24 deletions(-)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index b1feb8d3c42e..4f163a28ab59 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -12228,13 +12228,12 @@ static int visit_func_call_insn(int t, int insn_cnt,
> >       if (ret)
> >               return ret;
> >
> > -     if (t + 1 < insn_cnt) {
> > -             mark_prune_point(env, t + 1);
> > -             mark_jmp_point(env, t + 1);
> > -     }
> > +     mark_prune_point(env, t + 1);
> > +     /* when we exit from subprog, we need to record non-linear history */
> > +     mark_jmp_point(env, t + 1);
> > +
>
> With this we probably should remove 'insn_cnt' from visit_func_call_insn(),
> and in turn from visit_insn() as well.
> Pls consider it as a follow-up.

Yep, will do, didn't notice it's not needed anymore.

BTW, no one asked why it was ok to drop the `if (t + 1 < insn_cnt)`
check, which surprised me a bit. But this is because push_insn() already
validates that t + 1 is in range and doesn't go beyond the insns array,
so this check was not needed in the first place.
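
The check in question sits near the top of push_insn(); roughly (a simplified
excerpt from kernel/bpf/verifier.c):

  /* w is the target insn index of the edge being pushed */
  if (w < 0 || w >= env->prog->len) {
          verbose_linfo(env, t, "%d: ", t);
          verbose(env, "jump out of range from insn %d to %d\n", t, w);
          return -EINVAL;
  }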


* Re: [PATCH v2 bpf-next 3/3] bpf: remove unnecessary prune and jump points
  2022-12-07 18:36     ` Andrii Nakryiko
@ 2022-12-07 18:39       ` Alexei Starovoitov
  0 siblings, 0 replies; 9+ messages in thread
From: Alexei Starovoitov @ 2022-12-07 18:39 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Andrii Nakryiko, bpf, Alexei Starovoitov, Daniel Borkmann, Kernel Team

On Wed, Dec 7, 2022 at 10:36 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Tue, Dec 6, 2022 at 7:17 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Tue, Dec 06, 2022 at 03:33:45PM -0800, Andrii Nakryiko wrote:
> > > Don't mark some instructions as jump points when there are actually no
> > > jumps and instructions are just processed sequentially. Such case is
> > > handled naturally by precision backtracking logic without the need to
> > > update jump history. See get_prev_insn_idx(). It goes back linearly by
> > > one instruction, unless current top of jmp_history is pointing to
> > > current instruction. In such case we use `st->jmp_history[cnt - 1].prev_idx`
> > > to find instruction from which we jumped to the current instruction
> > > non-linearly.
> > >
> > > Also remove both jump and prune point marking for instruction right
> > > after unconditional jumps, as program flow can get to the instruction
> > > right after unconditional jump instruction only if there is a jump to
> > > that instruction from somewhere else in the program. In such case we'll
> > > mark such instruction as prune/jump point because it's a destination of
> > > a jump.
> > >
> > > This change results in no difference in the number of instructions or
> > > states processed across Cilium and selftests programs.
> > >
> > > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > > ---
> > >  kernel/bpf/verifier.c | 34 ++++++++++------------------------
> > >  1 file changed, 10 insertions(+), 24 deletions(-)
> > >
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index b1feb8d3c42e..4f163a28ab59 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -12228,13 +12228,12 @@ static int visit_func_call_insn(int t, int insn_cnt,
> > >       if (ret)
> > >               return ret;
> > >
> > > -     if (t + 1 < insn_cnt) {
> > > -             mark_prune_point(env, t + 1);
> > > -             mark_jmp_point(env, t + 1);
> > > -     }
> > > +     mark_prune_point(env, t + 1);
> > > +     /* when we exit from subprog, we need to record non-linear history */
> > > +     mark_jmp_point(env, t + 1);
> > > +
> >
> > With this we probably should remove 'insn_cnt' from visit_func_call_insn(),
> > and in turn from visit_insn() as well.
> > Pls consider it as a follow-up.
>
> Yep, will do, didn't notice it's not needed anymore.

Thanks

> BTW, no one asked why it was ok to drop the `if (t + 1 < insn_cnt)`
> check, which surprised me a bit. But this is because push_insn() already
> validates that t + 1 is in range and doesn't go beyond the insns array,
> so this check was not needed in the first place.

Correct. I read the code the same way.
I was a bit concerned whether insn_cnt is always equal to env->prog->len.
I thought we were tightening insn_cnt to be the subprog insn_cnt only,
but that doesn't look to be the case.
So it looks safe to remove that 'if', and hence insn_cnt is useless too,
since it's the same as env->prog->len.

