* [RFC/PoC PATCH bpf-next 00/12] bounded loops for eBPF
@ 2018-02-23 17:35 Edward Cree
  2018-02-23 17:39 ` [RFC PATCH bpf-next 01/12] bpf/verifier: validate func_calls by marking at do_check() time Edward Cree
                   ` (11 more replies)
  0 siblings, 12 replies; 17+ messages in thread
From: Edward Cree @ 2018-02-23 17:35 UTC (permalink / raw)
  To: netdev, Alexei Starovoitov, Daniel Borkmann

The main object of this patch series is to support verification of eBPF
 programs with bounded loops.  Only bounds derived from JLT (unsigned <)
 with the taken branch in the loop are supported, but it should be clear
 how others could be added.
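
For illustration, here is a minimal sketch of the kind of loop this aims to
 accept (illustrative only, not one of the selftests added in patch #8); the
 JLT back-edge is the taken branch and the fall-through path leaves the loop:

	BPF_MOV64_IMM(BPF_REG_1, 0),             /* i = 0 */
	BPF_MOV64_IMM(BPF_REG_0, 0),             /* return value */
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),    /* i += 1 */
	BPF_JMP_IMM(BPF_JLT, BPF_REG_1, 10, -2), /* if i < 10, back to the ADD */
	BPF_EXIT_INSN(),
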
Testing of these changes has consisted only of test_verifier (including
 a small number of loop tests added in patch #8).  Much more testing
 (including more test_verifier test cases) would be needed before these
 patches could be considered ready to apply.

That said, some component parts of the series are useful outside of the
 context of bounded loop handling, and may be worth applying separately.

Patches #1 and #2 (#2 being the selftest updates) are a nice cleanup and
 simplification of subprog marking.  I posted them a couple of weeks ago as
 the RFC "bpf/verifier: simplify subprog tracking"; I have not yet integrated
 the comments and suggestions from replies to that posting (so there is no
 need to repeat them!).
Patch #3 is a cleanup we've wanted to do since func_calls went in,
 putting parent pointers in the registers instead of the state so that
 we don't need skip_callee() machinery.  Again, worth doing on its own
 even if the bounded loops stuff is rejected.
Patch #4 moves the saved return-addresses in bpf_func_state down one
 frame, so that we don't store a meaningless callsite of -1 in the
 outermost frame and we do store a state's current insn_idx in the
 innermost frame.  This is necessary for my approach to loop handling,
 which needs to identify the location of jump insns when a loop is
 detected.
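 (For instance, with hypothetical insn numbers: if main calls a function at
 insn 5, previously frame[0].callsite was -1 and frame[1].callsite was 5;
 after this patch frame[0].insn_idx is 5, i.e. the call insn, and
 frame[1].insn_idx tracks the insn currently being processed in the callee.)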
Patch #5 adds a new parent state pointer into bpf_func_state (it should
 really be in bpf_verifier_state; patch #10 later works out how to do that)
 and walks back along these pointers to detect loops each time a state list
 mark is visited.  This replaces the static loop detection in check_cfg().
The verifier already declined to walk impossible JEQ/JNE branches; patch #6
 extends this to greater-than/less-than type branches.  As well as reducing
 the number of paths walked and preventing inconsistent min/max state
 (potentially making this another patch that is useful outside of the
 series), it means that when we walk around a bounded loop we can eventually
 determine that we're done and stop walking it.
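
As an illustration (again not taken from the patches), with patch #6 the
 verifier need not walk the taken branch below, since r1 is known to be 3
 and the JGE can therefore never be true:

	BPF_MOV64_IMM(BPF_REG_1, 3),
	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 10, 2), /* impossible: 3 < 10 */
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	BPF_MOV64_IMM(BPF_REG_0, 1),            /* only dynamically dead */
	BPF_EXIT_INSN(),
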
Patch #7 implements the key 'boundedness detection', looking at the
 register state last time through the loop and ensuring that the loop
 is on course to terminate in a reasonable number of iterations.  The
 behaviour could be improved by having it store some state about what
 the boundedness check concluded (e.g. the delta last time through the
 loop) in the new explored_state.
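 (As an illustrative case: if on successive visits to the JLT back-edge the
 loop register's known minimum has advanced from 0 to 2 to 4 while the bound
 stays at 100, the loop is closing on its bound and can be expected to finish
 within roughly fifty more iterations.)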
Patch #8 adds some tests for bounded loops, mainly either probing for
 bugs that existed in early versions of the code, or testing constructs
 which are (or were) beyond the verifier's ability to understand.  As
 noted above, there are not nearly enough tests here yet.
Patch #9 is a better way to distinguish between states which have been
 shown to safely reach an exit, and states whose continuations are
 still being explored (since the latter cannot be used for pruning),
 rather than just distrusting all states which appear in a loop body.
Patches #10 and #11 improve upon some suboptimal code from earlier in
 the series, mostly from patch #5.  I haven't yet had time to fold them
 into the earlier patches.  Patch #12 updates a test to match.

  bpf/verifier: validate func_calls by marking at do_check() time
  bpf/verifier: update selftests
  bpf/verifier: per-register parent pointers
  bpf/verifier: store insn_idx instead of callsite in bpf_func_state
  bpf/verifier: detect loops dynamically rather than statically
  bpf/verifier: do not walk impossible branches
  bpf/verifier: allow bounded loops with JLT/true back-edge
  bpf/verifier: selftests for bounded loops
  bpf/verifier: count still-live children of explored_states
  bpf/verifier: parent state pointer is not per-frame
  bpf/verifier: better detection of statically unreachable code
  bpf/verifier: update selftests

 include/linux/bpf_verifier.h                |   46 +-
 kernel/bpf/verifier.c                       | 1267 +++++++++++++++------------
 tools/testing/selftests/bpf/test_verifier.c |  248 +++++-
 3 files changed, 949 insertions(+), 612 deletions(-)


* [RFC PATCH bpf-next 01/12] bpf/verifier: validate func_calls by marking at do_check() time
  2018-02-23 17:35 [RFC/PoC PATCH bpf-next 00/12] bounded loops for eBPF Edward Cree
@ 2018-02-23 17:39 ` Edward Cree
  2018-02-23 17:39 ` [RFC PATCH bpf-next 02/12] bpf/verifier: update selftests Edward Cree
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Edward Cree @ 2018-02-23 17:39 UTC (permalink / raw)
  To: netdev, Alexei Starovoitov, Daniel Borkmann

This removes a couple of passes from the verifier: one that checks subprogs
 don't overlap etc., and one that computes the max stack depth (which is now
 done by topologically sorting the call graph).
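
For example (hypothetical functions and numbers, not from the patch): if
 main uses 64 bytes of stack and calls A (128 bytes) and B (32 bytes), and A
 also calls B, the sort visits main, then A (once its only caller has been
 visited), then B, giving total_stack_depth(A) = 64 + 128 = 192 and
 total_stack_depth(B) = max(64 + 32, 192 + 32) = 224, each checked against
 MAX_BPF_STACK as the subprog is visited.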

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
As noted in the cover letter, I haven't yet integrated the feedback I got the
 first time I posted this patch.

 include/linux/bpf_verifier.h |  24 ++-
 kernel/bpf/verifier.c        | 425 +++++++++++++++++++++++--------------------
 2 files changed, 242 insertions(+), 207 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 6b66cd1aa0b9..0387e0c61161 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -146,6 +146,8 @@ struct bpf_insn_aux_data {
 		s32 call_imm;			/* saved imm field of call insn */
 	};
 	int ctx_field_size; /* the ctx field size for load insn, maybe 0 */
+	u16 subprogno; /* subprog in which this insn resides, valid iff @seen */
+	u16 subprog_off; /* insn_idx within subprog, computed in jit_subprogs */
 	bool seen; /* this insn was processed by the verifier */
 };
 
@@ -168,6 +170,15 @@ static inline bool bpf_verifier_log_full(const struct bpf_verifer_log *log)
 
 #define BPF_MAX_SUBPROGS 256
 
+struct bpf_subprog_info {
+	/* which other subprogs does this one directly call? */
+	DECLARE_BITMAP(callees, BPF_MAX_SUBPROGS);
+	u32 start; /* insn idx of function entry point */
+	u16 stack_depth; /* max. stack depth used by this function */
+	u16 total_stack_depth; /* max. stack depth used by entire call chain */
+	u16 len; /* #insns in this subprog */
+};
+
 /* single container for all structs
  * one verifier_env per bpf_check() call
  */
@@ -186,20 +197,23 @@ struct bpf_verifier_env {
 	bool seen_direct_write;
 	struct bpf_insn_aux_data *insn_aux_data; /* array of per-insn state */
 	struct bpf_verifer_log log;
-	u32 subprog_starts[BPF_MAX_SUBPROGS];
-	/* computes the stack depth of each bpf function */
-	u16 subprog_stack_depth[BPF_MAX_SUBPROGS + 1];
+	struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS];
 	u32 subprog_cnt;
 };
 
 __printf(2, 3) void bpf_verifier_log_write(struct bpf_verifier_env *env,
 					   const char *fmt, ...);
 
-static inline struct bpf_reg_state *cur_regs(struct bpf_verifier_env *env)
+static inline struct bpf_func_state *cur_frame(struct bpf_verifier_env *env)
 {
 	struct bpf_verifier_state *cur = env->cur_state;
 
-	return cur->frame[cur->curframe]->regs;
+	return cur->frame[cur->curframe];
+}
+
+static inline struct bpf_reg_state *cur_regs(struct bpf_verifier_env *env)
+{
+	return cur_frame(env)->regs;
 }
 
 int bpf_prog_offload_verifier_prep(struct bpf_verifier_env *env);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5fb69a85d967..2dc69fb3bfbe 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -728,111 +728,49 @@ enum reg_arg_type {
 	DST_OP_NO_MARK	/* same as above, check only, don't mark */
 };
 
-static int cmp_subprogs(const void *a, const void *b)
+static int find_subprog(struct bpf_verifier_env *env, int insn_idx)
 {
-	return *(int *)a - *(int *)b;
-}
-
-static int find_subprog(struct bpf_verifier_env *env, int off)
-{
-	u32 *p;
+	struct bpf_insn_aux_data *aux;
+	int insn_cnt = env->prog->len;
+	u32 subprogno;
 
-	p = bsearch(&off, env->subprog_starts, env->subprog_cnt,
-		    sizeof(env->subprog_starts[0]), cmp_subprogs);
-	if (!p)
+	if (insn_idx >= insn_cnt || insn_idx < 0) {
+		verbose(env, "find_subprog of invalid insn_idx %d\n", insn_idx);
+		return -EINVAL;
+	}
+	aux = &env->insn_aux_data[insn_idx];
+	if (!aux->seen) /* haven't visited this line yet */
 		return -ENOENT;
-	return p - env->subprog_starts;
-
+	subprogno = aux->subprogno;
+	/* validate that we are at start of subprog */
+	if (env->subprog_info[subprogno].start != insn_idx) {
+		verbose(env, "insn_idx %d is in subprog %u but that starts at %d\n",
+			insn_idx, subprogno, env->subprog_info[subprogno].start);
+		return -EINVAL;
+	}
+	return subprogno;
 }
 
 static int add_subprog(struct bpf_verifier_env *env, int off)
 {
 	int insn_cnt = env->prog->len;
+	struct bpf_subprog_info *info;
 	int ret;
 
 	if (off >= insn_cnt || off < 0) {
 		verbose(env, "call to invalid destination\n");
 		return -EINVAL;
 	}
-	ret = find_subprog(env, off);
-	if (ret >= 0)
-		return 0;
 	if (env->subprog_cnt >= BPF_MAX_SUBPROGS) {
 		verbose(env, "too many subprograms\n");
 		return -E2BIG;
 	}
-	env->subprog_starts[env->subprog_cnt++] = off;
-	sort(env->subprog_starts, env->subprog_cnt,
-	     sizeof(env->subprog_starts[0]), cmp_subprogs, NULL);
-	return 0;
-}
-
-static int check_subprogs(struct bpf_verifier_env *env)
-{
-	int i, ret, subprog_start, subprog_end, off, cur_subprog = 0;
-	struct bpf_insn *insn = env->prog->insnsi;
-	int insn_cnt = env->prog->len;
-
-	/* determine subprog starts. The end is one before the next starts */
-	for (i = 0; i < insn_cnt; i++) {
-		if (insn[i].code != (BPF_JMP | BPF_CALL))
-			continue;
-		if (insn[i].src_reg != BPF_PSEUDO_CALL)
-			continue;
-		if (!env->allow_ptr_leaks) {
-			verbose(env, "function calls to other bpf functions are allowed for root only\n");
-			return -EPERM;
-		}
-		if (bpf_prog_is_dev_bound(env->prog->aux)) {
-			verbose(env, "function calls in offloaded programs are not supported yet\n");
-			return -EINVAL;
-		}
-		ret = add_subprog(env, i + insn[i].imm + 1);
-		if (ret < 0)
-			return ret;
-	}
-
-	if (env->log.level > 1)
-		for (i = 0; i < env->subprog_cnt; i++)
-			verbose(env, "func#%d @%d\n", i, env->subprog_starts[i]);
-
-	/* now check that all jumps are within the same subprog */
-	subprog_start = 0;
-	if (env->subprog_cnt == cur_subprog)
-		subprog_end = insn_cnt;
-	else
-		subprog_end = env->subprog_starts[cur_subprog++];
-	for (i = 0; i < insn_cnt; i++) {
-		u8 code = insn[i].code;
-
-		if (BPF_CLASS(code) != BPF_JMP)
-			goto next;
-		if (BPF_OP(code) == BPF_EXIT || BPF_OP(code) == BPF_CALL)
-			goto next;
-		off = i + insn[i].off + 1;
-		if (off < subprog_start || off >= subprog_end) {
-			verbose(env, "jump out of range from insn %d to %d\n", i, off);
-			return -EINVAL;
-		}
-next:
-		if (i == subprog_end - 1) {
-			/* to avoid fall-through from one subprog into another
-			 * the last insn of the subprog should be either exit
-			 * or unconditional jump back
-			 */
-			if (code != (BPF_JMP | BPF_EXIT) &&
-			    code != (BPF_JMP | BPF_JA)) {
-				verbose(env, "last insn is not an exit or jmp\n");
-				return -EINVAL;
-			}
-			subprog_start = subprog_end;
-			if (env->subprog_cnt == cur_subprog)
-				subprog_end = insn_cnt;
-			else
-				subprog_end = env->subprog_starts[cur_subprog++];
-		}
-	}
-	return 0;
+	ret = env->subprog_cnt++;
+	info = &env->subprog_info[ret];
+	info->start = off;
+	info->stack_depth = 0;
+	memset(info->callees, 0, sizeof(info->callees));
+	return ret;
 }
 
 static
@@ -1454,80 +1392,83 @@ static int update_stack_depth(struct bpf_verifier_env *env,
 			      const struct bpf_func_state *func,
 			      int off)
 {
-	u16 stack = env->subprog_stack_depth[func->subprogno];
+	struct bpf_subprog_info *info;
 
-	if (stack >= -off)
-		return 0;
+	if (!func) return -EFAULT;
+	if (func->subprogno >= BPF_MAX_SUBPROGS) return -E2BIG;
+	info = &env->subprog_info[func->subprogno];
 
 	/* update known max for given subprogram */
-	env->subprog_stack_depth[func->subprogno] = -off;
+	info->stack_depth = max_t(u16, info->stack_depth, -off);
 	return 0;
 }
 
-/* starting from main bpf function walk all instructions of the function
- * and recursively walk all callees that given function can call.
- * Ignore jump and exit insns.
- * Since recursion is prevented by check_cfg() this algorithm
- * only needs a local stack of MAX_CALL_FRAMES to remember callsites
+/* Topologically sort the call graph, and thereby determine the maximum stack
+ * depth of each subprog's worst-case call chain.  Store in total_stack_depth.
  */
 static int check_max_stack_depth(struct bpf_verifier_env *env)
 {
-	int depth = 0, frame = 0, subprog = 0, i = 0, subprog_end;
-	struct bpf_insn *insn = env->prog->insnsi;
-	int insn_cnt = env->prog->len;
-	int ret_insn[MAX_CALL_FRAMES];
-	int ret_prog[MAX_CALL_FRAMES];
-
-process_func:
-	/* round up to 32-bytes, since this is granularity
-	 * of interpreter stack size
-	 */
-	depth += round_up(max_t(u32, env->subprog_stack_depth[subprog], 1), 32);
-	if (depth > MAX_BPF_STACK) {
-		verbose(env, "combined stack size of %d calls is %d. Too large\n",
-			frame + 1, depth);
-		return -EACCES;
-	}
-continue_func:
-	if (env->subprog_cnt == subprog)
-		subprog_end = insn_cnt;
-	else
-		subprog_end = env->subprog_starts[subprog];
-	for (; i < subprog_end; i++) {
-		if (insn[i].code != (BPF_JMP | BPF_CALL))
-			continue;
-		if (insn[i].src_reg != BPF_PSEUDO_CALL)
-			continue;
-		/* remember insn and function to return to */
-		ret_insn[frame] = i + 1;
-		ret_prog[frame] = subprog;
-
-		/* find the callee */
-		i = i + insn[i].imm + 1;
-		subprog = find_subprog(env, i);
-		if (subprog < 0) {
-			WARN_ONCE(1, "verifier bug. No program starts at insn %d\n",
-				  i);
-			return -EFAULT;
+	DECLARE_BITMAP(frontier, BPF_MAX_SUBPROGS) = {0};
+	DECLARE_BITMAP(visited, BPF_MAX_SUBPROGS) = {0};
+	int subprog, i, j;
+
+	/* subprog 0 has no incoming edges, and should be the only such */
+	__set_bit(0, frontier);
+	env->subprog_info[0].total_stack_depth = env->subprog_info[0].stack_depth;
+
+	while (true) {
+		/* select a frontier node */
+		subprog = find_first_bit(frontier, BPF_MAX_SUBPROGS);
+		if (subprog >= BPF_MAX_SUBPROGS) /* frontier is empty, done */
+			break;
+		/* remove it from the frontier */
+		__clear_bit(subprog, frontier);
+		/* validate its total stack depth */
+		if (env->subprog_info[subprog].total_stack_depth > MAX_BPF_STACK) {
+			verbose(env, "combined stack size of calls to %d (at insn %d) is %d.  Too large\n",
+				subprog, env->subprog_info[subprog].start,
+				env->subprog_info[subprog].total_stack_depth);
+			return -EACCES;
 		}
-		subprog++;
-		frame++;
-		if (frame >= MAX_CALL_FRAMES) {
-			WARN_ONCE(1, "verifier bug. Call stack is too deep\n");
-			return -EFAULT;
+		if (env->log.level > 1) {
+			verbose(env, "combined stack size of calls to %d (at insn %d) is %d\n",
+				subprog, env->subprog_info[subprog].start,
+				env->subprog_info[subprog].total_stack_depth);
+		}
+		__set_bit(subprog, visited);
+		/* for each callee */
+		for_each_set_bit(i, env->subprog_info[subprog].callees,
+				 BPF_MAX_SUBPROGS) {
+			/* round up to 32-bytes, since this is granularity of
+			 * interpreter stack size
+			 */
+			u16 stack_depth = round_up(max_t(u16, env->subprog_info[i].stack_depth, 1), 32);
+
+			/* Update callee total stack depth */
+			env->subprog_info[i].total_stack_depth = max_t(u16,
+				env->subprog_info[i].total_stack_depth,
+				env->subprog_info[subprog].total_stack_depth +
+				stack_depth);
+			/* does it have unvisited callers? */
+			for_each_clear_bit(j, visited, env->subprog_cnt) {
+				if (test_bit(i, env->subprog_info[j].callees))
+					break;
+			}
+			/* if not, add it to the frontier */
+			if (j >= env->subprog_cnt)
+				__set_bit(i, frontier);
 		}
-		goto process_func;
 	}
-	/* end of for() loop means the last insn of the 'subprog'
-	 * was reached. Doesn't matter whether it was JA or EXIT
-	 */
-	if (frame == 0)
-		return 0;
-	depth -= round_up(max_t(u32, env->subprog_stack_depth[subprog], 1), 32);
-	frame--;
-	i = ret_insn[frame];
-	subprog = ret_prog[frame];
-	goto continue_func;
+
+	/* are any nodes left unvisited? */
+	subprog = find_first_zero_bit(visited, env->subprog_cnt);
+	if (subprog < env->subprog_cnt) {
+		/* then call graph is not acyclic, which shouldn't happen */
+		verbose(env, "verifier bug.  Call graph has a cycle including subprog %d (at insn %d)\n",
+			subprog, env->subprog_info[subprog].start);
+		return -EFAULT;
+	}
+	return 0;
 }
 
 #ifndef CONFIG_BPF_JIT_ALWAYS_ON
@@ -1542,8 +1483,7 @@ static int get_callee_stack_depth(struct bpf_verifier_env *env,
 			  start);
 		return -EFAULT;
 	}
-	subprog++;
-	return env->subprog_stack_depth[subprog];
+	return env->subprog_info[subprog].stack_depth;
 }
 #endif
 
@@ -2078,7 +2018,7 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
 	case BPF_FUNC_tail_call:
 		if (map->map_type != BPF_MAP_TYPE_PROG_ARRAY)
 			goto error;
-		if (env->subprog_cnt) {
+		if (env->subprog_cnt > 1) {
 			verbose(env, "tail_calls are not allowed in programs with bpf-to-bpf calls\n");
 			return -EINVAL;
 		}
@@ -2213,21 +2153,32 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 {
 	struct bpf_verifier_state *state = env->cur_state;
 	struct bpf_func_state *caller, *callee;
-	int i, subprog, target_insn;
+	int i, subprog, target;
 
 	if (state->curframe + 1 >= MAX_CALL_FRAMES) {
 		verbose(env, "the call stack of %d frames is too deep\n",
 			state->curframe + 2);
 		return -E2BIG;
 	}
-
-	target_insn = *insn_idx + insn->imm;
-	subprog = find_subprog(env, target_insn + 1);
-	if (subprog < 0) {
-		verbose(env, "verifier bug. No program starts at insn %d\n",
-			target_insn + 1);
-		return -EFAULT;
+	if (!env->allow_ptr_leaks) {
+		verbose(env, "function calls to other bpf functions are allowed for root only\n");
+		return -EPERM;
 	}
+	if (bpf_prog_is_dev_bound(env->prog->aux)) {
+		verbose(env, "function calls in offloaded programs are not supported yet\n");
+		return -EINVAL;
+	}
+
+
+	target = *insn_idx + insn->imm;
+	/* We will increment insn_idx (PC) in do_check() after handling this
+	 * call insn, so the actual start of the function is target + 1.
+	 */
+	subprog = find_subprog(env, target + 1);
+	if (subprog == -ENOENT)
+		subprog = add_subprog(env, target + 1);
+	if (subprog < 0)
+		return subprog;
 
 	caller = state->frame[state->curframe];
 	if (state->frame[state->curframe + 1]) {
@@ -2241,6 +2192,9 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		return -ENOMEM;
 	state->frame[state->curframe + 1] = callee;
 
+	/* record edge in call graph */
+	__set_bit(subprog, env->subprog_info[caller->subprogno].callees);
+
 	/* callee cannot access r0, r6 - r9 for reading and has to write
 	 * into its own stack before reading from it.
 	 * callee can read/write into caller's stack
@@ -2249,7 +2203,7 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			/* remember the callsite, it will be used by bpf_exit */
 			*insn_idx /* callsite */,
 			state->curframe + 1 /* frameno within this callchain */,
-			subprog + 1 /* subprog number within this prog */);
+			subprog /* subprog number within this prog */);
 
 	/* copy r1 - r5 args that callee can access */
 	for (i = BPF_REG_1; i <= BPF_REG_5; i++)
@@ -2265,7 +2219,7 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	state->curframe++;
 
 	/* and go analyze first insn of the callee */
-	*insn_idx = target_insn;
+	*insn_idx = target;
 
 	if (env->log.level) {
 		verbose(env, "caller:\n");
@@ -3807,13 +3761,15 @@ static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn)
 		return -EINVAL;
 	}
 
-	if (env->subprog_cnt) {
+	if (cur_frame(env)->subprogno) {
 		/* when program has LD_ABS insn JITs and interpreter assume
 		 * that r1 == ctx == skb which is not the case for callees
 		 * that can have arbitrary arguments. It's problematic
 		 * for main prog as well since JITs would need to analyze
 		 * all functions in order to make proper register save/restore
-		 * decisions in the main prog. Hence disallow LD_ABS with calls
+		 * decisions in the main prog. Hence disallow LD_ABS with calls.
+		 * However, if we're the main-function (subprog 0) then we know
+		 * _we_ were called with r1 == ctx, so it's fine.
 		 */
 		verbose(env, "BPF_LD_[ABS|IND] instructions cannot be mixed with bpf-to-bpf calls\n");
 		return -EINVAL;
@@ -3995,10 +3951,6 @@ static int check_cfg(struct bpf_verifier_env *env)
 	int ret = 0;
 	int i, t;
 
-	ret = check_subprogs(env);
-	if (ret < 0)
-		return ret;
-
 	insn_state = kcalloc(insn_cnt, sizeof(int), GFP_KERNEL);
 	if (!insn_state)
 		return -ENOMEM;
@@ -4521,6 +4473,7 @@ static int do_check(struct bpf_verifier_env *env)
 	int insn_cnt = env->prog->len, i;
 	int insn_idx, prev_insn_idx = 0;
 	int insn_processed = 0;
+	int mainprogno;
 	bool do_print_state = false;
 
 	state = kzalloc(sizeof(struct bpf_verifier_state), GFP_KERNEL);
@@ -4534,12 +4487,17 @@ static int do_check(struct bpf_verifier_env *env)
 		return -ENOMEM;
 	}
 	env->cur_state = state;
+	mainprogno = add_subprog(env, 0);
+	if (mainprogno < 0)
+		return mainprogno;
 	init_func_state(env, state->frame[0],
 			BPF_MAIN_FUNC /* callsite */,
 			0 /* frameno */,
-			0 /* subprogno, zero == main subprog */);
+			mainprogno /* subprogno */);
 	insn_idx = 0;
 	for (;;) {
+		struct bpf_insn_aux_data *aux = &env->insn_aux_data[insn_idx];
+		struct bpf_func_state *frame = cur_frame(env);
 		struct bpf_insn *insn;
 		u8 class;
 		int err;
@@ -4560,6 +4518,17 @@ static int do_check(struct bpf_verifier_env *env)
 			return -E2BIG;
 		}
 
+		/* check the insn doesn't appear in multiple subprogs, which
+		 * would imply bad function calls/returns/jumps
+		 */
+		if (aux->seen && aux->subprogno != frame->subprogno) {
+			verbose(env, "insn %d was in subprog %u, now %u\n",
+				insn_idx, aux->subprogno, frame->subprogno);
+			return -EINVAL;
+		}
+		aux->subprogno = frame->subprogno;
+		aux->seen = true;
+
 		err = is_state_visited(env, insn_idx);
 		if (err < 0)
 			return err;
@@ -4584,7 +4553,7 @@ static int do_check(struct bpf_verifier_env *env)
 			else
 				verbose(env, "\nfrom %d to %d:",
 					prev_insn_idx, insn_idx);
-			print_verifier_state(env, state->frame[state->curframe]);
+			print_verifier_state(env, frame);
 			do_print_state = false;
 		}
 
@@ -4605,7 +4574,6 @@ static int do_check(struct bpf_verifier_env *env)
 		}
 
 		regs = cur_regs(env);
-		env->insn_aux_data[insn_idx].seen = true;
 		if (class == BPF_ALU || class == BPF_ALU64) {
 			err = check_alu_op(env, insn);
 			if (err)
@@ -4821,7 +4789,15 @@ static int do_check(struct bpf_verifier_env *env)
 					return err;
 
 				insn_idx++;
-				env->insn_aux_data[insn_idx].seen = true;
+				/* Mark second half of LD_IMM64 insn */
+				aux++;
+				if (aux->seen && aux->subprogno != frame->subprogno) {
+					verbose(env, "insn %d was in subprog %u, now %u\n",
+						insn_idx, aux->subprogno, frame->subprogno);
+					return -EINVAL;
+				}
+				aux->subprogno = frame->subprogno;
+				aux->seen = true;
 			} else {
 				verbose(env, "invalid BPF_LD mode\n");
 				return -EINVAL;
@@ -4836,15 +4812,15 @@ static int do_check(struct bpf_verifier_env *env)
 
 	verbose(env, "processed %d insns (limit %d), stack depth ",
 		insn_processed, BPF_COMPLEXITY_LIMIT_INSNS);
-	for (i = 0; i < env->subprog_cnt + 1; i++) {
-		u32 depth = env->subprog_stack_depth[i];
+	for (i = 0; i < env->subprog_cnt; i++) {
+		u32 depth = env->subprog_info[i].stack_depth;
 
-		verbose(env, "%d", depth);
-		if (i + 1 < env->subprog_cnt + 1)
+		if (i)
 			verbose(env, "+");
+		verbose(env, "%d", depth);
 	}
 	verbose(env, "\n");
-	env->prog->aux->stack_depth = env->subprog_stack_depth[0];
+	env->prog->aux->stack_depth = env->subprog_info[0].stack_depth;
 	return 0;
 }
 
@@ -5037,8 +5013,10 @@ static int adjust_insn_aux_data(struct bpf_verifier_env *env, u32 prog_len,
 	memcpy(new_data, old_data, sizeof(struct bpf_insn_aux_data) * off);
 	memcpy(new_data + off + cnt - 1, old_data + off,
 	       sizeof(struct bpf_insn_aux_data) * (prog_len - off - cnt + 1));
-	for (i = off; i < off + cnt - 1; i++)
+	for (i = off; i < off + cnt - 1; i++) {
 		new_data[i].seen = true;
+		new_data[i].subprogno = old_data[off].subprogno;
+	}
 	env->insn_aux_data = new_data;
 	vfree(old_data);
 	return 0;
@@ -5051,9 +5029,9 @@ static void adjust_subprog_starts(struct bpf_verifier_env *env, u32 off, u32 len
 	if (len == 1)
 		return;
 	for (i = 0; i < env->subprog_cnt; i++) {
-		if (env->subprog_starts[i] < off)
+		if (env->subprog_info[i].start < off)
 			continue;
-		env->subprog_starts[i] += len - 1;
+		env->subprog_info[i].start += len - 1;
 	}
 }
 
@@ -5212,15 +5190,19 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 static int jit_subprogs(struct bpf_verifier_env *env)
 {
 	struct bpf_prog *prog = env->prog, **func, *tmp;
-	int i, j, subprog_start, subprog_end = 0, len, subprog;
+	int i, j, subprog;
 	struct bpf_insn *insn;
 	void *old_bpf_func;
 	int err = -ENOMEM;
 
-	if (env->subprog_cnt == 0)
+	if (env->subprog_cnt <= 1)
 		return 0;
 
+	for (i = 0; i <= env->subprog_cnt; i++)
+		env->subprog_info[i].len = 0;
+
 	for (i = 0, insn = prog->insnsi; i < prog->len; i++, insn++) {
+		env->subprog_info[env->insn_aux_data[i].subprogno].len++;
 		if (insn->code != (BPF_JMP | BPF_CALL) ||
 		    insn->src_reg != BPF_PSEUDO_CALL)
 			continue;
@@ -5233,7 +5215,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 		/* temporarily remember subprog id inside insn instead of
 		 * aux_data, since next loop will split up all insns into funcs
 		 */
-		insn->off = subprog + 1;
+		insn->off = subprog;
 		/* remember original imm in case JIT fails and fallback
 		 * to interpreter will be needed
 		 */
@@ -5242,25 +5224,59 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 		insn->imm = 1;
 	}
 
-	func = kzalloc(sizeof(prog) * (env->subprog_cnt + 1), GFP_KERNEL);
+	func = kzalloc(sizeof(prog) * env->subprog_cnt, GFP_KERNEL);
 	if (!func)
 		return -ENOMEM;
 
-	for (i = 0; i <= env->subprog_cnt; i++) {
-		subprog_start = subprog_end;
-		if (env->subprog_cnt == i)
-			subprog_end = prog->len;
-		else
-			subprog_end = env->subprog_starts[i];
+	for (i = 0; i < env->subprog_cnt; i++) {
+		struct bpf_subprog_info *info = &env->subprog_info[i];
+		int k = 0;
 
-		len = subprog_end - subprog_start;
-		func[i] = bpf_prog_alloc(bpf_prog_size(len), GFP_USER);
+		func[i] = bpf_prog_alloc(bpf_prog_size(info->len), GFP_USER);
 		if (!func[i])
 			goto out_free;
-		memcpy(func[i]->insnsi, &prog->insnsi[subprog_start],
-		       len * sizeof(struct bpf_insn));
+
+		for (j = 0; j < prog->len; j++) {
+			if (env->insn_aux_data[j].subprogno != i)
+				continue;
+			if (WARN_ON_ONCE(k >= info->len)) {
+				verbose(env, "Tried to add insn %d to subprog %d but previously counted %d\n",
+					k, i, info->len);
+				err = -EFAULT;
+				goto out_free;
+			}
+			env->insn_aux_data[j].subprog_off = k;
+			memcpy(func[i]->insnsi + k++, prog->insnsi + j,
+			       sizeof(struct bpf_insn));
+		}
+		if (WARN_ON_ONCE(k != info->len)) {
+			verbose(env, "Only %d insns in subprog %d but previously counted %d\n",
+				k, i, info->len);
+		}
+		/* Fix up jump offsets.  If a subprog was not contiguous, then
+		 * jumps within it need their offsets adjusting to account for
+		 * the insns that are not present in the subprog.
+		 */
+		for (j = 0; j < prog->len; j++) {
+			if (env->insn_aux_data[j].subprogno != i)
+				continue;
+			k = env->insn_aux_data[j].subprog_off;
+			insn = func[i]->insnsi + k;
+			if (BPF_CLASS(insn->code) == BPF_JMP &&
+			    BPF_OP(insn->code) != BPF_CALL &&
+			    BPF_OP(insn->code) != BPF_EXIT) {
+				/* This was a jump from orig[j] to orig[j+off+1]
+				 * which becomes a jump from new[k] to
+				 * new[aux[j+off+1].subprog_off]
+				 */
+				int t = env->insn_aux_data[j + insn->off + 1].subprog_off;
+
+				insn->off = t - k - 1;
+			}
+		}
+
 		func[i]->type = prog->type;
-		func[i]->len = len;
+		func[i]->len = info->len;
 		if (bpf_prog_calc_tag(func[i]))
 			goto out_free;
 		func[i]->is_func = 1;
@@ -5268,7 +5284,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 		 * Long term would need debug info to populate names
 		 */
 		func[i]->aux->name[0] = 'F';
-		func[i]->aux->stack_depth = env->subprog_stack_depth[i];
+		func[i]->aux->stack_depth = env->subprog_info[i].stack_depth;
 		func[i]->jit_requested = 1;
 		func[i] = bpf_int_jit_compile(func[i]);
 		if (!func[i]->jited) {
@@ -5281,7 +5297,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 	 * now populate all bpf_calls with correct addresses and
 	 * run last pass of JIT
 	 */
-	for (i = 0; i <= env->subprog_cnt; i++) {
+	for (i = 0; i < env->subprog_cnt; i++) {
 		insn = func[i]->insnsi;
 		for (j = 0; j < func[i]->len; j++, insn++) {
 			if (insn->code != (BPF_JMP | BPF_CALL) ||
@@ -5289,12 +5305,16 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 				continue;
 			subprog = insn->off;
 			insn->off = 0;
+			if (subprog < 0 || subprog >= env->subprog_cnt) {
+				err = -EFAULT;
+				goto out_free;
+			}
 			insn->imm = (u64 (*)(u64, u64, u64, u64, u64))
 				func[subprog]->bpf_func -
 				__bpf_call_base;
 		}
 	}
-	for (i = 0; i <= env->subprog_cnt; i++) {
+	for (i = 0; i < env->subprog_cnt; i++) {
 		old_bpf_func = func[i]->bpf_func;
 		tmp = bpf_int_jit_compile(func[i]);
 		if (tmp != func[i] || func[i]->bpf_func != old_bpf_func) {
@@ -5308,7 +5328,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 	/* finally lock prog and jit images for all functions and
 	 * populate kallsysm
 	 */
-	for (i = 0; i <= env->subprog_cnt; i++) {
+	for (i = 0; i < env->subprog_cnt; i++) {
 		bpf_prog_lock_ro(func[i]);
 		bpf_prog_kallsyms_add(func[i]);
 	}
@@ -5325,7 +5345,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 			continue;
 		insn->off = env->insn_aux_data[i].call_imm;
 		subprog = find_subprog(env, i + insn->off + 1);
-		addr  = (unsigned long)func[subprog + 1]->bpf_func;
+		addr  = (unsigned long)func[subprog]->bpf_func;
 		addr &= PAGE_MASK;
 		insn->imm = (u64 (*)(u64, u64, u64, u64, u64))
 			    addr - __bpf_call_base;
@@ -5334,10 +5354,10 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 	prog->jited = 1;
 	prog->bpf_func = func[0]->bpf_func;
 	prog->aux->func = func;
-	prog->aux->func_cnt = env->subprog_cnt + 1;
+	prog->aux->func_cnt = env->subprog_cnt;
 	return 0;
 out_free:
-	for (i = 0; i <= env->subprog_cnt; i++)
+	for (i = 0; i < env->subprog_cnt; i++)
 		if (func[i])
 			bpf_jit_free(func[i]);
 	kfree(func);
@@ -5367,6 +5387,7 @@ static int fixup_call_args(struct bpf_verifier_env *env)
 		err = jit_subprogs(env);
 		if (err == 0)
 			return 0;
+		verbose(env, "failed to jit_subprogs %d\n", err);
 	}
 #ifndef CONFIG_BPF_JIT_ALWAYS_ON
 	for (i = 0; i < prog->len; i++, insn++) {


* [RFC PATCH bpf-next 02/12] bpf/verifier: update selftests
  2018-02-23 17:35 [RFC/PoC PATCH bpf-next 00/12] bounded loops for eBPF Edward Cree
  2018-02-23 17:39 ` [RFC PATCH bpf-next 01/12] bpf/verifier: validate func_calls by marking at do_check() time Edward Cree
@ 2018-02-23 17:39 ` Edward Cree
  2018-02-23 17:40 ` [RFC PATCH bpf-next 03/12] bpf/verifier: per-register parent pointers Edward Cree
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Edward Cree @ 2018-02-23 17:39 UTC (permalink / raw)
  To: netdev, Alexei Starovoitov, Daniel Borkmann

Error messages for some bad programs have changed, partly because we now
 check for loops / out-of-bounds jumps before checking subprogs.

Problematic selftests:
513 calls: wrong recursive calls
 This is now rejected with 'unreachable insn 1'.  I'm not entirely sure what
 it was meant to do/test, since all of the JMP|CALLs are also unreachable.
546 calls: ld_abs with changing ctx data in callee
 Rejected with R1 !read_ok.  It was testing the "can't mix LD_ABS with
 function calls" rule, which has now changed to "can't use LD_ABS in
 functions other than main()".  I'm still not 100% sure that's right though.

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
 tools/testing/selftests/bpf/test_verifier.c | 46 ++++++++++++++++++-----------
 1 file changed, 29 insertions(+), 17 deletions(-)

diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 697bd83de295..9c7531887ee3 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -644,7 +644,7 @@ static struct bpf_test tests[] = {
 		.insns = {
 			BPF_ALU64_REG(BPF_MOV, BPF_REG_0, BPF_REG_2),
 		},
-		.errstr = "not an exit",
+		.errstr = "jump out of range",
 		.result = REJECT,
 	},
 	{
@@ -9288,7 +9288,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "last insn is not an exit or jmp",
+		.errstr = "insn 1 was in subprog 1, now 0",
 		.result = REJECT,
 	},
 	{
@@ -9354,7 +9354,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "jump out of range",
+		.errstr = "insn 5 was in subprog 1, now 0",
 		.result = REJECT,
 	},
 	{
@@ -9633,7 +9633,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-		.errstr = "jump out of range from insn 1 to 4",
+		.errstr = "insn 5 was in subprog 1, now 0",
 		.result = REJECT,
 	},
 	{
@@ -9649,13 +9649,12 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0),
 			BPF_MOV64_REG(BPF_REG_0, BPF_REG_7),
 			BPF_EXIT_INSN(),
-			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
-				    offsetof(struct __sk_buff, len)),
+			BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 8),
 			BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, -3),
 			BPF_EXIT_INSN(),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "jump out of range from insn 11 to 9",
+		.errstr = "insn 9 was in subprog 1, now 2",
 		.result = REJECT,
 	},
 	{
@@ -9707,7 +9706,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "invalid destination",
+		.errstr = "jump out of range from insn 2 to -1",
 		.result = REJECT,
 	},
 	{
@@ -9719,7 +9718,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "invalid destination",
+		.errstr = "jump out of range from insn 2 to -2147483646",
 		.result = REJECT,
 	},
 	{
@@ -9732,7 +9731,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "jump out of range",
+		.errstr = "insn 1 was in subprog 0, now 1",
 		.result = REJECT,
 	},
 	{
@@ -9745,7 +9744,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "jump out of range",
+		.errstr = "insn 4 was in subprog 1, now 0",
 		.result = REJECT,
 	},
 	{
@@ -9759,7 +9758,7 @@ static struct bpf_test tests[] = {
 			BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, -2),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "not an exit",
+		.errstr = "jump out of range from insn 5 to 6",
 		.result = REJECT,
 	},
 	{
@@ -9773,7 +9772,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "last insn",
+		.errstr = "insn_idx 5 is in subprog 1 but that starts at 4",
 		.result = REJECT,
 	},
 	{
@@ -9788,7 +9787,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "last insn",
+		.errstr = "insn_idx 5 is in subprog 1 but that starts at 4",
 		.result = REJECT,
 	},
 	{
@@ -9828,12 +9827,11 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0),
 			BPF_MOV64_REG(BPF_REG_0, BPF_REG_7),
 			BPF_MOV64_REG(BPF_REG_0, BPF_REG_0),
-			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
-				    offsetof(struct __sk_buff, len)),
+			BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 8),
 			BPF_EXIT_INSN(),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "not an exit",
+		.errstr = "insn 10 was in subprog 2, now 1",
 		.result = REJECT,
 	},
 	{
@@ -11073,6 +11071,20 @@ static struct bpf_test tests[] = {
 		.prog_type = BPF_PROG_TYPE_XDP,
 	},
 	{
+		"calls: interleaved functions",
+		.insns = {
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+			BPF_MOV64_IMM(BPF_REG_0, 1),
+			BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+			BPF_MOV64_IMM(BPF_REG_0, 2),
+			BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+			BPF_EXIT_INSN(),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+		.result = ACCEPT,
+	},
+	{
 		"search pruning: all branches should be verified (nop operation)",
 		.insns = {
 			BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),


* [RFC PATCH bpf-next 03/12] bpf/verifier: per-register parent pointers
  2018-02-23 17:35 [RFC/PoC PATCH bpf-next 00/12] bounded loops for eBPF Edward Cree
  2018-02-23 17:39 ` [RFC PATCH bpf-next 01/12] bpf/verifier: validate func_calls by marking at do_check() time Edward Cree
  2018-02-23 17:39 ` [RFC PATCH bpf-next 02/12] bpf/verifier: update selftests Edward Cree
@ 2018-02-23 17:40 ` Edward Cree
  2018-02-23 17:40 ` [RFC PATCH bpf-next 04/12] bpf/verifier: store insn_idx instead of callsite in bpf_func_state Edward Cree
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Edward Cree @ 2018-02-23 17:40 UTC (permalink / raw)
  To: netdev, Alexei Starovoitov, Daniel Borkmann

By giving each register its own liveness chain, we elide the skip_callee()
 logic.  Instead, each register's parent is the state it inherits from;
 both check_func_call() and prepare_func_exit() automatically connect
 reg states to the correct chain since when they copy the reg state across
 (r1-r5 into the callee as args, and r0 out as the return value) they also
 copy the parent pointer.

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
 include/linux/bpf_verifier.h |   8 +-
 kernel/bpf/verifier.c        | 180 ++++++++++---------------------------------
 2 files changed, 45 insertions(+), 143 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 0387e0c61161..2b56be9dfb56 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -41,6 +41,7 @@ enum bpf_reg_liveness {
 };
 
 struct bpf_reg_state {
+	/* Ordering of fields matters.  See states_equal() */
 	enum bpf_reg_type type;
 	union {
 		/* valid when type == PTR_TO_PACKET */
@@ -59,7 +60,6 @@ struct bpf_reg_state {
 	 * came from, when one is tested for != NULL.
 	 */
 	u32 id;
-	/* Ordering of fields matters.  See states_equal() */
 	/* For scalar types (SCALAR_VALUE), this represents our knowledge of
 	 * the actual value.
 	 * For pointer types, this represents the variable part of the offset
@@ -76,15 +76,15 @@ struct bpf_reg_state {
 	s64 smax_value; /* maximum possible (s64)value */
 	u64 umin_value; /* minimum possible (u64)value */
 	u64 umax_value; /* maximum possible (u64)value */
+	/* parentage chain for liveness checking */
+	struct bpf_reg_state *parent;
 	/* Inside the callee two registers can be both PTR_TO_STACK like
 	 * R1=fp-8 and R2=fp-8, but one of them points to this function stack
 	 * while another to the caller's stack. To differentiate them 'frameno'
 	 * is used which is an index in bpf_verifier_state->frame[] array
 	 * pointing to bpf_func_state.
-	 * This field must be second to last, for states_equal() reasons.
 	 */
 	u32 frameno;
-	/* This field must be last, for states_equal() reasons. */
 	enum bpf_reg_liveness live;
 };
 
@@ -107,7 +107,6 @@ struct bpf_stack_state {
  */
 struct bpf_func_state {
 	struct bpf_reg_state regs[MAX_BPF_REG];
-	struct bpf_verifier_state *parent;
 	/* index of call instruction that called into this func */
 	int callsite;
 	/* stack frame number of this function state from pov of
@@ -129,7 +128,6 @@ struct bpf_func_state {
 struct bpf_verifier_state {
 	/* call stack tracking */
 	struct bpf_func_state *frame[MAX_CALL_FRAMES];
-	struct bpf_verifier_state *parent;
 	u32 curframe;
 };
 
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 2dc69fb3bfbe..dfba9dbc5bfb 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -343,9 +343,9 @@ static int copy_stack_state(struct bpf_func_state *dst,
 /* do_check() starts with zero-sized stack in struct bpf_verifier_state to
  * make it consume minimal amount of memory. check_stack_write() access from
  * the program calls into realloc_func_state() to grow the stack size.
- * Note there is a non-zero 'parent' pointer inside bpf_verifier_state
- * which this function copies over. It points to previous bpf_verifier_state
- * which is never reallocated
+ * Note there is a non-zero parent pointer inside each reg of bpf_verifier_state
+ * which this function copies over. It points to corresponding reg in previous
+ * bpf_verifier_state which is never reallocated
  */
 static int realloc_func_state(struct bpf_func_state *state, int size,
 			      bool copy_old)
@@ -429,7 +429,6 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state,
 		dst_state->frame[i] = NULL;
 	}
 	dst_state->curframe = src->curframe;
-	dst_state->parent = src->parent;
 	for (i = 0; i <= src->curframe; i++) {
 		dst = dst_state->frame[i];
 		if (!dst) {
@@ -699,6 +698,7 @@ static void init_reg_state(struct bpf_verifier_env *env,
 	for (i = 0; i < MAX_BPF_REG; i++) {
 		mark_reg_not_init(env, regs, i);
 		regs[i].live = REG_LIVE_NONE;
+		regs[i].parent = NULL;
 	}
 
 	/* frame pointer */
@@ -773,74 +773,21 @@ static int add_subprog(struct bpf_verifier_env *env, int off)
 	return ret;
 }
 
-static
-struct bpf_verifier_state *skip_callee(struct bpf_verifier_env *env,
-				       const struct bpf_verifier_state *state,
-				       struct bpf_verifier_state *parent,
-				       u32 regno)
-{
-	struct bpf_verifier_state *tmp = NULL;
-
-	/* 'parent' could be a state of caller and
-	 * 'state' could be a state of callee. In such case
-	 * parent->curframe < state->curframe
-	 * and it's ok for r1 - r5 registers
-	 *
-	 * 'parent' could be a callee's state after it bpf_exit-ed.
-	 * In such case parent->curframe > state->curframe
-	 * and it's ok for r0 only
-	 */
-	if (parent->curframe == state->curframe ||
-	    (parent->curframe < state->curframe &&
-	     regno >= BPF_REG_1 && regno <= BPF_REG_5) ||
-	    (parent->curframe > state->curframe &&
-	       regno == BPF_REG_0))
-		return parent;
-
-	if (parent->curframe > state->curframe &&
-	    regno >= BPF_REG_6) {
-		/* for callee saved regs we have to skip the whole chain
-		 * of states that belong to callee and mark as LIVE_READ
-		 * the registers before the call
-		 */
-		tmp = parent;
-		while (tmp && tmp->curframe != state->curframe) {
-			tmp = tmp->parent;
-		}
-		if (!tmp)
-			goto bug;
-		parent = tmp;
-	} else {
-		goto bug;
-	}
-	return parent;
-bug:
-	verbose(env, "verifier bug regno %d tmp %p\n", regno, tmp);
-	verbose(env, "regno %d parent frame %d current frame %d\n",
-		regno, parent->curframe, state->curframe);
-	return NULL;
-}
-
+/* Parentage chain of this register (or stack slot) should take care of all
+ * issues like callee-saved registers, stack slot allocation time, etc.
+ */
 static int mark_reg_read(struct bpf_verifier_env *env,
-			 const struct bpf_verifier_state *state,
-			 struct bpf_verifier_state *parent,
-			 u32 regno)
+			 const struct bpf_reg_state *state,
+			 struct bpf_reg_state *parent)
 {
 	bool writes = parent == state->parent; /* Observe write marks */
 
-	if (regno == BPF_REG_FP)
-		/* We don't need to worry about FP liveness because it's read-only */
-		return 0;
-
 	while (parent) {
 		/* if read wasn't screened by an earlier write ... */
-		if (writes && state->frame[state->curframe]->regs[regno].live & REG_LIVE_WRITTEN)
+		if (writes && state->live & REG_LIVE_WRITTEN)
 			break;
-		parent = skip_callee(env, state, parent, regno);
-		if (!parent)
-			return -EFAULT;
 		/* ... then we depend on parent's value */
-		parent->frame[parent->curframe]->regs[regno].live |= REG_LIVE_READ;
+		parent->live |= REG_LIVE_READ;
 		state = parent;
 		parent = state->parent;
 		writes = true;
@@ -866,7 +813,10 @@ static int check_reg_arg(struct bpf_verifier_env *env, u32 regno,
 			verbose(env, "R%d !read_ok\n", regno);
 			return -EACCES;
 		}
-		return mark_reg_read(env, vstate, vstate->parent, regno);
+		/* We don't need to worry about FP liveness because it's read-only */
+		if (regno != BPF_REG_FP)
+			return mark_reg_read(env, &regs[regno],
+					     regs[regno].parent);
 	} else {
 		/* check whether register used as dest operand can be written to */
 		if (regno == BPF_REG_FP) {
@@ -978,61 +928,6 @@ static int check_stack_write(struct bpf_verifier_env *env,
 	return 0;
 }
 
-/* registers of every function are unique and mark_reg_read() propagates
- * the liveness in the following cases:
- * - from callee into caller for R1 - R5 that were used as arguments
- * - from caller into callee for R0 that used as result of the call
- * - from caller to the same caller skipping states of the callee for R6 - R9,
- *   since R6 - R9 are callee saved by implicit function prologue and
- *   caller's R6 != callee's R6, so when we propagate liveness up to
- *   parent states we need to skip callee states for R6 - R9.
- *
- * stack slot marking is different, since stacks of caller and callee are
- * accessible in both (since caller can pass a pointer to caller's stack to
- * callee which can pass it to another function), hence mark_stack_slot_read()
- * has to propagate the stack liveness to all parent states at given frame number.
- * Consider code:
- * f1() {
- *   ptr = fp - 8;
- *   *ptr = ctx;
- *   call f2 {
- *      .. = *ptr;
- *   }
- *   .. = *ptr;
- * }
- * First *ptr is reading from f1's stack and mark_stack_slot_read() has
- * to mark liveness at the f1's frame and not f2's frame.
- * Second *ptr is also reading from f1's stack and mark_stack_slot_read() has
- * to propagate liveness to f2 states at f1's frame level and further into
- * f1 states at f1's frame level until write into that stack slot
- */
-static void mark_stack_slot_read(struct bpf_verifier_env *env,
-				 const struct bpf_verifier_state *state,
-				 struct bpf_verifier_state *parent,
-				 int slot, int frameno)
-{
-	bool writes = parent == state->parent; /* Observe write marks */
-
-	while (parent) {
-		if (parent->frame[frameno]->allocated_stack <= slot * BPF_REG_SIZE)
-			/* since LIVE_WRITTEN mark is only done for full 8-byte
-			 * write the read marks are conservative and parent
-			 * state may not even have the stack allocated. In such case
-			 * end the propagation, since the loop reached beginning
-			 * of the function
-			 */
-			break;
-		/* if read wasn't screened by an earlier write ... */
-		if (writes && state->frame[frameno]->stack[slot].spilled_ptr.live & REG_LIVE_WRITTEN)
-			break;
-		/* ... then we depend on parent's value */
-		parent->frame[frameno]->stack[slot].spilled_ptr.live |= REG_LIVE_READ;
-		state = parent;
-		parent = state->parent;
-		writes = true;
-	}
-}
-
 static int check_stack_read(struct bpf_verifier_env *env,
 			    struct bpf_func_state *reg_state /* func where register points to */,
 			    int off, int size, int value_regno)
@@ -1070,8 +965,8 @@ static int check_stack_read(struct bpf_verifier_env *env,
 			 */
 			state->regs[value_regno].live |= REG_LIVE_WRITTEN;
 		}
-		mark_stack_slot_read(env, vstate, vstate->parent, spi,
-				     reg_state->frameno);
+		mark_reg_read(env, &reg_state->stack[spi].spilled_ptr,
+			      reg_state->stack[spi].spilled_ptr.parent);
 		return 0;
 	} else {
 		int zeros = 0;
@@ -1087,8 +982,8 @@ static int check_stack_read(struct bpf_verifier_env *env,
 				off, i, size);
 			return -EACCES;
 		}
-		mark_stack_slot_read(env, vstate, vstate->parent, spi,
-				     reg_state->frameno);
+		mark_reg_read(env, &reg_state->stack[spi].spilled_ptr,
+			      reg_state->stack[spi].spilled_ptr.parent);
 		if (value_regno >= 0) {
 			if (zeros == size) {
 				/* any size read into register is zero extended,
@@ -1764,8 +1659,8 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno,
 		/* reading any byte out of 8-byte 'spill_slot' will cause
 		 * the whole slot to be marked as 'read'
 		 */
-		mark_stack_slot_read(env, env->cur_state, env->cur_state->parent,
-				     spi, state->frameno);
+		mark_reg_read(env, &state->stack[spi].spilled_ptr,
+			      state->stack[spi].spilled_ptr.parent);
 	}
 	return update_stack_depth(env, state, off);
 }
@@ -2205,11 +2100,13 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			state->curframe + 1 /* frameno within this callchain */,
 			subprog /* subprog number within this prog */);
 
-	/* copy r1 - r5 args that callee can access */
+	/* copy r1 - r5 args that callee can access.  The copy includes parent
+	 * pointers, which connects us up to the liveness chain
+	 */
 	for (i = BPF_REG_1; i <= BPF_REG_5; i++)
 		callee->regs[i] = caller->regs[i];
 
-	/* after the call regsiters r0 - r5 were scratched */
+	/* after the call registers r0 - r5 were scratched */
 	for (i = 0; i < CALLER_SAVED_REGS; i++) {
 		mark_reg_not_init(env, caller->regs, caller_saved[i]);
 		check_reg_arg(env, caller_saved[i], DST_OP_NO_MARK);
@@ -4115,7 +4012,7 @@ static bool regsafe(struct bpf_reg_state *rold, struct bpf_reg_state *rcur,
 		/* explored state didn't use this */
 		return true;
 
-	equal = memcmp(rold, rcur, offsetof(struct bpf_reg_state, frameno)) == 0;
+	equal = memcmp(rold, rcur, offsetof(struct bpf_reg_state, parent)) == 0;
 
 	if (rold->type == PTR_TO_STACK)
 		/* two stack pointers are equal only if they're pointing to
@@ -4348,7 +4245,7 @@ static bool states_equal(struct bpf_verifier_env *env,
  * equivalent state (jump target or such) we didn't arrive by the straight-line
  * code, so read marks in the state must propagate to the parent regardless
  * of the state's write marks. That's what 'parent == state->parent' comparison
- * in mark_reg_read() and mark_stack_slot_read() is for.
+ * in mark_reg_read() is for.
  */
 static int propagate_liveness(struct bpf_verifier_env *env,
 			      const struct bpf_verifier_state *vstate,
@@ -4369,7 +4266,8 @@ static int propagate_liveness(struct bpf_verifier_env *env,
 		if (vparent->frame[vparent->curframe]->regs[i].live & REG_LIVE_READ)
 			continue;
 		if (vstate->frame[vstate->curframe]->regs[i].live & REG_LIVE_READ) {
-			err = mark_reg_read(env, vstate, vparent, i);
+			err = mark_reg_read(env, &vstate->frame[vstate->curframe]->regs[i],
+					    &vparent->frame[vstate->curframe]->regs[i]);
 			if (err)
 				return err;
 		}
@@ -4384,7 +4282,8 @@ static int propagate_liveness(struct bpf_verifier_env *env,
 			if (parent->stack[i].spilled_ptr.live & REG_LIVE_READ)
 				continue;
 			if (state->stack[i].spilled_ptr.live & REG_LIVE_READ)
-				mark_stack_slot_read(env, vstate, vparent, i, frame);
+				mark_reg_read(env, &state->stack[i].spilled_ptr,
+					      &parent->stack[i].spilled_ptr);
 		}
 	}
 	return err;
@@ -4394,7 +4293,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 {
 	struct bpf_verifier_state_list *new_sl;
 	struct bpf_verifier_state_list *sl;
-	struct bpf_verifier_state *cur = env->cur_state;
+	struct bpf_verifier_state *cur = env->cur_state, *new;
 	int i, j, err;
 
 	sl = env->explored_states[insn_idx];
@@ -4436,16 +4335,18 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 		return -ENOMEM;
 
 	/* add new state to the head of linked list */
-	err = copy_verifier_state(&new_sl->state, cur);
+	new = &new_sl->state;
+	err = copy_verifier_state(new, cur);
 	if (err) {
-		free_verifier_state(&new_sl->state, false);
+		free_verifier_state(new, false);
 		kfree(new_sl);
 		return err;
 	}
 	new_sl->next = env->explored_states[insn_idx];
 	env->explored_states[insn_idx] = new_sl;
 	/* connect new state to parentage chain */
-	cur->parent = &new_sl->state;
+	for (i = 0; i < BPF_REG_FP; i++)
+		cur_regs(env)[i].parent = &new->frame[new->curframe]->regs[i];
 	/* clear write marks in current state: the writes we did are not writes
 	 * our child did, so they don't screen off its reads from us.
 	 * (There are no read marks in current state, because reads always mark
@@ -4458,9 +4359,13 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 	/* all stack frames are accessible from callee, clear them all */
 	for (j = 0; j <= cur->curframe; j++) {
 		struct bpf_func_state *frame = cur->frame[j];
+		struct bpf_func_state *newframe = new->frame[j];
 
-		for (i = 0; i < frame->allocated_stack / BPF_REG_SIZE; i++)
+		for (i = 0; i < frame->allocated_stack / BPF_REG_SIZE; i++) {
 			frame->stack[i].spilled_ptr.live = REG_LIVE_NONE;
+			frame->stack[i].spilled_ptr.parent =
+						&newframe->stack[i].spilled_ptr;
+		}
 	}
 	return 0;
 }
@@ -4480,7 +4385,6 @@ static int do_check(struct bpf_verifier_env *env)
 	if (!state)
 		return -ENOMEM;
 	state->curframe = 0;
-	state->parent = NULL;
 	state->frame[0] = kzalloc(sizeof(struct bpf_func_state), GFP_KERNEL);
 	if (!state->frame[0]) {
 		kfree(state);


* [RFC PATCH bpf-next 04/12] bpf/verifier: store insn_idx instead of callsite in bpf_func_state
  2018-02-23 17:35 [RFC/PoC PATCH bpf-next 00/12] bounded loops for eBPF Edward Cree
                   ` (2 preceding siblings ...)
  2018-02-23 17:40 ` [RFC PATCH bpf-next 03/12] bpf/verifier: per-register parent pointers Edward Cree
@ 2018-02-23 17:40 ` Edward Cree
  2018-02-23 17:40 ` [RFC PATCH bpf-next 05/12] bpf/verifier: detect loops dynamically rather than statically Edward Cree
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Edward Cree @ 2018-02-23 17:40 UTC (permalink / raw)
  To: netdev, Alexei Starovoitov, Daniel Borkmann

This means each entry in the parentage chain can have its insn identified,
 which will help to support bounded-loop handling later.

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
 include/linux/bpf_verifier.h |  6 ++++--
 kernel/bpf/verifier.c        | 22 +++++++++++-----------
 2 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 2b56be9dfb56..0bc49c768585 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -107,8 +107,10 @@ struct bpf_stack_state {
  */
 struct bpf_func_state {
 	struct bpf_reg_state regs[MAX_BPF_REG];
-	/* index of call instruction that called into this func */
-	int callsite;
+	/* index of last instruction processed in this func.  In frames other
+	 * than innermost, will be call insn
+	 */
+	int insn_idx;
 	/* stack frame number of this function state from pov of
 	 * enclosing bpf_verifier_state.
 	 * 0 = main function, 1 = first callee.
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index dfba9dbc5bfb..7a08b5e8e071 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -267,7 +267,7 @@ static void print_verifier_state(struct bpf_verifier_env *env,
 			/* reg->off should be 0 for SCALAR_VALUE */
 			verbose(env, "%lld", reg->var_off.value + reg->off);
 			if (t == PTR_TO_STACK)
-				verbose(env, ",call_%d", func(env, reg)->callsite);
+				verbose(env, ",frame_%u", reg->frameno);
 		} else {
 			verbose(env, "(id=%d", reg->id);
 			if (t != SCALAR_VALUE)
@@ -711,12 +711,11 @@ static void init_reg_state(struct bpf_verifier_env *env,
 	mark_reg_known_zero(env, regs, BPF_REG_1);
 }
 
-#define BPF_MAIN_FUNC (-1)
 static void init_func_state(struct bpf_verifier_env *env,
 			    struct bpf_func_state *state,
-			    int callsite, int frameno, int subprogno)
+			    int entry, int frameno, int subprogno)
 {
-	state->callsite = callsite;
+	state->insn_idx = entry;
 	state->frameno = frameno;
 	state->subprogno = subprogno;
 	init_reg_state(env, state);
@@ -2095,8 +2094,7 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	 * callee can read/write into caller's stack
 	 */
 	init_func_state(env, callee,
-			/* remember the callsite, it will be used by bpf_exit */
-			*insn_idx /* callsite */,
+			target /* entry point */,
 			state->curframe + 1 /* frameno within this callchain */,
 			subprog /* subprog number within this prog */);
 
@@ -2151,7 +2149,7 @@ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
 	/* return to the caller whatever r0 had in the callee */
 	caller->regs[BPF_REG_0] = *r0;
 
-	*insn_idx = callee->callsite + 1;
+	*insn_idx = caller->insn_idx + 1;
 	if (env->log.level) {
 		verbose(env, "returning from callee:\n");
 		print_verifier_state(env, callee);
@@ -4232,7 +4230,7 @@ static bool states_equal(struct bpf_verifier_env *env,
 	 * and all frame states need to be equivalent
 	 */
 	for (i = 0; i <= old->curframe; i++) {
-		if (old->frame[i]->callsite != cur->frame[i]->callsite)
+		if (old->frame[i]->insn_idx != cur->frame[i]->insn_idx)
 			return false;
 		if (!func_states_equal(old->frame[i], cur->frame[i]))
 			return false;
@@ -4327,7 +4325,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 	 * technically the current state is not proven to be safe yet,
 	 * but it will either reach outer most bpf_exit (which means it's safe)
 	 * or it will be rejected. Since there are no loops, we won't be
-	 * seeing this tuple (frame[0].callsite, frame[1].callsite, .. insn_idx)
+	 * seeing this tuple (frame[0].insn_idx, frame[1].insn_idx, .. insn_idx)
 	 * again on the way to bpf_exit
 	 */
 	new_sl = kzalloc(sizeof(struct bpf_verifier_state_list), GFP_KERNEL);
@@ -4394,11 +4392,11 @@ static int do_check(struct bpf_verifier_env *env)
 	mainprogno = add_subprog(env, 0);
 	if (mainprogno < 0)
 		return mainprogno;
+	insn_idx = 0;
 	init_func_state(env, state->frame[0],
-			BPF_MAIN_FUNC /* callsite */,
+			insn_idx /* entry point */,
 			0 /* frameno */,
 			mainprogno /* subprogno */);
-	insn_idx = 0;
 	for (;;) {
 		struct bpf_insn_aux_data *aux = &env->insn_aux_data[insn_idx];
 		struct bpf_func_state *frame = cur_frame(env);
@@ -4412,6 +4410,8 @@ static int do_check(struct bpf_verifier_env *env)
 			return -EFAULT;
 		}
 
+		frame->insn_idx = insn_idx;
+
 		insn = &insns[insn_idx];
 		class = BPF_CLASS(insn->code);
 


* [RFC PATCH bpf-next 05/12] bpf/verifier: detect loops dynamically rather than statically
  2018-02-23 17:35 [RFC/PoC PATCH bpf-next 00/12] bounded loops for eBPF Edward Cree
                   ` (3 preceding siblings ...)
  2018-02-23 17:40 ` [RFC PATCH bpf-next 04/12] bpf/verifier: store insn_idx instead of callsite in bpf_func_state Edward Cree
@ 2018-02-23 17:40 ` Edward Cree
  2018-02-23 22:27   ` John Fastabend
  2018-02-23 17:41 ` [RFC PATCH bpf-next 06/12] bpf/verifier: do not walk impossible branches Edward Cree
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 17+ messages in thread
From: Edward Cree @ 2018-02-23 17:40 UTC (permalink / raw)
  To: netdev, Alexei Starovoitov, Daniel Borkmann

Add in a new chain of parent states, which does not cross function-call
 boundaries, and check whether our current insn_idx appears anywhere in
 the chain.  Since all jump targets have state-list marks (now placed
 by prepare_cfg_marks(), which replaces check_cfg()), it suffices to
 thread this chain through the existing explored_states and check it
 only in is_state_visited().
By using this parent-state chain to detect loops, we open up the
 possibility that in future we could determine a loop to be bounded and
 thus accept it.
A few selftests had to be changed to ensure that all the insns walked
 before reaching the back-edge are valid.
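As a rough illustration, the simplest back-edge this now catches is the
 updated "loop (back-edge)" selftest, a single jump to itself:

	BPF_JMP_IMM(BPF_JA, 0, 0, -1),

Insn 0 is its own jump target, so it carries a state-list mark; the second
 time the walk reaches it, the parentage chain already holds a state at
 insn 0 and the program is rejected as a back-edge.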

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
 include/linux/bpf_verifier.h                |   2 +
 kernel/bpf/verifier.c                       | 280 +++++++++-------------------
 tools/testing/selftests/bpf/test_verifier.c |  12 +-
 3 files changed, 97 insertions(+), 197 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 0bc49c768585..24ddbc1ed3b2 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -120,6 +120,8 @@ struct bpf_func_state {
 	 * zero == main subprog
 	 */
 	u32 subprogno;
+	/* loop detection; points into an explored_state */
+	struct bpf_func_state *parent;
 
 	/* should be second to last. See copy_func_state() */
 	int allocated_stack;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 7a08b5e8e071..e7d898f4fd29 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -429,6 +429,7 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state,
 		dst_state->frame[i] = NULL;
 	}
 	dst_state->curframe = src->curframe;
+
 	for (i = 0; i <= src->curframe; i++) {
 		dst = dst_state->frame[i];
 		if (!dst) {
@@ -716,6 +717,7 @@ static void init_func_state(struct bpf_verifier_env *env,
 			    int entry, int frameno, int subprogno)
 {
 	state->insn_idx = entry;
+	state->parent = NULL;
 	state->frameno = frameno;
 	state->subprogno = subprogno;
 	init_reg_state(env, state);
@@ -3747,211 +3749,85 @@ static int check_return_code(struct bpf_verifier_env *env)
 	return 0;
 }
 
-/* non-recursive DFS pseudo code
- * 1  procedure DFS-iterative(G,v):
- * 2      label v as discovered
- * 3      let S be a stack
- * 4      S.push(v)
- * 5      while S is not empty
- * 6            t <- S.pop()
- * 7            if t is what we're looking for:
- * 8                return t
- * 9            for all edges e in G.adjacentEdges(t) do
- * 10               if edge e is already labelled
- * 11                   continue with the next edge
- * 12               w <- G.adjacentVertex(t,e)
- * 13               if vertex w is not discovered and not explored
- * 14                   label e as tree-edge
- * 15                   label w as discovered
- * 16                   S.push(w)
- * 17                   continue at 5
- * 18               else if vertex w is discovered
- * 19                   label e as back-edge
- * 20               else
- * 21                   // vertex w is explored
- * 22                   label e as forward- or cross-edge
- * 23           label t as explored
- * 24           S.pop()
- *
- * convention:
- * 0x10 - discovered
- * 0x11 - discovered and fall-through edge labelled
- * 0x12 - discovered and fall-through and branch edges labelled
- * 0x20 - explored
- */
-
-enum {
-	DISCOVERED = 0x10,
-	EXPLORED = 0x20,
-	FALLTHROUGH = 1,
-	BRANCH = 2,
-};
-
 #define STATE_LIST_MARK ((struct bpf_verifier_state_list *) -1L)
 
-static int *insn_stack;	/* stack of insns to process */
-static int cur_stack;	/* current stack index */
-static int *insn_state;
-
-/* t, w, e - match pseudo-code above:
- * t - index of current instruction
- * w - next instruction
- * e - edge
- */
-static int push_insn(int t, int w, int e, struct bpf_verifier_env *env)
+static int mark_jump_target(struct bpf_verifier_env *env, int from, int off)
 {
-	if (e == FALLTHROUGH && insn_state[t] >= (DISCOVERED | FALLTHROUGH))
-		return 0;
-
-	if (e == BRANCH && insn_state[t] >= (DISCOVERED | BRANCH))
-		return 0;
+	int to = from + off + 1;
 
-	if (w < 0 || w >= env->prog->len) {
-		verbose(env, "jump out of range from insn %d to %d\n", t, w);
+	if (to < 0 || to >= env->prog->len) {
+		verbose(env, "jump out of range from insn %d to %d\n", from, to);
 		return -EINVAL;
 	}
-
-	if (e == BRANCH)
-		/* mark branch target for state pruning */
-		env->explored_states[w] = STATE_LIST_MARK;
-
-	if (insn_state[w] == 0) {
-		/* tree-edge */
-		insn_state[t] = DISCOVERED | e;
-		insn_state[w] = DISCOVERED;
-		if (cur_stack >= env->prog->len)
-			return -E2BIG;
-		insn_stack[cur_stack++] = w;
-		return 1;
-	} else if ((insn_state[w] & 0xF0) == DISCOVERED) {
-		verbose(env, "back-edge from insn %d to %d\n", t, w);
-		return -EINVAL;
-	} else if (insn_state[w] == EXPLORED) {
-		/* forward- or cross-edge */
-		insn_state[t] = DISCOVERED | e;
-	} else {
-		verbose(env, "insn state internal bug\n");
-		return -EFAULT;
-	}
+	/* mark branch target for state pruning */
+	env->explored_states[to] = STATE_LIST_MARK;
 	return 0;
 }
 
-/* non-recursive depth-first-search to detect loops in BPF program
- * loop == back-edge in directed graph
- */
-static int check_cfg(struct bpf_verifier_env *env)
+/* Mark jump/branch targets for control flow tracking & state pruning */
+static int prepare_cfg_marks(struct bpf_verifier_env *env)
 {
 	struct bpf_insn *insns = env->prog->insnsi;
-	int insn_cnt = env->prog->len;
-	int ret = 0;
-	int i, t;
+	int insn_cnt = env->prog->len, i, err = 0;
 
-	insn_state = kcalloc(insn_cnt, sizeof(int), GFP_KERNEL);
-	if (!insn_state)
-		return -ENOMEM;
-
-	insn_stack = kcalloc(insn_cnt, sizeof(int), GFP_KERNEL);
-	if (!insn_stack) {
-		kfree(insn_state);
-		return -ENOMEM;
-	}
+	for (i = 0; i < insn_cnt; i++) {
+		u8 opcode = BPF_OP(insns[i].code);
 
-	insn_state[0] = DISCOVERED; /* mark 1st insn as discovered */
-	insn_stack[0] = 0; /* 0 is the first instruction */
-	cur_stack = 1;
-
-peek_stack:
-	if (cur_stack == 0)
-		goto check_state;
-	t = insn_stack[cur_stack - 1];
-
-	if (BPF_CLASS(insns[t].code) == BPF_JMP) {
-		u8 opcode = BPF_OP(insns[t].code);
-
-		if (opcode == BPF_EXIT) {
-			goto mark_explored;
-		} else if (opcode == BPF_CALL) {
-			ret = push_insn(t, t + 1, FALLTHROUGH, env);
-			if (ret == 1)
-				goto peek_stack;
-			else if (ret < 0)
-				goto err_free;
-			if (t + 1 < insn_cnt)
-				env->explored_states[t + 1] = STATE_LIST_MARK;
-			if (insns[t].src_reg == BPF_PSEUDO_CALL) {
-				env->explored_states[t] = STATE_LIST_MARK;
-				ret = push_insn(t, t + insns[t].imm + 1, BRANCH, env);
-				if (ret == 1)
-					goto peek_stack;
-				else if (ret < 0)
-					goto err_free;
-			}
-		} else if (opcode == BPF_JA) {
-			if (BPF_SRC(insns[t].code) != BPF_K) {
-				ret = -EINVAL;
-				goto err_free;
+		if (BPF_CLASS(insns[i].code) != BPF_JMP) {
+			if (i + 1 == insn_cnt) {
+				verbose(env, "no exit/jump at end of program (insn %d)\n",
+					i);
+				return -EINVAL;
 			}
-			/* unconditional jump with single edge */
-			ret = push_insn(t, t + insns[t].off + 1,
-					FALLTHROUGH, env);
-			if (ret == 1)
-				goto peek_stack;
-			else if (ret < 0)
-				goto err_free;
-			/* tell verifier to check for equivalent states
-			 * after every call and jump
-			 */
-			if (t + 1 < insn_cnt)
-				env->explored_states[t + 1] = STATE_LIST_MARK;
-		} else {
-			/* conditional jump with two edges */
-			env->explored_states[t] = STATE_LIST_MARK;
-			ret = push_insn(t, t + 1, FALLTHROUGH, env);
-			if (ret == 1)
-				goto peek_stack;
-			else if (ret < 0)
-				goto err_free;
-
-			ret = push_insn(t, t + insns[t].off + 1, BRANCH, env);
-			if (ret == 1)
-				goto peek_stack;
-			else if (ret < 0)
-				goto err_free;
+			continue;
 		}
-	} else {
-		/* all other non-branch instructions with single
-		 * fall-through edge
-		 */
-		ret = push_insn(t, t + 1, FALLTHROUGH, env);
-		if (ret == 1)
-			goto peek_stack;
-		else if (ret < 0)
-			goto err_free;
-	}
-
-mark_explored:
-	insn_state[t] = EXPLORED;
-	if (cur_stack-- <= 0) {
-		verbose(env, "pop stack internal bug\n");
-		ret = -EFAULT;
-		goto err_free;
+		switch (opcode) {
+		case BPF_EXIT:
+			continue;
+		case BPF_CALL:
+			/* following insn is target of function return */
+			err = mark_jump_target(env, i, 0);
+			if (err)
+				return err;
+			if (insns[i].src_reg == BPF_PSEUDO_CALL)
+				err = mark_jump_target(env, i, insns[i].imm);
+			break;
+		case BPF_JA:
+			/* unconditional jump; mark target */
+			err = mark_jump_target(env, i, insns[i].off);
+			break;
+		default:
+			/* conditional jump; mark following insn and target */
+			err = mark_jump_target(env, i, 0);
+			if (err)
+				return err;
+			err = mark_jump_target(env, i, insns[i].off);
+		}
+		if (err)
+			return err;
 	}
-	goto peek_stack;
 
-check_state:
+	/* Second pass, to detect statically unreachable code.  Any target of
+	 * a jump or call will have a state-list mark
+	 */
 	for (i = 0; i < insn_cnt; i++) {
-		if (insn_state[i] != EXPLORED) {
-			verbose(env, "unreachable insn %d\n", i);
-			ret = -EINVAL;
-			goto err_free;
-		}
+		/* insn following a non-jump is statically reachable */
+		if (BPF_CLASS(insns[i].code) != BPF_JMP)
+			continue;
+		/* insn following a CALL or conditional jump will have been
+		 * marked by that insn (see above).  So if following insn is
+		 * not marked, then we're an EXIT or JA and thus the
+		 * following insn is statically unreachable.
+		 * If we're last insn, and we're not an EXIT or JA, then first
+		 * pass would have complained in mark_jump_target().
+		 */
+		if (i + 1 < insn_cnt)
+			if (env->explored_states[i + 1] != STATE_LIST_MARK) {
+				verbose(env, "unreachable insn %d\n", i + 1);
+				return -EINVAL;
+			}
 	}
-	ret = 0; /* cfg looks good */
-
-err_free:
-	kfree(insn_state);
-	kfree(insn_stack);
-	return ret;
+	return 0;
 }
 
 /* check %cur's range satisfies %old's */
@@ -4287,11 +4163,24 @@ static int propagate_liveness(struct bpf_verifier_env *env,
 	return err;
 }
 
-static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
+static struct bpf_func_state *find_loop(struct bpf_verifier_env *env)
+{
+	struct bpf_func_state *cur = cur_frame(env);
+	int insn_idx = cur->insn_idx;
+
+	while ((cur = cur->parent) != NULL)
+		if (cur->insn_idx == insn_idx)
+			return cur;
+	return NULL;
+}
+
+static int is_state_visited(struct bpf_verifier_env *env, int prev_insn_idx,
+			    int insn_idx)
 {
 	struct bpf_verifier_state_list *new_sl;
 	struct bpf_verifier_state_list *sl;
 	struct bpf_verifier_state *cur = env->cur_state, *new;
+	struct bpf_func_state *old;
 	int i, j, err;
 
 	sl = env->explored_states[insn_idx];
@@ -4301,6 +4190,14 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 		 */
 		return 0;
 
+	/* Check our parentage chain: have we looped? */
+	old = find_loop(env);
+	if (old != NULL) {
+		verbose(env, "back-edge from insn %d to %d\n", prev_insn_idx,
+			insn_idx);
+		return -EINVAL;
+	}
+
 	while (sl != STATE_LIST_MARK) {
 		if (states_equal(env, &sl->state, cur)) {
 			/* reached equivalent register/stack state,
@@ -4342,7 +4239,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 	}
 	new_sl->next = env->explored_states[insn_idx];
 	env->explored_states[insn_idx] = new_sl;
-	/* connect new state to parentage chain */
+	/* connect new state's regs to parentage chain */
 	for (i = 0; i < BPF_REG_FP; i++)
 		cur_regs(env)[i].parent = &new->frame[new->curframe]->regs[i];
 	/* clear write marks in current state: the writes we did are not writes
@@ -4359,6 +4256,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 		struct bpf_func_state *frame = cur->frame[j];
 		struct bpf_func_state *newframe = new->frame[j];
 
+		frame->parent = newframe;
 		for (i = 0; i < frame->allocated_stack / BPF_REG_SIZE; i++) {
 			frame->stack[i].spilled_ptr.live = REG_LIVE_NONE;
 			frame->stack[i].spilled_ptr.parent =
@@ -4404,7 +4302,7 @@ static int do_check(struct bpf_verifier_env *env)
 		u8 class;
 		int err;
 
-		if (insn_idx >= insn_cnt) {
+		if (insn_idx < 0 || insn_idx >= insn_cnt) {
 			verbose(env, "invalid insn idx %d insn_cnt %d\n",
 				insn_idx, insn_cnt);
 			return -EFAULT;
@@ -4433,7 +4331,7 @@ static int do_check(struct bpf_verifier_env *env)
 		aux->subprogno = frame->subprogno;
 		aux->seen = true;
 
-		err = is_state_visited(env, insn_idx);
+		err = is_state_visited(env, prev_insn_idx, insn_idx);
 		if (err < 0)
 			return err;
 		if (err == 1) {
@@ -5581,7 +5479,7 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr)
 
 	env->allow_ptr_leaks = capable(CAP_SYS_ADMIN);
 
-	ret = check_cfg(env);
+	ret = prepare_cfg_marks(env);
 	if (ret < 0)
 		goto skip_full_check;
 
diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 9c7531887ee3..722a16b2e9c4 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -644,14 +644,13 @@ static struct bpf_test tests[] = {
 		.insns = {
 			BPF_ALU64_REG(BPF_MOV, BPF_REG_0, BPF_REG_2),
 		},
-		.errstr = "jump out of range",
+		.errstr = "no exit/jump at end of program",
 		.result = REJECT,
 	},
 	{
 		"loop (back-edge)",
 		.insns = {
 			BPF_JMP_IMM(BPF_JA, 0, 0, -1),
-			BPF_EXIT_INSN(),
 		},
 		.errstr = "back-edge",
 		.result = REJECT,
@@ -659,11 +658,11 @@ static struct bpf_test tests[] = {
 	{
 		"loop2 (back-edge)",
 		.insns = {
+			BPF_MOV64_IMM(BPF_REG_0, 0),
 			BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 			BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 			BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
 			BPF_JMP_IMM(BPF_JA, 0, 0, -4),
-			BPF_EXIT_INSN(),
 		},
 		.errstr = "back-edge",
 		.result = REJECT,
@@ -671,6 +670,7 @@ static struct bpf_test tests[] = {
 	{
 		"conditional loop",
 		.insns = {
+			BPF_MOV64_IMM(BPF_REG_0, 0),
 			BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 			BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 			BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
@@ -9338,7 +9338,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "back-edge from insn 0 to 0",
+		.errstr = "frames is too deep",
 		.result = REJECT,
 	},
 	{
@@ -9666,7 +9666,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "back-edge",
+		.errstr = "frames is too deep",
 		.result = REJECT,
 	},
 	{
@@ -9678,7 +9678,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "back-edge",
+		.errstr = "frames is too deep",
 		.result = REJECT,
 	},
 	{


* [RFC PATCH bpf-next 06/12] bpf/verifier: do not walk impossible branches
  2018-02-23 17:35 [RFC/PoC PATCH bpf-next 00/12] bounded loops for eBPF Edward Cree
                   ` (4 preceding siblings ...)
  2018-02-23 17:40 ` [RFC PATCH bpf-next 05/12] bpf/verifier: detect loops dynamically rather than statically Edward Cree
@ 2018-02-23 17:41 ` Edward Cree
  2018-02-23 17:41 ` [RFC PATCH bpf-next 07/12] bpf/verifier: allow bounded loops with JLT/true back-edge Edward Cree
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Edward Cree @ 2018-02-23 17:41 UTC (permalink / raw)
  To: netdev, Alexei Starovoitov, Daniel Borkmann

If a conditional branch would produce an inconsistent set of bounds (e.g.
 umin_value > umax_value) on one leg, then that leg cannot actually happen
 so we don't need to walk it.
This will be necessary for bounded loop support so that we stop walking round
 the loop once the termination condition is known to have been met.
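As a purely illustrative fragment (not one of the selftests): if r1 is
 already known to be at most 3 (umax_value = 3) and the walk reaches

	BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 5, 2),

 then the taken leg would need umin_value = 6 while umax_value is still 3,
 so reg_is_impossible() holds for that leg and only the fall-through is
 walked.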

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
 kernel/bpf/verifier.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 52 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e7d898f4fd29..7a8ae633d0c3 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -472,6 +472,22 @@ static int pop_stack(struct bpf_verifier_env *env, int *prev_insn_idx,
 	return 0;
 }
 
+/* Throw away top element of stack.  Like pop_stack but don't update cur_state */
+static int unpush_stack(struct bpf_verifier_env *env)
+{
+	struct bpf_verifier_stack_elem *elem, *head = env->head;
+
+	if (env->head == NULL)
+		return -ENOENT;
+
+	elem = head->next;
+	free_verifier_state(&head->st, false);
+	kfree(head);
+	env->head = elem;
+	env->stack_size--;
+	return 0;
+}
+
 static struct bpf_verifier_state *push_stack(struct bpf_verifier_env *env,
 					     int insn_idx, int prev_insn_idx)
 {
@@ -2376,8 +2392,10 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 	if ((known && (smin_val != smax_val || umin_val != umax_val)) ||
 	    smin_val > smax_val || umin_val > umax_val) {
 		/* Taint dst register if offset had invalid bounds derived from
-		 * e.g. dead branches.
+		 * e.g. dead branches.  This should be impossible now, since we
+		 * prune dead branches in check_cond_jmp_op().
 		 */
+		WARN_ON_ONCE(1);
 		__mark_reg_unknown(dst_reg);
 		return 0;
 	}
@@ -3455,6 +3473,13 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
 	return true;
 }
 
+static bool reg_is_impossible(struct bpf_reg_state reg)
+{
+	return reg.type == SCALAR_VALUE &&
+	       (reg.umin_value > reg.umax_value ||
+		reg.smin_value > reg.smax_value);
+}
+
 static int check_cond_jmp_op(struct bpf_verifier_env *env,
 			     struct bpf_insn *insn, int *insn_idx)
 {
@@ -3574,6 +3599,32 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 	}
 	if (env->log.level)
 		print_verifier_state(env, this_branch->frame[this_branch->curframe]);
+
+	if (reg_is_impossible(other_branch_regs[insn->src_reg]) ||
+	    reg_is_impossible(other_branch_regs[insn->dst_reg])) {
+		/* if (false) goto pc+off;
+		 * only follow fall-through branch, since
+		 * that's where the program will go
+		 */
+		verbose(env, "pruning impossible jump\n");
+		unpush_stack(env);
+	} else if (reg_is_impossible(regs[insn->src_reg]) ||
+		   reg_is_impossible(regs[insn->dst_reg])) {
+		/* if (true) goto pc+off;
+		 * only follow the goto, ignore fall-through
+		 */
+		verbose(env, "pruning impossible fall-through\n");
+		err = pop_stack(env, NULL, insn_idx);
+		if (err < 0) {
+			if (err != -ENOENT)
+				return err;
+		}
+		/* pushed goto included the +1, which caller will try to apply
+		 * so let's undo it here.
+		 */
+		(*insn_idx)--;
+	}
+
 	return 0;
 }
 


* [RFC PATCH bpf-next 07/12] bpf/verifier: allow bounded loops with JLT/true back-edge
  2018-02-23 17:35 [RFC/PoC PATCH bpf-next 00/12] bounded loops for eBPF Edward Cree
                   ` (5 preceding siblings ...)
  2018-02-23 17:41 ` [RFC PATCH bpf-next 06/12] bpf/verifier: do not walk impossible branches Edward Cree
@ 2018-02-23 17:41 ` Edward Cree
  2018-02-23 22:27   ` John Fastabend
  2018-02-23 17:41 ` [RFC PATCH bpf-next 08/12] bpf/verifier: selftests for bounded loops Edward Cree
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 17+ messages in thread
From: Edward Cree @ 2018-02-23 17:41 UTC (permalink / raw)
  To: netdev, Alexei Starovoitov, Daniel Borkmann

Where the register umin_value is increasing sufficiently fast, the loop
 will terminate after a reasonable number of iterations, so we can allow
 the verifier to keep walking it.
The semantics of prev_insn_idx are changed slightly: it now always refers
 to the last jump insn walked, even when that jump was not taken.  Also it
 is moved into env, alongside a new bool last_jump_taken that records
 whether the jump was taken or not.  This is needed so that, when we detect
 a loop, we can see how we ended up in the back-edge: what was the jump
 condition, and is it the true- or the false-branch that ends up looping?
We record in our explored_states whether they represent a conditional jump
 and whether that jump produced a good loop bound.  This allows us to make
 unconditional jumps "not responsible" for ensuring the loop is bounded, as
 long as there is a conditional jump somewhere in the loop body; whereas a
 conditional jump must ensure that there is a bounding conditional somewhere
 in the loop body.
Recursive calls have to remain forbidden because the calculation of maximum
 stack depth assumes the call graph is acyclic, even though the loop
 handling code could determine whether the recursion was bounded.
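To make that concrete, the simplest loop this accepts (it appears as the
 "bounded loop, count to 4" selftest in the next patch) is:

	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2),
	BPF_EXIT_INSN(),

Each time the back-edge is revisited, is_loop_bounded() sees a taken JLT
 against a constant and is_reg_increasing() checks that r0's umin_value has
 grown quickly enough to reach 4 within BPF_LINEAR_LOOP_LIMIT (16) further
 iterations; the same loop counting to 20 is rejected as increasing too
 slowly.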

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
 include/linux/bpf_verifier.h |   5 +
 kernel/bpf/verifier.c        | 295 +++++++++++++++++++++++++++++++++++--------
 2 files changed, 244 insertions(+), 56 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 24ddbc1ed3b2..6abd484391f4 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -120,8 +120,11 @@ struct bpf_func_state {
 	 * zero == main subprog
 	 */
 	u32 subprogno;
+
 	/* loop detection; points into an explored_state */
 	struct bpf_func_state *parent;
+	/* These flags are only meaningful in an explored_state, not cur_state */
+	bool in_loop, bounded_loop, conditional;
 
 	/* should be second to last. See copy_func_state() */
 	int allocated_stack;
@@ -197,6 +200,8 @@ struct bpf_verifier_env {
 	u32 id_gen;			/* used to generate unique reg IDs */
 	bool allow_ptr_leaks;
 	bool seen_direct_write;
+	int prev_insn_idx;		/* last branch (BPF_JMP-class) insn */
+	bool last_jump_taken;		/* Was branch at prev_insn_idx taken? */
 	struct bpf_insn_aux_data *insn_aux_data; /* array of per-insn state */
 	struct bpf_verifer_log log;
 	struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS];
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 7a8ae633d0c3..9828cb0cde73 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -445,8 +445,7 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state,
 	return 0;
 }
 
-static int pop_stack(struct bpf_verifier_env *env, int *prev_insn_idx,
-		     int *insn_idx)
+static int pop_stack(struct bpf_verifier_env *env, int *insn_idx)
 {
 	struct bpf_verifier_state *cur = env->cur_state;
 	struct bpf_verifier_stack_elem *elem, *head = env->head;
@@ -462,8 +461,7 @@ static int pop_stack(struct bpf_verifier_env *env, int *prev_insn_idx,
 	}
 	if (insn_idx)
 		*insn_idx = head->insn_idx;
-	if (prev_insn_idx)
-		*prev_insn_idx = head->prev_insn_idx;
+	env->prev_insn_idx = head->prev_insn_idx;
 	elem = head->next;
 	free_verifier_state(&head->st, false);
 	kfree(head);
@@ -516,7 +514,7 @@ static struct bpf_verifier_state *push_stack(struct bpf_verifier_env *env,
 	free_verifier_state(env->cur_state, true);
 	env->cur_state = NULL;
 	/* pop all elements and return */
-	while (!pop_stack(env, NULL, NULL));
+	while (!pop_stack(env, NULL));
 	return NULL;
 }
 
@@ -730,10 +728,11 @@ static void init_reg_state(struct bpf_verifier_env *env,
 
 static void init_func_state(struct bpf_verifier_env *env,
 			    struct bpf_func_state *state,
+			    struct bpf_func_state *parent,
 			    int entry, int frameno, int subprogno)
 {
 	state->insn_idx = entry;
-	state->parent = NULL;
+	state->parent = parent;
 	state->frameno = frameno;
 	state->subprogno = subprogno;
 	init_reg_state(env, state);
@@ -2112,6 +2111,7 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	 * callee can read/write into caller's stack
 	 */
 	init_func_state(env, callee,
+			caller->parent /* parent state for loop detection */,
 			target /* entry point */,
 			state->curframe + 1 /* frameno within this callchain */,
 			subprog /* subprog number within this prog */);
@@ -3480,6 +3480,12 @@ static bool reg_is_impossible(struct bpf_reg_state reg)
 		reg.smin_value > reg.smax_value);
 }
 
+enum cond_jmp_result {
+	COND_JMP_EITHER = 0, /* Both jump and fallthrough are possible */
+	COND_JMP_FALLTHROUGH, /* Jump branch is never taken */
+	COND_JMP_FOLLOW, /* Fallthrough branch is never taken */
+};
+
 static int check_cond_jmp_op(struct bpf_verifier_env *env,
 			     struct bpf_insn *insn, int *insn_idx)
 {
@@ -3503,7 +3509,7 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 
 		/* check src1 operand */
 		err = check_reg_arg(env, insn->src_reg, SRC_OP);
-		if (err)
+		if (err < 0)
 			return err;
 
 		if (is_pointer_value(env, insn->src_reg)) {
@@ -3520,35 +3526,34 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 
 	/* check src2 operand */
 	err = check_reg_arg(env, insn->dst_reg, SRC_OP);
-	if (err)
+	if (err < 0)
 		return err;
 
 	dst_reg = &regs[insn->dst_reg];
 
+	other_branch = push_stack(env, *insn_idx + insn->off + 1, *insn_idx);
+	if (!other_branch)
+		return -EFAULT;
+
 	/* detect if R == 0 where R was initialized to zero earlier */
 	if (BPF_SRC(insn->code) == BPF_K &&
 	    (opcode == BPF_JEQ || opcode == BPF_JNE) &&
 	    dst_reg->type == SCALAR_VALUE &&
 	    tnum_is_const(dst_reg->var_off)) {
 		if ((opcode == BPF_JEQ && dst_reg->var_off.value == insn->imm) ||
-		    (opcode == BPF_JNE && dst_reg->var_off.value != insn->imm)) {
+		    (opcode == BPF_JNE && dst_reg->var_off.value != insn->imm))
 			/* if (imm == imm) goto pc+off;
 			 * only follow the goto, ignore fall-through
 			 */
-			*insn_idx += insn->off;
-			return 0;
-		} else {
+			return COND_JMP_FOLLOW;
+		else
 			/* if (imm != imm) goto pc+off;
 			 * only follow fall-through branch, since
 			 * that's where the program will go
 			 */
-			return 0;
-		}
+			return COND_JMP_FALLTHROUGH;
 	}
 
-	other_branch = push_stack(env, *insn_idx + insn->off + 1, *insn_idx);
-	if (!other_branch)
-		return -EFAULT;
 	other_branch_regs = other_branch->frame[other_branch->curframe]->regs;
 
 	/* detect if we are comparing against a constant value so we can adjust
@@ -3599,33 +3604,21 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 	}
 	if (env->log.level)
 		print_verifier_state(env, this_branch->frame[this_branch->curframe]);
-
 	if (reg_is_impossible(other_branch_regs[insn->src_reg]) ||
-	    reg_is_impossible(other_branch_regs[insn->dst_reg])) {
+	    reg_is_impossible(other_branch_regs[insn->dst_reg]))
 		/* if (false) goto pc+off;
 		 * only follow fall-through branch, since
 		 * that's where the program will go
 		 */
-		verbose(env, "pruning impossible jump\n");
-		unpush_stack(env);
-	} else if (reg_is_impossible(regs[insn->src_reg]) ||
-		   reg_is_impossible(regs[insn->dst_reg])) {
+		return COND_JMP_FALLTHROUGH;
+	else if (reg_is_impossible(regs[insn->src_reg]) ||
+		 reg_is_impossible(regs[insn->dst_reg]))
 		/* if (true) goto pc+off;
 		 * only follow the goto, ignore fall-through
 		 */
-		verbose(env, "pruning impossible fall-through\n");
-		err = pop_stack(env, NULL, insn_idx);
-		if (err < 0) {
-			if (err != -ENOENT)
-				return err;
-		}
-		/* pushed goto included the +1, which caller will try to apply
-		 * so let's undo it here.
-		 */
-		(*insn_idx)--;
-	}
+		return COND_JMP_FOLLOW;
 
-	return 0;
+	return COND_JMP_EITHER;
 }
 
 /* return the map pointer stored inside BPF_LD_IMM64 instruction */
@@ -4214,23 +4207,141 @@ static int propagate_liveness(struct bpf_verifier_env *env,
 	return err;
 }
 
-static struct bpf_func_state *find_loop(struct bpf_verifier_env *env)
+static struct bpf_func_state *find_loop(struct bpf_verifier_env *env,
+					bool *saw_cond, bool *saw_bound)
 {
 	struct bpf_func_state *cur = cur_frame(env);
 	int insn_idx = cur->insn_idx;
 
-	while ((cur = cur->parent) != NULL)
+	while ((cur = cur->parent) != NULL) {
 		if (cur->insn_idx == insn_idx)
 			return cur;
+		if (cur->conditional && saw_cond)
+			*saw_cond = true;
+		if (cur->bounded_loop && saw_bound)
+			*saw_bound = true;
+	}
 	return NULL;
 }
 
-static int is_state_visited(struct bpf_verifier_env *env, int prev_insn_idx,
-			    int insn_idx)
+#define BPF_LINEAR_LOOP_LIMIT	16
+/* Test whether register has increased fast enough between %old and %new that it
+ * will reach %val in a reasonably small number of loop iterations.
+ * We pay attention only to the minimum value, as that's the worst case.
+ */
+static bool is_reg_increasing(struct bpf_verifier_env *env, u8 regno,
+			      struct bpf_reg_state *old,
+			      struct bpf_reg_state *new, u64 val)
+{
+	u64 oldval = old->umin_value;
+	u64 newval = new->umin_value;
+
+	if (oldval >= newval) {
+		verbose(env, "loop variable r%d is not increasing\n", regno);
+		return false;
+	}
+	/* Assuming the loop variable is linear in the iteration count, check
+	 * that we will be done in at most BPF_LINEAR_LOOP_LIMIT iterations.
+	 * If that assumption is in fact false, since we are checking this on
+	 * every iteration, we may approach %val by an exponential decay, but
+	 * since we're working in integers, that still ensures we get there in
+	 * a reasonable length of time (687 iterations in the absolute worst
+	 * case, from 64 / log2(16/15).)
+	 */
+	/* TODO think about overflow, this probably needs to be more careful */
+	if ((newval - oldval) * BPF_LINEAR_LOOP_LIMIT + oldval <= val) {
+		verbose(env, "loop variable r%d is increasing too slowly\n",
+			regno);
+		return false;
+	}
+	return true;
+}
+
+static bool is_conditional_jump(struct bpf_verifier_env *env)
+{
+	struct bpf_insn *insn = env->prog->insnsi + env->prev_insn_idx;
+	u8 opcode;
+
+	if (env->prev_insn_idx < 0)
+		return false;
+	opcode = BPF_OP(insn->code);
+	/* Should be impossible, how else did we get here? */
+	if (BPF_CLASS(insn->code) != BPF_JMP) {
+		verbose(env, "back-edge is not a jump\n");
+		return false;
+	}
+	return !(opcode == BPF_JA || opcode == BPF_CALL || opcode == BPF_EXIT);
+}
+
+static bool is_loop_bounded(struct bpf_verifier_env *env, int insn_idx,
+			    struct bpf_func_state *old)
+{
+	struct bpf_insn *insn = env->prog->insnsi + env->prev_insn_idx;
+	struct bpf_func_state *new = cur_frame(env);
+	struct bpf_reg_state *oldreg, *newreg;
+	u8 opcode = BPF_OP(insn->code);
+
+	/* If our frames somehow don't match up, just say it's not safe. */
+	if (old->frameno != new->frameno) {
+		verbose(env, "looped from frame %d into %d\n", old->frameno,
+			new->frameno);
+		return false;
+	}
+	if (env->prev_insn_idx < 0) {
+		verbose(env, "back-edge without conditional jump\n");
+		return false;
+	}
+	/* Should be impossible, how else did we get here? */
+	if (BPF_CLASS(insn->code) != BPF_JMP) {
+		verbose(env, "back-edge is not a jump\n");
+		return false;
+	}
+	/* For now, we don't support variable as the loop bound */
+	if (BPF_SRC(insn->code) != BPF_K) {
+		verbose(env, "loop with variable bound not supported\n");
+		return false;
+	}
+	oldreg = &old->regs[insn->dst_reg];
+	newreg = &new->regs[insn->dst_reg];
+	/* Loop variable must be a scalar */
+	if (oldreg->type != SCALAR_VALUE) {
+		verbose(env, "loop variable r%d is not a scalar\n",
+			insn->dst_reg);
+		return false;
+	}
+	if (newreg->type != SCALAR_VALUE) {
+		verbose(env, "loop variable r%d is no longer a scalar\n",
+			insn->dst_reg);
+		return false;
+	}
+	switch (opcode) {
+	case BPF_JA: /* unconditional branch = infinite loop */
+	case BPF_CALL: /* weird.  Also unconditional */
+	case BPF_EXIT: /* also weird.  Also unconditional */
+		verbose(env, "loop with unconditional BPF_JMP insn\n");
+		return false;
+	case BPF_JLT:
+		/* e.g. for (i=0; i<k; i++) */
+		if (env->last_jump_taken)
+			return is_reg_increasing(env, insn->dst_reg, oldreg,
+						 newreg, insn->imm);
+		/* else something like "while(true) if (i<k) break;", which
+		 * we don't support yet
+		 */
+		verbose(env, "loop on conditional fallthrough\n");
+		return false;
+	default:
+		/* TODO handle other cases */
+		verbose(env, "loop with this opcode not supported (yet)\n");
+		return false;
+	}
+}
+
+static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 {
 	struct bpf_verifier_state_list *new_sl;
 	struct bpf_verifier_state_list *sl;
 	struct bpf_verifier_state *cur = env->cur_state, *new;
+	bool saw_cond = false, saw_bound = false;
+	bool cond = false, bounded = false;
 	struct bpf_func_state *old;
 	int i, j, err;
 
@@ -4241,16 +4352,66 @@ static int is_state_visited(struct bpf_verifier_env *env, int prev_insn_idx,
 		 */
 		return 0;
 
+	cond = is_conditional_jump(env);
 	/* Check our parentage chain: have we looped? */
-	old = find_loop(env);
+	old = find_loop(env, &saw_cond, &saw_bound);
 	if (old != NULL) {
-		verbose(env, "back-edge from insn %d to %d\n", prev_insn_idx,
-			insn_idx);
-		return -EINVAL;
+		if (old->frameno != cur->curframe) {
+			/* if it's in our parentage chain, then it called us;
+			 * but we're the same insn, so in the same subprog, so
+			 * recursion has occurred.
+			 * The loop detection could handle recursion fine (it
+			 * distinguishes between bounded and unbounded
+			 * recursion, and the latter would quickly run out of
+			 * call stack anyway), but the stack max depth
+			 * calculation can't deal with it (because it doesn't
+			 * know how deeply we might recurse).
+			 */
+			verbose(env, "recursive call from insn %d to %d\n",
+				env->prev_insn_idx, insn_idx);
+			return -EINVAL;
+		}
+		/* Mark old state as not prunable */
+		old->in_loop = true;
+		if (cond)
+			bounded = is_loop_bounded(env, insn_idx, old);
+		if (bounded) {
+			verbose(env, "following bounded loop from insn %d to %d\n",
+				env->prev_insn_idx, insn_idx);
+		} else if (old->bounded_loop) {
+			/* This insn was managing the loop last time.  So we
+			 * have to insist it continues to do so, to prevent the
+			 * 'three-jump trick' (see test_verifier.c for example)
+			 * whereby at least one jump appears to be making
+			 * progress at any given time but none of them ever get
+			 * anywhere because they take it in turns to undo their
+			 * progress on alternate passages through the loop.
+			 */
+			verbose(env, "loop from insn %d to %d ceased to be bounded\n",
+				env->prev_insn_idx, insn_idx);
+			return -EINVAL;
+		} else if (saw_bound) {
+			verbose(env, "following loop from insn %d to %d, bounded elsewhere\n",
+				env->prev_insn_idx, insn_idx);
+		} else if (saw_cond && !cond) {
+			/* We're not a conditional, but there's a conditional
+			 * somewhere else in the loop.  So they will be
+			 * responsible for ensuring the loop is bounded (it's
+			 * possible we've been revisited but they haven't, which
+			 * is why they might not have bounded_loop set).
+			 */
+			verbose(env, "following loop from insn %d to %d for now, condition is elsewhere\n",
+				env->prev_insn_idx, insn_idx);
+		} else {
+			verbose(env, "back-edge from insn %d to %d\n",
+				env->prev_insn_idx, insn_idx);
+			return -EINVAL;
+		}
 	}
 
 	while (sl != STATE_LIST_MARK) {
-		if (states_equal(env, &sl->state, cur)) {
+		if (!sl->state.frame[sl->state.curframe]->in_loop &&
+		    states_equal(env, &sl->state, cur)) {
 			/* reached equivalent register/stack state,
 			 * prune the search.
 			 * Registers read by the continuation are read by us.
@@ -4272,9 +4433,10 @@ static int is_state_visited(struct bpf_verifier_env *env, int prev_insn_idx,
 	/* there were no equivalent states, remember current one.
 	 * technically the current state is not proven to be safe yet,
 	 * but it will either reach outer most bpf_exit (which means it's safe)
-	 * or it will be rejected. Since there are no loops, we won't be
-	 * seeing this tuple (frame[0].insn_idx, frame[1].insn_idx, .. insn_idx)
-	 * again on the way to bpf_exit
+	 * or it will be rejected.  The only way we can see this tuple
+	 * (frame[0].insn_idx, frame[1].insn_idx, .. insn_idx) again on the way
+	 * to bpf_exit is in a loop which will be caught above and marked with
+	 * the 'in_loop' flag.
 	 */
 	new_sl = kzalloc(sizeof(struct bpf_verifier_state_list), GFP_KERNEL);
 	if (!new_sl)
@@ -4288,6 +4450,8 @@ static int is_state_visited(struct bpf_verifier_env *env, int prev_insn_idx,
 		kfree(new_sl);
 		return err;
 	}
+	new->frame[new->curframe]->conditional = cond;
+	new->frame[new->curframe]->bounded_loop = bounded;
 	new_sl->next = env->explored_states[insn_idx];
 	env->explored_states[insn_idx] = new_sl;
 	/* connect new state's regs to parentage chain */
@@ -4323,7 +4487,7 @@ static int do_check(struct bpf_verifier_env *env)
 	struct bpf_insn *insns = env->prog->insnsi;
 	struct bpf_reg_state *regs;
 	int insn_cnt = env->prog->len, i;
-	int insn_idx, prev_insn_idx = 0;
+	int insn_idx;
 	int insn_processed = 0;
 	int mainprogno;
 	bool do_print_state = false;
@@ -4342,7 +4506,9 @@ static int do_check(struct bpf_verifier_env *env)
 	if (mainprogno < 0)
 		return mainprogno;
 	insn_idx = 0;
+	env->prev_insn_idx = -1;
 	init_func_state(env, state->frame[0],
+			NULL /* parent state for loop detection */,
 			insn_idx /* entry point */,
 			0 /* frameno */,
 			mainprogno /* subprogno */);
@@ -4382,7 +4548,7 @@ static int do_check(struct bpf_verifier_env *env)
 		aux->subprogno = frame->subprogno;
 		aux->seen = true;
 
-		err = is_state_visited(env, prev_insn_idx, insn_idx);
+		err = is_state_visited(env, insn_idx);
 		if (err < 0)
 			return err;
 		if (err == 1) {
@@ -4390,7 +4556,7 @@ static int do_check(struct bpf_verifier_env *env)
 			if (env->log.level) {
 				if (do_print_state)
 					verbose(env, "\nfrom %d to %d: safe\n",
-						prev_insn_idx, insn_idx);
+						env->prev_insn_idx, insn_idx);
 				else
 					verbose(env, "%d: safe\n", insn_idx);
 			}
@@ -4405,7 +4571,7 @@ static int do_check(struct bpf_verifier_env *env)
 				verbose(env, "%d:", insn_idx);
 			else
 				verbose(env, "\nfrom %d to %d:",
-					prev_insn_idx, insn_idx);
+					env->prev_insn_idx, insn_idx);
 			print_verifier_state(env, frame);
 			do_print_state = false;
 		}
@@ -4421,7 +4587,7 @@ static int do_check(struct bpf_verifier_env *env)
 
 		if (bpf_prog_is_dev_bound(env->prog->aux)) {
 			err = bpf_prog_offload_verify_insn(env, insn_idx,
-							   prev_insn_idx);
+							   env->prev_insn_idx);
 			if (err)
 				return err;
 		}
@@ -4547,6 +4713,8 @@ static int do_check(struct bpf_verifier_env *env)
 		} else if (class == BPF_JMP) {
 			u8 opcode = BPF_OP(insn->code);
 
+			env->prev_insn_idx = insn_idx;
+			env->last_jump_taken = true;
 			if (opcode == BPF_CALL) {
 				if (BPF_SRC(insn->code) != BPF_K ||
 				    insn->off != 0 ||
@@ -4587,7 +4755,6 @@ static int do_check(struct bpf_verifier_env *env)
 
 				if (state->curframe) {
 					/* exit from nested function */
-					prev_insn_idx = insn_idx;
 					err = prepare_func_exit(env, &insn_idx);
 					if (err)
 						return err;
@@ -4614,7 +4781,11 @@ static int do_check(struct bpf_verifier_env *env)
 				if (err)
 					return err;
 process_bpf_exit:
-				err = pop_stack(env, &prev_insn_idx, &insn_idx);
+				err = pop_stack(env, &insn_idx);
+				/* We are following a path that was pushed to
+				 * the stack, thus was a jump-taken path.
+				 */
+				env->last_jump_taken = true;
 				if (err < 0) {
 					if (err != -ENOENT)
 						return err;
@@ -4625,8 +4796,20 @@ static int do_check(struct bpf_verifier_env *env)
 				}
 			} else {
 				err = check_cond_jmp_op(env, insn, &insn_idx);
-				if (err)
+				if (err < 0)
 					return err;
+				if (err == COND_JMP_FOLLOW) {
+					verbose(env, "pruning impossible fall-through\n");
+					goto process_bpf_exit;
+				}
+				/* We are currently following the fall-through
+				 * path, whether we prune the jump path or not
+				 */
+				env->last_jump_taken = false;
+				if (err == COND_JMP_FALLTHROUGH) {
+					verbose(env, "pruning impossible jump\n");
+					unpush_stack(env);
+				}
 			}
 		} else if (class == BPF_LD) {
 			u8 mode = BPF_MODE(insn->code);
@@ -5541,7 +5724,7 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr)
 	}
 
 skip_full_check:
-	while (!pop_stack(env, NULL, NULL));
+	while (!pop_stack(env, NULL));
 	free_states(env);
 
 	if (ret == 0)


* [RFC PATCH bpf-next 08/12] bpf/verifier: selftests for bounded loops
  2018-02-23 17:35 [RFC/PoC PATCH bpf-next 00/12] bounded loops for eBPF Edward Cree
                   ` (6 preceding siblings ...)
  2018-02-23 17:41 ` [RFC PATCH bpf-next 07/12] bpf/verifier: allow bounded loops with JLT/true back-edge Edward Cree
@ 2018-02-23 17:41 ` Edward Cree
  2018-02-23 17:41 ` [RFC PATCH bpf-next 09/12] bpf/verifier: count still-live children of explored_states Edward Cree
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Edward Cree @ 2018-02-23 17:41 UTC (permalink / raw)
  To: netdev, Alexei Starovoitov, Daniel Borkmann

Mainly consists of tests that broke (or that I expected would break) earlier
 versions of the bounded-loop handling.
Also updated some existing tests to deal with changed error messages,
 programs now being accepted, etc.

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
 tools/testing/selftests/bpf/test_verifier.c | 198 +++++++++++++++++++++++++++-
 1 file changed, 191 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 722a16b2e9c4..fda35a5a0ff9 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -9338,7 +9338,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "frames is too deep",
+		.errstr = "recursive call",
 		.result = REJECT,
 	},
 	{
@@ -9389,8 +9389,8 @@ static struct bpf_test tests[] = {
 			BPF_JMP_IMM(BPF_JA, 0, 0, -6),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "back-edge from insn",
-		.result = REJECT,
+		.result = ACCEPT,
+		.retval = 1,
 	},
 	{
 		"calls: conditional call 4",
@@ -9424,8 +9424,8 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "back-edge from insn",
-		.result = REJECT,
+		.result = ACCEPT,
+		.retval = 1,
 	},
 	{
 		"calls: conditional call 6",
@@ -9666,7 +9666,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "frames is too deep",
+		.errstr = "recursive call",
 		.result = REJECT,
 	},
 	{
@@ -9678,7 +9678,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "frames is too deep",
+		.errstr = "recursive call",
 		.result = REJECT,
 	},
 	{
@@ -11135,6 +11135,190 @@ static struct bpf_test tests[] = {
 		.result = REJECT,
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
 	},
+	{
+		"bounded loop, count to 4",
+		.insns = {
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+			BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2),
+			BPF_EXIT_INSN(),
+		},
+		.result = ACCEPT,
+		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+		.retval = 4,
+	},
+	{
+		"bounded loop, count to 20",
+		.insns = {
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+			BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 20, -2),
+			BPF_EXIT_INSN(),
+		},
+		.result = REJECT,
+		.errstr = "r0 is increasing too slowly",
+		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+	},
+	{
+		"bounded loop, count from positive unknown to 4",
+		.insns = {
+			BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+			BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+			BPF_LD_MAP_FD(BPF_REG_1, 0),
+			BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+			BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+			BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+			BPF_JMP_IMM(BPF_JSLT, BPF_REG_0, 0, 2),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+			BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2),
+			BPF_EXIT_INSN(),
+		},
+		.fixup_map1 = { 3 },
+		.result = ACCEPT,
+		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+		.retval = 4,
+	},
+	{
+		"bounded loop, count from totally unknown to 4",
+		.insns = {
+			BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+			BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+			BPF_LD_MAP_FD(BPF_REG_1, 0),
+			BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+			BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+			BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+			BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2),
+			BPF_EXIT_INSN(),
+		},
+		.fixup_map1 = { 3 },
+		.result = REJECT,
+		.errstr = "r0 is not increasing",
+		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+	},
+	{
+		"bounded loop, count to 4 with equality",
+		.insns = {
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 4, -2),
+			BPF_EXIT_INSN(),
+		},
+		.result = REJECT,
+		.errstr = "loop with this opcode not supported",
+		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+	},
+	{
+		"bounded loop, start in the middle",
+		.insns = {
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_JMP_A(1),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+			BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2),
+			BPF_EXIT_INSN(),
+		},
+		.result = ACCEPT,
+		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+		.retval = 4,
+	},
+	{
+		"bounded loop containing a forward jump",
+		.insns = {
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+			BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_0, 0),
+			BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -3),
+			BPF_EXIT_INSN(),
+		},
+		.result = ACCEPT,
+		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+		.retval = 4,
+	},
+	{
+		"bounded loop that jumps out rather than in",
+		.insns = {
+			BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+			BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+			BPF_LD_MAP_FD(BPF_REG_1, 0),
+			BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+			BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+			BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+			BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, 1),
+			BPF_JMP_A(-3),
+			BPF_EXIT_INSN(),
+		},
+		.fixup_map1 = { 3 },
+		.result = REJECT,
+		.errstr = "loop on conditional fallthrough",
+		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+	},
+	{
+		"infinite loop after a conditional jump",
+		.insns = {
+			BPF_MOV64_IMM(BPF_REG_0, 5),
+			BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, 2),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+			BPF_JMP_A(-2),
+			BPF_EXIT_INSN(),
+		},
+		.result = REJECT,
+		.errstr = "back-edge from insn 3 to 2",
+		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+	},
+	{
+		"bounded recursion",
+		.insns = {
+			BPF_MOV64_IMM(BPF_REG_1, 0),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+			BPF_EXIT_INSN(),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
+			BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+			BPF_JMP_IMM(BPF_JLT, BPF_REG_1, 4, 1),
+			BPF_EXIT_INSN(),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, -5),
+			BPF_EXIT_INSN(),
+		},
+		.result = REJECT,
+		.errstr = "recursive call",
+		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+	},
+	{
+		"infinite loop in two jumps",
+		.insns = {
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_JMP_A(0),
+			BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2),
+			BPF_EXIT_INSN(),
+		},
+		.result = REJECT,
+		.errstr = "loop variable r0 is not increasing",
+		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+	},
+	{
+		"infinite loop: three-jump trick",
+		.insns = {
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+			BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
+			BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 2, 1),
+			BPF_EXIT_INSN(),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+			BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
+			BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 2, 1),
+			BPF_EXIT_INSN(),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+			BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
+			BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 2, -11),
+			BPF_EXIT_INSN(),
+		},
+		.result = REJECT,
+		.errstr = "loop from insn 11 to 1 ceased to be bounded",
+		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+	},
 };
 
 static int probe_filter_length(const struct bpf_insn *fp)


* [RFC PATCH bpf-next 09/12] bpf/verifier: count still-live children of explored_states
  2018-02-23 17:35 [RFC/PoC PATCH bpf-next 00/12] bounded loops for eBPF Edward Cree
                   ` (7 preceding siblings ...)
  2018-02-23 17:41 ` [RFC PATCH bpf-next 08/12] bpf/verifier: selftests for bounded loops Edward Cree
@ 2018-02-23 17:41 ` Edward Cree
  2018-02-23 17:42 ` [RFC PATCH bpf-next 10/12] bpf/verifier: parent state pointer is not per-frame Edward Cree
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Edward Cree @ 2018-02-23 17:41 UTC (permalink / raw)
  To: netdev, Alexei Starovoitov, Daniel Borkmann

By reference-counting how many children of an explored_state are still
 being walked, we can avoid pruning based on a state that's in our own
 history (and thus hasn't reached an exit yet) without a persistent mark
 that prevents other, later branches from being pruned against it when
 it _has_ reached an exit.
Includes a check at free_states() time to ensure that all the reference
 counts have fallen to zero.
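In outline (condensed from the push_stack() and kill_thread() hunks below,
 with names as in the diff), the accounting is:

	/* a newly pushed path gives its parent another live child */
	if (elem->st.parent)
		elem->st.parent->live_children++;

	/* a path reaching an exit releases ancestors that no longer have
	 * any children still being explored
	 */
	while (cur && !--cur->live_children)
		cur = cur->parent;

so a state whose live_children has fallen to zero should be safe to prune
 against, since every continuation of it has reached an exit.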

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
 include/linux/bpf_verifier.h |   3 +-
 kernel/bpf/verifier.c        | 109 ++++++++++++++++++++++++++++---------------
 2 files changed, 74 insertions(+), 38 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 6abd484391f4..ee034232fbd6 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -124,7 +124,8 @@ struct bpf_func_state {
 	/* loop detection; points into an explored_state */
 	struct bpf_func_state *parent;
 	/* These flags are only meaningful in an explored_state, not cur_state */
-	bool in_loop, bounded_loop, conditional;
+	bool bounded_loop, conditional;
+	int live_children;
 
 	/* should be second to last. See copy_func_state() */
 	int allocated_stack;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9828cb0cde73..cc8aaf73b4a2 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -398,6 +398,12 @@ static void free_verifier_state(struct bpf_verifier_state *state,
 		free_func_state(state->frame[i]);
 		state->frame[i] = NULL;
 	}
+	/* Check that the live_children accounting is correct.  Don't touch
+	 * state->frame[] here: the frames were just freed above.
+	 */
+	if (state->live_children)
+		pr_warn("Leaked live_children=%d in frame %d\n",
+			state->live_children, state->curframe);
 	if (free_self)
 		kfree(state);
 }
@@ -429,6 +435,7 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state,
 		dst_state->frame[i] = NULL;
 	}
 	dst_state->curframe = src->curframe;
+	dst_state->parent = src->parent;
 
 	for (i = 0; i <= src->curframe; i++) {
 		dst = dst_state->frame[i];
@@ -445,6 +452,15 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state,
 	return 0;
 }
 
+/* Mark this thread as having reached an exit */
+static void kill_thread(struct bpf_verifier_state *state)
+{
+	struct bpf_verifier_state *cur = state->parent;
+
+	while (cur && !--cur->live_children)
+		cur = cur->parent;
+}
+
 static int pop_stack(struct bpf_verifier_env *env, int *insn_idx)
 {
 	struct bpf_verifier_state *cur = env->cur_state;
@@ -458,6 +474,8 @@ static int pop_stack(struct bpf_verifier_env *env, int *insn_idx)
 		err = copy_verifier_state(cur, &head->st);
 		if (err)
 			return err;
+	} else {
+		kill_thread(&head->st);
 	}
 	if (insn_idx)
 		*insn_idx = head->insn_idx;
@@ -479,6 +497,7 @@ static int unpush_stack(struct bpf_verifier_env *env)
 		return -ENOENT;
 
 	elem = head->next;
+	kill_thread(&head->st);
 	free_verifier_state(&head->st, false);
 	kfree(head);
 	env->head = elem;
@@ -509,6 +528,8 @@ static struct bpf_verifier_state *push_stack(struct bpf_verifier_env *env,
 		verbose(env, "BPF program is too complex\n");
 		goto err;
 	}
+	if (elem->st.parent)
+		elem->st.parent->live_children++;
 	return &elem->st;
 err:
 	free_verifier_state(env->cur_state, true);
@@ -728,11 +749,9 @@ static void init_reg_state(struct bpf_verifier_env *env,
 
 static void init_func_state(struct bpf_verifier_env *env,
 			    struct bpf_func_state *state,
-			    struct bpf_func_state *parent,
 			    int entry, int frameno, int subprogno)
 {
 	state->insn_idx = entry;
-	state->parent = parent;
 	state->frameno = frameno;
 	state->subprogno = subprogno;
 	init_reg_state(env, state);
@@ -2111,7 +2130,6 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	 * callee can read/write into caller's stack
 	 */
 	init_func_state(env, callee,
-			caller->parent /* parent state for loop detection */,
 			target /* entry point */,
 			state->curframe + 1 /* frameno within this callchain */,
 			subprog /* subprog number within this prog */);
@@ -4207,14 +4225,20 @@ static int propagate_liveness(struct bpf_verifier_env *env,
 	return err;
 }
 
-static struct bpf_func_state *find_loop(struct bpf_verifier_env *env,
-					bool *saw_cond, bool *saw_bound)
+static struct bpf_verifier_state *find_loop(struct bpf_verifier_env *env,
+					    bool *saw_cond, bool *saw_bound,
+					    int *min_frame)
 {
-	struct bpf_func_state *cur = cur_frame(env);
-	int insn_idx = cur->insn_idx;
+	struct bpf_verifier_state *cur = env->cur_state;
+	int insn_idx = cur_frame(env)->insn_idx;
+
+	if (min_frame)
+		*min_frame = cur->curframe;
 
 	while ((cur = cur->parent) != NULL) {
-		if (cur->insn_idx == insn_idx)
+		if (min_frame)
+			*min_frame = min_t(int, cur->curframe, *min_frame);
+		if (cur->frame[cur->curframe]->insn_idx == insn_idx)
 			return cur;
 		if (cur->conditional && saw_cond)
 			*saw_cond = true;
@@ -4273,9 +4297,10 @@ static bool is_conditional_jump(struct bpf_verifier_env *env)
 }
 
 static bool is_loop_bounded(struct bpf_verifier_env *env, int insn_idx,
-			    struct bpf_func_state *old)
+			    struct bpf_verifier_state *vold)
 {
 	struct bpf_insn *insn = env->prog->insnsi + env->prev_insn_idx;
+	struct bpf_func_state *old = vold->frame[vold->curframe];
 	struct bpf_func_state *new = cur_frame(env);
 	struct bpf_reg_state *oldreg, *newreg;
 	u8 opcode = BPF_OP(insn->code);
@@ -4339,11 +4364,10 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 {
 	struct bpf_verifier_state_list *new_sl;
 	struct bpf_verifier_state_list *sl;
-	struct bpf_verifier_state *cur = env->cur_state, *new;
+	struct bpf_verifier_state *cur = env->cur_state, *new, *old;
 	bool saw_cond = false, saw_bound = false;
 	bool cond = false, bounded = false;
-	struct bpf_func_state *old;
-	int i, j, err;
+	int i, j, err, min_frame;
 
 	sl = env->explored_states[insn_idx];
 	if (!sl)
@@ -4354,25 +4378,31 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 
 	cond = is_conditional_jump(env);
 	/* Check our parentage chain: have we looped? */
-	old = find_loop(env, &saw_cond, &saw_bound);
-	if (old != NULL) {
-		if (old->frameno != cur->curframe) {
-			/* if it's in our parentage chain, then it called us;
-			 * but we're the same insn, so in the same subprog, so
-			 * recursion has occurred.
-			 * The loop detection could handle recursion fine (it
-			 * distinguishes between bounded and unbounded
-			 * recursion, and the latter would quickly run out of
-			 * call stack anyway), but the stack max depth
-			 * calculation can't deal with it (because it doesn't
-			 * know how deeply we might recurse).
+	old = find_loop(env, &saw_cond, &saw_bound, &min_frame);
+	/* If old->curframe != min_frame, then there is a return (BPF_EXIT) from
+	 * old's frame somewhere in the "loop", so it's not a real loop, just
+	 * two calls to the same function.  (Those calls might come from a loop
+	 * in the outer frame, but we'll deal with that when we walk the outer
+	 * frame.)
+	 */
+	if (old != NULL && old->curframe == min_frame) {
+		if (old->curframe != cur->curframe) {
+			/* since it's in our parentage chain and its
+			 * frame is the minimum in the loop body, it
+			 * called us; but we're the same insn, so in the
+			 * same subprog, so recursion has occurred.
+			 * The loop detection could handle recursion
+			 * fine (it distinguishes between bounded and
+			 * unbounded recursion, and the latter would
+			 * quickly run out of call stack anyway), but
+			 * the stack max depth calculation can't deal
+			 * with it (because it doesn't know how deeply
+			 * we might recurse).
 			 */
 			verbose(env, "recursive call from insn %d to %d\n",
 				env->prev_insn_idx, insn_idx);
 			return -EINVAL;
 		}
-		/* Mark old state as not prunable */
-		old->in_loop = true;
 		if (cond)
 			bounded = is_loop_bounded(env, insn_idx, old);
 		if (bounded) {
@@ -4394,11 +4424,12 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 			verbose(env, "following loop from insn %d to %d, bounded elsewhere\n",
 				env->prev_insn_idx, insn_idx);
 		} else if (saw_cond && !cond) {
-			/* We're not a conditional, but there's a conditional
-			 * somewhere else in the loop.  So they will be
-			 * responsible for ensuring the loop is bounded (it's
-			 * possible we've been revisited but they haven't, which
-			 * is why they might not have bounded_loop set).
+			/* We're not a conditional, but there's a
+			 * conditional somewhere else in the loop.  So
+			 * they will be responsible for ensuring the
+			 * loop is bounded (it's possible we've been
+			 * revisited but they haven't, which is why they
+			 * might not have bounded_loop set).
 			 */
 			verbose(env, "following loop from insn %d to %d for now, condition is elsewhere\n",
 				env->prev_insn_idx, insn_idx);
@@ -4410,7 +4441,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 	}
 
 	while (sl != STATE_LIST_MARK) {
-		if (!sl->state.frame[sl->state.curframe]->in_loop &&
+		if (!sl->state.live_children &&
 		    states_equal(env, &sl->state, cur)) {
 			/* reached equivalent register/stack state,
 			 * prune the search.
@@ -4450,11 +4481,13 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 		kfree(new_sl);
 		return err;
 	}
-	new->frame[new->curframe]->conditional = cond;
-	new->frame[new->curframe]->bounded_loop = bounded;
+	new->conditional = cond;
+	new->bounded_loop = bounded;
 	new_sl->next = env->explored_states[insn_idx];
 	env->explored_states[insn_idx] = new_sl;
-	/* connect new state's regs to parentage chain */
+	/* connect new state and its regs to parentage chain */
+	cur->parent = new;
+	new->live_children = 1;
 	for (i = 0; i < BPF_REG_FP; i++)
 		cur_regs(env)[i].parent = &new->frame[new->curframe]->regs[i];
 	/* clear write marks in current state: the writes we did are not writes
@@ -4471,7 +4504,6 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 		struct bpf_func_state *frame = cur->frame[j];
 		struct bpf_func_state *newframe = new->frame[j];
 
-		frame->parent = newframe;
 		for (i = 0; i < frame->allocated_stack / BPF_REG_SIZE; i++) {
 			frame->stack[i].spilled_ptr.live = REG_LIVE_NONE;
 			frame->stack[i].spilled_ptr.parent =
@@ -4501,6 +4533,7 @@ static int do_check(struct bpf_verifier_env *env)
 		kfree(state);
 		return -ENOMEM;
 	}
+	state->parent = NULL;
 	env->cur_state = state;
 	mainprogno = add_subprog(env, 0);
 	if (mainprogno < 0)
@@ -4508,7 +4541,6 @@ static int do_check(struct bpf_verifier_env *env)
 	insn_idx = 0;
 	env->prev_insn_idx = -1;
 	init_func_state(env, state->frame[0],
-			NULL /* parent state for loop detection */,
 			insn_idx /* entry point */,
 			0 /* frameno */,
 			mainprogno /* subprogno */);
@@ -4781,6 +4813,7 @@ static int do_check(struct bpf_verifier_env *env)
 				if (err)
 					return err;
 process_bpf_exit:
+				kill_thread(env->cur_state);
 				err = pop_stack(env, &insn_idx);
 				/* We are following a path that was pushed to
 				 * the stack, thus was a jump-taken path.
@@ -5719,6 +5752,8 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr)
 
 	ret = do_check(env);
 	if (env->cur_state) {
+		if (ret)
+			kill_thread(env->cur_state);
 		free_verifier_state(env->cur_state, true);
 		env->cur_state = NULL;
 	}


* [RFC PATCH bpf-next 10/12] bpf/verifier: parent state pointer is not per-frame
  2018-02-23 17:35 [RFC/PoC PATCH bpf-next 00/12] bounded loops for eBPF Edward Cree
                   ` (8 preceding siblings ...)
  2018-02-23 17:41 ` [RFC PATCH bpf-next 09/12] bpf/verifier: count still-live children of explored_states Edward Cree
@ 2018-02-23 17:42 ` Edward Cree
  2018-02-23 17:42 ` [RFC PATCH bpf-next 11/12] bpf/verifier: better detection of statically unreachable code Edward Cree
  2018-02-23 17:42 ` [RFC PATCH bpf-next 12/12] bpf/verifier: update selftests Edward Cree
  11 siblings, 0 replies; 17+ messages in thread
From: Edward Cree @ 2018-02-23 17:42 UTC (permalink / raw)
  To: netdev, Alexei Starovoitov, Daniel Borkmann

Computing min_frame in find_loop and using it to detect recursion means we
 don't need to play games with per-frame parent pointers, and can instead
 have a single parent pointer in the verifier_state.

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
 include/linux/bpf_verifier.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index ee034232fbd6..0320df10555b 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -121,12 +121,6 @@ struct bpf_func_state {
 	 */
 	u32 subprogno;
 
-	/* loop detection; points into an explored_state */
-	struct bpf_func_state *parent;
-	/* These flags are only meaningful in an explored_state, not cur_state */
-	bool bounded_loop, conditional;
-	int live_children;
-
 	/* should be second to last. See copy_func_state() */
 	int allocated_stack;
 	struct bpf_stack_state *stack;
@@ -137,6 +131,12 @@ struct bpf_verifier_state {
 	/* call stack tracking */
 	struct bpf_func_state *frame[MAX_CALL_FRAMES];
 	u32 curframe;
+
+	/* loop detection; points into an explored_state */
+	struct bpf_verifier_state *parent;
+	/* These flags are only meaningful in an explored_state, not cur_state */
+	bool bounded_loop, conditional;
+	int live_children;
 };
 
 /* linked list of verifier states used to prune search */


* [RFC PATCH bpf-next 11/12] bpf/verifier: better detection of statically unreachable code
  2018-02-23 17:35 [RFC/PoC PATCH bpf-next 00/12] bounded loops for eBPF Edward Cree
                   ` (9 preceding siblings ...)
  2018-02-23 17:42 ` [RFC PATCH bpf-next 10/12] bpf/verifier: parent state pointer is not per-frame Edward Cree
@ 2018-02-23 17:42 ` Edward Cree
  2018-02-23 17:42 ` [RFC PATCH bpf-next 12/12] bpf/verifier: update selftests Edward Cree
  11 siblings, 0 replies; 17+ messages in thread
From: Edward Cree @ 2018-02-23 17:42 UTC (permalink / raw)
  To: netdev, Alexei Starovoitov, Daniel Borkmann

My previous version just checked that every insn could be jumped to from
 somewhere, which meant that the program could contain multiple connected
 components.  Instead let's do a flood-fill of the insns set to ensure all
 insns are actually reachable from the entry point.  Since this involves
 essentially the same "where does the jump go" calculations as placing
 state list marks, do both in the same pass.

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
 kernel/bpf/verifier.c | 149 +++++++++++++++++++++++++++++---------------------
 1 file changed, 88 insertions(+), 61 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index cc8aaf73b4a2..d9d851d8c84e 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3813,83 +3813,110 @@ static int check_return_code(struct bpf_verifier_env *env)
 
 #define STATE_LIST_MARK ((struct bpf_verifier_state_list *) -1L)
 
-static int mark_jump_target(struct bpf_verifier_env *env, int from, int off)
-{
-	int to = from + off + 1;
-
-	if (to < 0 || to >= env->prog->len) {
-		verbose(env, "jump out of range from insn %d to %d\n", from, to);
-		return -EINVAL;
-	}
-	/* mark branch target for state pruning */
-	env->explored_states[to] = STATE_LIST_MARK;
-	return 0;
-}
-
 /* Mark jump/branch targets for control flow tracking & state pruning */
 static int prepare_cfg_marks(struct bpf_verifier_env *env)
 {
 	struct bpf_insn *insns = env->prog->insnsi;
 	int insn_cnt = env->prog->len, i, err = 0;
+	unsigned long *frontier, *visited;
 
-	for (i = 0; i < insn_cnt; i++) {
-		u8 opcode = BPF_OP(insns[i].code);
+	frontier = kcalloc(BITS_TO_LONGS(insn_cnt), sizeof(unsigned long),
+			   GFP_KERNEL);
+	if (!frontier)
+		return -ENOMEM;
+	visited = kcalloc(BITS_TO_LONGS(insn_cnt), sizeof(unsigned long),
+			  GFP_KERNEL);
+	if (!visited) {
+		err = -ENOMEM;
+		goto out_frontier;
+	}
 
+	/* insn 0 is our start node */
+	__set_bit(0, frontier);
+
+	/* flood fill to find all reachable insns and mark jump targets */
+	while (true) {
+		bool fallthrough = false, jump = false;
+		int target;
+		u8 opcode;
+
+		/* select a frontier node */
+		i = find_first_bit(frontier, insn_cnt);
+		if (i >= insn_cnt) /* frontier is empty, done */
+			break;
+		/* remove it from the frontier */
+		__clear_bit(i, frontier);
+		/* mark it as visited */
+		__set_bit(i, visited);
+		opcode = BPF_OP(insns[i].code);
 		if (BPF_CLASS(insns[i].code) != BPF_JMP) {
-			if (i + 1 == insn_cnt) {
+			fallthrough = true;
+		} else {
+			switch (opcode) {
+			case BPF_EXIT:
+				break;
+			case BPF_CALL:
+				/* following insn is target of function return */
+				fallthrough = true;
+				/* subprog call insns 'branch both ways', as the
+				 * subprog will eventually exit
+				 */
+				if (insns[i].src_reg == BPF_PSEUDO_CALL) {
+					jump = true;
+					target = i + insns[i].imm + 1;
+				}
+				break;
+			case BPF_JA:
+				/* unconditional jump; mark target */
+				jump = true;
+				target = i + insns[i].off + 1;
+				break;
+			default:
+				/* conditional jump; branch both ways */
+				fallthrough = true;
+				jump = true;
+				target = i + insns[i].off + 1;
+				break;
+			}
+		}
+		/* if targets are unvisited, add them to the frontier */
+		if (fallthrough) {
+			if (i + 1 >= insn_cnt) {
 				verbose(env, "no exit/jump at end of program (insn %d)\n",
 					i);
-				return -EINVAL;
+				err = -EINVAL;
+				goto out_visited;
 			}
-			continue;
+			if (!test_bit(i + 1, visited))
+				__set_bit(i + 1, frontier);
 		}
-		switch (opcode) {
-		case BPF_EXIT:
-			continue;
-		case BPF_CALL:
-			/* following insn is target of function return */
-			err = mark_jump_target(env, i, 0);
-			if (err)
-				return err;
-			if (insns[i].src_reg == BPF_PSEUDO_CALL)
-				err = mark_jump_target(env, i, insns[i].imm);
-			break;
-		case BPF_JA:
-			/* unconditional jump; mark target */
-			err = mark_jump_target(env, i, insns[i].off);
-			break;
-		default:
-			/* conditional jump; mark following insn and target */
-			err = mark_jump_target(env, i, 0);
-			if (err)
-				return err;
-			err = mark_jump_target(env, i, insns[i].off);
+		if (jump) {
+			if (target < 0 || target >= insn_cnt) {
+				verbose(env, "jump out of range from insn %d to %d\n", i, target);
+				err = -EINVAL;
+				goto out_visited;
+			}
+			if (!test_bit(target, visited))
+				__set_bit(target, frontier);
+			/* mark branch target for state pruning */
+			env->explored_states[target] = STATE_LIST_MARK;
+			if (fallthrough)
+				/* mark fallthrough for state pruning */
+				env->explored_states[i + 1] = STATE_LIST_MARK;
 		}
-		if (err)
-			return err;
 	}
 
-	/* Second pass, to detect statically unreachable code.  Any target of
-	 * a jump or call will have a state-list mark
-	 */
-	for (i = 0; i < insn_cnt; i++) {
-		/* insn following a non-jump is statically reachable */
-		if (BPF_CLASS(insns[i].code) != BPF_JMP)
-			continue;
-		/* insn following a CALL or conditional jump will have been
-		 * marked by that insn (see above).  So if following insn is
-		 * not marked, then we're an EXIT or JA and thus the
-		 * following insn is statically unreachable.
-		 * If we're last insn, and we're not an EXIT or JA, then first
-		 * pass would have complained in mark_jump_target().
-		 */
-		if (i + 1 < insn_cnt)
-			if (env->explored_states[i + 1] != STATE_LIST_MARK) {
-				verbose(env, "unreachable insn %d\n", i + 1);
-				return -EINVAL;
-			}
+	/* Check for statically unreachable code. */
+	i = find_first_zero_bit(visited, insn_cnt);
+	if (i < insn_cnt) {
+		verbose(env, "unreachable insn %d\n", i);
+		err = -EINVAL;
 	}
-	return 0;
+out_visited:
+	kfree(visited);
+out_frontier:
+	kfree(frontier);
+	return err;
 }
 
 /* check %cur's range satisfies %old's */


* [RFC PATCH bpf-next 12/12] bpf/verifier: update selftests
  2018-02-23 17:35 [RFC/PoC PATCH bpf-next 00/12] bounded loops for eBPF Edward Cree
                   ` (10 preceding siblings ...)
  2018-02-23 17:42 ` [RFC PATCH bpf-next 11/12] bpf/verifier: better detection of statically unreachable code Edward Cree
@ 2018-02-23 17:42 ` Edward Cree
  11 siblings, 0 replies; 17+ messages in thread
From: Edward Cree @ 2018-02-23 17:42 UTC (permalink / raw)
  To: netdev, Alexei Starovoitov, Daniel Borkmann

Slight change to message when execution can run off the end of the program.

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
 tools/testing/selftests/bpf/test_verifier.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index fda35a5a0ff9..a54e50a887dc 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -9758,7 +9758,7 @@ static struct bpf_test tests[] = {
 			BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, -2),
 		},
 		.prog_type = BPF_PROG_TYPE_TRACEPOINT,
-		.errstr = "jump out of range from insn 5 to 6",
+		.errstr = "no exit/jump at end of program (insn 5)",
 		.result = REJECT,
 	},
 	{


* Re: [RFC PATCH bpf-next 05/12] bpf/verifier: detect loops dynamically rather than statically
  2018-02-23 17:40 ` [RFC PATCH bpf-next 05/12] bpf/verifier: detect loops dynamically rather than statically Edward Cree
@ 2018-02-23 22:27   ` John Fastabend
  2018-02-26 12:53     ` Edward Cree
  0 siblings, 1 reply; 17+ messages in thread
From: John Fastabend @ 2018-02-23 22:27 UTC (permalink / raw)
  To: Edward Cree, netdev, Alexei Starovoitov, Daniel Borkmann

On 02/23/2018 09:40 AM, Edward Cree wrote:
> Add in a new chain of parent states, which does not cross function-call
>  boundaries, and check whether our current insn_idx appears anywhere in
>  the chain.  Since all jump targets have state-list marks (now placed
>  by prepare_cfg_marks(), which replaces check_cfg()), it suffices to
>  thread this chain through the existing explored_states and check it
>  only in is_state_visited().
> By using this parent-state chain to detect loops, we open up the
>  possibility that in future we could determine a loop to be bounded and
>  thus accept it.
> A few selftests had to be changed to ensure that all the insns walked
>  before reaching the back-edge are valid.
> 

I still believe this needs to be done in an initial pass where we can
build a proper CFG and analyze the loops to ensure we have loops that
can be properly verified. Otherwise we end up trying to handle all the
special cases later, like the 'three-jump trick' mentioned in patch 7/12.
So rather than trying to handle all loops, only handle "normal" loops.
The algorithm for this, run in an early CFG stage with a DOM tree, is
already used by gcc/clang, so there is good precedent for this type of
work.
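
To make that concrete, here is a minimal standalone sketch of the
dominator-based natural-loop detection being suggested, using a toy
5-block CFG; this is purely illustrative and not verifier code:

#include <stdio.h>

#define NBLOCKS 5
/* succ[i] lists the successors of basic block i; -1 means no edge */
static const int succ[NBLOCKS][2] = {
	{ 1, -1},	/* 0: entry     -> 1 */
	{ 2, -1},	/* 1: loop head -> 2 */
	{ 1,  3},	/* 2: loop body -> 1 (back edge) or 3 */
	{ 4, -1},	/* 3:           -> 4 */
	{-1, -1},	/* 4: exit */
};

static unsigned int dom[NBLOCKS];	/* bitmask of dominators per block */

static void compute_dominators(void)
{
	int changed = 1, i, j, k;

	dom[0] = 1u << 0;			/* entry dominates only itself */
	for (i = 1; i < NBLOCKS; i++)
		dom[i] = (1u << NBLOCKS) - 1;	/* initially "all blocks" */
	while (changed) {
		changed = 0;
		for (i = 1; i < NBLOCKS; i++) {
			unsigned int newdom = (1u << NBLOCKS) - 1;

			/* dom(i) = {i} | intersection of dom(pred) */
			for (j = 0; j < NBLOCKS; j++)
				for (k = 0; k < 2; k++)
					if (succ[j][k] == i)
						newdom &= dom[j];
			newdom |= 1u << i;
			if (newdom != dom[i]) {
				dom[i] = newdom;
				changed = 1;
			}
		}
	}
}

int main(void)
{
	int u, k;

	compute_dominators();
	/* an edge u->v is a back edge iff v dominates u; each back edge
	 * identifies one natural loop headed at v
	 */
	for (u = 0; u < NBLOCKS; u++)
		for (k = 0; k < 2; k++) {
			int v = succ[u][k];

			if (v >= 0 && (dom[u] & (1u << v)))
				printf("back edge %d->%d, loop head %d\n",
				       u, v, v);
		}
	return 0;
}

Under a scheme like this, cycles that are not natural loops (the "clever"
goto constructions) show up as cycles with no dominating header and could
be rejected up front, before any state walking happens.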

Without this we are open to other "clever" goto programs that create
loops we then have to special-case. Better to just reject all these
classes of loops, because most users will never need them.

See more notes in the 7/12 patch; sorry, I've been busy and haven't had
a chance to push code to build the CFG and dom tree yet.

Also, a case analysis in the patch description, describing the types of
loops this handles (either here or in another patch), would at least
help me understand.

> Signed-off-by: Edward Cree <ecree@solarflare.com>
> ---
>  include/linux/bpf_verifier.h                |   2 +
>  kernel/bpf/verifier.c                       | 280 +++++++++-------------------
>  tools/testing/selftests/bpf/test_verifier.c |  12 +-
>  3 files changed, 97 insertions(+), 197 deletions(-)
> 
> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> index 0bc49c768585..24ddbc1ed3b2 100644
>


* Re: [RFC PATCH bpf-next 07/12] bpf/verifier: allow bounded loops with JLT/true back-edge
  2018-02-23 17:41 ` [RFC PATCH bpf-next 07/12] bpf/verifier: allow bounded loops with JLT/true back-edge Edward Cree
@ 2018-02-23 22:27   ` John Fastabend
  2018-02-26 11:32     ` Edward Cree
  0 siblings, 1 reply; 17+ messages in thread
From: John Fastabend @ 2018-02-23 22:27 UTC (permalink / raw)
  To: Edward Cree, netdev, Alexei Starovoitov, Daniel Borkmann

On 02/23/2018 09:41 AM, Edward Cree wrote:
> Where the register umin_value is increasing sufficiently fast, the loop
>  will terminate after a reasonable number of iterations, so we can allow
>  to keep walking it.

Continuing to walk the loop is problematic because we hit the complexity
limit. What we want to do is a static analysis step upfront on the basic
block containing the loop to ensure the loop is bounded.

We can do this by finding the induction variables (variables of the form
c*n + d), ensuring the comparison register is an induction variable, and
checking that it increases/decreases correctly with respect to the
comparison op.

Induction-variable analysis like this is common in compilers (e.g. for
pulling computation out of loops), so it should be doable in the verifier.
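
As a rough sketch of that check (with made-up helpers and a simplified
insn encoding, i.e. nothing from the actual patches), the test for the
compared register could look something like:

#include <stdbool.h>
#include <stdio.h>

struct insn {
	enum { ADD_IMM, OTHER_WRITE, NO_WRITE } op;
	int dst_reg;
	int imm;
};

/* true iff @reg is only ever updated as "reg += positive constant"
 * within the loop body, i.e. it is a strictly increasing induction
 * variable, so a JLT (unsigned <) exit test against a constant bound
 * terminates the loop
 */
static bool reg_is_increasing_induction(const struct insn *body, int len,
					int reg)
{
	bool updated = false;
	int i;

	for (i = 0; i < len; i++) {
		if (body[i].op == NO_WRITE || body[i].dst_reg != reg)
			continue;
		if (body[i].op != ADD_IMM || body[i].imm <= 0)
			return false;	/* not a simple "reg += c" step */
		updated = true;		/* the register advances here */
	}
	/* the register must actually be advanced somewhere in the body */
	return updated;
}

int main(void)
{
	/* hypothetical loop body: r2 += 8; r3 = <something>; */
	struct insn body[] = {
		{ ADD_IMM,     2, 8 },
		{ OTHER_WRITE, 3, 0 },
	};

	printf("r2 usable as a JLT loop bound? %s\n",
	       reg_is_increasing_induction(body, 2, 2) ? "yes" : "no");
	return 0;
}

The real thing would of course work on struct bpf_insn and would also
need the decreasing and signed variants for the other comparison ops.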


> The semantics of prev_insn_idx are changed slightly: it now always refers
>  to the last jump insn walked, even when that jump was not taken.  Also it
>  is moved into env, alongside a new bool last_jump_taken that records
>  whether the jump was taken or not.  This is needed so that, when we detect
>  a loop, we can see how we ended up in the back-edge: what was the jump
>  condition, and is it the true- or the false-branch that ends up looping?
> We record in our explored_states whether they represent a conditional jump
>  and whether that jump produced a good loop bound.  This allows us to make
>  unconditional jumps "not responsible" for ensuring the loop is bounded, as
>  long as there is a conditional jump somewhere in the loop body; whereas a
>  conditional jump must ensure that there is a bounding conditional somewhere
>  in the loop body.
> Recursive calls have to remain forbidden because the calculation of maximum
>  stack depth assumes the call graph is acyclic, even though the loop
>  handling code could determine whether the recursion was bounded.
> 
> Signed-off-by: Edward Cree <ecree@solarflare.com>
> ---


* Re: [RFC PATCH bpf-next 07/12] bpf/verifier: allow bounded loops with JLT/true back-edge
  2018-02-23 22:27   ` John Fastabend
@ 2018-02-26 11:32     ` Edward Cree
  0 siblings, 0 replies; 17+ messages in thread
From: Edward Cree @ 2018-02-26 11:32 UTC (permalink / raw)
  To: John Fastabend, netdev, Alexei Starovoitov, Daniel Borkmann

On 23/02/18 22:27, John Fastabend wrote:
> On 02/23/2018 09:41 AM, Edward Cree wrote:
>> Where the register umin_value is increasing sufficiently fast, the loop
>>  will terminate after a reasonable number of iterations, so we can allow
>>  to keep walking it.
> Continuing to walk the loop is problematic because we hit the complexity
> limit. What we want to do is a static analysis step upfront on the basic
> block containing the loop to ensure the loop is bounded.
>
> We can do this by finding the induction variables (variables with form
> cn+d) and ensuring the comparison register is an induction variable and
> also increasing/decreasing correctly with respect to the comparison op.
>
> This seems to be common in compilers to pull computation out of the
> loop. So should be doable in the verifier.
So, I should mention that I _do_ have an idea for an optimisation that lets
 us avoid walking N times.  Basically, after you walk the loop for the first
 time (and detect the loop presence & jump condition), you change the type
 of the register from SCALAR_VALUE to INDUCTION_VARIABLE, which the other
 code should treat basically like a pointer in terms of accumulating offsets
 when doing arithmetic, etc.  Then when you get round the loop the second
 time, if the register still holds the INDUCTION_VARIABLE, you can look at
 the offsets and determine that the right thing is happening.  And because
 you made it around the loop with the INDUCTION_VARIABLE there, you know
 that any accesses etc. you do in the loop body are safe (you'd feed in the
 constraints you have in the min/max values so that e.g. array indexing could
 be verified).  So you do have to walk it twice, but not N times :-)
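Very roughly, with made-up types and fields rather than the real verifier
 structures, the retyping idea looks something like:

    enum reg_type { SCALAR_VALUE, INDUCTION_VARIABLE };

    struct reg {
        enum reg_type type;
        long long delta;    /* net change since the loop head */
    };

    /* arithmetic on an INDUCTION_VARIABLE just accumulates an offset,
     * much as pointer arithmetic accumulates offsets today
     */
    void do_add_imm(struct reg *r, int imm)
    {
        if (r->type == INDUCTION_VARIABLE)
            r->delta += imm;
    }

    /* on the second visit to the loop head, with a JLT (unsigned <)
     * back-edge, the register must have moved towards the exit test
     */
    int loop_is_bounded(const struct reg *r)
    {
        return r->type == INDUCTION_VARIABLE && r->delta > 0;
    }

The real patch would of course hang this off bpf_reg_state and the
 existing min/max tracking; the point is just that the second trip round
 the loop only has to look at the accumulated delta.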
However, the idea needs more work to turn it into an obviously-correct
 algorithm (issues like nested loops and popped branches make me worry a
 little).
Also I feel a little safer knowing that if the verifier wrongly treats an
 unbounded loop as bounded, it'll just keep walking round it until it hits
 the complexity limit and rejects the program.  That seems like a nice kind
 of belt-and-braces security to have.

-Ed


* Re: [RFC PATCH bpf-next 05/12] bpf/verifier: detect loops dynamically rather than statically
  2018-02-23 22:27   ` John Fastabend
@ 2018-02-26 12:53     ` Edward Cree
  0 siblings, 0 replies; 17+ messages in thread
From: Edward Cree @ 2018-02-26 12:53 UTC (permalink / raw)
  To: John Fastabend, netdev, Alexei Starovoitov, Daniel Borkmann

On 23/02/18 22:27, John Fastabend wrote:
> On 02/23/2018 09:40 AM, Edward Cree wrote:
>> Add in a new chain of parent states, which does not cross function-call
>>  boundaries, and check whether our current insn_idx appears anywhere in
>>  the chain.  Since all jump targets have state-list marks (now placed
>>  by prepare_cfg_marks(), which replaces check_cfg()), it suffices to
>>  thread this chain through the existing explored_states and check it
>>  only in is_state_visited().
>> By using this parent-state chain to detect loops, we open up the
>>  possibility that in future we could determine a loop to be bounded and
>>  thus accept it.
>> A few selftests had to be changed to ensure that all the insns walked
>>  before reaching the back-edge are valid.
>>
> I still believe this needs to be done in an initial pass where we can
> build a proper CFG and analyze the loops to ensure we have loops that
> can be properly verified. Otherwise we end up trying to handle all the
> special cases later like the comment in patch 7/12 'three-jump trick'.
It's not so much that the 'three-jump trick' is a special case with special
 code to handle it; more that it's the simplest construction I found for
 defeating the case where you make a purely local decision and just look at
 whether there's a progressing test in the current loop body.  That (and the
 entire class of trickery it represents) is dealt with by saying that once a
 test has been determined to be bounded, it has to stay bounded on all
 subsequent loop iterations.  (I considered various relaxed versions of
 this constraint but they're harder to reason about.)

I would be happier if I could write out a proof that the verifier does what
 I think it does; but I'm not really sure how to formally define "what I
 think it does"...

> So rather than try to handle all loops only handle "normal" loops. In
> an early CFG stage with DOM tree the algorithm for this is already
> used by gcc/clang so there is good precedent for this type of work.
>
> Without this we are open to other "clever" goto programs that create
> loops that we have to special case. Better to just reject all these
> classes of loops because most users will never need them.
If we want to be narrow (perhaps just initially), we could also just say
 that the loop body can't contain any other conditional jumps.  But that
 would be awkward since, while users may not need loop-in-loop, they are
 quite likely to need if-in-loop, and I don't yet quite have a solid
 algorithm for disallowing only the former.

> See more notes in 7/12 patch, sorry I've been busy and not had a chance
> to push code to build cfg and dom tree yet.
That's fine, gives me more time to polish mine so that it looks like the
 better option when your version does come out ;-)

> Also case analysis in patch description describing types of loops
> this handles (either here or in another patch) would help me understand
> at least.
Yes, I didn't really describe that very much, did I?  Here's a précis:
From a 'loop structure' point of view, this basically handles anything;
 if the verifier walks it and arrives at an insn it's already visited,
 then the loop handling fires.  The current implementation does have
 the restriction that the conditional branch that's responsible for loop
 termination has to loop in the 'true' branch, i.e.
    loop:   /* do things */
            jr cc, loop
 rather than
    loop:   /* do things */
            jr cc, break
            ja loop
    break:  /* etc. */
 but that's really just because only one case is handled so far in
 is_loop_bounded(), and there's no reason the latter kind can't be
 handled too.
Interestingly, if it didn't have a separate check for recursion, it
 would also handle boundedly recursive calls, like in the test_verifier
 test case "bounded recursion" added in patch #8.  It _really_ doesn't
 care about the 'structure' of the loop...

The other constraint at the moment is that only JLT is handled as the
 back-edge jump, and the 'src' has to be a constant.  Extending it to
 cover other insns than JLT should be pretty trivial (just need an
 is_reg_decreasing() function and signed equivalents, mainly); adding
 support for variable src would be harder & probably unnecessary.

So currently it will handle loops like
    loop:    /* do things, including arithmetic on rX */
             jr <, rX, 16, loop
 or even like
    loop:    /* do things */
             jr <, rX, 16, more
             ja break
    more:    /* do more things */
             ja loop /* or jr cc, loop; so long as this isn't sometimes-bounded */
    break:   /* etc. */

From a C perspective, this should mean things like
    int i;
    for (i = 0; i < 10; i++)
        /* do things */;
 where that 'do things' can include all kinds of jumps, including
 conditional ones (mostly), and can even have extra 'i++' statements.
 This last is necessary for the (potential) use case of IPv6 options
 parsing, which would look something like:
    int nh_off;
    for (nh_off = iph_off + 40;
         nh_off < ARBITRARY_MAXIMUM && data + nh_off + 2 < data_end;
         ) {
        u8 nexthdr = data[nh_off];
        u16 exthdrlen = (data[nh_off + 1] + 1) * 8;

        if (!is_option(nexthdr)) break;
        nh_off += exthdrlen;
    }
Note that it takes some value analysis (noting that exthdrlen >= 8)
 to determine that nh_off is increasing; I think there is at least
 *some* advantage in keeping all of that value analysis in one place
 (the existing walker) rather than having some potentially duplicate
 code trying to prove similar things on the dominance tree.
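(For instance, if ARBITRARY_MAXIMUM were iph_off + 40 + 512, then
 umin(exthdrlen) = 8 would bound the loop at 512 / 8 = 64 iterations.)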

But if you (/maintainers) still think the static-analysis cfg/dom way
 is more appropriate after having seen mine, then go ahead and do it
 your way, I won't try to fight to have it done this way :-)

-Ed


end of thread, other threads:[~2018-02-26 12:53 UTC | newest]

Thread overview: 17+ messages
2018-02-23 17:35 [RFC/PoC PATCH bpf-next 00/12] bounded loops for eBPF Edward Cree
2018-02-23 17:39 ` [RFC PATCH bpf-next 01/12] bpf/verifier: validate func_calls by marking at do_check() time Edward Cree
2018-02-23 17:39 ` [RFC PATCH bpf-next 02/12] bpf/verifier: update selftests Edward Cree
2018-02-23 17:40 ` [RFC PATCH bpf-next 03/12] bpf/verifier: per-register parent pointers Edward Cree
2018-02-23 17:40 ` [RFC PATCH bpf-next 04/12] bpf/verifier: store insn_idx instead of callsite in bpf_func_state Edward Cree
2018-02-23 17:40 ` [RFC PATCH bpf-next 05/12] bpf/verifier: detect loops dynamically rather than statically Edward Cree
2018-02-23 22:27   ` John Fastabend
2018-02-26 12:53     ` Edward Cree
2018-02-23 17:41 ` [RFC PATCH bpf-next 06/12] bpf/verifier: do not walk impossible branches Edward Cree
2018-02-23 17:41 ` [RFC PATCH bpf-next 07/12] bpf/verifier: allow bounded loops with JLT/true back-edge Edward Cree
2018-02-23 22:27   ` John Fastabend
2018-02-26 11:32     ` Edward Cree
2018-02-23 17:41 ` [RFC PATCH bpf-next 08/12] bpf/verifier: selftests for bounded loops Edward Cree
2018-02-23 17:41 ` [RFC PATCH bpf-next 09/12] bpf/verifier: count still-live children of explored_states Edward Cree
2018-02-23 17:42 ` [RFC PATCH bpf-next 10/12] bpf/verifier: parent state pointer is not per-frame Edward Cree
2018-02-23 17:42 ` [RFC PATCH bpf-next 11/12] bpf/verifier: better detection of statically unreachable code Edward Cree
2018-02-23 17:42 ` [RFC PATCH bpf-next 12/12] bpf/verifier: update selftests Edward Cree
