* [PATCH stable 4.19] bpf: fix buggy r0 retval refinement for tracing helpers
@ 2020-04-21 12:58 Daniel Borkmann
  2020-04-21 16:31 ` Lorenzo Fontana
  0 siblings, 1 reply; 3+ messages in thread
From: Daniel Borkmann @ 2020-04-21 12:58 UTC (permalink / raw)
  To: gregkh
  Cc: alexei.starovoitov, john.fastabend, kpsingh, jannh,
	fontanalorenz, leodidonato, yhs, bpf, Daniel Borkmann,
	Alexei Starovoitov

[ no upstream commit ]

See the gory details in 100605035e15 ("bpf: Verifier, do_refine_retval_range
may clamp umin to 0 incorrectly") for why 849fa50662fb ("bpf/verifier: refine
retval R0 state for bpf_get_stack helper") is buggy. The whole series, however,
is not suitable for stable since it adds a significant amount [0] of verifier
complexity in order to add 32-bit subreg tracking. Something simpler is needed.

Unfortunately, reverting 849fa50662fb ("bpf/verifier: refine retval R0 state
for bpf_get_stack helper") or just cherry-picking 100605035e15 ("bpf: Verifier,
do_refine_retval_range may clamp umin to 0 incorrectly") is not an option since
it will break existing tracing programs badly (at least those that are using
bpf_get_stack() and bpf_probe_read_str() helpers). Not fixing it in stable is
also not an option since on 4.19 kernels an error will cause a soft-lockup due
to hitting a dead-code-sanitized branch, since we don't hard-wire such branches
in old kernels yet. But even then, for 5.x, 849fa50662fb ("bpf/verifier: refine
retval R0 state for bpf_get_stack helper") would cause wrong bounds in the
verifier simulation when an error is hit.

In one of the earlier iterations of the mentioned patch series for upstream there
was the concern that just using smax_value in do_refine_retval_range() would
nuke bounds by subsequent <<32 >>32 shifts before the comparison against 0 [1]
which eventually led to the 32bit subreg tracking in the first place. While I
initially went for implementing the idea [1] to pattern-match the two shift
operations, it turned out to be more complex than actually needed, meaning we
could simply treat do_refine_retval_range() similarly to how we branch off
verification for conditionals or under speculation, that is, by pushing a new
reg state to the stack for later verification. This means, instead of verifying
the current path with the ret_reg in [S32MIN, msize_max_value] interval where
later bounds would get nuked, we split this into two: i) for the success case
where ret_reg can be in [0, msize_max_value], and ii) for the error case with
ret_reg known to be in the interval [S32MIN, -1]. The latter will preserve the
bounds during these shift patterns and can match the reg < 0 test. test_progs
also succeeds with this approach.

  [0] https://lore.kernel.org/bpf/158507130343.15666.8018068546764556975.stgit@john-Precision-5820-Tower/
  [1] https://lore.kernel.org/bpf/158015334199.28573.4940395881683556537.stgit@john-XPS-13-9370/T/#m2e0ad1d5949131014748b6daa48a3495e7f0456d

Fixes: 849fa50662fb ("bpf/verifier: refine retval R0 state for bpf_get_stack helper")
Reported-by: Lorenzo Fontana <fontanalorenz@gmail.com>
Reported-by: Leonardo Di Donato <leodidonato@gmail.com>
Reported-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Tested-by: John Fastabend <john.fastabend@gmail.com>
---
 [ Lorenzo, Leonardo, I did check my local checkout of driver/bpf/probe.o,
   but please make sure to double check 4.19 with this patch here also from
   your side, so we can add a Tested-by from one of you before Greg takes
   it into stable. Thanks guys! ]

 kernel/bpf/verifier.c | 45 ++++++++++++++++++++++++++++++++-----------
 1 file changed, 34 insertions(+), 11 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e85636fb81b9..daf0a9637d73 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -188,8 +188,7 @@ struct bpf_call_arg_meta {
 	bool pkt_access;
 	int regno;
 	int access_size;
-	s64 msize_smax_value;
-	u64 msize_umax_value;
+	u64 msize_max_value;
 };
 
 static DEFINE_MUTEX(bpf_verifier_lock);
@@ -2076,8 +2075,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
 		/* remember the mem_size which may be used later
 		 * to refine return values.
 		 */
-		meta->msize_smax_value = reg->smax_value;
-		meta->msize_umax_value = reg->umax_value;
+		meta->msize_max_value = reg->umax_value;
 
 		/* The register is SCALAR_VALUE; the access check
 		 * happens using its boundaries.
@@ -2448,21 +2446,44 @@ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
 	return 0;
 }
 
-static void do_refine_retval_range(struct bpf_reg_state *regs, int ret_type,
-				   int func_id,
-				   struct bpf_call_arg_meta *meta)
+static int do_refine_retval_range(struct bpf_verifier_env *env,
+				  struct bpf_reg_state *regs, int ret_type,
+				  int func_id, struct bpf_call_arg_meta *meta)
 {
 	struct bpf_reg_state *ret_reg = &regs[BPF_REG_0];
+	struct bpf_reg_state tmp_reg = *ret_reg;
+	bool ret;
 
 	if (ret_type != RET_INTEGER ||
 	    (func_id != BPF_FUNC_get_stack &&
 	     func_id != BPF_FUNC_probe_read_str))
-		return;
+		return 0;
+
+	/* Error case where ret is in interval [S32MIN, -1]. */
+	ret_reg->smin_value = S32_MIN;
+	ret_reg->smax_value = -1;
+
+	__reg_deduce_bounds(ret_reg);
+	__reg_bound_offset(ret_reg);
+	__update_reg_bounds(ret_reg);
+
+	ret = push_stack(env, env->insn_idx + 1, env->insn_idx, false);
+	if (!ret)
+		return -EFAULT;
+
+	*ret_reg = tmp_reg;
+
+	/* Success case where ret is in range [0, msize_max_value]. */
+	ret_reg->smin_value = 0;
+	ret_reg->smax_value = meta->msize_max_value;
+	ret_reg->umin_value = ret_reg->smin_value;
+	ret_reg->umax_value = ret_reg->smax_value;
 
-	ret_reg->smax_value = meta->msize_smax_value;
-	ret_reg->umax_value = meta->msize_umax_value;
 	__reg_deduce_bounds(ret_reg);
 	__reg_bound_offset(ret_reg);
+	__update_reg_bounds(ret_reg);
+
+	return 0;
 }
 
 static int
@@ -2617,7 +2638,9 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
 		return -EINVAL;
 	}
 
-	do_refine_retval_range(regs, fn->ret_type, func_id, &meta);
+	err = do_refine_retval_range(env, regs, fn->ret_type, func_id, &meta);
+	if (err)
+		return err;
 
 	err = check_map_func_compatibility(env, meta.map_ptr, func_id);
 	if (err)
-- 
2.20.1



* Re: [PATCH stable 4.19] bpf: fix buggy r0 retval refinement for tracing helpers
  2020-04-21 12:58 [PATCH stable 4.19] bpf: fix buggy r0 retval refinement for tracing helpers Daniel Borkmann
@ 2020-04-21 16:31 ` Lorenzo Fontana
  2020-04-21 18:20   ` Daniel Borkmann
  0 siblings, 1 reply; 3+ messages in thread
From: Lorenzo Fontana @ 2020-04-21 16:31 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: gregkh, alexei.starovoitov, john.fastabend, kpsingh, jannh,
	leodidonato, yhs, bpf, Alexei Starovoitov

On Tue, Apr 21, 2020 at 02:58:22PM +0200, Daniel Borkmann wrote:
[...]

Hi Daniel,
Leonardo and I applied this on top of 8e2406c85187 and our old probe works as
expected, as well as the new one.
We produced a dot graph [0] of the in-memory xlated representation [1]; it
clearly shows that this patch solves the bug. A rendered [2] version is
available for the lazy.

So, Daniel, please add a Tested-by for each of us.

Thanks Daniel!
Lorenzo and Leonardo

[0] https://fs.fntlnz.wtf/kernel/bpf-retval-refinement-4-19/prog.dot
[1] https://fs.fntlnz.wtf/kernel/bpf-retval-refinement-4-19/xlated.txt
[2] https://fs.fntlnz.wtf/kernel/bpf-retval-refinement-4-19/render.png


* Re: [PATCH stable 4.19] bpf: fix buggy r0 retval refinement for tracing helpers
  2020-04-21 16:31 ` Lorenzo Fontana
@ 2020-04-21 18:20   ` Daniel Borkmann
  0 siblings, 0 replies; 3+ messages in thread
From: Daniel Borkmann @ 2020-04-21 18:20 UTC (permalink / raw)
  To: Lorenzo Fontana
  Cc: gregkh, alexei.starovoitov, john.fastabend, kpsingh, jannh,
	leodidonato, yhs, bpf, Alexei Starovoitov

On 4/21/20 6:31 PM, Lorenzo Fontana wrote:
[...]
> Hi Daniel,
> Leonardo and I applied this on top of 8e2406c85187 and our old probe works as
> expected, as well as the new one.
> We produced a dot graph [0] of the in memory xlated representation [1], it clearly
> shows that this patch solves the bug. A rendered [2] version is
> available for the lazy.

Perfect, thanks for double-checking!

> So, Daniel please add a Tested-by for each one of us.

Here we go:

Tested-by: Lorenzo Fontana <fontanalorenz@gmail.com>
Tested-by: Leonardo Di Donato <leodidonato@gmail.com>



