* [PATCH bpf-next] bpf/test_run: fix unkillable BPF_PROG_TEST_RUN for flow dissector
From: Stanislav Fomichev @ 2019-02-19 18:54 UTC (permalink / raw)
  To: netdev; +Cc: davem, ast, daniel, Stanislav Fomichev, syzbot

Syzbot found that running BPF_PROG_TEST_RUN with repeat=0xffffffff
makes the process unkillable. The problem is that when CONFIG_PREEMPT is
enabled, we never see need_resched() return true. This is because
preempt_enable() (which we call in bpf_test_run_one on each iteration)
already handles the resched if it's needed.

Let's disable preemption for the whole run, not per iteration, so that
we can properly see whether a resched is needed.
Let's also properly return -EINTR to userspace when a signal is
pending.
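
To illustrate, the per-iteration pattern before this patch looks
roughly like this (a simplified sketch, not the exact code):

	for (i = 0; i < repeat; i++) {
		preempt_disable();
		rcu_read_lock();
		/* run the flow dissector program once */
		rcu_read_unlock();
		preempt_enable();	/* with CONFIG_PREEMPT this already
					 * reschedules if TIF_NEED_RESCHED
					 * is set ... */

		if (need_resched()) {	/* ... so this never becomes true
					 * and the signal_pending() check
					 * below is never reached */
			if (signal_pending(current))
				break;
			cond_resched();
		}
	}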

This is a follow up for a recently fixed issue in bpf_test_run, see
commit df1a2cb7c74b ("bpf/test_run: fix unkillable
BPF_PROG_TEST_RUN").

Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
---
 net/bpf/test_run.c | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 2c5172b33209..619655db8d9e 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -293,31 +293,45 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
 	if (!repeat)
 		repeat = 1;
 
+	rcu_read_lock();
+	preempt_disable();
 	time_start = ktime_get_ns();
 	for (i = 0; i < repeat; i++) {
-		preempt_disable();
-		rcu_read_lock();
 		retval = __skb_flow_bpf_dissect(prog, skb,
 						&flow_keys_dissector,
 						&flow_keys);
-		rcu_read_unlock();
-		preempt_enable();
+
+		if (signal_pending(current)) {
+			preempt_enable();
+			rcu_read_unlock();
+
+			ret = -EINTR;
+			goto out;
+		}
 
 		if (need_resched()) {
-			if (signal_pending(current))
-				break;
 			time_spent += ktime_get_ns() - time_start;
+			preempt_enable();
+			rcu_read_unlock();
+
 			cond_resched();
+
+			rcu_read_lock();
+			preempt_disable();
 			time_start = ktime_get_ns();
 		}
 	}
 	time_spent += ktime_get_ns() - time_start;
+	preempt_enable();
+	rcu_read_unlock();
+
 	do_div(time_spent, repeat);
 	duration = time_spent > U32_MAX ? U32_MAX : (u32)time_spent;
 
 	ret = bpf_test_finish(kattr, uattr, &flow_keys, sizeof(flow_keys),
 			      retval, duration);
 
+out:
 	kfree_skb(skb);
 	kfree(sk);
 	return ret;
-- 
2.21.0.rc0.258.g878e2cd30e-goog



* Re: [PATCH bpf-next] bpf/test_run: fix unkillable BPF_PROG_TEST_RUN for flow dissector
From: Daniel Borkmann @ 2019-02-25 21:36 UTC (permalink / raw)
  To: Stanislav Fomichev, netdev; +Cc: davem, ast, syzbot

On 02/19/2019 07:54 PM, Stanislav Fomichev wrote:
> Syzbot found that running BPF_PROG_TEST_RUN with repeat=0xffffffff
> makes the process unkillable. The problem is that when CONFIG_PREEMPT is
> enabled, we never see need_resched() return true. This is because
> preempt_enable() (which we call in bpf_test_run_one on each iteration)
> already handles the resched if it's needed.
> 
> Let's disable preemption for the whole run, not per iteration, so that
> we can properly see whether a resched is needed.
> Let's also properly return -EINTR to userspace when a signal is
> pending.
> 
> This is a follow up for a recently fixed issue in bpf_test_run, see
> commit df1a2cb7c74b ("bpf/test_run: fix unkillable
> BPF_PROG_TEST_RUN").
> 
> Reported-by: syzbot <syzkaller@googlegroups.com>
> Signed-off-by: Stanislav Fomichev <sdf@google.com>

Applied, thanks! It would be nice to consolidate at least some of these
bits so they're not duplicated; perhaps __always_inline could help for
function args in avoiding indirect calls (iirc, perf's rb handling uses
this heavily).
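
E.g. something along these lines could work as a shared helper (a rough
sketch only; the name and signature below are made up to illustrate the
idea, timing and retval handling omitted):

	static __always_inline int bpf_test_run_loop(u32 repeat, void *ctx,
						     u32 (*run_one)(void *ctx))
	{
		int ret = 0;
		u32 i;

		rcu_read_lock();
		preempt_disable();
		for (i = 0; i < repeat; i++) {
			/* since the helper is always inlined, a constant
			 * run_one argument can be folded into a direct
			 * call at each call site */
			run_one(ctx);

			if (signal_pending(current)) {
				ret = -EINTR;
				break;
			}
			if (need_resched()) {
				preempt_enable();
				rcu_read_unlock();
				cond_resched();
				rcu_read_lock();
				preempt_disable();
			}
		}
		preempt_enable();
		rcu_read_unlock();
		return ret;
	}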



