bpf.vger.kernel.org archive mirror
* [PATCH bpf-next 1/3] selftests/bpf: fix race in tcp_rtt test
@ 2020-03-14  1:39 Andrii Nakryiko
  2020-03-14  1:39 ` [PATCH bpf-next 2/3] selftests/bpf: fix test_progs's parsing of test numbers Andrii Nakryiko
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Andrii Nakryiko @ 2020-03-14  1:39 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel; +Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko

The previous attempt to make tcp_rtt more robust introduced a new race, in which
server_done might be set to true before the server can actually accept any
connection. Fix this by unconditionally waiting for accept(). Given that the
socket is non-blocking, if there is any problem on the client side, the test will
eventually close the listening FD and let the server thread exit with failure.

Fixes: 4cd729fa022c ("selftests/bpf: Make tcp_rtt test more robust to failures")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 tools/testing/selftests/bpf/prog_tests/tcp_rtt.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/tcp_rtt.c b/tools/testing/selftests/bpf/prog_tests/tcp_rtt.c
index e08f6bb17700..e56b52ab41da 100644
--- a/tools/testing/selftests/bpf/prog_tests/tcp_rtt.c
+++ b/tools/testing/selftests/bpf/prog_tests/tcp_rtt.c
@@ -226,7 +226,7 @@ static void *server_thread(void *arg)
 		return ERR_PTR(err);
 	}
 
-	while (!server_done) {
+	while (true) {
 		client_fd = accept(fd, (struct sockaddr *)&addr, &len);
 		if (client_fd == -1 && errno == EAGAIN) {
 			usleep(50);
@@ -272,7 +272,7 @@ void test_tcp_rtt(void)
 	CHECK_FAIL(run_test(cgroup_fd, server_fd));
 
 	server_done = true;
-	pthread_join(tid, &server_res);
+	CHECK_FAIL(pthread_join(tid, &server_res));
 	CHECK_FAIL(IS_ERR(server_res));
 
 close_server_fd:
-- 
2.17.1



* [PATCH bpf-next 2/3] selftests/bpf: fix test_progs's parsing of test numbers
  2020-03-14  1:39 [PATCH bpf-next 1/3] selftests/bpf: fix race in tcp_rtt test Andrii Nakryiko
@ 2020-03-14  1:39 ` Andrii Nakryiko
  2020-03-17  5:28   ` Martin KaFai Lau
  2020-03-14  1:39 ` [PATCH bpf-next 3/3] selftests/bpf: reset process and thread affinity after each test/sub-test Andrii Nakryiko
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 8+ messages in thread
From: Andrii Nakryiko @ 2020-03-14  1:39 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel; +Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko

When specifying a disjoint set of tests, test_progs doesn't set the skipped
tests' array elements to false. This leads to spurious execution of tests that
should have been skipped. Fix this by explicitly initializing them to false.

Fixes: 3a516a0a3a7b ("selftests/bpf: add sub-tests support for test_progs")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 tools/testing/selftests/bpf/test_progs.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
index dc12fd0de1c2..c8cb407482c6 100644
--- a/tools/testing/selftests/bpf/test_progs.c
+++ b/tools/testing/selftests/bpf/test_progs.c
@@ -424,7 +424,7 @@ static int parse_str_list(const char *s, struct str_set *set)
 
 int parse_num_list(const char *s, struct test_selector *sel)
 {
-	int i, set_len = 0, num, start = 0, end = -1;
+	int i, set_len = 0, new_len, num, start = 0, end = -1;
 	bool *set = NULL, *tmp, parsing_end = false;
 	char *next;
 
@@ -459,18 +459,19 @@ int parse_num_list(const char *s, struct test_selector *sel)
 			return -EINVAL;
 
 		if (end + 1 > set_len) {
-			set_len = end + 1;
-			tmp = realloc(set, set_len);
+			new_len = end + 1;
+			tmp = realloc(set, new_len);
 			if (!tmp) {
 				free(set);
 				return -ENOMEM;
 			}
+			for (i = set_len; i < start; i++)
+				tmp[i] = false;
 			set = tmp;
+			set_len = new_len;
 		}
-		for (i = start; i <= end; i++) {
+		for (i = start; i <= end; i++)
 			set[i] = true;
-		}
-
 	}
 
 	if (!set)
-- 
2.17.1



* [PATCH bpf-next 3/3] selftests/bpf: reset process and thread affinity after each test/sub-test
  2020-03-14  1:39 [PATCH bpf-next 1/3] selftests/bpf: fix race in tcp_rtt test Andrii Nakryiko
  2020-03-14  1:39 ` [PATCH bpf-next 2/3] selftests/bpf: fix test_progs's parsing of test numbers Andrii Nakryiko
@ 2020-03-14  1:39 ` Andrii Nakryiko
  2020-03-17  5:35   ` [Potential Spoof] " Martin KaFai Lau
  2020-03-17  5:27 ` [Potential Spoof] [PATCH bpf-next 1/3] selftests/bpf: fix race in tcp_rtt test Martin KaFai Lau
  2020-03-17 18:58 ` Daniel Borkmann
  3 siblings, 1 reply; 8+ messages in thread
From: Andrii Nakryiko @ 2020-03-14  1:39 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel; +Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko

Some tests and sub-tests set "custom" thread/process affinity and don't reset
it. Instead of requiring each test to undo all this, ensure that thread
affinity is restored by the test_progs test runner itself.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 tools/testing/selftests/bpf/test_progs.c | 42 +++++++++++++++++++++++-
 tools/testing/selftests/bpf/test_progs.h |  1 +
 2 files changed, 42 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
index c8cb407482c6..b521e0a512b6 100644
--- a/tools/testing/selftests/bpf/test_progs.c
+++ b/tools/testing/selftests/bpf/test_progs.c
@@ -1,12 +1,15 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /* Copyright (c) 2017 Facebook
  */
+#define _GNU_SOURCE
 #include "test_progs.h"
 #include "cgroup_helpers.h"
 #include "bpf_rlimit.h"
 #include <argp.h>
-#include <string.h>
+#include <pthread.h>
+#include <sched.h>
 #include <signal.h>
+#include <string.h>
 #include <execinfo.h> /* backtrace */
 
 /* defined in test_progs.h */
@@ -90,6 +93,34 @@ static void skip_account(void)
 	}
 }
 
+static void stdio_restore(void);
+
+/* A bunch of tests set custom affinity per-thread and/or per-process. Reset
+ * it after each test/sub-test.
+ */
+static void reset_affinity() {
+
+	cpu_set_t cpuset;
+	int i, err;
+
+	CPU_ZERO(&cpuset);
+	for (i = 0; i < env.nr_cpus; i++)
+		CPU_SET(i, &cpuset);
+
+	err = sched_setaffinity(0, sizeof(cpuset), &cpuset);
+	if (err < 0) {
+		stdio_restore();
+		fprintf(stderr, "Failed to reset process affinity: %d!\n", err);
+		exit(-1);
+	}
+	err = pthread_setaffinity_np(pthread_self(), sizeof(cpuset), &cpuset);
+	if (err < 0) {
+		stdio_restore();
+		fprintf(stderr, "Failed to reset thread affinity: %d!\n", err);
+		exit(-1);
+	}
+}
+
 void test__end_subtest()
 {
 	struct prog_test_def *test = env.test;
@@ -107,6 +138,8 @@ void test__end_subtest()
 	       test->test_num, test->subtest_num,
 	       test->subtest_name, sub_error_cnt ? "FAIL" : "OK");
 
+	reset_affinity();
+
 	free(test->subtest_name);
 	test->subtest_name = NULL;
 }
@@ -679,6 +712,12 @@ int main(int argc, char **argv)
 	srand(time(NULL));
 
 	env.jit_enabled = is_jit_enabled();
+	env.nr_cpus = libbpf_num_possible_cpus();
+	if (env.nr_cpus < 0) {
+		fprintf(stderr, "Failed to get number of CPUs: %d!\n",
+			env.nr_cpus);
+		return -1;
+	}
 
 	stdio_hijack();
 	for (i = 0; i < prog_test_cnt; i++) {
@@ -709,6 +748,7 @@ int main(int argc, char **argv)
 			test->test_num, test->test_name,
 			test->error_cnt ? "FAIL" : "OK");
 
+		reset_affinity();
 		if (test->need_cgroup_cleanup)
 			cleanup_cgroup_environment();
 	}
diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h
index fd85fa61dbf7..f4aff6b8284b 100644
--- a/tools/testing/selftests/bpf/test_progs.h
+++ b/tools/testing/selftests/bpf/test_progs.h
@@ -71,6 +71,7 @@ struct test_env {
 	FILE *stderr;
 	char *log_buf;
 	size_t log_cnt;
+	int nr_cpus;
 
 	int succ_cnt; /* successful tests */
 	int sub_succ_cnt; /* successful sub-tests */
-- 
2.17.1



* Re: [Potential Spoof] [PATCH bpf-next 1/3] selftests/bpf: fix race in tcp_rtt test
  2020-03-14  1:39 [PATCH bpf-next 1/3] selftests/bpf: fix race in tcp_rtt test Andrii Nakryiko
  2020-03-14  1:39 ` [PATCH bpf-next 2/3] selftests/bpf: fix test_progs's parsing of test numbers Andrii Nakryiko
  2020-03-14  1:39 ` [PATCH bpf-next 3/3] selftests/bpf: reset process and thread affinity after each test/sub-test Andrii Nakryiko
@ 2020-03-17  5:27 ` Martin KaFai Lau
  2020-03-17 18:58 ` Daniel Borkmann
  3 siblings, 0 replies; 8+ messages in thread
From: Martin KaFai Lau @ 2020-03-17  5:27 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: bpf, netdev, ast, daniel, andrii.nakryiko, kernel-team

On Fri, Mar 13, 2020 at 06:39:30PM -0700, Andrii Nakryiko wrote:
> The previous attempt to make tcp_rtt more robust introduced a new race, in which
> server_done might be set to true before the server can actually accept any
> connection. Fix this by unconditionally waiting for accept(). Given that the
> socket is non-blocking, if there is any problem on the client side, the test will
> eventually close the listening FD and let the server thread exit with failure.
> 
> Fixes: 4cd729fa022c ("selftests/bpf: Make tcp_rtt test more robust to failures")
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> ---
>  tools/testing/selftests/bpf/prog_tests/tcp_rtt.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/testing/selftests/bpf/prog_tests/tcp_rtt.c b/tools/testing/selftests/bpf/prog_tests/tcp_rtt.c
> index e08f6bb17700..e56b52ab41da 100644
> --- a/tools/testing/selftests/bpf/prog_tests/tcp_rtt.c
> +++ b/tools/testing/selftests/bpf/prog_tests/tcp_rtt.c
> @@ -226,7 +226,7 @@ static void *server_thread(void *arg)
>  		return ERR_PTR(err);
>  	}
>  
> -	while (!server_done) {
> +	while (true) {
>  		client_fd = accept(fd, (struct sockaddr *)&addr, &len);
>  		if (client_fd == -1 && errno == EAGAIN) {
>  			usleep(50);
> @@ -272,7 +272,7 @@ void test_tcp_rtt(void)
>  	CHECK_FAIL(run_test(cgroup_fd, server_fd));
>  
>  	server_done = true;
> -	pthread_join(tid, &server_res);
> +	CHECK_FAIL(pthread_join(tid, &server_res));
From looking at run_test(),
I suspect without accept and server_thread, this may also work:

listen(server_fd, 1);
run_test(cgroup_fd, server_fd);
close(server_fd);

This change lgtm since it is NONBLOCK.  Ideally, maybe it should
also be time limited by epoll or setsockopt(SO_RCVTIMEO) in the future.

Acked-by: Martin KaFai Lau <kafai@fb.com>


* Re: [PATCH bpf-next 2/3] selftests/bpf: fix test_progs's parsing of test numbers
  2020-03-14  1:39 ` [PATCH bpf-next 2/3] selftests/bpf: fix test_progs's parsing of test numbers Andrii Nakryiko
@ 2020-03-17  5:28   ` Martin KaFai Lau
  0 siblings, 0 replies; 8+ messages in thread
From: Martin KaFai Lau @ 2020-03-17  5:28 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: bpf, netdev, ast, daniel, andrii.nakryiko, kernel-team

On Fri, Mar 13, 2020 at 06:39:31PM -0700, Andrii Nakryiko wrote:
> When specifying a disjoint set of tests, test_progs doesn't set the skipped
> tests' array elements to false. This leads to spurious execution of tests that
> should have been skipped. Fix this by explicitly initializing them to false.
Acked-by: Martin KaFai Lau <kafai@fb.com>


* Re: [Potential Spoof] [PATCH bpf-next 3/3] selftests/bpf: reset process and thread affinity after each test/sub-test
  2020-03-14  1:39 ` [PATCH bpf-next 3/3] selftests/bpf: reset process and thread affinity after each test/sub-test Andrii Nakryiko
@ 2020-03-17  5:35   ` Martin KaFai Lau
  2020-03-17  5:39     ` Andrii Nakryiko
  0 siblings, 1 reply; 8+ messages in thread
From: Martin KaFai Lau @ 2020-03-17  5:35 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: bpf, netdev, ast, daniel, andrii.nakryiko, kernel-team

On Fri, Mar 13, 2020 at 06:39:32PM -0700, Andrii Nakryiko wrote:
> Some tests and sub-tests set "custom" thread/process affinity and don't reset
> it. Instead of requiring each test to undo all this, ensure that thread
> affinity is restored by the test_progs test runner itself.
> 
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> ---
>  tools/testing/selftests/bpf/test_progs.c | 42 +++++++++++++++++++++++-
>  tools/testing/selftests/bpf/test_progs.h |  1 +
>  2 files changed, 42 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
> index c8cb407482c6..b521e0a512b6 100644
> --- a/tools/testing/selftests/bpf/test_progs.c
> +++ b/tools/testing/selftests/bpf/test_progs.c
> @@ -1,12 +1,15 @@
>  // SPDX-License-Identifier: GPL-2.0-only
>  /* Copyright (c) 2017 Facebook
>   */
> +#define _GNU_SOURCE
>  #include "test_progs.h"
>  #include "cgroup_helpers.h"
>  #include "bpf_rlimit.h"
>  #include <argp.h>
> -#include <string.h>
> +#include <pthread.h>
> +#include <sched.h>
>  #include <signal.h>
> +#include <string.h>
>  #include <execinfo.h> /* backtrace */
>  
>  /* defined in test_progs.h */
> @@ -90,6 +93,34 @@ static void skip_account(void)
>  	}
>  }
>  
> +static void stdio_restore(void);
> +
> +/* A bunch of tests set custom affinity per-thread and/or per-process. Reset
> + * it after each test/sub-test.
> + */
> +static void reset_affinity() {
> +
> +	cpu_set_t cpuset;
> +	int i, err;
> +
> +	CPU_ZERO(&cpuset);
> +	for (i = 0; i < env.nr_cpus; i++)
> +		CPU_SET(i, &cpuset);
In case the user runs "taskset somemask test_progs",
is it better to store the initial_cpuset at the beginning
of main and then restore to initial_cpuset after each run?

> +
> +	err = sched_setaffinity(0, sizeof(cpuset), &cpuset);
> +	if (err < 0) {
> +		stdio_restore();
> +		fprintf(stderr, "Failed to reset process affinity: %d!\n", err);
> +		exit(-1);
> +	}
> +	err = pthread_setaffinity_np(pthread_self(), sizeof(cpuset), &cpuset);
> +	if (err < 0) {
> +		stdio_restore();
> +		fprintf(stderr, "Failed to reset thread affinity: %d!\n", err);
> +		exit(-1);
> +	}
> +}


* Re: [Potential Spoof] [PATCH bpf-next 3/3] selftests/bpf: reset process and thread affinity after each test/sub-test
  2020-03-17  5:35   ` [Potential Spoof] " Martin KaFai Lau
@ 2020-03-17  5:39     ` Andrii Nakryiko
  0 siblings, 0 replies; 8+ messages in thread
From: Andrii Nakryiko @ 2020-03-17  5:39 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team

On Mon, Mar 16, 2020 at 10:35 PM Martin KaFai Lau <kafai@fb.com> wrote:
>
> On Fri, Mar 13, 2020 at 06:39:32PM -0700, Andrii Nakryiko wrote:
> > Some tests and sub-tests set "custom" thread/process affinity and don't reset
> > it. Instead of requiring each test to undo all this, ensure that thread
> > affinity is restored by the test_progs test runner itself.
> >
> > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> > ---
> >  tools/testing/selftests/bpf/test_progs.c | 42 +++++++++++++++++++++++-
> >  tools/testing/selftests/bpf/test_progs.h |  1 +
> >  2 files changed, 42 insertions(+), 1 deletion(-)
> >
> > diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
> > index c8cb407482c6..b521e0a512b6 100644
> > --- a/tools/testing/selftests/bpf/test_progs.c
> > +++ b/tools/testing/selftests/bpf/test_progs.c
> > @@ -1,12 +1,15 @@
> >  // SPDX-License-Identifier: GPL-2.0-only
> >  /* Copyright (c) 2017 Facebook
> >   */
> > +#define _GNU_SOURCE
> >  #include "test_progs.h"
> >  #include "cgroup_helpers.h"
> >  #include "bpf_rlimit.h"
> >  #include <argp.h>
> > -#include <string.h>
> > +#include <pthread.h>
> > +#include <sched.h>
> >  #include <signal.h>
> > +#include <string.h>
> >  #include <execinfo.h> /* backtrace */
> >
> >  /* defined in test_progs.h */
> > @@ -90,6 +93,34 @@ static void skip_account(void)
> >       }
> >  }
> >
> > +static void stdio_restore(void);
> > +
> > +/* A bunch of tests set custom affinity per-thread and/or per-process. Reset
> > + * it after each test/sub-test.
> > + */
> > +static void reset_affinity() {
> > +
> > +     cpu_set_t cpuset;
> > +     int i, err;
> > +
> > +     CPU_ZERO(&cpuset);
> > +     for (i = 0; i < env.nr_cpus; i++)
> > +             CPU_SET(i, &cpuset);
> In case the user runs "taskset somemask test_progs",
> is it better to store the initial_cpuset at the beginning
> of main and then restore to initial_cpuset after each run?

Not sure it's worth it (it's a test runner, not really a general-purpose
tool), but I can add that for sure.

>
> > +
> > +     err = sched_setaffinity(0, sizeof(cpuset), &cpuset);
> > +     if (err < 0) {
> > +             stdio_restore();
> > +             fprintf(stderr, "Failed to reset process affinity: %d!\n", err);
> > +             exit(-1);
> > +     }
> > +     err = pthread_setaffinity_np(pthread_self(), sizeof(cpuset), &cpuset);
> > +     if (err < 0) {
> > +             stdio_restore();
> > +             fprintf(stderr, "Failed to reset thread affinity: %d!\n", err);
> > +             exit(-1);
> > +     }
> > +}


* Re: [PATCH bpf-next 1/3] selftests/bpf: fix race in tcp_rtt test
  2020-03-14  1:39 [PATCH bpf-next 1/3] selftests/bpf: fix race in tcp_rtt test Andrii Nakryiko
                   ` (2 preceding siblings ...)
  2020-03-17  5:27 ` [Potential Spoof] [PATCH bpf-next 1/3] selftests/bpf: fix race in tcp_rtt test Martin KaFai Lau
@ 2020-03-17 18:58 ` Daniel Borkmann
  3 siblings, 0 replies; 8+ messages in thread
From: Daniel Borkmann @ 2020-03-17 18:58 UTC (permalink / raw)
  To: Andrii Nakryiko, bpf, netdev, ast; +Cc: andrii.nakryiko, kernel-team

On 3/14/20 2:39 AM, Andrii Nakryiko wrote:
> Previous attempt to make tcp_rtt more robust introduced a new race, in which
> server_done might be set to true before server can actually accept any
> connection. Fix this by unconditionally waiting for accept(). Given socket is
> non-blocking, if there are any problems with client side, it should eventually
> close listening FD and let server thread exit with failure.
> 
> Fixes: 4cd729fa022c ("selftests/bpf: Make tcp_rtt test more robust to failures")
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>

Series applied, thanks!

