bpf.vger.kernel.org archive mirror
* [bpf-next PATCH 0/4] selftests/bpf: Migrate test_tcpbpf_user to be a part of test_progs framework
@ 2020-10-28  1:46 Alexander Duyck
  2020-10-28  1:47 ` [bpf-next PATCH 1/4] selftests/bpf: Move test_tcpbpf_user into test_progs Alexander Duyck
                   ` (3 more replies)
  0 siblings, 4 replies; 15+ messages in thread
From: Alexander Duyck @ 2020-10-28  1:46 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, kafai, john.fastabend, kernel-team, netdev,
	edumazet, brakmo, alexanderduyck

Move the test functionality from test_tcpbpf_user into the test_progs
framework so that it runs whenever test_progs is run. This helps prevent
future test escapes, since standalone tests such as test_tcpbpf_user are
less likely to be run by developers and CI.

As part of the move, the series also updates the code to make use of the
existing APIs provided by the test_progs framework. This simplifies and
streamlines the test code and avoids duplication of effort.
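
For anyone less familiar with the framework: test_progs picks up any
void test_<name>(void) function defined under prog_tests/, so after this
series the test is just another entry point there. A rough sketch of the
shape of the converted test (illustration only; the exact code is in the
patches):

	/* tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c (sketch) */
	#include <test_progs.h>

	/* test_progs discovers this function by name; no main() and no
	 * explicit registration is needed.
	 */
	void test_tcpbpf_user(void)
	{
		/* setup, run the connection, verify results; failures are
		 * reported through CHECK()/CHECK_FAIL() rather than a
		 * process exit code
		 */
	}

The test can then be run as part of the whole suite (./test_progs) or on
its own with ./test_progs -t tcpbpf_user.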

---

Alexander Duyck (4):
      selftests/bpf: Move test_tcpbpf_user into test_progs
      selftests/bpf: Drop python client/server in favor of threads
      selftests/bpf: Replace EXPECT_EQ with ASSERT_EQ and refactor verify_results
      selftests/bpf: Migrate tcpbpf_user.c to use BPF skeleton


 .../selftests/bpf/prog_tests/tcpbpf_user.c    | 260 +++++++++++-------
 tools/testing/selftests/bpf/tcp_client.py     |  50 ----
 tools/testing/selftests/bpf/tcp_server.py     |  80 ------
 3 files changed, 158 insertions(+), 232 deletions(-)
 delete mode 100755 tools/testing/selftests/bpf/tcp_client.py
 delete mode 100755 tools/testing/selftests/bpf/tcp_server.py

--



* [bpf-next PATCH 1/4] selftests/bpf: Move test_tcpbpf_user into test_progs
  2020-10-28  1:46 [bpf-next PATCH 0/4] selftests/bpf: Migrate test_tcpbpf_user to be a part of test_progs framework Alexander Duyck
@ 2020-10-28  1:47 ` Alexander Duyck
  2020-10-29 23:27   ` Andrii Nakryiko
  2020-10-28  1:47 ` [bpf-next PATCH 2/4] selftests/bpf: Drop python client/server in favor of threads Alexander Duyck
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 15+ messages in thread
From: Alexander Duyck @ 2020-10-28  1:47 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, kafai, john.fastabend, kernel-team, netdev,
	edumazet, brakmo, alexanderduyck

From: Alexander Duyck <alexanderduyck@fb.com>

Recently a bug was missed because test_tcpbpf_user is not part of
test_progs. To prevent similar issues in the future, move the test
functionality into test_progs. This makes it part of standard testing so
that it will not be overlooked.

As part of moving the functionality into test_progs it is necessary to
integrate with the test_progs framework and to drop any redundant code.
This patch:
1. Cleans up the include headers
2. Drops a duplicate definition of bpf_find_map
3. Replaces printf calls with fprintf to stderr
4. Renames main to test_tcpbpf_user
5. Drops the return value in favor of CHECK calls to report errors (see
   the sketch after this list)
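
A minimal sketch of the error-handling pattern the test is converted to
(illustration only; the real conversion is in the diff below):

	/* Instead of printing an error and returning an exit code,
	 * test_progs style code marks the test as failed via
	 * CHECK_FAIL()/CHECK() and bails out through a cleanup label.
	 */
	cg_fd = cgroup_setup_and_join(CG_NAME);
	if (CHECK_FAIL(cg_fd < 0))
		goto err;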

Signed-off-by: Alexander Duyck <alexanderduyck@fb.com>
---
 tools/testing/selftests/bpf/Makefile               |    3 
 .../testing/selftests/bpf/prog_tests/tcpbpf_user.c |  138 +++++++++++++++++
 tools/testing/selftests/bpf/test_tcpbpf_user.c     |  165 --------------------
 3 files changed, 139 insertions(+), 167 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
 delete mode 100644 tools/testing/selftests/bpf/test_tcpbpf_user.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 542768f5195b..50e5b18fc455 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -32,7 +32,7 @@ LDLIBS += -lcap -lelf -lz -lrt -lpthread
 
 # Order correspond to 'make run_tests' order
 TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_lpm_map test_progs \
-	test_verifier_log test_dev_cgroup test_tcpbpf_user \
+	test_verifier_log test_dev_cgroup \
 	test_sock test_sockmap get_cgroup_id_user test_socket_cookie \
 	test_cgroup_storage \
 	test_netcnt test_tcpnotify_user test_sysctl \
@@ -163,7 +163,6 @@ $(OUTPUT)/test_sock: cgroup_helpers.c
 $(OUTPUT)/test_sock_addr: cgroup_helpers.c
 $(OUTPUT)/test_socket_cookie: cgroup_helpers.c
 $(OUTPUT)/test_sockmap: cgroup_helpers.c
-$(OUTPUT)/test_tcpbpf_user: cgroup_helpers.c
 $(OUTPUT)/test_tcpnotify_user: cgroup_helpers.c trace_helpers.c
 $(OUTPUT)/get_cgroup_id_user: cgroup_helpers.c
 $(OUTPUT)/test_cgroup_storage: cgroup_helpers.c
diff --git a/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
new file mode 100644
index 000000000000..5becab8b04e3
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
@@ -0,0 +1,138 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <inttypes.h>
+#include <test_progs.h>
+
+#include "test_tcpbpf.h"
+#include "cgroup_helpers.h"
+
+#define CG_NAME "/tcpbpf-user-test"
+
+/* 3 comes from one listening socket + both ends of the connection */
+#define EXPECTED_CLOSE_EVENTS		3
+
+#define EXPECT_EQ(expected, actual, fmt)			\
+	do {							\
+		if ((expected) != (actual)) {			\
+			fprintf(stderr, "  Value of: " #actual "\n"	\
+			       "    Actual: %" fmt "\n"		\
+			       "  Expected: %" fmt "\n",	\
+			       (actual), (expected));		\
+			ret--;					\
+		}						\
+	} while (0)
+
+int verify_result(const struct tcpbpf_globals *result)
+{
+	__u32 expected_events;
+	int ret = 0;
+
+	expected_events = ((1 << BPF_SOCK_OPS_TIMEOUT_INIT) |
+			   (1 << BPF_SOCK_OPS_RWND_INIT) |
+			   (1 << BPF_SOCK_OPS_TCP_CONNECT_CB) |
+			   (1 << BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB) |
+			   (1 << BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB) |
+			   (1 << BPF_SOCK_OPS_NEEDS_ECN) |
+			   (1 << BPF_SOCK_OPS_STATE_CB) |
+			   (1 << BPF_SOCK_OPS_TCP_LISTEN_CB));
+
+	EXPECT_EQ(expected_events, result->event_map, "#" PRIx32);
+	EXPECT_EQ(501ULL, result->bytes_received, "llu");
+	EXPECT_EQ(1002ULL, result->bytes_acked, "llu");
+	EXPECT_EQ(1, result->data_segs_in, PRIu32);
+	EXPECT_EQ(1, result->data_segs_out, PRIu32);
+	EXPECT_EQ(0x80, result->bad_cb_test_rv, PRIu32);
+	EXPECT_EQ(0, result->good_cb_test_rv, PRIu32);
+	EXPECT_EQ(1, result->num_listen, PRIu32);
+	EXPECT_EQ(EXPECTED_CLOSE_EVENTS, result->num_close_events, PRIu32);
+
+	return ret;
+}
+
+int verify_sockopt_result(int sock_map_fd)
+{
+	__u32 key = 0;
+	int ret = 0;
+	int res;
+	int rv;
+
+	/* check setsockopt for SAVE_SYN */
+	rv = bpf_map_lookup_elem(sock_map_fd, &key, &res);
+	EXPECT_EQ(0, rv, "d");
+	EXPECT_EQ(0, res, "d");
+	key = 1;
+	/* check getsockopt for SAVED_SYN */
+	rv = bpf_map_lookup_elem(sock_map_fd, &key, &res);
+	EXPECT_EQ(0, rv, "d");
+	EXPECT_EQ(1, res, "d");
+	return ret;
+}
+
+void test_tcpbpf_user(void)
+{
+	const char *file = "test_tcpbpf_kern.o";
+	int prog_fd, map_fd, sock_map_fd;
+	struct tcpbpf_globals g = {0};
+	struct bpf_object *obj;
+	int cg_fd = -1;
+	int retry = 10;
+	__u32 key = 0;
+	int rv;
+
+	cg_fd = cgroup_setup_and_join(CG_NAME);
+	if (CHECK_FAIL(cg_fd < 0))
+		goto err;
+
+	if (CHECK_FAIL(bpf_prog_load(file, BPF_PROG_TYPE_SOCK_OPS, &obj, &prog_fd))) {
+		fprintf(stderr, "FAILED: load_bpf_file failed for: %s\n", file);
+		goto err;
+	}
+
+	rv = bpf_prog_attach(prog_fd, cg_fd, BPF_CGROUP_SOCK_OPS, 0);
+	if (CHECK_FAIL(rv)) {
+		fprintf(stderr, "FAILED: bpf_prog_attach: %d (%s)\n",
+		       errno, strerror(errno));
+		goto err;
+	}
+
+	if (CHECK_FAIL(system("./tcp_server.py"))) {
+		fprintf(stderr, "FAILED: TCP server\n");
+		goto err;
+	}
+
+	map_fd = bpf_find_map(__func__, obj, "global_map");
+	if (CHECK_FAIL(map_fd < 0))
+		goto err;
+
+	sock_map_fd = bpf_find_map(__func__, obj, "sockopt_results");
+	if (CHECK_FAIL(sock_map_fd < 0))
+		goto err;
+
+retry_lookup:
+	rv = bpf_map_lookup_elem(map_fd, &key, &g);
+	if (CHECK_FAIL(rv != 0)) {
+		fprintf(stderr, "FAILED: bpf_map_lookup_elem returns %d\n", rv);
+		goto err;
+	}
+
+	if (g.num_close_events != EXPECTED_CLOSE_EVENTS && retry--) {
+		fprintf(stderr,
+			"Unexpected number of close events (%d), retrying!\n",
+			g.num_close_events);
+		usleep(100);
+		goto retry_lookup;
+	}
+
+	if (CHECK_FAIL(verify_result(&g))) {
+		fprintf(stderr, "FAILED: Wrong stats\n");
+		goto err;
+	}
+
+	if (CHECK_FAIL(verify_sockopt_result(sock_map_fd))) {
+		fprintf(stderr, "FAILED: Wrong sockopt stats\n");
+		goto err;
+	}
+err:
+	bpf_prog_detach(cg_fd, BPF_CGROUP_SOCK_OPS);
+	close(cg_fd);
+	cleanup_cgroup_environment();
+}
diff --git a/tools/testing/selftests/bpf/test_tcpbpf_user.c b/tools/testing/selftests/bpf/test_tcpbpf_user.c
deleted file mode 100644
index 74a9e49988b6..000000000000
--- a/tools/testing/selftests/bpf/test_tcpbpf_user.c
+++ /dev/null
@@ -1,165 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#include <inttypes.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <unistd.h>
-#include <errno.h>
-#include <string.h>
-#include <linux/bpf.h>
-#include <sys/types.h>
-#include <bpf/bpf.h>
-#include <bpf/libbpf.h>
-
-#include "bpf_rlimit.h"
-#include "bpf_util.h"
-#include "cgroup_helpers.h"
-
-#include "test_tcpbpf.h"
-
-/* 3 comes from one listening socket + both ends of the connection */
-#define EXPECTED_CLOSE_EVENTS		3
-
-#define EXPECT_EQ(expected, actual, fmt)			\
-	do {							\
-		if ((expected) != (actual)) {			\
-			printf("  Value of: " #actual "\n"	\
-			       "    Actual: %" fmt "\n"		\
-			       "  Expected: %" fmt "\n",	\
-			       (actual), (expected));		\
-			ret--;					\
-		}						\
-	} while (0)
-
-int verify_result(const struct tcpbpf_globals *result)
-{
-	__u32 expected_events;
-	int ret = 0;
-
-	expected_events = ((1 << BPF_SOCK_OPS_TIMEOUT_INIT) |
-			   (1 << BPF_SOCK_OPS_RWND_INIT) |
-			   (1 << BPF_SOCK_OPS_TCP_CONNECT_CB) |
-			   (1 << BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB) |
-			   (1 << BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB) |
-			   (1 << BPF_SOCK_OPS_NEEDS_ECN) |
-			   (1 << BPF_SOCK_OPS_STATE_CB) |
-			   (1 << BPF_SOCK_OPS_TCP_LISTEN_CB));
-
-	EXPECT_EQ(expected_events, result->event_map, "#" PRIx32);
-	EXPECT_EQ(501ULL, result->bytes_received, "llu");
-	EXPECT_EQ(1002ULL, result->bytes_acked, "llu");
-	EXPECT_EQ(1, result->data_segs_in, PRIu32);
-	EXPECT_EQ(1, result->data_segs_out, PRIu32);
-	EXPECT_EQ(0x80, result->bad_cb_test_rv, PRIu32);
-	EXPECT_EQ(0, result->good_cb_test_rv, PRIu32);
-	EXPECT_EQ(1, result->num_listen, PRIu32);
-	EXPECT_EQ(EXPECTED_CLOSE_EVENTS, result->num_close_events, PRIu32);
-
-	return ret;
-}
-
-int verify_sockopt_result(int sock_map_fd)
-{
-	__u32 key = 0;
-	int ret = 0;
-	int res;
-	int rv;
-
-	/* check setsockopt for SAVE_SYN */
-	rv = bpf_map_lookup_elem(sock_map_fd, &key, &res);
-	EXPECT_EQ(0, rv, "d");
-	EXPECT_EQ(0, res, "d");
-	key = 1;
-	/* check getsockopt for SAVED_SYN */
-	rv = bpf_map_lookup_elem(sock_map_fd, &key, &res);
-	EXPECT_EQ(0, rv, "d");
-	EXPECT_EQ(1, res, "d");
-	return ret;
-}
-
-static int bpf_find_map(const char *test, struct bpf_object *obj,
-			const char *name)
-{
-	struct bpf_map *map;
-
-	map = bpf_object__find_map_by_name(obj, name);
-	if (!map) {
-		printf("%s:FAIL:map '%s' not found\n", test, name);
-		return -1;
-	}
-	return bpf_map__fd(map);
-}
-
-int main(int argc, char **argv)
-{
-	const char *file = "test_tcpbpf_kern.o";
-	int prog_fd, map_fd, sock_map_fd;
-	struct tcpbpf_globals g = {0};
-	const char *cg_path = "/foo";
-	int error = EXIT_FAILURE;
-	struct bpf_object *obj;
-	int cg_fd = -1;
-	int retry = 10;
-	__u32 key = 0;
-	int rv;
-
-	cg_fd = cgroup_setup_and_join(cg_path);
-	if (cg_fd < 0)
-		goto err;
-
-	if (bpf_prog_load(file, BPF_PROG_TYPE_SOCK_OPS, &obj, &prog_fd)) {
-		printf("FAILED: load_bpf_file failed for: %s\n", file);
-		goto err;
-	}
-
-	rv = bpf_prog_attach(prog_fd, cg_fd, BPF_CGROUP_SOCK_OPS, 0);
-	if (rv) {
-		printf("FAILED: bpf_prog_attach: %d (%s)\n",
-		       error, strerror(errno));
-		goto err;
-	}
-
-	if (system("./tcp_server.py")) {
-		printf("FAILED: TCP server\n");
-		goto err;
-	}
-
-	map_fd = bpf_find_map(__func__, obj, "global_map");
-	if (map_fd < 0)
-		goto err;
-
-	sock_map_fd = bpf_find_map(__func__, obj, "sockopt_results");
-	if (sock_map_fd < 0)
-		goto err;
-
-retry_lookup:
-	rv = bpf_map_lookup_elem(map_fd, &key, &g);
-	if (rv != 0) {
-		printf("FAILED: bpf_map_lookup_elem returns %d\n", rv);
-		goto err;
-	}
-
-	if (g.num_close_events != EXPECTED_CLOSE_EVENTS && retry--) {
-		printf("Unexpected number of close events (%d), retrying!\n",
-		       g.num_close_events);
-		usleep(100);
-		goto retry_lookup;
-	}
-
-	if (verify_result(&g)) {
-		printf("FAILED: Wrong stats\n");
-		goto err;
-	}
-
-	if (verify_sockopt_result(sock_map_fd)) {
-		printf("FAILED: Wrong sockopt stats\n");
-		goto err;
-	}
-
-	printf("PASSED!\n");
-	error = 0;
-err:
-	bpf_prog_detach(cg_fd, BPF_CGROUP_SOCK_OPS);
-	close(cg_fd);
-	cleanup_cgroup_environment();
-	return error;
-}




* [bpf-next PATCH 2/4] selftests/bpf: Drop python client/server in favor of threads
  2020-10-28  1:46 [bpf-next PATCH 0/4] selftests/bpf: Migrate test_tcpbpf_user to be a part of test_progs framework Alexander Duyck
  2020-10-28  1:47 ` [bpf-next PATCH 1/4] selftests/bpf: Move test_tcpbpf_user into test_progs Alexander Duyck
@ 2020-10-28  1:47 ` Alexander Duyck
  2020-10-29  1:51   ` Martin KaFai Lau
  2020-10-28  1:47 ` [bpf-next PATCH 3/4] selftests/bpf: Replace EXPECT_EQ with ASSERT_EQ and refactor verify_results Alexander Duyck
  2020-10-28  1:47 ` [bpf-next PATCH 4/4] selftests/bpf: Migrate tcpbpf_user.c to use BPF skeleton Alexander Duyck
  3 siblings, 1 reply; 15+ messages in thread
From: Alexander Duyck @ 2020-10-28  1:47 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, kafai, john.fastabend, kernel-team, netdev,
	edumazet, brakmo, alexanderduyck

From: Alexander Duyck <alexanderduyck@fb.com>

Drop the tcp_client/server.py files in favor of using a client and server
thread within the test case. Specifically, we spawn a new thread to play
the role of the server, and the main testing thread plays the role of the
client.

Doing this reduces overhead since we no longer have two Python workers
floating around. In addition we don't have to worry about synchronization
issues, so the retry loop that waited for the sockets to be closed can be
dropped: by the time we check the results we have already closed the
sockets in the local executable and joined the server thread.
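
The synchronization between the two boils down to a small condition
variable handshake; roughly (mirroring what the diff below does):

	/* main (client) thread: hold the mutex, spawn the server thread,
	 * then wait until it signals that it has reached accept()
	 */
	pthread_mutex_lock(&server_started_mtx);
	if (CHECK_FAIL(pthread_create(&tid, NULL, server_thread, &server_fd)))
		goto close_server_fd;
	pthread_cond_wait(&server_started, &server_started_mtx);
	pthread_mutex_unlock(&server_started_mtx);

	/* server thread: signal readiness right away, before blocking in
	 * accept()
	 */
	pthread_mutex_lock(&server_started_mtx);
	pthread_cond_signal(&server_started);
	pthread_mutex_unlock(&server_started_mtx);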

Signed-off-by: Alexander Duyck <alexanderduyck@fb.com>
---
 .../testing/selftests/bpf/prog_tests/tcpbpf_user.c |  125 +++++++++++++++++---
 tools/testing/selftests/bpf/tcp_client.py          |   50 --------
 tools/testing/selftests/bpf/tcp_server.py          |   80 -------------
 3 files changed, 107 insertions(+), 148 deletions(-)
 delete mode 100755 tools/testing/selftests/bpf/tcp_client.py
 delete mode 100755 tools/testing/selftests/bpf/tcp_server.py

diff --git a/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
index 5becab8b04e3..71ab82e37eb7 100644
--- a/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
+++ b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
@@ -1,14 +1,65 @@
 // SPDX-License-Identifier: GPL-2.0
 #include <inttypes.h>
 #include <test_progs.h>
+#include <network_helpers.h>
 
 #include "test_tcpbpf.h"
 #include "cgroup_helpers.h"
 
+#define LO_ADDR6 "::1"
 #define CG_NAME "/tcpbpf-user-test"
 
-/* 3 comes from one listening socket + both ends of the connection */
-#define EXPECTED_CLOSE_EVENTS		3
+static pthread_mutex_t server_started_mtx = PTHREAD_MUTEX_INITIALIZER;
+static pthread_cond_t server_started = PTHREAD_COND_INITIALIZER;
+
+static void *server_thread(void *arg)
+{
+	struct sockaddr_storage addr;
+	socklen_t len = sizeof(addr);
+	int fd = *(int *)arg;
+	char buf[1000];
+	int client_fd;
+	int err = 0;
+	int i;
+
+	err = listen(fd, 1);
+
+	pthread_mutex_lock(&server_started_mtx);
+	pthread_cond_signal(&server_started);
+	pthread_mutex_unlock(&server_started_mtx);
+
+	if (err < 0) {
+		perror("Failed to listen on socket");
+		err = errno;
+		goto err;
+	}
+
+	client_fd = accept(fd, (struct sockaddr *)&addr, &len);
+	if (client_fd < 0) {
+		perror("Failed to accept client");
+		err = errno;
+		goto err;
+	}
+
+	if (recv(client_fd, buf, 1000, 0) < 1000) {
+		perror("failed/partial recv");
+		err = errno;
+		goto out_clean;
+	}
+
+	for (i = 0; i < 500; i++)
+		buf[i] = '.';
+
+	if (send(client_fd, buf, 500, 0) < 500) {
+		perror("failed/partial send");
+		err = errno;
+		goto out_clean;
+	}
+out_clean:
+	close(client_fd);
+err:
+	return (void *)(long)err;
+}
 
 #define EXPECT_EQ(expected, actual, fmt)			\
 	do {							\
@@ -43,7 +94,9 @@ int verify_result(const struct tcpbpf_globals *result)
 	EXPECT_EQ(0x80, result->bad_cb_test_rv, PRIu32);
 	EXPECT_EQ(0, result->good_cb_test_rv, PRIu32);
 	EXPECT_EQ(1, result->num_listen, PRIu32);
-	EXPECT_EQ(EXPECTED_CLOSE_EVENTS, result->num_close_events, PRIu32);
+
+	/* 3 comes from one listening socket + both ends of the connection */
+	EXPECT_EQ(3, result->num_close_events, PRIu32);
 
 	return ret;
 }
@@ -67,6 +120,52 @@ int verify_sockopt_result(int sock_map_fd)
 	return ret;
 }
 
+static int run_test(void)
+{
+	int server_fd, client_fd;
+	void *server_err;
+	char buf[1000];
+	pthread_t tid;
+	int err = -1;
+	int i;
+
+	server_fd = start_server(AF_INET6, SOCK_STREAM, LO_ADDR6, 0, 0);
+	if (CHECK_FAIL(server_fd < 0))
+		return err;
+
+	pthread_mutex_lock(&server_started_mtx);
+	if (CHECK_FAIL(pthread_create(&tid, NULL, server_thread,
+				      (void *)&server_fd)))
+		goto close_server_fd;
+
+	pthread_cond_wait(&server_started, &server_started_mtx);
+	pthread_mutex_unlock(&server_started_mtx);
+
+	client_fd = connect_to_fd(server_fd, 0);
+	if (client_fd < 0)
+		goto close_server_fd;
+
+	for (i = 0; i < 1000; i++)
+		buf[i] = '+';
+
+	if (CHECK_FAIL(send(client_fd, buf, 1000, 0) < 1000))
+		goto close_client_fd;
+
+	if (CHECK_FAIL(recv(client_fd, buf, 500, 0) < 500))
+		goto close_client_fd;
+
+	pthread_join(tid, &server_err);
+
+	err = (int)(long)server_err;
+	CHECK_FAIL(err);
+
+close_client_fd:
+	close(client_fd);
+close_server_fd:
+	close(server_fd);
+	return err;
+}
+
 void test_tcpbpf_user(void)
 {
 	const char *file = "test_tcpbpf_kern.o";
@@ -74,7 +173,6 @@ void test_tcpbpf_user(void)
 	struct tcpbpf_globals g = {0};
 	struct bpf_object *obj;
 	int cg_fd = -1;
-	int retry = 10;
 	__u32 key = 0;
 	int rv;
 
@@ -94,11 +192,6 @@ void test_tcpbpf_user(void)
 		goto err;
 	}
 
-	if (CHECK_FAIL(system("./tcp_server.py"))) {
-		fprintf(stderr, "FAILED: TCP server\n");
-		goto err;
-	}
-
 	map_fd = bpf_find_map(__func__, obj, "global_map");
 	if (CHECK_FAIL(map_fd < 0))
 		goto err;
@@ -107,21 +200,17 @@ void test_tcpbpf_user(void)
 	if (CHECK_FAIL(sock_map_fd < 0))
 		goto err;
 
-retry_lookup:
+	if (run_test()) {
+		fprintf(stderr, "FAILED: TCP server\n");
+		goto err;
+	}
+
 	rv = bpf_map_lookup_elem(map_fd, &key, &g);
 	if (CHECK_FAIL(rv != 0)) {
 		fprintf(stderr, "FAILED: bpf_map_lookup_elem returns %d\n", rv);
 		goto err;
 	}
 
-	if (g.num_close_events != EXPECTED_CLOSE_EVENTS && retry--) {
-		fprintf(stderr,
-			"Unexpected number of close events (%d), retrying!\n",
-			g.num_close_events);
-		usleep(100);
-		goto retry_lookup;
-	}
-
 	if (CHECK_FAIL(verify_result(&g))) {
 		fprintf(stderr, "FAILED: Wrong stats\n");
 		goto err;
diff --git a/tools/testing/selftests/bpf/tcp_client.py b/tools/testing/selftests/bpf/tcp_client.py
deleted file mode 100755
index bfff82be3fc1..000000000000
--- a/tools/testing/selftests/bpf/tcp_client.py
+++ /dev/null
@@ -1,50 +0,0 @@
-#!/usr/bin/env python3
-#
-# SPDX-License-Identifier: GPL-2.0
-#
-
-import sys, os, os.path, getopt
-import socket, time
-import subprocess
-import select
-
-def read(sock, n):
-    buf = b''
-    while len(buf) < n:
-        rem = n - len(buf)
-        try: s = sock.recv(rem)
-        except (socket.error) as e: return b''
-        buf += s
-    return buf
-
-def send(sock, s):
-    total = len(s)
-    count = 0
-    while count < total:
-        try: n = sock.send(s)
-        except (socket.error) as e: n = 0
-        if n == 0:
-            return count;
-        count += n
-    return count
-
-
-serverPort = int(sys.argv[1])
-
-# create active socket
-sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
-try:
-    sock.connect(('::1', serverPort))
-except socket.error as e:
-    sys.exit(1)
-
-buf = b''
-n = 0
-while n < 1000:
-    buf += b'+'
-    n += 1
-
-sock.settimeout(1);
-n = send(sock, buf)
-n = read(sock, 500)
-sys.exit(0)
diff --git a/tools/testing/selftests/bpf/tcp_server.py b/tools/testing/selftests/bpf/tcp_server.py
deleted file mode 100755
index 42ab8882f00f..000000000000
--- a/tools/testing/selftests/bpf/tcp_server.py
+++ /dev/null
@@ -1,80 +0,0 @@
-#!/usr/bin/env python3
-#
-# SPDX-License-Identifier: GPL-2.0
-#
-
-import sys, os, os.path, getopt
-import socket, time
-import subprocess
-import select
-
-def read(sock, n):
-    buf = b''
-    while len(buf) < n:
-        rem = n - len(buf)
-        try: s = sock.recv(rem)
-        except (socket.error) as e: return b''
-        buf += s
-    return buf
-
-def send(sock, s):
-    total = len(s)
-    count = 0
-    while count < total:
-        try: n = sock.send(s)
-        except (socket.error) as e: n = 0
-        if n == 0:
-            return count;
-        count += n
-    return count
-
-
-SERVER_PORT = 12877
-MAX_PORTS = 2
-
-serverPort = SERVER_PORT
-serverSocket = None
-
-# create passive socket
-serverSocket = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
-
-try: serverSocket.bind(('::1', 0))
-except socket.error as msg:
-    print('bind fails: ' + str(msg))
-
-sn = serverSocket.getsockname()
-serverPort = sn[1]
-
-cmdStr = ("./tcp_client.py %d &") % (serverPort)
-os.system(cmdStr)
-
-buf = b''
-n = 0
-while n < 500:
-    buf += b'.'
-    n += 1
-
-serverSocket.listen(MAX_PORTS)
-readList = [serverSocket]
-
-while True:
-    readyRead, readyWrite, inError = \
-        select.select(readList, [], [], 2)
-
-    if len(readyRead) > 0:
-        waitCount = 0
-        for sock in readyRead:
-            if sock == serverSocket:
-                (clientSocket, address) = serverSocket.accept()
-                address = str(address[0])
-                readList.append(clientSocket)
-            else:
-                sock.settimeout(1);
-                s = read(sock, 1000)
-                n = send(sock, buf)
-                sock.close()
-                serverSocket.close()
-                sys.exit(0)
-    else:
-        print('Select timeout!')
-        sys.exit(1)




* [bpf-next PATCH 3/4] selftests/bpf: Replace EXPECT_EQ with ASSERT_EQ and refactor verify_results
  2020-10-28  1:46 [bpf-next PATCH 0/4] selftests/bpf: Migrate test_tcpbpf_user to be a part of test_progs framework Alexander Duyck
  2020-10-28  1:47 ` [bpf-next PATCH 1/4] selftests/bpf: Move test_tcpbpf_user into test_progs Alexander Duyck
  2020-10-28  1:47 ` [bpf-next PATCH 2/4] selftests/bpf: Drop python client/server in favor of threads Alexander Duyck
@ 2020-10-28  1:47 ` Alexander Duyck
  2020-10-28  1:47 ` [bpf-next PATCH 4/4] selftests/bpf: Migrate tcpbpf_user.c to use BPF skeleton Alexander Duyck
  3 siblings, 0 replies; 15+ messages in thread
From: Alexander Duyck @ 2020-10-28  1:47 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, kafai, john.fastabend, kernel-team, netdev,
	edumazet, brakmo, alexanderduyck

From: Alexander Duyck <alexanderduyck@fb.com>

There is already logic in test_progs.h for asserting that a value matches
an expected value, so instead of reinventing it we should just make use of
ASSERT_EQ in tcpbpf_user.c. This allows for better debugging and
integrates much more closely with the test_progs framework.

In addition we can refactor the code a bit to merge the two verify
functions into a single function. Doing this cleans the code up and makes
it more readable, as all the verification is now done in one place.

Lastly we can relocate the verification to the end of run_test, since it
is logically part of the test itself. With this, run_test no longer needs
a return value: verification becomes the last step of the call and is
immediately followed by the teardown of the test setup.
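
For comparison, the before/after assertion style (illustration; the exact
macro definitions live in test_progs.h):

	/* before: a local EXPECT_EQ() printed a message and decremented a
	 * counter that the caller then had to check
	 */
	EXPECT_EQ(1, result->num_listen, PRIu32);

	/* after: ASSERT_EQ() reports through the test_progs machinery and
	 * marks the test as failed on its own, so no return value is needed
	 */
	ASSERT_EQ(result.num_listen, 1, "num_listen");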

Signed-off-by: Alexander Duyck <alexanderduyck@fb.com>
---
 .../testing/selftests/bpf/prog_tests/tcpbpf_user.c |  119 ++++++++------------
 1 file changed, 46 insertions(+), 73 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
index 71ab82e37eb7..4e1190894e1e 100644
--- a/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
+++ b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
@@ -11,6 +11,7 @@
 
 static pthread_mutex_t server_started_mtx = PTHREAD_MUTEX_INITIALIZER;
 static pthread_cond_t server_started = PTHREAD_COND_INITIALIZER;
+static __u32 duration;
 
 static void *server_thread(void *arg)
 {
@@ -61,66 +62,57 @@ static void *server_thread(void *arg)
 	return (void *)(long)err;
 }
 
-#define EXPECT_EQ(expected, actual, fmt)			\
-	do {							\
-		if ((expected) != (actual)) {			\
-			fprintf(stderr, "  Value of: " #actual "\n"	\
-			       "    Actual: %" fmt "\n"		\
-			       "  Expected: %" fmt "\n",	\
-			       (actual), (expected));		\
-			ret--;					\
-		}						\
-	} while (0)
-
-int verify_result(const struct tcpbpf_globals *result)
-{
-	__u32 expected_events;
-	int ret = 0;
-
-	expected_events = ((1 << BPF_SOCK_OPS_TIMEOUT_INIT) |
-			   (1 << BPF_SOCK_OPS_RWND_INIT) |
-			   (1 << BPF_SOCK_OPS_TCP_CONNECT_CB) |
-			   (1 << BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB) |
-			   (1 << BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB) |
-			   (1 << BPF_SOCK_OPS_NEEDS_ECN) |
-			   (1 << BPF_SOCK_OPS_STATE_CB) |
-			   (1 << BPF_SOCK_OPS_TCP_LISTEN_CB));
-
-	EXPECT_EQ(expected_events, result->event_map, "#" PRIx32);
-	EXPECT_EQ(501ULL, result->bytes_received, "llu");
-	EXPECT_EQ(1002ULL, result->bytes_acked, "llu");
-	EXPECT_EQ(1, result->data_segs_in, PRIu32);
-	EXPECT_EQ(1, result->data_segs_out, PRIu32);
-	EXPECT_EQ(0x80, result->bad_cb_test_rv, PRIu32);
-	EXPECT_EQ(0, result->good_cb_test_rv, PRIu32);
-	EXPECT_EQ(1, result->num_listen, PRIu32);
-
-	/* 3 comes from one listening socket + both ends of the connection */
-	EXPECT_EQ(3, result->num_close_events, PRIu32);
-
-	return ret;
-}
-
-int verify_sockopt_result(int sock_map_fd)
+static void verify_result(int map_fd, int sock_map_fd)
 {
+	__u32 expected_events = ((1 << BPF_SOCK_OPS_TIMEOUT_INIT) |
+				 (1 << BPF_SOCK_OPS_RWND_INIT) |
+				 (1 << BPF_SOCK_OPS_TCP_CONNECT_CB) |
+				 (1 << BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB) |
+				 (1 << BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB) |
+				 (1 << BPF_SOCK_OPS_NEEDS_ECN) |
+				 (1 << BPF_SOCK_OPS_STATE_CB) |
+				 (1 << BPF_SOCK_OPS_TCP_LISTEN_CB));
+	struct tcpbpf_globals result = { 0 };
 	__u32 key = 0;
-	int ret = 0;
 	int res;
 	int rv;
 
+	rv = bpf_map_lookup_elem(map_fd, &key, &result);
+	if (CHECK_FAIL(rv != 0)) {
+		fprintf(stderr, "FAILED: bpf_map_lookup_elem returns %d\n", rv);
+		return;
+	}
+
+	/* check global map */
+	CHECK(expected_events != result.event_map, "event_map",
+	      "unexpected event_map: actual %#" PRIx32" != expected %#" PRIx32 "\n",
+	      result.event_map, expected_events);
+
+	ASSERT_EQ(result.bytes_received, 501, "bytes_received");
+	ASSERT_EQ(result.bytes_acked, 1002, "bytes_acked");
+	ASSERT_EQ(result.data_segs_in, 1, "data_segs_in");
+	ASSERT_EQ(result.data_segs_out, 1, "data_segs_out");
+	ASSERT_EQ(result.bad_cb_test_rv, 0x80, "bad_cb_test_rv");
+	ASSERT_EQ(result.good_cb_test_rv, 0, "good_cb_test_rv");
+	ASSERT_EQ(result.num_listen, 1, "num_listen");
+
+	/* 3 comes from one listening socket + both ends of the connection */
+	ASSERT_EQ(result.num_close_events, 3, "num_close_events");
+
 	/* check setsockopt for SAVE_SYN */
+	key = 0;
 	rv = bpf_map_lookup_elem(sock_map_fd, &key, &res);
-	EXPECT_EQ(0, rv, "d");
-	EXPECT_EQ(0, res, "d");
-	key = 1;
+	ASSERT_EQ(rv, 0, "rv");
+	ASSERT_EQ(res, 0, "res");
+
 	/* check getsockopt for SAVED_SYN */
+	key = 1;
 	rv = bpf_map_lookup_elem(sock_map_fd, &key, &res);
-	EXPECT_EQ(0, rv, "d");
-	EXPECT_EQ(1, res, "d");
-	return ret;
+	ASSERT_EQ(rv, 0, "rv");
+	ASSERT_EQ(res, 1, "res");
 }
 
-static int run_test(void)
+static void run_test(int map_fd, int sock_map_fd)
 {
 	int server_fd, client_fd;
 	void *server_err;
@@ -131,7 +123,7 @@ static int run_test(void)
 
 	server_fd = start_server(AF_INET6, SOCK_STREAM, LO_ADDR6, 0, 0);
 	if (CHECK_FAIL(server_fd < 0))
-		return err;
+		return;
 
 	pthread_mutex_lock(&server_started_mtx);
 	if (CHECK_FAIL(pthread_create(&tid, NULL, server_thread,
@@ -163,17 +155,17 @@ static int run_test(void)
 	close(client_fd);
 close_server_fd:
 	close(server_fd);
-	return err;
+
+	if (!err)
+		verify_result(map_fd, sock_map_fd);
 }
 
 void test_tcpbpf_user(void)
 {
 	const char *file = "test_tcpbpf_kern.o";
 	int prog_fd, map_fd, sock_map_fd;
-	struct tcpbpf_globals g = {0};
 	struct bpf_object *obj;
 	int cg_fd = -1;
-	__u32 key = 0;
 	int rv;
 
 	cg_fd = cgroup_setup_and_join(CG_NAME);
@@ -200,26 +192,7 @@ void test_tcpbpf_user(void)
 	if (CHECK_FAIL(sock_map_fd < 0))
 		goto err;
 
-	if (run_test()) {
-		fprintf(stderr, "FAILED: TCP server\n");
-		goto err;
-	}
-
-	rv = bpf_map_lookup_elem(map_fd, &key, &g);
-	if (CHECK_FAIL(rv != 0)) {
-		fprintf(stderr, "FAILED: bpf_map_lookup_elem returns %d\n", rv);
-		goto err;
-	}
-
-	if (CHECK_FAIL(verify_result(&g))) {
-		fprintf(stderr, "FAILED: Wrong stats\n");
-		goto err;
-	}
-
-	if (CHECK_FAIL(verify_sockopt_result(sock_map_fd))) {
-		fprintf(stderr, "FAILED: Wrong sockopt stats\n");
-		goto err;
-	}
+	run_test(map_fd, sock_map_fd);
 err:
 	bpf_prog_detach(cg_fd, BPF_CGROUP_SOCK_OPS);
 	close(cg_fd);




* [bpf-next PATCH 4/4] selftests/bpf: Migrate tcpbpf_user.c to use BPF skeleton
  2020-10-28  1:46 [bpf-next PATCH 0/4] selftests/bpf: Migrate test_tcpbpf_user to be a part of test_progs framework Alexander Duyck
                   ` (2 preceding siblings ...)
  2020-10-28  1:47 ` [bpf-next PATCH 3/4] selftests/bpf: Replace EXPECT_EQ with ASSERT_EQ and refactor verify_results Alexander Duyck
@ 2020-10-28  1:47 ` Alexander Duyck
  2020-10-29 23:20   ` Andrii Nakryiko
  3 siblings, 1 reply; 15+ messages in thread
From: Alexander Duyck @ 2020-10-28  1:47 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, kafai, john.fastabend, kernel-team, netdev,
	edumazet, brakmo, alexanderduyck

From: Alexander Duyck <alexanderduyck@fb.com>

Update tcpbpf_user.c to make use of the BPF skeleton. Doing so simplifies
test_tcpbpf_user and reduces the overhead involved in setting up the test.
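
The skeleton generated from test_tcpbpf_kern.c gives us direct handles to
the maps and programs, so the setup collapses to roughly the following
(see the diff for the exact version):

	struct test_tcpbpf_kern *skel;

	skel = test_tcpbpf_kern__open_and_load();  /* load progs and maps */
	map_fd = bpf_map__fd(skel->maps.global_map);
	sock_map_fd = bpf_map__fd(skel->maps.sockopt_results);
	link = bpf_program__attach_cgroup(skel->progs.bpf_testcb, cg_fd);
	...
	bpf_link__destroy(link);
	test_tcpbpf_kern__destroy(skel);           /* unload everything */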

Signed-off-by: Alexander Duyck <alexanderduyck@fb.com>
---
 .../testing/selftests/bpf/prog_tests/tcpbpf_user.c |   48 +++++++++-----------
 1 file changed, 21 insertions(+), 27 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
index 4e1190894e1e..7e92c37976ac 100644
--- a/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
+++ b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
@@ -5,6 +5,7 @@
 
 #include "test_tcpbpf.h"
 #include "cgroup_helpers.h"
+#include "test_tcpbpf_kern.skel.h"
 
 #define LO_ADDR6 "::1"
 #define CG_NAME "/tcpbpf-user-test"
@@ -162,39 +163,32 @@ static void run_test(int map_fd, int sock_map_fd)
 
 void test_tcpbpf_user(void)
 {
-	const char *file = "test_tcpbpf_kern.o";
-	int prog_fd, map_fd, sock_map_fd;
-	struct bpf_object *obj;
+	struct test_tcpbpf_kern *skel;
+	int map_fd, sock_map_fd;
+	struct bpf_link *link;
 	int cg_fd = -1;
-	int rv;
-
-	cg_fd = cgroup_setup_and_join(CG_NAME);
-	if (CHECK_FAIL(cg_fd < 0))
-		goto err;
 
-	if (CHECK_FAIL(bpf_prog_load(file, BPF_PROG_TYPE_SOCK_OPS, &obj, &prog_fd))) {
-		fprintf(stderr, "FAILED: load_bpf_file failed for: %s\n", file);
-		goto err;
-	}
+	skel = test_tcpbpf_kern__open_and_load();
+	if (CHECK(!skel, "open and load skel", "failed"))
+		return;
 
-	rv = bpf_prog_attach(prog_fd, cg_fd, BPF_CGROUP_SOCK_OPS, 0);
-	if (CHECK_FAIL(rv)) {
-		fprintf(stderr, "FAILED: bpf_prog_attach: %d (%s)\n",
-		       errno, strerror(errno));
-		goto err;
-	}
+	cg_fd = test__join_cgroup(CG_NAME);
+	if (CHECK_FAIL(cg_fd < 0))
+		goto cleanup_skel;
 
-	map_fd = bpf_find_map(__func__, obj, "global_map");
-	if (CHECK_FAIL(map_fd < 0))
-		goto err;
+	map_fd = bpf_map__fd(skel->maps.global_map);
+	sock_map_fd = bpf_map__fd(skel->maps.sockopt_results);
 
-	sock_map_fd = bpf_find_map(__func__, obj, "sockopt_results");
-	if (CHECK_FAIL(sock_map_fd < 0))
-		goto err;
+	link = bpf_program__attach_cgroup(skel->progs.bpf_testcb, cg_fd);
+	if (CHECK(IS_ERR(link), "attach_cgroup(estab)", "err: %ld\n",
+		  PTR_ERR(link)))
+		goto cleanup_namespace;
 
 	run_test(map_fd, sock_map_fd);
-err:
-	bpf_prog_detach(cg_fd, BPF_CGROUP_SOCK_OPS);
+
+	bpf_link__destroy(link);
+cleanup_namespace:
 	close(cg_fd);
-	cleanup_cgroup_environment();
+cleanup_skel:
+	test_tcpbpf_kern__destroy(skel);
 }




* Re: [bpf-next PATCH 2/4] selftests/bpf: Drop python client/server in favor of threads
  2020-10-28  1:47 ` [bpf-next PATCH 2/4] selftests/bpf: Drop python client/server in favor of threads Alexander Duyck
@ 2020-10-29  1:51   ` Martin KaFai Lau
  2020-10-29 16:58     ` Alexander Duyck
  0 siblings, 1 reply; 15+ messages in thread
From: Martin KaFai Lau @ 2020-10-29  1:51 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: bpf, ast, daniel, john.fastabend, kernel-team, netdev, edumazet,
	brakmo, alexanderduyck

On Tue, Oct 27, 2020 at 06:47:13PM -0700, Alexander Duyck wrote:
> From: Alexander Duyck <alexanderduyck@fb.com>
> 
> Drop the tcp_client/server.py files in favor of using a client and server
> thread within the test case. Specifically we spawn a new thread to play the
> role of the server, and the main testing thread plays the role of client.
> 
> Doing this we are able to reduce overhead since we don't have two python
> workers possibly floating around. In addition we don't have to worry about
> synchronization issues and as such the retry loop waiting for the threads
> to close the sockets can be dropped as we will have already closed the
> sockets in the local executable and synchronized the server thread.
Thanks for working on this.

> 
> Signed-off-by: Alexander Duyck <alexanderduyck@fb.com>
> ---
>  .../testing/selftests/bpf/prog_tests/tcpbpf_user.c |  125 +++++++++++++++++---
>  tools/testing/selftests/bpf/tcp_client.py          |   50 --------
>  tools/testing/selftests/bpf/tcp_server.py          |   80 -------------
>  3 files changed, 107 insertions(+), 148 deletions(-)
>  delete mode 100755 tools/testing/selftests/bpf/tcp_client.py
>  delete mode 100755 tools/testing/selftests/bpf/tcp_server.py
> 
> diff --git a/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> index 5becab8b04e3..71ab82e37eb7 100644
> --- a/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> +++ b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> @@ -1,14 +1,65 @@
>  // SPDX-License-Identifier: GPL-2.0
>  #include <inttypes.h>
>  #include <test_progs.h>
> +#include <network_helpers.h>
>  
>  #include "test_tcpbpf.h"
>  #include "cgroup_helpers.h"
>  
> +#define LO_ADDR6 "::1"
>  #define CG_NAME "/tcpbpf-user-test"
>  
> -/* 3 comes from one listening socket + both ends of the connection */
> -#define EXPECTED_CLOSE_EVENTS		3
> +static pthread_mutex_t server_started_mtx = PTHREAD_MUTEX_INITIALIZER;
> +static pthread_cond_t server_started = PTHREAD_COND_INITIALIZER;
> +
> +static void *server_thread(void *arg)
> +{
> +	struct sockaddr_storage addr;
> +	socklen_t len = sizeof(addr);
> +	int fd = *(int *)arg;
> +	char buf[1000];
> +	int client_fd;
> +	int err = 0;
> +	int i;
> +
> +	err = listen(fd, 1);
This is not needed.  start_server() has done it.

> +
> +	pthread_mutex_lock(&server_started_mtx);
> +	pthread_cond_signal(&server_started);
> +	pthread_mutex_unlock(&server_started_mtx);
> +
> +	if (err < 0) {
> +		perror("Failed to listen on socket");
> +		err = errno;
> +		goto err;
> +	}
> +
> +	client_fd = accept(fd, (struct sockaddr *)&addr, &len);
> +	if (client_fd < 0) {
> +		perror("Failed to accept client");
> +		err = errno;
> +		goto err;
> +	}
> +
> +	if (recv(client_fd, buf, 1000, 0) < 1000) {
> +		perror("failed/partial recv");
> +		err = errno;
> +		goto out_clean;
> +	}
> +
> +	for (i = 0; i < 500; i++)
> +		buf[i] = '.';
> +
> +	if (send(client_fd, buf, 500, 0) < 500) {
> +		perror("failed/partial send");
> +		err = errno;
> +		goto out_clean;
> +	}
> +out_clean:
> +	close(client_fd);
> +err:
> +	return (void *)(long)err;
> +}
>  
>  #define EXPECT_EQ(expected, actual, fmt)			\
>  	do {							\
> @@ -43,7 +94,9 @@ int verify_result(const struct tcpbpf_globals *result)
>  	EXPECT_EQ(0x80, result->bad_cb_test_rv, PRIu32);
>  	EXPECT_EQ(0, result->good_cb_test_rv, PRIu32);
>  	EXPECT_EQ(1, result->num_listen, PRIu32);
> -	EXPECT_EQ(EXPECTED_CLOSE_EVENTS, result->num_close_events, PRIu32);
> +
> +	/* 3 comes from one listening socket + both ends of the connection */
> +	EXPECT_EQ(3, result->num_close_events, PRIu32);
>  
>  	return ret;
>  }
> @@ -67,6 +120,52 @@ int verify_sockopt_result(int sock_map_fd)
>  	return ret;
>  }
>  
> +static int run_test(void)
> +{
> +	int server_fd, client_fd;
> +	void *server_err;
> +	char buf[1000];
> +	pthread_t tid;
> +	int err = -1;
> +	int i;
> +
> +	server_fd = start_server(AF_INET6, SOCK_STREAM, LO_ADDR6, 0, 0);
> +	if (CHECK_FAIL(server_fd < 0))
> +		return err;
> +
> +	pthread_mutex_lock(&server_started_mtx);
> +	if (CHECK_FAIL(pthread_create(&tid, NULL, server_thread,
> +				      (void *)&server_fd)))
> +		goto close_server_fd;
> +
> +	pthread_cond_wait(&server_started, &server_started_mtx);
> +	pthread_mutex_unlock(&server_started_mtx);
> +
> +	client_fd = connect_to_fd(server_fd, 0);
> +	if (client_fd < 0)
> +		goto close_server_fd;
> +
> +	for (i = 0; i < 1000; i++)
> +		buf[i] = '+';
> +
> +	if (CHECK_FAIL(send(client_fd, buf, 1000, 0) < 1000))
> +		goto close_client_fd;
> +
> +	if (CHECK_FAIL(recv(client_fd, buf, 500, 0) < 500))
> +		goto close_client_fd;
> +
> +	pthread_join(tid, &server_err);
I think this can be further simplified without starting a thread, by
doing everything in run_test() instead.

Something like this (uncompiled code):

	accept_fd = accept(server_fd, NULL, 0);
	send(client_fd, plus_buf, 1000, 0);
	recv(accept_fd, recv_buf, 1000, 0);
	send(accept_fd, dot_buf, 500, 0);
	recv(client_fd, recv_buf, 500, 0);

> +
> +	err = (int)(long)server_err;
> +	CHECK_FAIL(err);
> +
> +close_client_fd:
> +	close(client_fd);
> +close_server_fd:
> +	close(server_fd);
> +	return err;
> +}
> +
>  void test_tcpbpf_user(void)
>  {
>  	const char *file = "test_tcpbpf_kern.o";
> @@ -74,7 +173,6 @@ void test_tcpbpf_user(void)
>  	struct tcpbpf_globals g = {0};
>  	struct bpf_object *obj;
>  	int cg_fd = -1;
> -	int retry = 10;
>  	__u32 key = 0;
>  	int rv;
>  
> @@ -94,11 +192,6 @@ void test_tcpbpf_user(void)
>  		goto err;
>  	}
>  
> -	if (CHECK_FAIL(system("./tcp_server.py"))) {
> -		fprintf(stderr, "FAILED: TCP server\n");
> -		goto err;
> -	}
> -
>  	map_fd = bpf_find_map(__func__, obj, "global_map");
>  	if (CHECK_FAIL(map_fd < 0))
>  		goto err;
> @@ -107,21 +200,17 @@ void test_tcpbpf_user(void)
>  	if (CHECK_FAIL(sock_map_fd < 0))
>  		goto err;
>  
> -retry_lookup:
> +	if (run_test()) {
> +		fprintf(stderr, "FAILED: TCP server\n");
> +		goto err;
> +	}
> +
>  	rv = bpf_map_lookup_elem(map_fd, &key, &g);
>  	if (CHECK_FAIL(rv != 0)) {
CHECK() is a better fit here if it needs to output an error message.
The same goes for similar usages in this patch set.

For the start_server() above, which has already logged the error message,
CHECK_FAIL() is good enough.

>  		fprintf(stderr, "FAILED: bpf_map_lookup_elem returns %d\n", rv);
>  		goto err;
>  	}
>  
> -	if (g.num_close_events != EXPECTED_CLOSE_EVENTS && retry--) {
It is good to have a solution to avoid a test depending on some number
of retries.

After looking at BPF_SOCK_OPS_STATE_CB in test_tcpbpf_kern.c,
it is not clear to me that removing python alone is enough to avoid the
race (hence the retry--).  One of the sks might still be in TCP_LAST_ACK
instead of TCP_CLOSE.

Also, when looking closer at BPF_SOCK_OPS_STATE_CB in test_tcpbpf_kern.c,
it seems the map value "gp" is slapped together across multiple
TCP_CLOSE events, which may not be easy to understand.

How about checking different states instead: TCP_CLOSE, TCP_LAST_ACK,
and BPF_TCP_FIN_WAIT2.  Each of these states would update its own
values under "gp".  Something like this (only compile tested on
top of patch 4):

diff --git i/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c w/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
index 7e92c37976ac..65b247b03dfc 100644
--- i/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
+++ w/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
@@ -90,15 +90,14 @@ static void verify_result(int map_fd, int sock_map_fd)
 	      result.event_map, expected_events);
 
 	ASSERT_EQ(result.bytes_received, 501, "bytes_received");
-	ASSERT_EQ(result.bytes_acked, 1002, "bytes_acked");
+	ASSERT_EQ(result.bytes_acked, 1001, "bytes_acked");
 	ASSERT_EQ(result.data_segs_in, 1, "data_segs_in");
 	ASSERT_EQ(result.data_segs_out, 1, "data_segs_out");
 	ASSERT_EQ(result.bad_cb_test_rv, 0x80, "bad_cb_test_rv");
 	ASSERT_EQ(result.good_cb_test_rv, 0, "good_cb_test_rv");
-	ASSERT_EQ(result.num_listen, 1, "num_listen");
-
-	/* 3 comes from one listening socket + both ends of the connection */
-	ASSERT_EQ(result.num_close_events, 3, "num_close_events");
+	ASSERT_EQ(result.num_listen_close, 1, "num_listen");
+	ASSERT_EQ(result.num_last_ack, 1, "num_last_ack");
+	ASSERT_EQ(result.num_fin_wait2, 1, "num_fin_wait2");
 
 	/* check setsockopt for SAVE_SYN */
 	key = 0;
diff --git i/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c w/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c
index 3e6912e4df3d..2c5ffb50d6e0 100644
--- i/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c
+++ w/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c
@@ -55,9 +55,11 @@ int bpf_testcb(struct bpf_sock_ops *skops)
 {
 	char header[sizeof(struct ipv6hdr) + sizeof(struct tcphdr)];
 	struct bpf_sock_ops *reuse = skops;
+	struct tcpbpf_globals *gp;
 	struct tcphdr *thdr;
 	int good_call_rv = 0;
 	int bad_call_rv = 0;
+	__u32 key_zero = 0;
 	int save_syn = 1;
 	int rv = -1;
 	int v = 0;
@@ -155,26 +157,21 @@ int bpf_testcb(struct bpf_sock_ops *skops)
 	case BPF_SOCK_OPS_RETRANS_CB:
 		break;
 	case BPF_SOCK_OPS_STATE_CB:
-		if (skops->args[1] == BPF_TCP_CLOSE) {
-			__u32 key = 0;
-			struct tcpbpf_globals g, *gp;
-
-			gp = bpf_map_lookup_elem(&global_map, &key);
-			if (!gp)
-				break;
-			g = *gp;
-			if (skops->args[0] == BPF_TCP_LISTEN) {
-				g.num_listen++;
-			} else {
-				g.total_retrans = skops->total_retrans;
-				g.data_segs_in = skops->data_segs_in;
-				g.data_segs_out = skops->data_segs_out;
-				g.bytes_received = skops->bytes_received;
-				g.bytes_acked = skops->bytes_acked;
-			}
-			g.num_close_events++;
-			bpf_map_update_elem(&global_map, &key, &g,
-					    BPF_ANY);
+		gp = bpf_map_lookup_elem(&global_map, &key_zero);
+		if (!gp)
+			break;
+		if (skops->args[1] == BPF_TCP_CLOSE &&
+		    skops->args[0] == BPF_TCP_LISTEN) {
+			gp->num_listen_close++;
+		} else if (skops->args[1] == BPF_TCP_LAST_ACK) {
+			gp->total_retrans = skops->total_retrans;
+			gp->data_segs_in = skops->data_segs_in;
+			gp->data_segs_out = skops->data_segs_out;
+			gp->bytes_received = skops->bytes_received;
+			gp->bytes_acked = skops->bytes_acked;
+			gp->num_last_ack++;
+		} else if (skops->args[1] == BPF_TCP_FIN_WAIT2) {
+			gp->num_fin_wait2++;
 		}
 		break;
 	case BPF_SOCK_OPS_TCP_LISTEN_CB:
diff --git i/tools/testing/selftests/bpf/test_tcpbpf.h w/tools/testing/selftests/bpf/test_tcpbpf.h
index 6220b95cbd02..0dec324ba4a6 100644
--- i/tools/testing/selftests/bpf/test_tcpbpf.h
+++ w/tools/testing/selftests/bpf/test_tcpbpf.h
@@ -12,7 +12,8 @@ struct tcpbpf_globals {
 	__u32 good_cb_test_rv;
 	__u64 bytes_received;
 	__u64 bytes_acked;
-	__u32 num_listen;
-	__u32 num_close_events;
+	__u32 num_listen_close;
+	__u32 num_last_ack;
+	__u32 num_fin_wait2;
 };
 #endif

I also noticed the bytes_received/acked depend on the order of close(),
i.e. always close the accepted fd first.  I think a comment
in tcpbpf_user.c is good enough for now.

[ It does not have to be in this set and can be done in another
  follow-up effort.
  Instead of using a bpf map to store the result, using global
  variables in test_tcpbpf_kern.c would simplify the code further. ]
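
(For illustration, the global-variable approach would look roughly like
this; the variable name is just an example.  libbpf skeletons expose
non-static globals of the BPF program through skel->bss:)

	/* test_tcpbpf_kern.c: plain global instead of a one-entry map */
	struct tcpbpf_globals global = {};

	/* tcpbpf_user.c: read the results directly after the connection
	 * is done, no bpf_map_lookup_elem() needed
	 */
	ASSERT_EQ(skel->bss->global.num_listen, 1, "num_listen");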


* Re: [bpf-next PATCH 2/4] selftests/bpf: Drop python client/server in favor of threads
  2020-10-29  1:51   ` Martin KaFai Lau
@ 2020-10-29 16:58     ` Alexander Duyck
  2020-10-29 18:12       ` Martin KaFai Lau
  0 siblings, 1 reply; 15+ messages in thread
From: Alexander Duyck @ 2020-10-29 16:58 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Kernel Team, Netdev, Eric Dumazet, brakmo, alexanderduyck

On Wed, Oct 28, 2020 at 6:51 PM Martin KaFai Lau <kafai@fb.com> wrote:
>
> On Tue, Oct 27, 2020 at 06:47:13PM -0700, Alexander Duyck wrote:
> > From: Alexander Duyck <alexanderduyck@fb.com>
> >
> > Drop the tcp_client/server.py files in favor of using a client and server
> > thread within the test case. Specifically we spawn a new thread to play the
> > role of the server, and the main testing thread plays the role of client.
> >
> > Doing this we are able to reduce overhead since we don't have two python
> > workers possibly floating around. In addition we don't have to worry about
> > synchronization issues and as such the retry loop waiting for the threads
> > to close the sockets can be dropped as we will have already closed the
> > sockets in the local executable and synchronized the server thread.
> Thanks for working on this.
>
> >
> > Signed-off-by: Alexander Duyck <alexanderduyck@fb.com>
> > ---
> >  .../testing/selftests/bpf/prog_tests/tcpbpf_user.c |  125 +++++++++++++++++---
> >  tools/testing/selftests/bpf/tcp_client.py          |   50 --------
> >  tools/testing/selftests/bpf/tcp_server.py          |   80 -------------
> >  3 files changed, 107 insertions(+), 148 deletions(-)
> >  delete mode 100755 tools/testing/selftests/bpf/tcp_client.py
> >  delete mode 100755 tools/testing/selftests/bpf/tcp_server.py
> >
> > diff --git a/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> > index 5becab8b04e3..71ab82e37eb7 100644
> > --- a/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> > +++ b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> > @@ -1,14 +1,65 @@
> >  // SPDX-License-Identifier: GPL-2.0
> >  #include <inttypes.h>
> >  #include <test_progs.h>
> > +#include <network_helpers.h>
> >
> >  #include "test_tcpbpf.h"
> >  #include "cgroup_helpers.h"
> >
> > +#define LO_ADDR6 "::1"
> >  #define CG_NAME "/tcpbpf-user-test"
> >
> > -/* 3 comes from one listening socket + both ends of the connection */
> > -#define EXPECTED_CLOSE_EVENTS                3
> > +static pthread_mutex_t server_started_mtx = PTHREAD_MUTEX_INITIALIZER;
> > +static pthread_cond_t server_started = PTHREAD_COND_INITIALIZER;
> > +
> > +static void *server_thread(void *arg)
> > +{
> > +     struct sockaddr_storage addr;
> > +     socklen_t len = sizeof(addr);
> > +     int fd = *(int *)arg;
> > +     char buf[1000];
> > +     int client_fd;
> > +     int err = 0;
> > +     int i;
> > +
> > +     err = listen(fd, 1);
> This is not needed.  start_server() has done it.

Okay, I will drop it.

> > +
> > +     pthread_mutex_lock(&server_started_mtx);
> > +     pthread_cond_signal(&server_started);
> > +     pthread_mutex_unlock(&server_started_mtx);
> > +
> > +     if (err < 0) {
> > +             perror("Failed to listen on socket");
> > +             err = errno;
> > +             goto err;
> > +     }
> > +
> > +     client_fd = accept(fd, (struct sockaddr *)&addr, &len);
> > +     if (client_fd < 0) {
> > +             perror("Failed to accept client");
> > +             err = errno;
> > +             goto err;
> > +     }
> > +
> > +     if (recv(client_fd, buf, 1000, 0) < 1000) {
> > +             perror("failed/partial recv");
> > +             err = errno;
> > +             goto out_clean;
> > +     }
> > +
> > +     for (i = 0; i < 500; i++)
> > +             buf[i] = '.';
> > +
> > +     if (send(client_fd, buf, 500, 0) < 500) {
> > +             perror("failed/partial send");
> > +             err = errno;
> > +             goto out_clean;
> > +     }
> > +out_clean:
> > +     close(client_fd);
> > +err:
> > +     return (void *)(long)err;
> > +}
> >
> >  #define EXPECT_EQ(expected, actual, fmt)                     \
> >       do {                                                    \
> > @@ -43,7 +94,9 @@ int verify_result(const struct tcpbpf_globals *result)
> >       EXPECT_EQ(0x80, result->bad_cb_test_rv, PRIu32);
> >       EXPECT_EQ(0, result->good_cb_test_rv, PRIu32);
> >       EXPECT_EQ(1, result->num_listen, PRIu32);
> > -     EXPECT_EQ(EXPECTED_CLOSE_EVENTS, result->num_close_events, PRIu32);
> > +
> > +     /* 3 comes from one listening socket + both ends of the connection */
> > +     EXPECT_EQ(3, result->num_close_events, PRIu32);
> >
> >       return ret;
> >  }
> > @@ -67,6 +120,52 @@ int verify_sockopt_result(int sock_map_fd)
> >       return ret;
> >  }
> >
> > +static int run_test(void)
> > +{
> > +     int server_fd, client_fd;
> > +     void *server_err;
> > +     char buf[1000];
> > +     pthread_t tid;
> > +     int err = -1;
> > +     int i;
> > +
> > +     server_fd = start_server(AF_INET6, SOCK_STREAM, LO_ADDR6, 0, 0);
> > +     if (CHECK_FAIL(server_fd < 0))
> > +             return err;
> > +
> > +     pthread_mutex_lock(&server_started_mtx);
> > +     if (CHECK_FAIL(pthread_create(&tid, NULL, server_thread,
> > +                                   (void *)&server_fd)))
> > +             goto close_server_fd;
> > +
> > +     pthread_cond_wait(&server_started, &server_started_mtx);
> > +     pthread_mutex_unlock(&server_started_mtx);
> > +
> > +     client_fd = connect_to_fd(server_fd, 0);
> > +     if (client_fd < 0)
> > +             goto close_server_fd;
> > +
> > +     for (i = 0; i < 1000; i++)
> > +             buf[i] = '+';
> > +
> > +     if (CHECK_FAIL(send(client_fd, buf, 1000, 0) < 1000))
> > +             goto close_client_fd;
> > +
> > +     if (CHECK_FAIL(recv(client_fd, buf, 500, 0) < 500))
> > +             goto close_client_fd;
> > +
> > +     pthread_join(tid, &server_err);
> I think this can be further simplified without starting thread
> and do everything in run_test() instead.
>
> Something like this (uncompiled code):
>
>         accept_fd = accept(server_fd, NULL, 0);
>         send(client_fd, plus_buf, 1000, 0);
>         recv(accept_fd, recv_buf, 1000, 0);
>         send(accept_fd, dot_buf, 500, 0);
>         recv(client_fd, recv_buf, 500, 0);

I can take a look at switching it over.

> > +
> > +     err = (int)(long)server_err;
> > +     CHECK_FAIL(err);
> > +
> > +close_client_fd:
> > +     close(client_fd);
> > +close_server_fd:
> > +     close(server_fd);
> > +     return err;
> > +}
> > +
> >  void test_tcpbpf_user(void)
> >  {
> >       const char *file = "test_tcpbpf_kern.o";
> > @@ -74,7 +173,6 @@ void test_tcpbpf_user(void)
> >       struct tcpbpf_globals g = {0};
> >       struct bpf_object *obj;
> >       int cg_fd = -1;
> > -     int retry = 10;
> >       __u32 key = 0;
> >       int rv;
> >
> > @@ -94,11 +192,6 @@ void test_tcpbpf_user(void)
> >               goto err;
> >       }
> >
> > -     if (CHECK_FAIL(system("./tcp_server.py"))) {
> > -             fprintf(stderr, "FAILED: TCP server\n");
> > -             goto err;
> > -     }
> > -
> >       map_fd = bpf_find_map(__func__, obj, "global_map");
> >       if (CHECK_FAIL(map_fd < 0))
> >               goto err;
> > @@ -107,21 +200,17 @@ void test_tcpbpf_user(void)
> >       if (CHECK_FAIL(sock_map_fd < 0))
> >               goto err;
> >
> > -retry_lookup:
> > +     if (run_test()) {
> > +             fprintf(stderr, "FAILED: TCP server\n");
> > +             goto err;
> > +     }
> > +
> >       rv = bpf_map_lookup_elem(map_fd, &key, &g);
> >       if (CHECK_FAIL(rv != 0)) {
> CHECK() is a better one here if it needs to output error message.
> The same goes for similar usages in this patch set.
>
> For the start_server() above which has already logged the error message,
> CHECK_FAIL() is good enough.
>
> >               fprintf(stderr, "FAILED: bpf_map_lookup_elem returns %d\n", rv);
> >               goto err;
> >       }
> >
> > -     if (g.num_close_events != EXPECTED_CLOSE_EVENTS && retry--) {
> It is good to have a solution to avoid a test depending on some number
> of retries.
>
> After looking at BPF_SOCK_OPS_STATE_CB in test_tcpbpf_kern.c,
> it is not clear to me removing python alone is enough to avoid the
> race (so the retry--).  One of the sk might still be in TCP_LAST_ACK
> instead of TCP_CLOSE.
>

After you pointed this out I decided to go back through and do some
further testing. After running this for several thousand iterations it
does look like the issue can still happen; it was just significantly
less frequent with the threaded approach. So I will go back through,
add this back, and then fold it into the verify_results function in the
third patch. Although I might reduce the wait times, as it seems that
with the inline approach we only need to wait on the order of 10s of
microseconds instead of 100s for the sockets to close out.

> Also, when looking closer at BPF_SOCK_OPS_STATE_CB in test_tcpbpf_kern.c,
> it seems the map value "gp" is slapped together across multiple
> TCP_CLOSE events which may be not easy to understand.
>
> How about it checks different states: TCP_CLOSE, TCP_LAST_ACK,
> and BPF_TCP_FIN_WAIT2.  Each of this state will update its own
> values under "gp".  Something like this (only compiler tested on
> top of patch 4):
>
> diff --git i/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c w/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> index 7e92c37976ac..65b247b03dfc 100644
> --- i/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> +++ w/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> @@ -90,15 +90,14 @@ static void verify_result(int map_fd, int sock_map_fd)
>               result.event_map, expected_events);
>
>         ASSERT_EQ(result.bytes_received, 501, "bytes_received");
> -       ASSERT_EQ(result.bytes_acked, 1002, "bytes_acked");
> +       ASSERT_EQ(result.bytes_acked, 1001, "bytes_acked");
>         ASSERT_EQ(result.data_segs_in, 1, "data_segs_in");
>         ASSERT_EQ(result.data_segs_out, 1, "data_segs_out");
>         ASSERT_EQ(result.bad_cb_test_rv, 0x80, "bad_cb_test_rv");
>         ASSERT_EQ(result.good_cb_test_rv, 0, "good_cb_test_rv");
> -       ASSERT_EQ(result.num_listen, 1, "num_listen");
> -
> -       /* 3 comes from one listening socket + both ends of the connection */
> -       ASSERT_EQ(result.num_close_events, 3, "num_close_events");
> +       ASSERT_EQ(result.num_listen_close, 1, "num_listen");
> +       ASSERT_EQ(result.num_last_ack, 1, "num_last_ack");
> +       ASSERT_EQ(result.num_fin_wait2, 1, "num_fin_wait2");
>
>         /* check setsockopt for SAVE_SYN */
>         key = 0;
> diff --git i/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c w/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c
> index 3e6912e4df3d..2c5ffb50d6e0 100644
> --- i/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c
> +++ w/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c
> @@ -55,9 +55,11 @@ int bpf_testcb(struct bpf_sock_ops *skops)
>  {
>         char header[sizeof(struct ipv6hdr) + sizeof(struct tcphdr)];
>         struct bpf_sock_ops *reuse = skops;
> +       struct tcpbpf_globals *gp;
>         struct tcphdr *thdr;
>         int good_call_rv = 0;
>         int bad_call_rv = 0;
> +       __u32 key_zero = 0;
>         int save_syn = 1;
>         int rv = -1;
>         int v = 0;
> @@ -155,26 +157,21 @@ int bpf_testcb(struct bpf_sock_ops *skops)
>         case BPF_SOCK_OPS_RETRANS_CB:
>                 break;
>         case BPF_SOCK_OPS_STATE_CB:
> -               if (skops->args[1] == BPF_TCP_CLOSE) {
> -                       __u32 key = 0;
> -                       struct tcpbpf_globals g, *gp;
> -
> -                       gp = bpf_map_lookup_elem(&global_map, &key);
> -                       if (!gp)
> -                               break;
> -                       g = *gp;
> -                       if (skops->args[0] == BPF_TCP_LISTEN) {
> -                               g.num_listen++;
> -                       } else {
> -                               g.total_retrans = skops->total_retrans;
> -                               g.data_segs_in = skops->data_segs_in;
> -                               g.data_segs_out = skops->data_segs_out;
> -                               g.bytes_received = skops->bytes_received;
> -                               g.bytes_acked = skops->bytes_acked;
> -                       }
> -                       g.num_close_events++;
> -                       bpf_map_update_elem(&global_map, &key, &g,
> -                                           BPF_ANY);
> +               gp = bpf_map_lookup_elem(&global_map, &key_zero);
> +               if (!gp)
> +                       break;
> +               if (skops->args[1] == BPF_TCP_CLOSE &&
> +                   skops->args[0] == BPF_TCP_LISTEN) {
> +                       gp->num_listen_close++;
> +               } else if (skops->args[1] == BPF_TCP_LAST_ACK) {
> +                       gp->total_retrans = skops->total_retrans;
> +                       gp->data_segs_in = skops->data_segs_in;
> +                       gp->data_segs_out = skops->data_segs_out;
> +                       gp->bytes_received = skops->bytes_received;
> +                       gp->bytes_acked = skops->bytes_acked;
> +                       gp->num_last_ack++;
> +               } else if (skops->args[1] == BPF_TCP_FIN_WAIT2) {
> +                       gp->num_fin_wait2++;
>                 }
>                 break;
>         case BPF_SOCK_OPS_TCP_LISTEN_CB:
> diff --git i/tools/testing/selftests/bpf/test_tcpbpf.h w/tools/testing/selftests/bpf/test_tcpbpf.h
> index 6220b95cbd02..0dec324ba4a6 100644
> --- i/tools/testing/selftests/bpf/test_tcpbpf.h
> +++ w/tools/testing/selftests/bpf/test_tcpbpf.h
> @@ -12,7 +12,8 @@ struct tcpbpf_globals {
>         __u32 good_cb_test_rv;
>         __u64 bytes_received;
>         __u64 bytes_acked;
> -       __u32 num_listen;
> -       __u32 num_close_events;
> +       __u32 num_listen_close;
> +       __u32 num_last_ack;
> +       __u32 num_fin_wait2;
>  };
>  #endif

I can look at pulling this in and including it as a patch 5 if you
would prefer. If I find any issues I will let you know.

> I also noticed the bytes_received/acked depends on the order of close(),
> i.e. always close the accepted fd first.  I think a comment
> in the tcpbpf_user.c is good enough for now.

Okay, I can add a comment explaining this.
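
Probably something along these lines right before the close() calls
(just a sketch, using the accept_fd/client_fd names from your inline
suggestion above):

        /* The expected bytes_received/bytes_acked values assume the
         * accepted fd is closed before the client fd, so keep this
         * close() ordering.
         */
        close(accept_fd);
        close(client_fd);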

> [ It does not have to be in this set and it can be done in another
>   follow up effort.
>   Instead of using a bpf map to store the result, using global
>   variables in test_tcpbpf_kern.c will simplify the code further. ]

I assume this comment is about the changes to test_tcpbpf_kern.c? Just
want to clarify as I assume this isn't about adding the comment about
the socket closing order affecting the bytes_received/acked.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [bpf-next PATCH 2/4] selftests/bpf: Drop python client/server in favor of threads
  2020-10-29 16:58     ` Alexander Duyck
@ 2020-10-29 18:12       ` Martin KaFai Lau
  2020-10-29 19:51         ` Alexander Duyck
  0 siblings, 1 reply; 15+ messages in thread
From: Martin KaFai Lau @ 2020-10-29 18:12 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Kernel Team, Netdev, Eric Dumazet, brakmo, alexanderduyck

On Thu, Oct 29, 2020 at 09:58:15AM -0700, Alexander Duyck wrote:
[ ... ]

> > > @@ -43,7 +94,9 @@ int verify_result(const struct tcpbpf_globals *result)
> > >       EXPECT_EQ(0x80, result->bad_cb_test_rv, PRIu32);
> > >       EXPECT_EQ(0, result->good_cb_test_rv, PRIu32);
> > >       EXPECT_EQ(1, result->num_listen, PRIu32);
> > > -     EXPECT_EQ(EXPECTED_CLOSE_EVENTS, result->num_close_events, PRIu32);
> > > +
> > > +     /* 3 comes from one listening socket + both ends of the connection */
> > > +     EXPECT_EQ(3, result->num_close_events, PRIu32);
> > >
> > >       return ret;
> > >  }
> > > @@ -67,6 +120,52 @@ int verify_sockopt_result(int sock_map_fd)
> > >       return ret;
> > >  }
> > >
> > > +static int run_test(void)
> > > +{
> > > +     int server_fd, client_fd;
> > > +     void *server_err;
> > > +     char buf[1000];
> > > +     pthread_t tid;
> > > +     int err = -1;
> > > +     int i;
> > > +
> > > +     server_fd = start_server(AF_INET6, SOCK_STREAM, LO_ADDR6, 0, 0);
> > > +     if (CHECK_FAIL(server_fd < 0))
> > > +             return err;
> > > +
> > > +     pthread_mutex_lock(&server_started_mtx);
> > > +     if (CHECK_FAIL(pthread_create(&tid, NULL, server_thread,
> > > +                                   (void *)&server_fd)))
> > > +             goto close_server_fd;
> > > +
> > > +     pthread_cond_wait(&server_started, &server_started_mtx);
> > > +     pthread_mutex_unlock(&server_started_mtx);
> > > +
> > > +     client_fd = connect_to_fd(server_fd, 0);
> > > +     if (client_fd < 0)
> > > +             goto close_server_fd;
> > > +
> > > +     for (i = 0; i < 1000; i++)
> > > +             buf[i] = '+';
> > > +
> > > +     if (CHECK_FAIL(send(client_fd, buf, 1000, 0) < 1000))
> > > +             goto close_client_fd;
> > > +
> > > +     if (CHECK_FAIL(recv(client_fd, buf, 500, 0) < 500))
> > > +             goto close_client_fd;
> > > +
> > > +     pthread_join(tid, &server_err);
> > I think this can be further simplified without starting thread
> > and do everything in run_test() instead.
> >
> > Something like this (uncompiled code):
> >
> >         accept_fd = accept(server_fd, NULL, 0);
> >         send(client_fd, plus_buf, 1000, 0);
> >         recv(accept_fd, recv_buf, 1000, 0);
> >         send(accept_fd, dot_buf, 500, 0);
> >         recv(client_fd, recv_buf, 500, 0);
> 
> I can take a look at switching it over.
> 
> > > +
> > > +     err = (int)(long)server_err;
> > > +     CHECK_FAIL(err);
> > > +
> > > +close_client_fd:
> > > +     close(client_fd);
> > > +close_server_fd:
> > > +     close(server_fd);
> > > +     return err;
> > > +}
> > > +
> > >  void test_tcpbpf_user(void)
> > >  {
> > >       const char *file = "test_tcpbpf_kern.o";
> > > @@ -74,7 +173,6 @@ void test_tcpbpf_user(void)
> > >       struct tcpbpf_globals g = {0};
> > >       struct bpf_object *obj;
> > >       int cg_fd = -1;
> > > -     int retry = 10;
> > >       __u32 key = 0;
> > >       int rv;
> > >
> > > @@ -94,11 +192,6 @@ void test_tcpbpf_user(void)
> > >               goto err;
> > >       }
> > >
> > > -     if (CHECK_FAIL(system("./tcp_server.py"))) {
> > > -             fprintf(stderr, "FAILED: TCP server\n");
> > > -             goto err;
> > > -     }
> > > -
> > >       map_fd = bpf_find_map(__func__, obj, "global_map");
> > >       if (CHECK_FAIL(map_fd < 0))
> > >               goto err;
> > > @@ -107,21 +200,17 @@ void test_tcpbpf_user(void)
> > >       if (CHECK_FAIL(sock_map_fd < 0))
> > >               goto err;
> > >
> > > -retry_lookup:
> > > +     if (run_test()) {
> > > +             fprintf(stderr, "FAILED: TCP server\n");
> > > +             goto err;
> > > +     }
> > > +
> > >       rv = bpf_map_lookup_elem(map_fd, &key, &g);
> > >       if (CHECK_FAIL(rv != 0)) {
> > CHECK() is a better one here if it needs to output error message.
> > The same goes for similar usages in this patch set.
> >
> > For the start_server() above which has already logged the error message,
> > CHECK_FAIL() is good enough.
> >
> > >               fprintf(stderr, "FAILED: bpf_map_lookup_elem returns %d\n", rv);
> > >               goto err;
> > >       }
> > >
> > > -     if (g.num_close_events != EXPECTED_CLOSE_EVENTS && retry--) {
> > It is good to have a solution to avoid a test depending on some number
> > of retries.
> >
> > After looking at BPF_SOCK_OPS_STATE_CB in test_tcpbpf_kern.c,
> > it is not clear to me removing python alone is enough to avoid the
> > race (so the retry--).  One of the sk might still be in TCP_LAST_ACK
> > instead of TCP_CLOSE.
> >
> 
> After you pointed this out I decided to go back through and do some
> further testing. After testing this for several thousand iterations it
> does look like the issue can still happen, it was just significantly
> less frequent with the threaded approach, but it was still there. So I
> will go back through and add this back and then fold it into the
> verify_results function in the third patch. Although I might reduce
> the wait times as it seems like with the inline approach we only need
> in the 10s of microseconds instead of 100s for the sockets to close
> out.
I think this retry-and-wait can be avoided.  More on this...

> 
> > Also, when looking closer at BPF_SOCK_OPS_STATE_CB in test_tcpbpf_kern.c,
> > it seems the map value "gp" is slapped together across multiple
> > TCP_CLOSE events which may be not easy to understand.
> >
> > How about it checks different states: TCP_CLOSE, TCP_LAST_ACK,
> > and BPF_TCP_FIN_WAIT2.  Each of this state will update its own
> > values under "gp".  Something like this (only compiler tested on
> > top of patch 4):
> >
> > diff --git i/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c w/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> > index 7e92c37976ac..65b247b03dfc 100644
> > --- i/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> > +++ w/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> > @@ -90,15 +90,14 @@ static void verify_result(int map_fd, int sock_map_fd)
> >               result.event_map, expected_events);
> >
> >         ASSERT_EQ(result.bytes_received, 501, "bytes_received");
> > -       ASSERT_EQ(result.bytes_acked, 1002, "bytes_acked");
> > +       ASSERT_EQ(result.bytes_acked, 1001, "bytes_acked");
> >         ASSERT_EQ(result.data_segs_in, 1, "data_segs_in");
> >         ASSERT_EQ(result.data_segs_out, 1, "data_segs_out");
> >         ASSERT_EQ(result.bad_cb_test_rv, 0x80, "bad_cb_test_rv");
> >         ASSERT_EQ(result.good_cb_test_rv, 0, "good_cb_test_rv");
> > -       ASSERT_EQ(result.num_listen, 1, "num_listen");
> > -
> > -       /* 3 comes from one listening socket + both ends of the connection */
> > -       ASSERT_EQ(result.num_close_events, 3, "num_close_events");
> > +       ASSERT_EQ(result.num_listen_close, 1, "num_listen");
> > +       ASSERT_EQ(result.num_last_ack, 1, "num_last_ack");
> > +       ASSERT_EQ(result.num_fin_wait2, 1, "num_fin_wait2");
> >
> >         /* check setsockopt for SAVE_SYN */
> >         key = 0;
> > diff --git i/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c w/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c
> > index 3e6912e4df3d..2c5ffb50d6e0 100644
> > --- i/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c
> > +++ w/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c
> > @@ -55,9 +55,11 @@ int bpf_testcb(struct bpf_sock_ops *skops)
> >  {
> >         char header[sizeof(struct ipv6hdr) + sizeof(struct tcphdr)];
> >         struct bpf_sock_ops *reuse = skops;
> > +       struct tcpbpf_globals *gp;
> >         struct tcphdr *thdr;
> >         int good_call_rv = 0;
> >         int bad_call_rv = 0;
> > +       __u32 key_zero = 0;
> >         int save_syn = 1;
> >         int rv = -1;
> >         int v = 0;
> > @@ -155,26 +157,21 @@ int bpf_testcb(struct bpf_sock_ops *skops)
> >         case BPF_SOCK_OPS_RETRANS_CB:
> >                 break;
> >         case BPF_SOCK_OPS_STATE_CB:
> > -               if (skops->args[1] == BPF_TCP_CLOSE) {
> > -                       __u32 key = 0;
> > -                       struct tcpbpf_globals g, *gp;
> > -
> > -                       gp = bpf_map_lookup_elem(&global_map, &key);
> > -                       if (!gp)
> > -                               break;
> > -                       g = *gp;
> > -                       if (skops->args[0] == BPF_TCP_LISTEN) {
> > -                               g.num_listen++;
> > -                       } else {
> > -                               g.total_retrans = skops->total_retrans;
> > -                               g.data_segs_in = skops->data_segs_in;
> > -                               g.data_segs_out = skops->data_segs_out;
> > -                               g.bytes_received = skops->bytes_received;
> > -                               g.bytes_acked = skops->bytes_acked;
> > -                       }
> > -                       g.num_close_events++;
> > -                       bpf_map_update_elem(&global_map, &key, &g,
> > -                                           BPF_ANY);
> > +               gp = bpf_map_lookup_elem(&global_map, &key_zero);
> > +               if (!gp)
> > +                       break;
> > +               if (skops->args[1] == BPF_TCP_CLOSE &&
> > +                   skops->args[0] == BPF_TCP_LISTEN) {
> > +                       gp->num_listen_close++;
> > +               } else if (skops->args[1] == BPF_TCP_LAST_ACK) {
> > +                       gp->total_retrans = skops->total_retrans;
> > +                       gp->data_segs_in = skops->data_segs_in;
> > +                       gp->data_segs_out = skops->data_segs_out;
> > +                       gp->bytes_received = skops->bytes_received;
> > +                       gp->bytes_acked = skops->bytes_acked;
> > +                       gp->num_last_ack++;
> > +               } else if (skops->args[1] == BPF_TCP_FIN_WAIT2) {
> > +                       gp->num_fin_wait2++;
I meant that, with the above change in "case BPF_SOCK_OPS_STATE_CB",
the retry-and-wait in tcpbpf_user.c can be avoided.

What may still be needed in tcpbpf_user.c is to use shutdown and
read-zero to ensure the sk has gone through those states before
calling verify_result().  Something like this [ uncompiled code again :) ]:

        /* Always send FIN from accept_fd first to
         * ensure it will go through FIN_WAIT_2.
         */
        shutdown(accept_fd, SHUT_WR);
        /* Ensure client_fd gets the FIN */
        err = read(client_fd, buf, sizeof(buf));
        if (CHECK(err != 0, "read-after-shutdown(client_fd):",
                  "err:%d errno:%d\n", err, errno))
                goto close_accept_fd;

        /* FIN sends from client_fd and it must be in LAST_ACK now */
        shutdown(client_fd, SHUT_WR);
        /* Ensure accept_fd gets the FIN-ACK.
         * accept_fd must have passed the FIN_WAIT2.
         */
        err = read(accept_fd, buf, sizeof(buf));
        if (CHECK(err != 0, "read-after-shutdown(accept_fd):",
                  "err:%d errno:%d\n", err, errno))
                goto close_accept_fd;

	close(server_fd);
	close(accept_fd);
	close(client_fd);

	/* All sk has gone through the states being tested.
	 * check the results now.
	 */
	verify_result(map_fd, sock_map_fd);

> >                 }
> >                 break;
> >         case BPF_SOCK_OPS_TCP_LISTEN_CB:
> > diff --git i/tools/testing/selftests/bpf/test_tcpbpf.h w/tools/testing/selftests/bpf/test_tcpbpf.h
> > index 6220b95cbd02..0dec324ba4a6 100644
> > --- i/tools/testing/selftests/bpf/test_tcpbpf.h
> > +++ w/tools/testing/selftests/bpf/test_tcpbpf.h
> > @@ -12,7 +12,8 @@ struct tcpbpf_globals {
> >         __u32 good_cb_test_rv;
> >         __u64 bytes_received;
> >         __u64 bytes_acked;
> > -       __u32 num_listen;
> > -       __u32 num_close_events;
> > +       __u32 num_listen_close;
> > +       __u32 num_last_ack;
> > +       __u32 num_fin_wait2;
> >  };
> >  #endif
> 
> I can look at pulling this in and including it as a patch 5 if you
> would prefer. If I find any issues I will let you know.
> 
> > I also noticed the bytes_received/acked depends on the order of close(),
> > i.e. always close the accepted fd first.  I think a comment
> > in the tcpbpf_user.c is good enough for now.
> 
> Okay, I can add a comment explaining this.
> 
> > [ It does not have to be in this set and it can be done in another
> >   follow up effort.
> >   Instead of using a bpf map to store the result, using global
> >   variables in test_tcpbpf_kern.c will simplify the code further. ]
> 
> I assume this comment is about the changes to test_tcpbpf_kern.c? Just
> want to clarify as I assume this isn't about adding the comment about
> the socket closing order affecting the bytes_received/acked.
Right, it is unrelated to the "adding the comment about socket closing order".
It is about changing test_tcpbpf_kern.c and tcpbpf_user.c to
use global variables instead of a bpf map to store the results.
Again, it can be done later.  This can be used as an example:
b18c1f0aa477 ("bpf: selftest: Adapt sock_fields test to use skel and global variables")
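
Roughly the same shape as in that commit (untested): declare the
results as a plain global in test_tcpbpf_kern.c instead of the
one-element global_map, e.g.

        struct tcpbpf_globals global = {};

        /* ...and in BPF_SOCK_OPS_STATE_CB update it directly: */
        global.num_listen_close++;

and then read it back in tcpbpf_user.c through the skeleton (assuming
the zero-initialized global lands in .bss and thus under skel->bss):

        struct tcpbpf_globals result = skel->bss->global;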

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [bpf-next PATCH 2/4] selftests/bpf: Drop python client/server in favor of threads
  2020-10-29 18:12       ` Martin KaFai Lau
@ 2020-10-29 19:51         ` Alexander Duyck
  0 siblings, 0 replies; 15+ messages in thread
From: Alexander Duyck @ 2020-10-29 19:51 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Kernel Team, Netdev, Eric Dumazet, brakmo, alexanderduyck

On Thu, Oct 29, 2020 at 11:13 AM Martin KaFai Lau <kafai@fb.com> wrote:
>
> On Thu, Oct 29, 2020 at 09:58:15AM -0700, Alexander Duyck wrote:
> [ ... ]
>

[...]

> >
> > > Also, when looking closer at BPF_SOCK_OPS_STATE_CB in test_tcpbpf_kern.c,
> > > it seems the map value "gp" is slapped together across multiple
> > > TCP_CLOSE events which may be not easy to understand.
> > >
> > > How about it checks different states: TCP_CLOSE, TCP_LAST_ACK,
> > > and BPF_TCP_FIN_WAIT2.  Each of this state will update its own
> > > values under "gp".  Something like this (only compiler tested on
> > > top of patch 4):
> > >
> > > diff --git i/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c w/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> > > index 7e92c37976ac..65b247b03dfc 100644
> > > --- i/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> > > +++ w/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> > > @@ -90,15 +90,14 @@ static void verify_result(int map_fd, int sock_map_fd)
> > >               result.event_map, expected_events);
> > >
> > >         ASSERT_EQ(result.bytes_received, 501, "bytes_received");
> > > -       ASSERT_EQ(result.bytes_acked, 1002, "bytes_acked");
> > > +       ASSERT_EQ(result.bytes_acked, 1001, "bytes_acked");
> > >         ASSERT_EQ(result.data_segs_in, 1, "data_segs_in");
> > >         ASSERT_EQ(result.data_segs_out, 1, "data_segs_out");
> > >         ASSERT_EQ(result.bad_cb_test_rv, 0x80, "bad_cb_test_rv");
> > >         ASSERT_EQ(result.good_cb_test_rv, 0, "good_cb_test_rv");
> > > -       ASSERT_EQ(result.num_listen, 1, "num_listen");
> > > -
> > > -       /* 3 comes from one listening socket + both ends of the connection */
> > > -       ASSERT_EQ(result.num_close_events, 3, "num_close_events");
> > > +       ASSERT_EQ(result.num_listen_close, 1, "num_listen");
> > > +       ASSERT_EQ(result.num_last_ack, 1, "num_last_ack");
> > > +       ASSERT_EQ(result.num_fin_wait2, 1, "num_fin_wait2");
> > >
> > >         /* check setsockopt for SAVE_SYN */
> > >         key = 0;
> > > diff --git i/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c w/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c
> > > index 3e6912e4df3d..2c5ffb50d6e0 100644
> > > --- i/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c
> > > +++ w/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c
> > > @@ -55,9 +55,11 @@ int bpf_testcb(struct bpf_sock_ops *skops)
> > >  {
> > >         char header[sizeof(struct ipv6hdr) + sizeof(struct tcphdr)];
> > >         struct bpf_sock_ops *reuse = skops;
> > > +       struct tcpbpf_globals *gp;
> > >         struct tcphdr *thdr;
> > >         int good_call_rv = 0;
> > >         int bad_call_rv = 0;
> > > +       __u32 key_zero = 0;
> > >         int save_syn = 1;
> > >         int rv = -1;
> > >         int v = 0;
> > > @@ -155,26 +157,21 @@ int bpf_testcb(struct bpf_sock_ops *skops)
> > >         case BPF_SOCK_OPS_RETRANS_CB:
> > >                 break;
> > >         case BPF_SOCK_OPS_STATE_CB:
> > > -               if (skops->args[1] == BPF_TCP_CLOSE) {
> > > -                       __u32 key = 0;
> > > -                       struct tcpbpf_globals g, *gp;
> > > -
> > > -                       gp = bpf_map_lookup_elem(&global_map, &key);
> > > -                       if (!gp)
> > > -                               break;
> > > -                       g = *gp;
> > > -                       if (skops->args[0] == BPF_TCP_LISTEN) {
> > > -                               g.num_listen++;
> > > -                       } else {
> > > -                               g.total_retrans = skops->total_retrans;
> > > -                               g.data_segs_in = skops->data_segs_in;
> > > -                               g.data_segs_out = skops->data_segs_out;
> > > -                               g.bytes_received = skops->bytes_received;
> > > -                               g.bytes_acked = skops->bytes_acked;
> > > -                       }
> > > -                       g.num_close_events++;
> > > -                       bpf_map_update_elem(&global_map, &key, &g,
> > > -                                           BPF_ANY);
> > > +               gp = bpf_map_lookup_elem(&global_map, &key_zero);
> > > +               if (!gp)
> > > +                       break;
> > > +               if (skops->args[1] == BPF_TCP_CLOSE &&
> > > +                   skops->args[0] == BPF_TCP_LISTEN) {
> > > +                       gp->num_listen_close++;
> > > +               } else if (skops->args[1] == BPF_TCP_LAST_ACK) {
> > > +                       gp->total_retrans = skops->total_retrans;
> > > +                       gp->data_segs_in = skops->data_segs_in;
> > > +                       gp->data_segs_out = skops->data_segs_out;
> > > +                       gp->bytes_received = skops->bytes_received;
> > > +                       gp->bytes_acked = skops->bytes_acked;
> > > +                       gp->num_last_ack++;
> > > +               } else if (skops->args[1] == BPF_TCP_FIN_WAIT2) {
> > > +                       gp->num_fin_wait2++;
> I meant with the above change in "case BPF_SOCK_OPS_STATE_CB".
> The retry-and-wait in tcpbpf_user.c can be avoided.
>
> What may still be needed in tcpbpf_user.c is to use shutdown and
> read-zero to ensure the sk has gone through those states before
> calling verify_result().  Something like this [ uncompiled code again :) ]:
>
>         /* Always send FIN from accept_fd first to
>          * ensure it will go through FIN_WAIT_2.
>          */
>         shutdown(accept_fd, SHUT_WR);
>         /* Ensure client_fd gets the FIN */
>         err = read(client_fd, buf, sizeof(buf));
>         if (CHECK(err != 0, "read-after-shutdown(client_fd):",
>                   "err:%d errno:%d\n", err, errno))
>                 goto close_accept_fd;
>
>         /* FIN sends from client_fd and it must be in LAST_ACK now */
>         shutdown(client_fd, SHUT_WR);
>         /* Ensure accept_fd gets the FIN-ACK.
>          * accept_fd must have passed the FIN_WAIT2.
>          */
>         err = read(accept_fd, buf, sizeof(buf));
>         if (CHECK(err != 0, "read-after-shutdown(accept_fd):",
>                   "err:%d errno:%d\n", err, errno))
>                 goto close_accept_fd;
>
>         close(server_fd);
>         close(accept_fd);
>         close(client_fd);
>
>         /* All sk has gone through the states being tested.
>          * check the results now.
>          */
>         verify_result(map_fd, sock_map_fd);

Okay. I think I see how that works then. Basically shut down the
write side on one end and read on the other, expecting the read to
block until the FIN forces it out with a return of 0. Although I
might just use recv since that was the call being used to pull data
from the socket rather than read. I just need to make sure I start
with the shutdown on the accept end so that it closes first, to avoid
the received/acked values being swapped.
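
So something like this (untested), with the accept side shut down
first and recv() in place of read():

        shutdown(accept_fd, SHUT_WR);
        /* recv() returning 0 means client_fd has seen the FIN */
        err = recv(client_fd, buf, sizeof(buf), 0);
        if (CHECK(err != 0, "recv-after-shutdown(client_fd)",
                  "err:%d errno:%d\n", err, errno))
                goto close_accept_fd;

        shutdown(client_fd, SHUT_WR);
        /* accept_fd has passed FIN_WAIT2 once the FIN from client_fd lands */
        err = recv(accept_fd, buf, sizeof(buf), 0);
        if (CHECK(err != 0, "recv-after-shutdown(accept_fd)",
                  "err:%d errno:%d\n", err, errno))
                goto close_accept_fd;

        close(accept_fd);
        close(client_fd);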

Thanks.

- Alex

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [bpf-next PATCH 4/4] selftests/bpf: Migrate tcpbpf_user.c to use BPF skeleton
  2020-10-28  1:47 ` [bpf-next PATCH 4/4] selftests/bpf: Migrate tcpbpf_user.c to use BPF skeleton Alexander Duyck
@ 2020-10-29 23:20   ` Andrii Nakryiko
  2020-10-30  0:42     ` Alexander Duyck
  0 siblings, 1 reply; 15+ messages in thread
From: Andrii Nakryiko @ 2020-10-29 23:20 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Martin Lau,
	john fastabend, Kernel Team, Networking, Eric Dumazet,
	Lawrence Brakmo, alexanderduyck

On Thu, Oct 29, 2020 at 12:57 AM Alexander Duyck
<alexander.duyck@gmail.com> wrote:
>
> From: Alexander Duyck <alexanderduyck@fb.com>
>
> Update tcpbpf_user.c to make use of the BPF skeleton. Doing this we can
> simplify test_tcpbpf_user and reduce the overhead involved in setting up
> the test.
>
> Signed-off-by: Alexander Duyck <alexanderduyck@fb.com>
> ---

Few suggestions below, but overall looks good:

Acked-by: Andrii Nakryiko <andrii@kernel.org>

>  .../testing/selftests/bpf/prog_tests/tcpbpf_user.c |   48 +++++++++-----------
>  1 file changed, 21 insertions(+), 27 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> index 4e1190894e1e..7e92c37976ac 100644
> --- a/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> +++ b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> @@ -5,6 +5,7 @@
>
>  #include "test_tcpbpf.h"
>  #include "cgroup_helpers.h"
> +#include "test_tcpbpf_kern.skel.h"
>
>  #define LO_ADDR6 "::1"
>  #define CG_NAME "/tcpbpf-user-test"
> @@ -162,39 +163,32 @@ static void run_test(int map_fd, int sock_map_fd)
>
>  void test_tcpbpf_user(void)
>  {
> -       const char *file = "test_tcpbpf_kern.o";
> -       int prog_fd, map_fd, sock_map_fd;
> -       struct bpf_object *obj;
> +       struct test_tcpbpf_kern *skel;
> +       int map_fd, sock_map_fd;
> +       struct bpf_link *link;
>         int cg_fd = -1;
> -       int rv;
> -
> -       cg_fd = cgroup_setup_and_join(CG_NAME);
> -       if (CHECK_FAIL(cg_fd < 0))
> -               goto err;
>
> -       if (CHECK_FAIL(bpf_prog_load(file, BPF_PROG_TYPE_SOCK_OPS, &obj, &prog_fd))) {
> -               fprintf(stderr, "FAILED: load_bpf_file failed for: %s\n", file);
> -               goto err;
> -       }
> +       skel = test_tcpbpf_kern__open_and_load();
> +       if (CHECK(!skel, "open and load skel", "failed"))
> +               return;
>
> -       rv = bpf_prog_attach(prog_fd, cg_fd, BPF_CGROUP_SOCK_OPS, 0);
> -       if (CHECK_FAIL(rv)) {
> -               fprintf(stderr, "FAILED: bpf_prog_attach: %d (%s)\n",
> -                      errno, strerror(errno));
> -               goto err;
> -       }
> +       cg_fd = test__join_cgroup(CG_NAME);
> +       if (CHECK_FAIL(cg_fd < 0))

please use either CHECK() or one of the newer ASSERT_xxx() macros (also
defined in test_progs.h); CHECK_FAIL should be avoided in general.

> +               goto cleanup_skel;
>
> -       map_fd = bpf_find_map(__func__, obj, "global_map");
> -       if (CHECK_FAIL(map_fd < 0))
> -               goto err;
> +       map_fd = bpf_map__fd(skel->maps.global_map);
> +       sock_map_fd = bpf_map__fd(skel->maps.sockopt_results);
>
> -       sock_map_fd = bpf_find_map(__func__, obj, "sockopt_results");
> -       if (CHECK_FAIL(sock_map_fd < 0))
> -               goto err;
> +       link = bpf_program__attach_cgroup(skel->progs.bpf_testcb, cg_fd);

you can do skel->links.bpf_testcb = bpf_program__attach_cgroup() and
skeleton's __destroy() call will take care of destroying the link

> +       if (CHECK(IS_ERR(link), "attach_cgroup(estab)", "err: %ld\n",
> +                 PTR_ERR(link)))

there is a convenient ASSERT_OK_PTR() specifically for pointers like
this (and NULL ones as well, of course); saves a lot of typing and
encapsulates error extraction internally.

> +               goto cleanup_namespace;
>
>         run_test(map_fd, sock_map_fd);
> -err:
> -       bpf_prog_detach(cg_fd, BPF_CGROUP_SOCK_OPS);
> +
> +       bpf_link__destroy(link);
> +cleanup_namespace:
>         close(cg_fd);
> -       cleanup_cgroup_environment();
> +cleanup_skel:
> +       test_tcpbpf_kern__destroy(skel);
>  }
>
>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [bpf-next PATCH 1/4] selftests/bpf: Move test_tcppbf_user into test_progs
  2020-10-28  1:47 ` [bpf-next PATCH 1/4] selftests/bpf: Move test_tcppbf_user into test_progs Alexander Duyck
@ 2020-10-29 23:27   ` Andrii Nakryiko
  2020-10-30  0:39     ` Alexander Duyck
  0 siblings, 1 reply; 15+ messages in thread
From: Andrii Nakryiko @ 2020-10-29 23:27 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Martin Lau,
	john fastabend, Kernel Team, Networking, Eric Dumazet,
	Lawrence Brakmo, alexanderduyck

On Wed, Oct 28, 2020 at 4:50 PM Alexander Duyck
<alexander.duyck@gmail.com> wrote:
>
> From: Alexander Duyck <alexanderduyck@fb.com>
>
> Recently a bug was missed due to the fact that test_tcpbpf_user is not a
> part of test_progs. In order to prevent similar issues in the future move
> the test functionality into test_progs. By doing this we can make certain
> that it is a part of standard testing and will not be overlooked.
>
> As a part of moving the functionality into test_progs it is necessary to
> integrate with the test_progs framework and to drop any redundant code.
> This patch:
> 1. Cleans up the include headers
> 2. Dropped a duplicate definition of bpf_find_map
> 3. Replaced printf calls with fprintf to stderr
> 4. Renamed main to test_tcpbpf_user
> 5. Dropped return value in favor of CHECK calls to check for errors
>
> Signed-off-by: Alexander Duyck <alexanderduyck@fb.com>
> ---
>  tools/testing/selftests/bpf/Makefile               |    3
>  .../testing/selftests/bpf/prog_tests/tcpbpf_user.c |  138 +++++++++++++++++
>  tools/testing/selftests/bpf/test_tcpbpf_user.c     |  165 --------------------

Please remove the binary from .gitignore as well

>  3 files changed, 139 insertions(+), 167 deletions(-)
>  create mode 100644 tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
>  delete mode 100644 tools/testing/selftests/bpf/test_tcpbpf_user.c

if this file is mostly the same, then Git should be able to detect
that this is a file rename. That will be captured in a diff explicitly
and will minimize this patch significantly. Please double-check why
this was not detected properly.

[...]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [bpf-next PATCH 1/4] selftests/bpf: Move test_tcppbf_user into test_progs
  2020-10-29 23:27   ` Andrii Nakryiko
@ 2020-10-30  0:39     ` Alexander Duyck
  0 siblings, 0 replies; 15+ messages in thread
From: Alexander Duyck @ 2020-10-30  0:39 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Martin Lau,
	john fastabend, Kernel Team, Networking, Eric Dumazet,
	Lawrence Brakmo, alexanderduyck

On Thu, Oct 29, 2020 at 4:27 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Wed, Oct 28, 2020 at 4:50 PM Alexander Duyck
> <alexander.duyck@gmail.com> wrote:
> >
> > From: Alexander Duyck <alexanderduyck@fb.com>
> >
> > Recently a bug was missed due to the fact that test_tcpbpf_user is not a
> > part of test_progs. In order to prevent similar issues in the future move
> > the test functionality into test_progs. By doing this we can make certain
> > that it is a part of standard testing and will not be overlooked.
> >
> > As a part of moving the functionality into test_progs it is necessary to
> > integrate with the test_progs framework and to drop any redundant code.
> > This patch:
> > 1. Cleans up the include headers
> > 2. Dropped a duplicate definition of bpf_find_map
> > 3. Replaced printf calls with fprintf to stderr
> > 4. Renamed main to test_tcpbpf_user
> > 5. Dropped return value in favor of CHECK calls to check for errors
> >
> > Signed-off-by: Alexander Duyck <alexanderduyck@fb.com>
> > ---
> >  tools/testing/selftests/bpf/Makefile               |    3
> >  .../testing/selftests/bpf/prog_tests/tcpbpf_user.c |  138 +++++++++++++++++
> >  tools/testing/selftests/bpf/test_tcpbpf_user.c     |  165 --------------------
>
> Please remove the binary from .gitignore as well

Okay, I will update that.

> >  3 files changed, 139 insertions(+), 167 deletions(-)
> >  create mode 100644 tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> >  delete mode 100644 tools/testing/selftests/bpf/test_tcpbpf_user.c
>
> if this file is mostly the same, then Git should be able to detect
> that this is a file rename. That will be captured in a diff explicitly
> and will minimize this patch significantly. Please double-check why
> this was not detected properly.
>
> [...]

I'll look into it. I hadn't noticed that the patch it generated is
different than the one I am looking at in stgit. When I do a stg
show in stgit it lists the change as a rename, so I suspect a setting
somewhere is assuming legacy behavior or something. I'll get that
straightened out before I submit a v2.
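
If it turns out to be on the plain-git side rather than stgit, it may
just be a matter of enabling rename detection when generating the
patch, e.g. something like (just a guess at this point):

        git config diff.renames true
        git format-patch -M ...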

Thanks.

- Alex

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [bpf-next PATCH 4/4] selftests/bpf: Migrate tcpbpf_user.c to use BPF skeleton
  2020-10-29 23:20   ` Andrii Nakryiko
@ 2020-10-30  0:42     ` Alexander Duyck
  2020-10-30  2:30       ` Andrii Nakryiko
  0 siblings, 1 reply; 15+ messages in thread
From: Alexander Duyck @ 2020-10-30  0:42 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Martin Lau,
	john fastabend, Kernel Team, Networking, Eric Dumazet,
	Lawrence Brakmo, alexanderduyck

On Thu, Oct 29, 2020 at 4:20 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Thu, Oct 29, 2020 at 12:57 AM Alexander Duyck
> <alexander.duyck@gmail.com> wrote:
> >
> > From: Alexander Duyck <alexanderduyck@fb.com>
> >
> > Update tcpbpf_user.c to make use of the BPF skeleton. Doing this we can
> > simplify test_tcpbpf_user and reduce the overhead involved in setting up
> > the test.
> >
> > Signed-off-by: Alexander Duyck <alexanderduyck@fb.com>
> > ---
>
> Few suggestions below, but overall looks good:
>
> Acked-by: Andrii Nakryiko <andrii@kernel.org>
>
> >  .../testing/selftests/bpf/prog_tests/tcpbpf_user.c |   48 +++++++++-----------
> >  1 file changed, 21 insertions(+), 27 deletions(-)
> >
> > diff --git a/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> > index 4e1190894e1e..7e92c37976ac 100644
> > --- a/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> > +++ b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> > @@ -5,6 +5,7 @@
> >
> >  #include "test_tcpbpf.h"
> >  #include "cgroup_helpers.h"
> > +#include "test_tcpbpf_kern.skel.h"
> >
> >  #define LO_ADDR6 "::1"
> >  #define CG_NAME "/tcpbpf-user-test"
> > @@ -162,39 +163,32 @@ static void run_test(int map_fd, int sock_map_fd)
> >
> >  void test_tcpbpf_user(void)
> >  {
> > -       const char *file = "test_tcpbpf_kern.o";
> > -       int prog_fd, map_fd, sock_map_fd;
> > -       struct bpf_object *obj;
> > +       struct test_tcpbpf_kern *skel;
> > +       int map_fd, sock_map_fd;
> > +       struct bpf_link *link;
> >         int cg_fd = -1;
> > -       int rv;
> > -
> > -       cg_fd = cgroup_setup_and_join(CG_NAME);
> > -       if (CHECK_FAIL(cg_fd < 0))
> > -               goto err;
> >
> > -       if (CHECK_FAIL(bpf_prog_load(file, BPF_PROG_TYPE_SOCK_OPS, &obj, &prog_fd))) {
> > -               fprintf(stderr, "FAILED: load_bpf_file failed for: %s\n", file);
> > -               goto err;
> > -       }
> > +       skel = test_tcpbpf_kern__open_and_load();
> > +       if (CHECK(!skel, "open and load skel", "failed"))
> > +               return;
> >
> > -       rv = bpf_prog_attach(prog_fd, cg_fd, BPF_CGROUP_SOCK_OPS, 0);
> > -       if (CHECK_FAIL(rv)) {
> > -               fprintf(stderr, "FAILED: bpf_prog_attach: %d (%s)\n",
> > -                      errno, strerror(errno));
> > -               goto err;
> > -       }
> > +       cg_fd = test__join_cgroup(CG_NAME);
> > +       if (CHECK_FAIL(cg_fd < 0))
>
> please use either CHECK() or one of the newer ASSERT_xxx() macro (also
> defined in test_progs.h), CHECK_FAIL should be avoided in general.

So the plan I had was to actually move over to the following:
        cg_fd = test__join_cgroup(CG_NAME);
        if (CHECK_FAIL(cg_fd < 0))
                goto cleanup_skel;

It still makes use of CHECK_FAIL, but it looks like test__join_cgroup
already takes care of the error messaging, so using CHECK_FAIL in this
case makes more sense.

In addition I was looking at simplifying the first patch so that it is
just the move, with only the minimal changes needed for the code to
build as a part of the test_progs framework. The end result should be
the same; it just makes it clearer that the first patch is purely a
move.

> > +               goto cleanup_skel;
> >
> > -       map_fd = bpf_find_map(__func__, obj, "global_map");
> > -       if (CHECK_FAIL(map_fd < 0))
> > -               goto err;
> > +       map_fd = bpf_map__fd(skel->maps.global_map);
> > +       sock_map_fd = bpf_map__fd(skel->maps.sockopt_results);
> >
> > -       sock_map_fd = bpf_find_map(__func__, obj, "sockopt_results");
> > -       if (CHECK_FAIL(sock_map_fd < 0))
> > -               goto err;
> > +       link = bpf_program__attach_cgroup(skel->progs.bpf_testcb, cg_fd);
>
> you can do skel->links.bpf_testcb = bpf_program__attach_cgroup() and
> skeleton's __destroy() call will take care of destroying the link

Okay, I can look into using that. Actually this has me wondering if
there wouldn't be some way to make use of test_tcpbpf_kern__attach to
achieve this in some standard way. I'll get that sorted before I
submit v2.

> > +       if (CHECK(IS_ERR(link), "attach_cgroup(estab)", "err: %ld\n",
> > +                 PTR_ERR(link)))
>
> there is a convenient ASSERT_OK_PTR() specifically for pointers like
> this (and NULL ones as well, of course); saves a lot of typing and
> encapsulates error extraction internally.

I'll update the code to address that.
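
Putting both of those together, probably something like this
(untested, assuming ASSERT_OK_PTR() is happy taking the bpf_link
pointer directly):

        skel->links.bpf_testcb =
                bpf_program__attach_cgroup(skel->progs.bpf_testcb, cg_fd);
        if (!ASSERT_OK_PTR(skel->links.bpf_testcb, "attach_cgroup"))
                goto cleanup_namespace;

with the explicit bpf_link__destroy() dropped, since
test_tcpbpf_kern__destroy() would then clean up the link.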

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [bpf-next PATCH 4/4] selftests/bpf: Migrate tcpbpf_user.c to use BPF skeleton
  2020-10-30  0:42     ` Alexander Duyck
@ 2020-10-30  2:30       ` Andrii Nakryiko
  2020-10-30 15:50         ` Alexander Duyck
  0 siblings, 1 reply; 15+ messages in thread
From: Andrii Nakryiko @ 2020-10-30  2:30 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Martin Lau,
	john fastabend, Kernel Team, Networking, Eric Dumazet,
	Lawrence Brakmo, alexanderduyck

On Thu, Oct 29, 2020 at 5:42 PM Alexander Duyck
<alexander.duyck@gmail.com> wrote:
>
> On Thu, Oct 29, 2020 at 4:20 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Thu, Oct 29, 2020 at 12:57 AM Alexander Duyck
> > <alexander.duyck@gmail.com> wrote:
> > >
> > > From: Alexander Duyck <alexanderduyck@fb.com>
> > >
> > > Update tcpbpf_user.c to make use of the BPF skeleton. Doing this we can
> > > simplify test_tcpbpf_user and reduce the overhead involved in setting up
> > > the test.
> > >
> > > Signed-off-by: Alexander Duyck <alexanderduyck@fb.com>
> > > ---
> >
> > Few suggestions below, but overall looks good:
> >
> > Acked-by: Andrii Nakryiko <andrii@kernel.org>
> >
> > >  .../testing/selftests/bpf/prog_tests/tcpbpf_user.c |   48 +++++++++-----------
> > >  1 file changed, 21 insertions(+), 27 deletions(-)
> > >
> > > diff --git a/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> > > index 4e1190894e1e..7e92c37976ac 100644
> > > --- a/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> > > +++ b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> > > @@ -5,6 +5,7 @@
> > >
> > >  #include "test_tcpbpf.h"
> > >  #include "cgroup_helpers.h"
> > > +#include "test_tcpbpf_kern.skel.h"
> > >
> > >  #define LO_ADDR6 "::1"
> > >  #define CG_NAME "/tcpbpf-user-test"
> > > @@ -162,39 +163,32 @@ static void run_test(int map_fd, int sock_map_fd)
> > >
> > >  void test_tcpbpf_user(void)
> > >  {
> > > -       const char *file = "test_tcpbpf_kern.o";
> > > -       int prog_fd, map_fd, sock_map_fd;
> > > -       struct bpf_object *obj;
> > > +       struct test_tcpbpf_kern *skel;
> > > +       int map_fd, sock_map_fd;
> > > +       struct bpf_link *link;
> > >         int cg_fd = -1;
> > > -       int rv;
> > > -
> > > -       cg_fd = cgroup_setup_and_join(CG_NAME);
> > > -       if (CHECK_FAIL(cg_fd < 0))
> > > -               goto err;
> > >
> > > -       if (CHECK_FAIL(bpf_prog_load(file, BPF_PROG_TYPE_SOCK_OPS, &obj, &prog_fd))) {
> > > -               fprintf(stderr, "FAILED: load_bpf_file failed for: %s\n", file);
> > > -               goto err;
> > > -       }
> > > +       skel = test_tcpbpf_kern__open_and_load();
> > > +       if (CHECK(!skel, "open and load skel", "failed"))
> > > +               return;
> > >
> > > -       rv = bpf_prog_attach(prog_fd, cg_fd, BPF_CGROUP_SOCK_OPS, 0);
> > > -       if (CHECK_FAIL(rv)) {
> > > -               fprintf(stderr, "FAILED: bpf_prog_attach: %d (%s)\n",
> > > -                      errno, strerror(errno));
> > > -               goto err;
> > > -       }
> > > +       cg_fd = test__join_cgroup(CG_NAME);
> > > +       if (CHECK_FAIL(cg_fd < 0))
> >
> > please use either CHECK() or one of the newer ASSERT_xxx() macro (also
> > defined in test_progs.h), CHECK_FAIL should be avoided in general.
>
> So the plan I had was to actually move over to the following:
>         cg_fd = test__join_cgroup(CG_NAME);
>         if (CHECK_FAIL(cg_fd < 0))
>                 goto cleanup_skel;
>
> It still makes use of CHECK_FAIL but it looks like test__join_cgroup
> already takes care of error messaging so using CHECK_FAIL in this case
> makes more sense.

CHECK (and ASSERT) leave a paper trail in the verbose test log, which
makes it easier to debug tests if something fails. CHECK_FAIL is
invisible unless it fails. So CHECK_FAIL should be used only for
things that happen on the order of hundreds of instances per test, or
more.
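
E.g. in this case something like the below (untested) would still show
up as a PASS line in the verbose log instead of being silent:

        cg_fd = test__join_cgroup(CG_NAME);
        if (CHECK(cg_fd < 0, "test__join_cgroup", "cg_fd:%d errno:%d\n",
                  cg_fd, errno))
                goto cleanup_skel;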

>
> In addition I was looking at simplifying the first patch which should
> just be the move with minimal changes to allow the functionality to
> build as a part of the test_progs framework. The end result should be
> the same it just helps to make the fact that the first patch should
> just be a move a little more clear.

sure


>
> > > +               goto cleanup_skel;
> > >
> > > -       map_fd = bpf_find_map(__func__, obj, "global_map");
> > > -       if (CHECK_FAIL(map_fd < 0))
> > > -               goto err;
> > > +       map_fd = bpf_map__fd(skel->maps.global_map);
> > > +       sock_map_fd = bpf_map__fd(skel->maps.sockopt_results);
> > >
> > > -       sock_map_fd = bpf_find_map(__func__, obj, "sockopt_results");
> > > -       if (CHECK_FAIL(sock_map_fd < 0))
> > > -               goto err;
> > > +       link = bpf_program__attach_cgroup(skel->progs.bpf_testcb, cg_fd);
> >
> > you can do skel->links.bpf_testcb = bpf_program__attach_cgroup() and
> > skeleton's __destroy() call will take care of destroying the link
>
> Okay, I can look into using that. Actually this has me wondering if
> there wouldn't be some way to make use of test_tcpbpf_kern__attach to
> achieve this in some standard way. I'll get that sorted before I
> submit v2.

cgroup BPF programs can't be auto-attached, because they expect a
cgroup FD, which you can't provide at compilation time (declaratively)
in BPF code. So they have to be attached manually; skeleton's attach()
will just ignore such programs.
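
I.e. (untested) calling the generated attach helper is harmless here,
it just won't create the cgroup link for you:

        /* bpf_testcb is SEC("sockops"), so there is nothing to
         * auto-attach; the manual bpf_program__attach_cgroup() is
         * still required.
         */
        test_tcpbpf_kern__attach(skel);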

>
> > > +       if (CHECK(IS_ERR(link), "attach_cgroup(estab)", "err: %ld\n",
> > > +                 PTR_ERR(link)))
> >
> > there is a convenient ASSERT_OK_PTR() specifically for pointers like
> > this (and NULL ones as well, of course); saves a lot of typing and
> > encapsulates error extraction internally.
>
> I'll update the code to address that.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [bpf-next PATCH 4/4] selftests/bpf: Migrate tcpbpf_user.c to use BPF skeleton
  2020-10-30  2:30       ` Andrii Nakryiko
@ 2020-10-30 15:50         ` Alexander Duyck
  0 siblings, 0 replies; 15+ messages in thread
From: Alexander Duyck @ 2020-10-30 15:50 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Martin Lau,
	john fastabend, Kernel Team, Networking, Eric Dumazet,
	Lawrence Brakmo, alexanderduyck

On Thu, Oct 29, 2020 at 7:30 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Thu, Oct 29, 2020 at 5:42 PM Alexander Duyck
> <alexander.duyck@gmail.com> wrote:
> >
> > On Thu, Oct 29, 2020 at 4:20 PM Andrii Nakryiko
> > <andrii.nakryiko@gmail.com> wrote:
> > >
> > > On Thu, Oct 29, 2020 at 12:57 AM Alexander Duyck
> > > <alexander.duyck@gmail.com> wrote:
> > > >
> > > > From: Alexander Duyck <alexanderduyck@fb.com>
> > > >
> > > > Update tcpbpf_user.c to make use of the BPF skeleton. Doing this we can
> > > > simplify test_tcpbpf_user and reduce the overhead involved in setting up
> > > > the test.
> > > >
> > > > Signed-off-by: Alexander Duyck <alexanderduyck@fb.com>
> > > > ---
> > >
> > > Few suggestions below, but overall looks good:
> > >
> > > Acked-by: Andrii Nakryiko <andrii@kernel.org>
> > >
> > > >  .../testing/selftests/bpf/prog_tests/tcpbpf_user.c |   48 +++++++++-----------
> > > >  1 file changed, 21 insertions(+), 27 deletions(-)
> > > >
> > > > diff --git a/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> > > > index 4e1190894e1e..7e92c37976ac 100644
> > > > --- a/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> > > > +++ b/tools/testing/selftests/bpf/prog_tests/tcpbpf_user.c
> > > > @@ -5,6 +5,7 @@
> > > >
> > > >  #include "test_tcpbpf.h"
> > > >  #include "cgroup_helpers.h"
> > > > +#include "test_tcpbpf_kern.skel.h"
> > > >
> > > >  #define LO_ADDR6 "::1"
> > > >  #define CG_NAME "/tcpbpf-user-test"
> > > > @@ -162,39 +163,32 @@ static void run_test(int map_fd, int sock_map_fd)
> > > >
> > > >  void test_tcpbpf_user(void)
> > > >  {
> > > > -       const char *file = "test_tcpbpf_kern.o";
> > > > -       int prog_fd, map_fd, sock_map_fd;
> > > > -       struct bpf_object *obj;
> > > > +       struct test_tcpbpf_kern *skel;
> > > > +       int map_fd, sock_map_fd;
> > > > +       struct bpf_link *link;
> > > >         int cg_fd = -1;
> > > > -       int rv;
> > > > -
> > > > -       cg_fd = cgroup_setup_and_join(CG_NAME);
> > > > -       if (CHECK_FAIL(cg_fd < 0))
> > > > -               goto err;
> > > >
> > > > -       if (CHECK_FAIL(bpf_prog_load(file, BPF_PROG_TYPE_SOCK_OPS, &obj, &prog_fd))) {
> > > > -               fprintf(stderr, "FAILED: load_bpf_file failed for: %s\n", file);
> > > > -               goto err;
> > > > -       }
> > > > +       skel = test_tcpbpf_kern__open_and_load();
> > > > +       if (CHECK(!skel, "open and load skel", "failed"))
> > > > +               return;
> > > >
> > > > -       rv = bpf_prog_attach(prog_fd, cg_fd, BPF_CGROUP_SOCK_OPS, 0);
> > > > -       if (CHECK_FAIL(rv)) {
> > > > -               fprintf(stderr, "FAILED: bpf_prog_attach: %d (%s)\n",
> > > > -                      errno, strerror(errno));
> > > > -               goto err;
> > > > -       }
> > > > +       cg_fd = test__join_cgroup(CG_NAME);
> > > > +       if (CHECK_FAIL(cg_fd < 0))
> > >
> > > please use either CHECK() or one of the newer ASSERT_xxx() macro (also
> > > defined in test_progs.h), CHECK_FAIL should be avoided in general.
> >
> > So the plan I had was to actually move over to the following:
> >         cg_fd = test__join_cgroup(CG_NAME);
> >         if (CHECK_FAIL(cg_fd < 0))
> >                 goto cleanup_skel;
> >
> > It still makes use of CHECK_FAIL but it looks like test__join_cgroup
> > already takes care of error messaging so using CHECK_FAIL in this case
> > makes more sense.
>
> CHECK (and ASSERT) leave a paper-trail in verbose test log, so it
> makes it easier to debug tests, if something fails. CHECK_FAIL is
> invisible, unless if fails. So CHECK_FAIL should be used only for
> things that are happening on the order of hundreds of instances per
> test, or more.

Okay, well in that case I will go through and replace the CHECK_FAIL
calls with a CHECK.

> >
> > > > +               goto cleanup_skel;
> > > >
> > > > -       map_fd = bpf_find_map(__func__, obj, "global_map");
> > > > -       if (CHECK_FAIL(map_fd < 0))
> > > > -               goto err;
> > > > +       map_fd = bpf_map__fd(skel->maps.global_map);
> > > > +       sock_map_fd = bpf_map__fd(skel->maps.sockopt_results);
> > > >
> > > > -       sock_map_fd = bpf_find_map(__func__, obj, "sockopt_results");
> > > > -       if (CHECK_FAIL(sock_map_fd < 0))
> > > > -               goto err;
> > > > +       link = bpf_program__attach_cgroup(skel->progs.bpf_testcb, cg_fd);
> > >
> > > you can do skel->links.bpf_testcb = bpf_program__attach_cgroup() and
> > > skeleton's __destroy() call will take care of destroying the link
> >
> > Okay, I can look into using that. Actually this has me wondering if
> > there wouldn't be some way to make use of test_tcpbpf_kern__attach to
> > achieve this in some standard way. I'll get that sorted before I
> > submit v2.
>
> cgroup BPF programs can't be auto-attached, because they expect cgroup
> FD, which you can't provide at compilation time (declaratively) in BPF
> code. So it has to be manually attached. skeleton's attach() will just
> ignore such programs.

Yeah, I figured that out after reviewing the code further.

Thanks.

- Alex

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread

Thread overview: 15+ messages
2020-10-28  1:46 [bpf-next PATCH 0/4] selftests/bpf: Migrate test_tcpbpf_user to be a part of test_progs framework Alexander Duyck
2020-10-28  1:47 ` [bpf-next PATCH 1/4] selftests/bpf: Move test_tcppbf_user into test_progs Alexander Duyck
2020-10-29 23:27   ` Andrii Nakryiko
2020-10-30  0:39     ` Alexander Duyck
2020-10-28  1:47 ` [bpf-next PATCH 2/4] selftests/bpf: Drop python client/server in favor of threads Alexander Duyck
2020-10-29  1:51   ` Martin KaFai Lau
2020-10-29 16:58     ` Alexander Duyck
2020-10-29 18:12       ` Martin KaFai Lau
2020-10-29 19:51         ` Alexander Duyck
2020-10-28  1:47 ` [bpf-next PATCH 3/4] selftests/bpf: Replace EXPECT_EQ with ASSERT_EQ and refactor verify_results Alexander Duyck
2020-10-28  1:47 ` [bpf-next PATCH 4/4] selftests/bpf: Migrate tcpbpf_user.c to use BPF skeleton Alexander Duyck
2020-10-29 23:20   ` Andrii Nakryiko
2020-10-30  0:42     ` Alexander Duyck
2020-10-30  2:30       ` Andrii Nakryiko
2020-10-30 15:50         ` Alexander Duyck
