* [PATCH bpf-next 0/2] Fix missing synack in BPF cgroup_skb filters
@ 2023-06-12 19:16 Kui-Feng Lee
  2023-06-12 19:16 ` [PATCH bpf-next 1/2] net: bpf: Always call BPF cgroup filters for egress Kui-Feng Lee
  2023-06-12 19:16 ` [PATCH bpf-next 2/2] selftests/bpf: Verify that the cgroup_skb filters receive expected packets Kui-Feng Lee
  0 siblings, 2 replies; 9+ messages in thread
From: Kui-Feng Lee @ 2023-06-12 19:16 UTC (permalink / raw)
  To: bpf, ast, martin.lau, song, kernel-team, andrii, daniel; +Cc: Kui-Feng Lee

TCP SYN/ACK packets of connections from processes/sockets outside a
cgroup on the same host are not received by the cgroup's installed
cgroup_skb filters.

In our setup, two BPF cgroup_skb programs were attached to a cgroup
named "my_cgroup":

    SEC("cgroup_skb/ingress")
    int ingress(struct __sk_buff *skb)
    {
        /* .... process skb ... */
        return 1;
    }

    SEC("cgroup_skb/egress")
    int egress(struct __sk_buff *skb)
    {
        /* .... process skb ... */
        return 1;
    }

We discovered that when running the command "nc -6 -l 8000" in
"my_cgroup" and connecting to it from outside of "my_cgroup" with the
command "nc -6 localhost 8000", the egress filter did not detect the
SYN/ACK packet. However, we did observe the SYN/ACK packet at the
ingress when connecting from a socket in "my_cgroup" to a socket
outside of it.

We came across BPF_CGROUP_RUN_PROG_INET_EGRESS(). This macro is
responsible for calling the BPF programs attached to the egress hook
of a cgroup, and it skips these programs if the sending socket does
not own the sk_buff. Specifically, in our situation, the SYN/ACK
sk_buff is owned by a struct request_sock instance, but the sending
socket is the listener socket we use to accept incoming
connections. The request_sock is created to manage an incoming
connection.

It has been determined that checking the owner of a sk_buff against
the sending socket is not required. Removing this check will allow the
filters to receive SYN/ACK packets.

A new self-test has also been added to verify that cgroup_skb filters
receive all the control packets, including SYN, SYN/ACK, ACK, FIN,
and FIN/ACK.

Kui-Feng Lee (2):
  net: bpf: Always call BPF cgroup filters for egress.
  selftests/bpf: Verify that the cgroup_skb filters receive expected
    packets.

 include/linux/bpf-cgroup.h                    |   2 +-
 tools/testing/selftests/bpf/cgroup_helpers.c  |  12 +
 tools/testing/selftests/bpf/cgroup_helpers.h  |   1 +
 tools/testing/selftests/bpf/cgroup_tcp_skb.h  |  35 ++
 .../selftests/bpf/prog_tests/cgroup_tcp_skb.c | 363 ++++++++++++++++++
 .../selftests/bpf/progs/cgroup_tcp_skb.c      | 340 ++++++++++++++++
 6 files changed, 752 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/cgroup_tcp_skb.h
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_tcp_skb.c
 create mode 100644 tools/testing/selftests/bpf/progs/cgroup_tcp_skb.c

-- 
2.34.1


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH bpf-next 1/2] net: bpf: Always call BPF cgroup filters for egress.
  2023-06-12 19:16 [PATCH bpf-next 0/2] Fix missing synack in BPF cgroup_skb filters Kui-Feng Lee
@ 2023-06-12 19:16 ` Kui-Feng Lee
  2023-06-12 20:17   ` Alexei Starovoitov
  2023-06-12 19:16 ` [PATCH bpf-next 2/2] selftests/bpf: Verify that the cgroup_skb filters receive expected packets Kui-Feng Lee
  1 sibling, 1 reply; 9+ messages in thread
From: Kui-Feng Lee @ 2023-06-12 19:16 UTC (permalink / raw)
  To: bpf, ast, martin.lau, song, kernel-team, andrii, daniel; +Cc: Kui-Feng Lee

Always call BPF filters if CGROUP BPF is enabled for EGRESS without
checking skb->sk against sk.

The filters were called only if the sk_buff is owned by the sock that the
sk_buff is sent out through.  In other words, sk_buff::sk should point to
the sock through which it is sent out.  However, the filters would
miss SYNACK sk_buffs that are owned by a request_sock but sent through
the listening sock, i.e., the socket listening for incoming connections.
This is an unnecessary restriction.

Signed-off-by: Kui-Feng Lee <kuifeng@meta.com>
---
 include/linux/bpf-cgroup.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
index 57e9e109257e..e656da531f9f 100644
--- a/include/linux/bpf-cgroup.h
+++ b/include/linux/bpf-cgroup.h
@@ -199,7 +199,7 @@ static inline bool cgroup_bpf_sock_enabled(struct sock *sk,
 #define BPF_CGROUP_RUN_PROG_INET_EGRESS(sk, skb)			       \
 ({									       \
 	int __ret = 0;							       \
-	if (cgroup_bpf_enabled(CGROUP_INET_EGRESS) && sk && sk == skb->sk) { \
+	if (cgroup_bpf_enabled(CGROUP_INET_EGRESS) && sk) {		       \
 		typeof(sk) __sk = sk_to_full_sk(sk);			       \
 		if (sk_fullsock(__sk) &&				       \
 		    cgroup_bpf_sock_enabled(__sk, CGROUP_INET_EGRESS))	       \
-- 
2.34.1



* [PATCH bpf-next 2/2] selftests/bpf: Verify that the cgroup_skb filters receive expected packets.
  2023-06-12 19:16 [PATCH bpf-next 0/2] Fix missing synack in BPF cgroup_skb filters Kui-Feng Lee
  2023-06-12 19:16 ` [PATCH bpf-next 1/2] net: bpf: Always call BPF cgroup filters for egress Kui-Feng Lee
@ 2023-06-12 19:16 ` Kui-Feng Lee
  2023-06-12 20:31   ` Alexei Starovoitov
  1 sibling, 1 reply; 9+ messages in thread
From: Kui-Feng Lee @ 2023-06-12 19:16 UTC (permalink / raw)
  To: bpf, ast, martin.lau, song, kernel-team, andrii, daniel; +Cc: Kui-Feng Lee

This test case includes four scenarios:
1. Connect to the server from outside the cgroup and close the connection
   from outside the cgroup.
2. Connect to the server from outside the cgroup and close the connection
   from inside the cgroup.
3. Connect to the server from inside the cgroup and close the connection
   from outside the cgroup.
4. Connect to the server from inside the cgroup and close the connection
   from inside the cgroup.

The test verifies that the cgroup_skb/{egress, ingress} filters
receive the expected packets, including SYN, SYN/ACK, ACK, FIN, and
FIN/ACK.

Signed-off-by: Kui-Feng Lee <kuifeng@meta.com>
---
 tools/testing/selftests/bpf/cgroup_helpers.c  |  12 +
 tools/testing/selftests/bpf/cgroup_helpers.h  |   1 +
 tools/testing/selftests/bpf/cgroup_tcp_skb.h  |  35 ++
 .../selftests/bpf/prog_tests/cgroup_tcp_skb.c | 363 ++++++++++++++++++
 .../selftests/bpf/progs/cgroup_tcp_skb.c      | 340 ++++++++++++++++
 5 files changed, 751 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/cgroup_tcp_skb.h
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_tcp_skb.c
 create mode 100644 tools/testing/selftests/bpf/progs/cgroup_tcp_skb.c

diff --git a/tools/testing/selftests/bpf/cgroup_helpers.c b/tools/testing/selftests/bpf/cgroup_helpers.c
index 9e95b37a7dff..2caee8423ee0 100644
--- a/tools/testing/selftests/bpf/cgroup_helpers.c
+++ b/tools/testing/selftests/bpf/cgroup_helpers.c
@@ -277,6 +277,18 @@ int join_cgroup(const char *relative_path)
 	return join_cgroup_from_top(cgroup_path);
 }
 
+/**
+ * join_root_cgroup() - Join the root cgroup
+ *
+ * This function joins the root cgroup.
+ *
+ * Returns 0 on success, 1 on failure.
+ */
+int join_root_cgroup(void)
+{
+	return join_cgroup_from_top(CGROUP_MOUNT_PATH);
+}
+
 /**
  * join_parent_cgroup() - Join a cgroup in the parent process workdir
  * @relative_path: The cgroup path, relative to parent process workdir, to join
diff --git a/tools/testing/selftests/bpf/cgroup_helpers.h b/tools/testing/selftests/bpf/cgroup_helpers.h
index f099a166c94d..5c2cb9c8b546 100644
--- a/tools/testing/selftests/bpf/cgroup_helpers.h
+++ b/tools/testing/selftests/bpf/cgroup_helpers.h
@@ -22,6 +22,7 @@ void remove_cgroup(const char *relative_path);
 unsigned long long get_cgroup_id(const char *relative_path);
 
 int join_cgroup(const char *relative_path);
+int join_root_cgroup(void);
 int join_parent_cgroup(const char *relative_path);
 
 int setup_cgroup_environment(void);
diff --git a/tools/testing/selftests/bpf/cgroup_tcp_skb.h b/tools/testing/selftests/bpf/cgroup_tcp_skb.h
new file mode 100644
index 000000000000..1054b3633983
--- /dev/null
+++ b/tools/testing/selftests/bpf/cgroup_tcp_skb.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2023 Facebook */
+
+/* Define states of a socket to track the messages sent to and from the
+ * socket.
+ *
+ * These states are based on rfc9293 with some modifications to support
+ * tracking of messages sent out from a socket. For example, when a SYN is
+ * received, a new socket transitions to the SYN_RECV state defined in
+ * rfc9293. But we put it in the SYN_RECV_SENDING_SYN_ACK state, and when
+ * the SYN-ACK is sent out, it moves to SYN_RECV. With this modification,
+ * we can track the messages sent out from a socket.
+ */
+
+#ifndef __CGROUP_TCP_SKB_H__
+#define __CGROUP_TCP_SKB_H__
+
+enum {
+	INIT,
+	CLOSED,
+	SYN_SENT,
+	SYN_RECV_SENDING_SYN_ACK,
+	SYN_RECV,
+	ESTABLISHED,
+	FIN_WAIT1,
+	FIN_WAIT2,
+	CLOSE_WAIT_SENDING_ACK,
+	CLOSE_WAIT,
+	CLOSING,
+	LAST_ACK,
+	TIME_WAIT_SENDING_ACK,
+	TIME_WAIT,
+};
+
+#endif /* __CGROUP_TCP_SKB_H__ */
diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_tcp_skb.c b/tools/testing/selftests/bpf/prog_tests/cgroup_tcp_skb.c
new file mode 100644
index 000000000000..399e8f8199e0
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_tcp_skb.c
@@ -0,0 +1,363 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Facebook */
+#include <test_progs.h>
+#include <linux/in6.h>
+#include <sys/socket.h>
+#include <sched.h>
+#include <unistd.h>
+#include "cgroup_helpers.h"
+#include "testing_helpers.h"
+#include "cgroup_tcp_skb.skel.h"
+#include "cgroup_tcp_skb.h"
+
+#define CGROUP_TCP_SKB_PATH "/test_cgroup_tcp_skb"
+static __u32 duration;
+
+static int install_filters(int cgroup_fd,
+			   struct bpf_link **egress_link,
+			   struct bpf_link **ingress_link,
+			   struct bpf_program *egress_prog,
+			   struct bpf_program *ingress_prog,
+			   struct cgroup_tcp_skb *skel)
+{
+	/* Prepare filters */
+	skel->bss->g_sock_state = 0;
+	skel->bss->g_unexpected = 0;
+	*egress_link =
+		bpf_program__attach_cgroup(egress_prog,
+					   cgroup_fd);
+	if (!ASSERT_NEQ(*egress_link, NULL, "egress_link"))
+		return -1;
+	*ingress_link =
+		bpf_program__attach_cgroup(ingress_prog,
+					   cgroup_fd);
+	if (!ASSERT_NEQ(*ingress_link, NULL, "ingress_link"))
+		return -1;
+
+	return 0;
+}
+
+static void uninstall_filters(struct bpf_link **egress_link,
+			      struct bpf_link **ingress_link)
+{
+	bpf_link__destroy(*egress_link);
+	*egress_link = NULL;
+	bpf_link__destroy(*ingress_link);
+	*ingress_link = NULL;
+}
+
+static int create_client_sock_v6(void)
+{
+	int fd;
+
+	fd = socket(AF_INET6, SOCK_STREAM, 0);
+	if (fd < 0) {
+		perror("socket");
+		return -1;
+	}
+
+	return fd;
+}
+
+static int create_server_sock_v6(void)
+{
+	struct sockaddr_in6 addr = {
+		.sin6_family = AF_INET6,
+		.sin6_port = htons(0),
+		.sin6_addr = IN6ADDR_LOOPBACK_INIT,
+	};
+	int fd, err;
+
+	fd = socket(AF_INET6, SOCK_STREAM, 0);
+	if (fd < 0) {
+		perror("socket");
+		return -1;
+	}
+
+	err = bind(fd, (struct sockaddr *)&addr, sizeof(addr));
+	if (err < 0) {
+		perror("bind");
+		return -1;
+	}
+
+	err = listen(fd, 1);
+	if (err < 0) {
+		perror("listen");
+		return -1;
+	}
+
+	return fd;
+}
+
+static int get_sock_port_v6(int fd)
+{
+	struct sockaddr_in6 addr;
+	socklen_t len;
+	int err;
+
+	len = sizeof(addr);
+	err = getsockname(fd, (struct sockaddr *)&addr, &len);
+	if (err < 0) {
+		perror("getsockname");
+		return -1;
+	}
+
+	return ntohs(addr.sin6_port);
+}
+
+static int connect_client_server_v6(int client_fd, int listen_fd)
+{
+	struct sockaddr_in6 addr = {
+		.sin6_family = AF_INET6,
+		.sin6_addr = IN6ADDR_LOOPBACK_INIT,
+	};
+	int err;
+
+	addr.sin6_port = htons(get_sock_port_v6(listen_fd));
+	if (addr.sin6_port < 0)
+		return -1;
+
+	err = connect(client_fd, (struct sockaddr *)&addr, sizeof(addr));
+	if (err < 0) {
+		perror("connect");
+		return -1;
+	}
+
+	return 0;
+}
+
+/* Connect to the server in a cgroup from the outside of the cgroup. */
+static int talk_to_cgroup(int *client_fd, int *listen_fd, int *service_fd,
+			  struct cgroup_tcp_skb *skel)
+{
+	int err, cp;
+	char buf[5];
+
+	/* Create client & server socket */
+	err = join_root_cgroup();
+	if (CHECK(err, "join_root_cgroup", "failed: %d\n", err))
+		return -1;
+	*client_fd = create_client_sock_v6();
+	if (!ASSERT_GE(*client_fd, 0, "client_fd"))
+		return -1;
+	err = join_cgroup(CGROUP_TCP_SKB_PATH);
+	if (CHECK(err, "join_cgroup", "failed: %d\n", err))
+		return -1;
+	*listen_fd = create_server_sock_v6();
+	if (!ASSERT_GE(*listen_fd, 0, "listen_fd"))
+		return -1;
+	skel->bss->g_sock_port = get_sock_port_v6(*listen_fd);
+
+	/* Connect client to server */
+	err = connect_client_server_v6(*client_fd, *listen_fd);
+	if (CHECK(err, "connect_client_server_v6", "failed: %d\n", err))
+		return -1;
+	*service_fd = accept(*listen_fd, NULL, NULL);
+	if (!ASSERT_GE(*service_fd, 0, "service_fd"))
+		return -1;
+	err = join_root_cgroup();
+	if (CHECK(err, "join_root_cgroup", "failed: %d\n", err))
+		return -1;
+	cp = write(*client_fd, "hello", 5);
+	if (!ASSERT_EQ(cp, 5, "write"))
+		return -1;
+	cp = read(*service_fd, buf, 5);
+	if (!ASSERT_EQ(cp, 5, "read"))
+		return -1;
+
+	return 0;
+}
+
+/* Connect to the server out of a cgroup from inside the cgroup. */
+static int talk_to_outside(int *client_fd, int *listen_fd, int *service_fd,
+			   struct cgroup_tcp_skb *skel)
+
+{
+	int err, cp;
+	char buf[5];
+
+	/* Create client & server socket */
+	err = join_root_cgroup();
+	if (CHECK(err, "join_root_cgroup", "failed: %d\n", err))
+		return -1;
+	*listen_fd = create_server_sock_v6();
+	if (!ASSERT_GE(*listen_fd, 0, "listen_fd"))
+		return -1;
+	err = join_cgroup(CGROUP_TCP_SKB_PATH);
+	if (CHECK(err, "join_cgroup", "failed: %d\n", err))
+		return -1;
+	*client_fd = create_client_sock_v6();
+	if (!ASSERT_GE(*client_fd, 0, "client_fd"))
+		return -1;
+	err = join_root_cgroup();
+	if (CHECK(err, "join_root_cgroup", "failed: %d\n", err))
+		return -1;
+	skel->bss->g_sock_port = get_sock_port_v6(*listen_fd);
+
+	/* Connect client to server */
+	err = connect_client_server_v6(*client_fd, *listen_fd);
+	if (CHECK(err, "connect_client_server_v6", "failed: %d\n", err))
+		return -1;
+	*service_fd = accept(*listen_fd, NULL, NULL);
+	if (!ASSERT_GE(*service_fd, 0, "service_fd"))
+		return -1;
+	cp = write(*client_fd, "hello", 5);
+	if (!ASSERT_EQ(cp, 5, "write"))
+		return -1;
+	cp = read(*service_fd, buf, 5);
+	if (!ASSERT_EQ(cp, 5, "read"))
+		return -1;
+
+	return 0;
+}
+
+static int close_connection(int *closing_fd, int *peer_fd, int *listen_fd)
+{
+	int err;
+
+	/* Half shutdown to make sure the closing socket has a chance to
+	 * receive a FIN from the peer.
+	 */
+	err = shutdown(*closing_fd, SHUT_WR);
+	if (CHECK(err, "shutdown closing_fd", "failed: %d\n", err))
+		return -1;
+	usleep(100000);
+	err = close(*peer_fd);
+	if (CHECK(err, "close peer_fd", "failed: %d\n", err))
+		return -1;
+	*peer_fd = -1;
+	usleep(100000);
+	err = close(*closing_fd);
+	if (CHECK(err, "close closing_fd", "failed: %d\n", err))
+		return -1;
+	*closing_fd = -1;
+
+	close(*listen_fd);
+	*listen_fd = -1;
+
+	return 0;
+}
+
+/* This test case includes four scenarios:
+ * 1. Connect to the server from outside the cgroup and close the connection
+ *    from outside the cgroup.
+ * 2. Connect to the server from outside the cgroup and close the connection
+ *    from inside the cgroup.
+ * 3. Connect to the server from inside the cgroup and close the connection
+ *    from outside the cgroup.
+ * 4. Connect to the server from inside the cgroup and close the connection
+ *    from inside the cgroup.
+ *
+ * The test case is to verify that cgroup_skb/{egress,ingress} filters
+ * receive expected packets including SYN, SYN/ACK, ACK, FIN, and FIN/ACK.
+ */
+void test_cgroup_tcp_skb(void)
+{
+	struct bpf_link *ingress_link = NULL;
+	struct bpf_link *egress_link = NULL;
+	int client_fd = -1, listen_fd = -1;
+	struct cgroup_tcp_skb *skel;
+	int service_fd = -1;
+	int cgroup_fd = -1;
+	int err;
+
+	err = setup_cgroup_environment();
+	if (CHECK(err, "setup_cgroup_environment", "failed: %d\n", err))
+		return;
+
+	cgroup_fd = create_and_get_cgroup(CGROUP_TCP_SKB_PATH);
+	if (!ASSERT_GE(cgroup_fd, 0, "cgroup_fd"))
+		goto cleanup;
+
+	skel = cgroup_tcp_skb__open_and_load();
+	if (CHECK(!skel, "skel_open_load", "failed to open/load skeleton\n"))
+		return;
+
+	/* Scenario 1 */
+	err = install_filters(cgroup_fd, &egress_link, &ingress_link,
+			      skel->progs.server_egress,
+			      skel->progs.server_ingress,
+			      skel);
+	if (CHECK(err, "install_filters", "failed\n"))
+		goto cleanup;
+
+	err = talk_to_cgroup(&client_fd, &listen_fd, &service_fd, skel);
+	if (CHECK(err, "talk_to_cgroup", "failed\n"))
+		goto cleanup;
+
+	err = close_connection(&client_fd, &service_fd, &listen_fd);
+	if (CHECK(err, "close_connection", "failed\n"))
+		goto cleanup;
+
+	ASSERT_EQ(skel->bss->g_unexpected, 0, "g_unexpected");
+	ASSERT_EQ(skel->bss->g_sock_state, CLOSED, "g_sock_state");
+
+	uninstall_filters(&egress_link, &ingress_link);
+
+	/* Scenario 2 */
+	err = install_filters(cgroup_fd, &egress_link, &ingress_link,
+			      skel->progs.server_egress_srv,
+			      skel->progs.server_ingress_srv,
+			      skel);
+
+	err = talk_to_cgroup(&client_fd, &listen_fd, &service_fd, skel);
+	if (CHECK(err, "talk_to_cgroup", "failed\n"))
+		goto cleanup;
+
+	err = close_connection(&service_fd, &client_fd, &listen_fd);
+	if (CHECK(err, "close_connection", "failed\n"))
+		goto cleanup;
+
+	ASSERT_EQ(skel->bss->g_unexpected, 0, "g_unexpected");
+	ASSERT_EQ(skel->bss->g_sock_state, TIME_WAIT, "g_sock_state");
+
+	uninstall_filters(&egress_link, &ingress_link);
+
+	/* Scenario 3 */
+	err = install_filters(cgroup_fd, &egress_link, &ingress_link,
+			      skel->progs.client_egress_srv,
+			      skel->progs.client_ingress_srv,
+			      skel);
+
+	err = talk_to_outside(&client_fd, &listen_fd, &service_fd, skel);
+	if (CHECK(err, "talk_to_outside", "failed\n"))
+		goto cleanup;
+
+	err = close_connection(&service_fd, &client_fd, &listen_fd);
+	if (CHECK(err, "close_connection", "failed\n"))
+		goto cleanup;
+
+	ASSERT_EQ(skel->bss->g_unexpected, 0, "g_unexpected");
+	ASSERT_EQ(skel->bss->g_sock_state, CLOSED, "g_sock_state");
+
+	uninstall_filters(&egress_link, &ingress_link);
+
+	/* Scenario 4 */
+	err = install_filters(cgroup_fd, &egress_link, &ingress_link,
+			      skel->progs.client_egress,
+			      skel->progs.client_ingress,
+			      skel);
+
+	err = talk_to_outside(&client_fd, &listen_fd, &service_fd, skel);
+	if (CHECK(err, "talk_to_outside", "failed\n"))
+		goto cleanup;
+
+	err = close_connection(&client_fd, &service_fd, &listen_fd);
+	if (CHECK(err, "close_connection", "failed\n"))
+		goto cleanup;
+
+	ASSERT_EQ(skel->bss->g_unexpected, 0, "g_unexpected");
+	ASSERT_EQ(skel->bss->g_sock_state, TIME_WAIT, "g_sock_state");
+
+	uninstall_filters(&egress_link, &ingress_link);
+
+cleanup:
+	close(client_fd);
+	close(listen_fd);
+	close(service_fd);
+	close(cgroup_fd);
+	bpf_link__destroy(egress_link);
+	bpf_link__destroy(ingress_link);
+	cgroup_tcp_skb__destroy(skel);
+	cleanup_cgroup_environment();
+}
diff --git a/tools/testing/selftests/bpf/progs/cgroup_tcp_skb.c b/tools/testing/selftests/bpf/progs/cgroup_tcp_skb.c
new file mode 100644
index 000000000000..78bc6d6efbd1
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/cgroup_tcp_skb.c
@@ -0,0 +1,340 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Facebook */
+#include <linux/bpf.h>
+#include <bpf/bpf_endian.h>
+#include <bpf/bpf_helpers.h>
+
+#include <linux/if_ether.h>
+#include <linux/in.h>
+#include <linux/in6.h>
+#include <linux/ipv6.h>
+#include <linux/tcp.h>
+
+#include <sys/types.h>
+#include <sys/socket.h>
+
+#include "cgroup_tcp_skb.h"
+
+char _license[] SEC("license") = "GPL";
+
+__u16 g_sock_port = 0;
+__u32 g_sock_state = 0;
+int g_unexpected = 0;
+
+int needed_tcp_pkt(struct __sk_buff *skb, struct tcphdr *tcph)
+{
+	struct ipv6hdr ip6h;
+
+	if (skb->protocol != bpf_htons(ETH_P_IPV6))
+		return 0;
+	if (bpf_skb_load_bytes(skb, 0, &ip6h, sizeof(ip6h)))
+		return 0;
+
+	if (ip6h.nexthdr != IPPROTO_TCP)
+		return 0;
+
+	if (bpf_skb_load_bytes(skb, sizeof(ip6h), tcph, sizeof(*tcph)))
+		return 0;
+
+	if (tcph->source != bpf_htons(g_sock_port) &&
+	    tcph->dest != bpf_htons(g_sock_port))
+		return 0;
+
+	return 1;
+}
+
+/* Run accept() on a socket in the cgroup to receive a new connection. */
+#define EGRESS_ACCEPT							\
+	case SYN_RECV_SENDING_SYN_ACK:					\
+		if (tcph.fin || !tcph.syn || tcph.rst || !tcph.ack)	\
+			g_unexpected++;					\
+		else							\
+			g_sock_state = SYN_RECV;			\
+		break
+
+#define INGRESS_ACCEPT							\
+	case INIT:							\
+		if (!tcph.syn || tcph.fin || tcph.rst || tcph.ack)	\
+			g_unexpected++;					\
+		else							\
+			g_sock_state = SYN_RECV_SENDING_SYN_ACK;	\
+		break;							\
+	case SYN_RECV:							\
+		if (tcph.fin || tcph.syn || tcph.rst || !tcph.ack)	\
+			g_unexpected++;					\
+		else							\
+			g_sock_state = ESTABLISHED;			\
+		break
+
+/* Run connect() on a socket in the cgroup to start a new connection. */
+#define EGRESS_CONNECT							\
+	case INIT:							\
+		if (!tcph.syn || tcph.fin || tcph.rst || tcph.ack)	\
+			g_unexpected++;					\
+		else							\
+			g_sock_state = SYN_SENT;			\
+		break
+
+#define INGRESS_CONNECT							\
+	case SYN_SENT:							\
+		if (tcph.fin || !tcph.syn || tcph.rst || !tcph.ack)	\
+			g_unexpected++;					\
+		else							\
+			g_sock_state = ESTABLISHED;			\
+		break
+
+/* The connection is closed by the peer outside the cgroup. */
+#define EGRESS_CLOSE_REMOTE						\
+	case ESTABLISHED:						\
+		break;							\
+	case CLOSE_WAIT_SENDING_ACK:					\
+		if (tcph.fin || tcph.syn || tcph.rst || !tcph.ack)	\
+			g_unexpected++;					\
+		else							\
+			g_sock_state = CLOSE_WAIT;			\
+		break;							\
+	case CLOSE_WAIT:						\
+		if (!tcph.fin)						\
+			g_unexpected++;					\
+		else							\
+			g_sock_state = LAST_ACK;			\
+		break
+
+#define INGRESS_CLOSE_REMOTE						\
+	case ESTABLISHED:						\
+		if (tcph.fin)						\
+			g_sock_state = CLOSE_WAIT_SENDING_ACK;		\
+		break;							\
+	case LAST_ACK:							\
+		if (tcph.fin || tcph.syn || tcph.rst || !tcph.ack)	\
+			g_unexpected++;					\
+		else							\
+			g_sock_state = CLOSED;				\
+		break
+
+/* The connection is closed by the endpoint inside the cgroup. */
+#define EGRESS_CLOSE_LOCAL						\
+	case ESTABLISHED:						\
+		if (tcph.fin)						\
+			g_sock_state = FIN_WAIT1;			\
+		break;							\
+	case TIME_WAIT_SENDING_ACK:					\
+		if (tcph.fin || tcph.syn || tcph.rst || !tcph.ack)	\
+			g_unexpected++;					\
+		else							\
+			g_sock_state = TIME_WAIT;			\
+		break
+
+#define INGRESS_CLOSE_LOCAL						\
+	case ESTABLISHED:						\
+		break;							\
+	case FIN_WAIT1:							\
+		if (tcph.fin || tcph.syn || tcph.rst || !tcph.ack)	\
+			g_unexpected++;					\
+		else							\
+			g_sock_state = FIN_WAIT2;			\
+		break;							\
+	case FIN_WAIT2:							\
+		if (!tcph.fin || tcph.syn || tcph.rst || !tcph.ack)	\
+			g_unexpected++;					\
+		else							\
+			g_sock_state = TIME_WAIT_SENDING_ACK;		\
+		break
+
+/* Check the types of outgoing packets of a server socket to make sure they
+ * are consistent with the state of the server socket.
+ *
+ * The connection is closed by the client side.
+ */
+SEC("cgroup_skb/egress")
+int server_egress(struct __sk_buff *skb)
+{
+	struct tcphdr tcph;
+
+	if (!needed_tcp_pkt(skb, &tcph))
+		return 1;
+
+	/* Egress of the server socket. */
+	switch (g_sock_state) {
+	EGRESS_ACCEPT;
+	EGRESS_CLOSE_REMOTE;
+	default:
+		g_unexpected++;
+		break;
+	}
+	return 1;
+}
+
+/* Check the types of incoming packets of a server socket to make sure they
+ * are consistent with the state of the server socket.
+ *
+ * The connection is closed by the client side.
+ */
+SEC("cgroup_skb/ingress")
+int server_ingress(struct __sk_buff *skb)
+{
+	struct tcphdr tcph;
+
+	if (!needed_tcp_pkt(skb, &tcph))
+		return 1;
+
+	/* Ingress of the server socket. */
+	switch (g_sock_state) {
+	INGRESS_ACCEPT;
+	INGRESS_CLOSE_REMOTE;
+	default:
+		g_unexpected++;
+		break;
+	}
+	return 1;
+}
+
+
+/* Check the types of outgoing packets of a server socket to make sure they
+ * are consistent with the state of the server socket.
+ *
+ * The connection is closed by the server side.
+ */
+SEC("cgroup_skb/egress")
+int server_egress_srv(struct __sk_buff *skb)
+{
+	struct tcphdr tcph;
+
+	if (!needed_tcp_pkt(skb, &tcph))
+		return 1;
+
+	/* Egress of the server socket. */
+	switch (g_sock_state) {
+	EGRESS_ACCEPT;
+	EGRESS_CLOSE_LOCAL;
+	default:
+		g_unexpected++;
+		break;
+	}
+	return 1;
+}
+
+/* Check the types of incoming packets of a server socket to make sure they
+ * are consistent with the state of the server socket.
+ *
+ * The connection is closed by the server side.
+ */
+SEC("cgroup_skb/ingress")
+int server_ingress_srv(struct __sk_buff *skb)
+{
+	struct tcphdr tcph;
+
+	if (!needed_tcp_pkt(skb, &tcph))
+		return 1;
+
+	/* Ingress of the server socket. */
+	switch (g_sock_state) {
+	INGRESS_ACCEPT;
+	INGRESS_CLOSE_LOCAL;
+	default:
+		g_unexpected++;
+		break;
+	}
+	return 1;
+}
+
+
+/* Check the types of outgoing packets of a client socket to make sure they
+ * are consistent with the state of the client socket.
+ *
+ * The connection is closed by the server side.
+ */
+SEC("cgroup_skb/egress")
+int client_egress_srv(struct __sk_buff *skb)
+{
+	struct tcphdr tcph;
+
+	if (!needed_tcp_pkt(skb, &tcph))
+		return 1;
+
+	/* Egress of the client socket. */
+	switch (g_sock_state) {
+	EGRESS_CONNECT;
+	EGRESS_CLOSE_REMOTE;
+	default:
+		g_unexpected++;
+		break;
+	}
+	return 1;
+}
+
+/* Check the types of incoming packets of a client socket to make sure they
+ * are consistent with the state of the client socket.
+ *
+ * The connection is closed by the server side.
+ */
+SEC("cgroup_skb/ingress")
+int client_ingress_srv(struct __sk_buff *skb)
+{
+	struct tcphdr tcph;
+
+	if (!needed_tcp_pkt(skb, &tcph))
+		return 1;
+
+	/* Ingress of the client socket. */
+	switch (g_sock_state) {
+	INGRESS_CONNECT;
+	INGRESS_CLOSE_REMOTE;
+	default:
+		g_unexpected++;
+		break;
+	}
+	return 1;
+}
+
+
+/* Check the types of outgoing packets of a client socket to make sure they
+ * are consistent with the state of the client socket.
+ *
+ * The connection is closed by the client side.
+ */
+SEC("cgroup_skb/egress")
+int client_egress(struct __sk_buff *skb)
+{
+	struct tcphdr tcph;
+
+	if (!needed_tcp_pkt(skb, &tcph))
+		return 1;
+
+	/* Egress of the client socket. */
+	switch (g_sock_state) {
+	EGRESS_CONNECT;
+	EGRESS_CLOSE_LOCAL;
+	default:
+		g_unexpected++;
+		break;
+	}
+	return 1;
+}
+
+/* Check the types of incoming packets of a client socket to make sure they
+ * are consistent with the state of the client socket.
+ *
+ * The connection is closed by the client side.
+ */
+SEC("cgroup_skb/ingress")
+int client_ingress(struct __sk_buff *skb)
+{
+	struct tcphdr tcph;
+
+	if (!needed_tcp_pkt(skb, &tcph))
+		return 1;
+
+	/* Ingress of the client socket. */
+	switch (g_sock_state) {
+	INGRESS_CONNECT;
+	INGRESS_CLOSE_LOCAL;
+	default:
+		g_unexpected++;
+		break;
+	}
+	return 1;
+}
+
+
+
-- 
2.34.1



* Re: [PATCH bpf-next 1/2] net: bpf: Always call BPF cgroup filters for egress.
  2023-06-12 19:16 ` [PATCH bpf-next 1/2] net: bpf: Always call BPF cgroup filters for egress Kui-Feng Lee
@ 2023-06-12 20:17   ` Alexei Starovoitov
  2023-06-12 21:49     ` Kui-Feng Lee
  0 siblings, 1 reply; 9+ messages in thread
From: Alexei Starovoitov @ 2023-06-12 20:17 UTC (permalink / raw)
  To: Kui-Feng Lee
  Cc: bpf, Alexei Starovoitov, Martin KaFai Lau, Song Liu, Kernel Team,
	Andrii Nakryiko, Daniel Borkmann, Kui-Feng Lee

On Mon, Jun 12, 2023 at 12:16 PM Kui-Feng Lee <thinker.li@gmail.com> wrote:
>
> Always call BPF filters if CGROUP BPF is enabled for EGRESS without
> checking skb->sk against sk.
>
> The filters were called only if the sk_buff is owned by the sock that the
> sk_buff is sent out through.  In other words, sk_buff::sk should point to

What is "sk_buff::sk" ? Did you mean skb->sk ?

> the sock through which it is sent out.  However, the filters would
> miss SYNACK sk_buffs that are owned by a request_sock but sent through
> the listening sock, i.e., the socket listening for incoming connections.
> This is an unnecessary restriction.
>
> Signed-off-by: Kui-Feng Lee <kuifeng@meta.com>
> ---
>  include/linux/bpf-cgroup.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
> index 57e9e109257e..e656da531f9f 100644
> --- a/include/linux/bpf-cgroup.h
> +++ b/include/linux/bpf-cgroup.h
> @@ -199,7 +199,7 @@ static inline bool cgroup_bpf_sock_enabled(struct sock *sk,
>  #define BPF_CGROUP_RUN_PROG_INET_EGRESS(sk, skb)                              \
>  ({                                                                            \
>         int __ret = 0;                                                         \
> -       if (cgroup_bpf_enabled(CGROUP_INET_EGRESS) && sk && sk == skb->sk) { \
> +       if (cgroup_bpf_enabled(CGROUP_INET_EGRESS) && sk) {


I did a bit of git-archeology.
That check was there since the beginning of cgroup-bpf and
came as a suggestion to use 'sk' instead of 'skb->sk':
https://lore.kernel.org/all/58193E9D.7040201@iogearbox.net/

Using sk is certainly correct. It looks to me that the check
was added just for "peace of mind".


* Re: [PATCH bpf-next 2/2] selftests/bpf: Verify that the cgroup_skb filters receive expected packets.
  2023-06-12 19:16 ` [PATCH bpf-next 2/2] selftests/bpf: Verify that the cgroup_skb filters receive expected packets Kui-Feng Lee
@ 2023-06-12 20:31   ` Alexei Starovoitov
  2023-06-12 21:56     ` Kui-Feng Lee
  0 siblings, 1 reply; 9+ messages in thread
From: Alexei Starovoitov @ 2023-06-12 20:31 UTC (permalink / raw)
  To: Kui-Feng Lee
  Cc: bpf, Alexei Starovoitov, Martin KaFai Lau, Song Liu, Kernel Team,
	Andrii Nakryiko, Daniel Borkmann, Kui-Feng Lee

On Mon, Jun 12, 2023 at 12:16 PM Kui-Feng Lee <thinker.li@gmail.com> wrote:
> +static int close_connection(int *closing_fd, int *peer_fd, int *listen_fd)
> +{
> +       int err;
> +
> +       /* Half shutdown to make sure the closing socket having a chance to
> +        * receive a FIN from the client.
> +        */
> +       err = shutdown(*closing_fd, SHUT_WR);
> +       if (CHECK(err, "shutdown closing_fd", "failed: %d\n", err))
> +               return -1;
> +       usleep(100000);
> +       err = close(*peer_fd);
> +       if (CHECK(err, "close peer_fd", "failed: %d\n", err))
> +               return -1;
> +       *peer_fd = -1;
> +       usleep(100000);
> +       err = close(*closing_fd);

usleep() won't guarantee it. The test will be flaky.
Can you make it reliable?

> +
> +/* Run accept() on a socket in the cgroup to receive a new connection. */
> +#define EGRESS_ACCEPT                                                  \
> +       case SYN_RECV_SENDING_SYN_ACK:                                  \
> +               if (tcph.fin || !tcph.syn || tcph.rst || !tcph.ack)     \
> +                       g_unexpected++;                                 \
> +               else                                                    \
> +                       g_sock_state = SYN_RECV;                        \
> +               break
> +
> +#define INGRESS_ACCEPT                                                 \
> +       case INIT:                                                      \
> +               if (!tcph.syn || tcph.fin || tcph.rst || tcph.ack)      \
> +                       g_unexpected++;                                 \
> +               else                                                    \
> +                       g_sock_state = SYN_RECV_SENDING_SYN_ACK;        \
> +               break;                                                  \
> +       case SYN_RECV:                                                  \
> +               if (tcph.fin || tcph.syn || tcph.rst || !tcph.ack)      \
> +                       g_unexpected++;                                 \
> +               else                                                    \
> +                       g_sock_state = ESTABLISHED;                     \
> +               break
> +

The macros make the code difficult to follow.
Could you do it with normal functions?
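For reference, one way the two macros could collapse into a plain function is to pass the current state and the observed TCP flags in and return the next state. This is a hypothetical sketch, not code from the patch: the `tcp_flags` struct, the `-1` "unexpected" return, and the merged ingress/egress handling are all illustrative choices.

```c
#include <stdbool.h>

/* States mirrored from the selftest's state machine. */
enum sock_state { INIT, SYN_RECV_SENDING_SYN_ACK, SYN_RECV, ESTABLISHED };

/* Flags pulled out of the observed TCP header (tcph in the patch). */
struct tcp_flags { bool syn, ack, fin, rst; };

/* Return the next state for the accept() path, or -1 for an
 * unexpected packet (where the macros bumped g_unexpected). */
static int next_state_accept(int state, struct tcp_flags f)
{
	switch (state) {
	case INIT: /* ingress: expect a bare SYN */
		return (f.syn && !f.ack && !f.fin && !f.rst) ?
			SYN_RECV_SENDING_SYN_ACK : -1;
	case SYN_RECV_SENDING_SYN_ACK: /* egress: expect SYN+ACK */
		return (f.syn && f.ack && !f.fin && !f.rst) ?
			SYN_RECV : -1;
	case SYN_RECV: /* ingress: expect the final ACK */
		return (f.ack && !f.syn && !f.fin && !f.rst) ?
			ESTABLISHED : -1;
	}
	return state; /* other states: leave unchanged */
}
```

The flag conditions are the negations of the "unexpected" tests in the macros above, so the behavior is the same while the control flow stays readable.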

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH bpf-next 1/2] net: bpf: Always call BPF cgroup filters for egress.
  2023-06-12 20:17   ` Alexei Starovoitov
@ 2023-06-12 21:49     ` Kui-Feng Lee
  0 siblings, 0 replies; 9+ messages in thread
From: Kui-Feng Lee @ 2023-06-12 21:49 UTC (permalink / raw)
  To: Alexei Starovoitov, Kui-Feng Lee
  Cc: bpf, Alexei Starovoitov, Martin KaFai Lau, Song Liu, Kernel Team,
	Andrii Nakryiko, Daniel Borkmann, Kui-Feng Lee



On 6/12/23 13:17, Alexei Starovoitov wrote:
> On Mon, Jun 12, 2023 at 12:16 PM Kui-Feng Lee <thinker.li@gmail.com> wrote:
>>
>> Always call BPF filters if CGROUP BPF is enabled for EGRESS without
>> checking skb->sk against sk.
>>
>> The filters were called only if the sk_buff is owned by the sock that the
>> sk_buff is sent out through.  In other words, sk_buff::sk should point to
> 
> What is "sk_buff::sk" ? Did you mean skb->sk ?

Yes!
> 
>> the sock that it is sending through its egress.  However, the filters would
>> miss SYNACK sk_buffs that are owned by a request_sock but sent through
>> the listening sock, that is, the socket listening for incoming
>> connections.  This is an unnecessary restriction.
>>
>> Signed-off-by: Kui-Feng Lee <kuifeng@meta.com>
>> ---
>>   include/linux/bpf-cgroup.h | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
>> index 57e9e109257e..e656da531f9f 100644
>> --- a/include/linux/bpf-cgroup.h
>> +++ b/include/linux/bpf-cgroup.h
>> @@ -199,7 +199,7 @@ static inline bool cgroup_bpf_sock_enabled(struct sock *sk,
>>   #define BPF_CGROUP_RUN_PROG_INET_EGRESS(sk, skb)                              \
>>   ({                                                                            \
>>          int __ret = 0;                                                         \
>> -       if (cgroup_bpf_enabled(CGROUP_INET_EGRESS) && sk && sk == skb->sk) { \
>> +       if (cgroup_bpf_enabled(CGROUP_INET_EGRESS) && sk) {                  \
> 
> 
> I did a bit of git-archeology.
> That check was there since the beginning of cgroup-bpf and
> came as a suggestion to use 'sk' instead of 'skb->sk':
> https://lore.kernel.org/all/58193E9D.7040201@iogearbox.net/
> 
> Using sk is certainly correct. It looks to me that the check
> was added just for "peace of mind".
> 
Good to know that.  Thank you for the confirmation.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH bpf-next 2/2] selftests/bpf: Verify that the cgroup_skb filters receive expected packets.
  2023-06-12 20:31   ` Alexei Starovoitov
@ 2023-06-12 21:56     ` Kui-Feng Lee
  2023-06-12 23:31       ` Andrii Nakryiko
  0 siblings, 1 reply; 9+ messages in thread
From: Kui-Feng Lee @ 2023-06-12 21:56 UTC (permalink / raw)
  To: Alexei Starovoitov, Kui-Feng Lee
  Cc: bpf, Alexei Starovoitov, Martin KaFai Lau, Song Liu, Kernel Team,
	Andrii Nakryiko, Daniel Borkmann, Kui-Feng Lee



On 6/12/23 13:31, Alexei Starovoitov wrote:
> On Mon, Jun 12, 2023 at 12:16 PM Kui-Feng Lee <thinker.li@gmail.com> wrote:
>> +static int close_connection(int *closing_fd, int *peer_fd, int *listen_fd)
>> +{
>> +       int err;
>> +
>> +       /* Half shutdown to make sure the closing socket has a chance to
>> +        * receive a FIN from the client.
>> +        */
>> +       err = shutdown(*closing_fd, SHUT_WR);
>> +       if (CHECK(err, "shutdown closing_fd", "failed: %d\n", err))
>> +               return -1;
>> +       usleep(100000);
>> +       err = close(*peer_fd);
>> +       if (CHECK(err, "close peer_fd", "failed: %d\n", err))
>> +               return -1;
>> +       *peer_fd = -1;
>> +       usleep(100000);
>> +       err = close(*closing_fd);
> 
> usleep() won't guarantee it. The test will be flaky.
> Can you make it reliable?
What if it checks a counter of packets going through the filter?
It can retry a few times, until the total wait gets too long, one
second for example.
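A retry loop along those lines might look like the following. This is a hypothetical sketch: `get_packet_count()` stands in for reading the counter the cgroup_skb filter bumps (a BPF map lookup in the real selftest) and is stubbed here so the sketch is self-contained.

```c
#include <stdbool.h>
#include <unistd.h>

static int g_stub_count;

/* Stub: in the real test this would be a bpf_map_lookup_elem() of a
 * counter incremented by the cgroup_skb program.  Here it pretends
 * one packet arrives per poll so the sketch terminates. */
static int get_packet_count(void)
{
	return ++g_stub_count;
}

/* Wait until at least @expected packets passed the filter, polling
 * every 10ms and giving up after 100 tries (~1 second total),
 * instead of a single fixed usleep(100000). */
static bool wait_for_packets(int expected)
{
	int i;

	for (i = 0; i < 100; i++) {
		if (get_packet_count() >= expected)
			return true;
		usleep(10000);
	}
	return false;
}
```

The test then fails deterministically (after the 1s budget) rather than racing a fixed sleep against packet delivery.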

> 
>> +
>> +/* Run accept() on a socket in the cgroup to receive a new connection. */
>> +#define EGRESS_ACCEPT                                                  \
>> +       case SYN_RECV_SENDING_SYN_ACK:                                  \
>> +               if (tcph.fin || !tcph.syn || tcph.rst || !tcph.ack)     \
>> +                       g_unexpected++;                                 \
>> +               else                                                    \
>> +                       g_sock_state = SYN_RECV;                        \
>> +               break
>> +
>> +#define INGRESS_ACCEPT                                                 \
>> +       case INIT:                                                      \
>> +               if (!tcph.syn || tcph.fin || tcph.rst || tcph.ack)      \
>> +                       g_unexpected++;                                 \
>> +               else                                                    \
>> +                       g_sock_state = SYN_RECV_SENDING_SYN_ACK;        \
>> +               break;                                                  \
>> +       case SYN_RECV:                                                  \
>> +               if (tcph.fin || tcph.syn || tcph.rst || !tcph.ack)      \
>> +                       g_unexpected++;                                 \
>> +               else                                                    \
>> +                       g_sock_state = ESTABLISHED;                     \
>> +               break
>> +
> 
> The macros make the code difficult to follow.
> Could you do it with normal functions?
> 
Sure!


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH bpf-next 2/2] selftests/bpf: Verify that the cgroup_skb filters receive expected packets.
  2023-06-12 21:56     ` Kui-Feng Lee
@ 2023-06-12 23:31       ` Andrii Nakryiko
  2023-06-13  2:36         ` Kui-Feng Lee
  0 siblings, 1 reply; 9+ messages in thread
From: Andrii Nakryiko @ 2023-06-12 23:31 UTC (permalink / raw)
  To: Kui-Feng Lee
  Cc: Alexei Starovoitov, Kui-Feng Lee, bpf, Alexei Starovoitov,
	Martin KaFai Lau, Song Liu, Kernel Team, Andrii Nakryiko,
	Daniel Borkmann, Kui-Feng Lee

On Mon, Jun 12, 2023 at 2:56 PM Kui-Feng Lee <sinquersw@gmail.com> wrote:
>
>
>
> On 6/12/23 13:31, Alexei Starovoitov wrote:
> > On Mon, Jun 12, 2023 at 12:16 PM Kui-Feng Lee <thinker.li@gmail.com> wrote:
> >> +static int close_connection(int *closing_fd, int *peer_fd, int *listen_fd)
> >> +{
> >> +       int err;
> >> +
> >> +       /* Half shutdown to make sure the closing socket has a chance to
> >> +        * receive a FIN from the client.
> >> +        */
> >> +       err = shutdown(*closing_fd, SHUT_WR);
> >> +       if (CHECK(err, "shutdown closing_fd", "failed: %d\n", err))
> >> +               return -1;
> >> +       usleep(100000);
> >> +       err = close(*peer_fd);
> >> +       if (CHECK(err, "close peer_fd", "failed: %d\n", err))
> >> +               return -1;
> >> +       *peer_fd = -1;
> >> +       usleep(100000);
> >> +       err = close(*closing_fd);
> >
> > usleep() won't guarantee it. The test will be flaky.
> > Can you make it reliable?
> What if it checks a counter of packets going through the filter?
> It can retry a few times, until the total wait gets too long, one
> second for example.
>
> >
> >> +
> >> +/* Run accept() on a socket in the cgroup to receive a new connection. */
> >> +#define EGRESS_ACCEPT                                                  \
> >> +       case SYN_RECV_SENDING_SYN_ACK:                                  \
> >> +               if (tcph.fin || !tcph.syn || tcph.rst || !tcph.ack)     \
> >> +                       g_unexpected++;                                 \
> >> +               else                                                    \
> >> +                       g_sock_state = SYN_RECV;                        \
> >> +               break
> >> +
> >> +#define INGRESS_ACCEPT                                                 \
> >> +       case INIT:                                                      \
> >> +               if (!tcph.syn || tcph.fin || tcph.rst || tcph.ack)      \
> >> +                       g_unexpected++;                                 \
> >> +               else                                                    \
> >> +                       g_sock_state = SYN_RECV_SENDING_SYN_ACK;        \
> >> +               break;                                                  \
> >> +       case SYN_RECV:                                                  \
> >> +               if (tcph.fin || tcph.syn || tcph.rst || !tcph.ack)      \
> >> +                       g_unexpected++;                                 \
> >> +               else                                                    \
> >> +                       g_sock_state = ESTABLISHED;                     \
> >> +               break
> >> +
> >
> > The macros make the code difficult to follow.
> > Could you do it with normal functions?
> >
> Sure!
>

please don't use CHECK() macros for new code, use ASSERT_xxx() family
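The conversion is mostly mechanical. A standalone sketch of the pattern, using a minimal stand-in for the `ASSERT_OK()` helper (the real macro in the selftests' test_progs.h also records the failure for the test runner, which is omitted here):

```c
#include <stdio.h>

/* Minimal stand-in for ASSERT_OK() so this compiles standalone.
 * Evaluates to true when res == 0, logging otherwise. */
#define ASSERT_OK(res, name)						\
	({								\
		long ___res = (long)(res);				\
		if (___res)						\
			fprintf(stderr, "%s failed: %ld\n",		\
				(name), ___res);			\
		___res == 0;						\
	})

static int shutdown_checked(int err)
{
	/* was: if (CHECK(err, "shutdown closing_fd", "failed: %d\n", err))
	 *              return -1;
	 * Note the inverted sense: ASSERT_OK() is true on success. */
	if (!ASSERT_OK(err, "shutdown closing_fd"))
		return -1;
	return 0;
}
```

The main thing to watch in the conversion is the inverted polarity: `CHECK()` is true on failure, while the `ASSERT_xxx()` helpers are true on success.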

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH bpf-next 2/2] selftests/bpf: Verify that the cgroup_skb filters receive expected packets.
  2023-06-12 23:31       ` Andrii Nakryiko
@ 2023-06-13  2:36         ` Kui-Feng Lee
  0 siblings, 0 replies; 9+ messages in thread
From: Kui-Feng Lee @ 2023-06-13  2:36 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Kui-Feng Lee, bpf, Alexei Starovoitov,
	Martin KaFai Lau, Song Liu, Kernel Team, Andrii Nakryiko,
	Daniel Borkmann, Kui-Feng Lee



On 6/12/23 16:31, Andrii Nakryiko wrote:
> On Mon, Jun 12, 2023 at 2:56 PM Kui-Feng Lee <sinquersw@gmail.com> wrote:
>>
>>
>>
>> On 6/12/23 13:31, Alexei Starovoitov wrote:
>>> On Mon, Jun 12, 2023 at 12:16 PM Kui-Feng Lee <thinker.li@gmail.com> wrote:
>>>> +static int close_connection(int *closing_fd, int *peer_fd, int *listen_fd)
>>>> +{
>>>> +       int err;
>>>> +
>>>> +       /* Half shutdown to make sure the closing socket has a chance to
>>>> +        * receive a FIN from the client.
>>>> +        */
>>>> +       err = shutdown(*closing_fd, SHUT_WR);
>>>> +       if (CHECK(err, "shutdown closing_fd", "failed: %d\n", err))
>>>> +               return -1;
>>>> +       usleep(100000);
>>>> +       err = close(*peer_fd);
>>>> +       if (CHECK(err, "close peer_fd", "failed: %d\n", err))
>>>> +               return -1;
>>>> +       *peer_fd = -1;
>>>> +       usleep(100000);
>>>> +       err = close(*closing_fd);
>>>
>>> usleep() won't guarantee it. The test will be flaky.
>>> Can you make it reliable?
>> What if it checks a counter of packets going through the filter?
>> It can retry a few times, until the total wait gets too long, one
>> second for example.
>>
>>>
>>>> +
>>>> +/* Run accept() on a socket in the cgroup to receive a new connection. */
>>>> +#define EGRESS_ACCEPT                                                  \
>>>> +       case SYN_RECV_SENDING_SYN_ACK:                                  \
>>>> +               if (tcph.fin || !tcph.syn || tcph.rst || !tcph.ack)     \
>>>> +                       g_unexpected++;                                 \
>>>> +               else                                                    \
>>>> +                       g_sock_state = SYN_RECV;                        \
>>>> +               break
>>>> +
>>>> +#define INGRESS_ACCEPT                                                 \
>>>> +       case INIT:                                                      \
>>>> +               if (!tcph.syn || tcph.fin || tcph.rst || tcph.ack)      \
>>>> +                       g_unexpected++;                                 \
>>>> +               else                                                    \
>>>> +                       g_sock_state = SYN_RECV_SENDING_SYN_ACK;        \
>>>> +               break;                                                  \
>>>> +       case SYN_RECV:                                                  \
>>>> +               if (tcph.fin || tcph.syn || tcph.rst || !tcph.ack)      \
>>>> +                       g_unexpected++;                                 \
>>>> +               else                                                    \
>>>> +                       g_sock_state = ESTABLISHED;                     \
>>>> +               break
>>>> +
>>>
>>> The macros make the code difficult to follow.
>>> Could you do it with normal functions?
>>>
>> Sure!
>>
> 
> please don't use CHECK() macros for new code, use ASSERT_xxx() family
Got it!


^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2023-06-13  2:36 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-06-12 19:16 [PATCH bpf-next 0/2] Fix missing synack in BPF cgroup_skb filters Kui-Feng Lee
2023-06-12 19:16 ` [PATCH bpf-next 1/2] net: bpf: Always call BPF cgroup filters for egress Kui-Feng Lee
2023-06-12 20:17   ` Alexei Starovoitov
2023-06-12 21:49     ` Kui-Feng Lee
2023-06-12 19:16 ` [PATCH bpf-next 2/2] selftests/bpf: Verify that the cgroup_skb filters receive expected packets Kui-Feng Lee
2023-06-12 20:31   ` Alexei Starovoitov
2023-06-12 21:56     ` Kui-Feng Lee
2023-06-12 23:31       ` Andrii Nakryiko
2023-06-13  2:36         ` Kui-Feng Lee
