* [PATCH mptcp-next v3 00/10] BPF packet scheduler
@ 2022-05-28 15:11 Geliang Tang
  2022-05-28 15:11 ` [PATCH mptcp-next v3 01/10] Squash to "mptcp: add struct mptcp_sched_ops" Geliang Tang
                   ` (9 more replies)
  0 siblings, 10 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-28 15:11 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

v3:
 - use the new BPF scheduler API (see the signature below)
 - add backup scheduler
 - add round-robin scheduler
 - check bytes_sent in 'ss' output
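
The updated API, introduced in patch 1, drops the reinject argument and
instead passes the reinject flag and the subflow array in struct
mptcp_sched_data:

 void (*get_subflow)(const struct mptcp_sock *msk,
                     struct mptcp_sched_data *data);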

v2:
- Use new BPF scheduler API:
 unsigned long (*get_subflow)(const struct mptcp_sock *msk, bool reinject,
                              struct mptcp_sched_data *data);

Geliang Tang (10):
  Squash to "mptcp: add struct mptcp_sched_ops"
  Squash to "mptcp: add sched in mptcp_sock"
  Squash to "mptcp: add get_subflow wrappers"
  Squash to "mptcp: add bpf_mptcp_sched_ops"
  Squash to "selftests/bpf: add bpf_first scheduler"
  Squash to "selftests/bpf: add bpf_first test"
  selftests/bpf: add bpf_backup scheduler
  selftests/bpf: add bpf_backup test
  selftests/bpf: add bpf_rr scheduler
  selftests/bpf: add bpf_rr test

 include/net/mptcp.h                           | 12 ++-
 net/mptcp/bpf.c                               | 36 ++++----
 net/mptcp/sched.c                             | 59 +++++++++---
 tools/testing/selftests/bpf/bpf_tcp_helpers.h | 21 ++++-
 .../testing/selftests/bpf/prog_tests/mptcp.c  | 89 ++++++++++++++++++-
 .../selftests/bpf/progs/mptcp_bpf_backup.c    | 43 +++++++++
 .../selftests/bpf/progs/mptcp_bpf_first.c     |  5 +-
 .../selftests/bpf/progs/mptcp_bpf_rr.c        | 46 ++++++++++
 8 files changed, 271 insertions(+), 40 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_backup.c
 create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c

-- 
2.34.1



* [PATCH mptcp-next v3 01/10] Squash to "mptcp: add struct mptcp_sched_ops"
  2022-05-28 15:11 [PATCH mptcp-next v3 00/10] BPF packet scheduler Geliang Tang
@ 2022-05-28 15:11 ` Geliang Tang
  2022-05-28 15:11 ` [PATCH mptcp-next v3 02/10] Squash to "mptcp: add sched in mptcp_sock" Geliang Tang
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-28 15:11 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

Use an array of struct mptcp_sched_subflow, each carrying an is_scheduled
flag, instead of the single sock pointer in struct mptcp_sched_data.

Please update the commit log:

'''
This patch defines struct mptcp_sched_ops, which has three struct members
(name, owner and list) and three function pointers (init, release and
get_subflow).

Add the scheduler registering, unregistering and finding functions to add,
delete and find a packet scheduler on the global list mptcp_sched_list.

The BPF scheduler function get_subflow() has a struct mptcp_sched_data
parameter, which contains a mptcp_sched_subflow array. The struct
mptcp_sched_subflow has a member is_scheduled, which will be set in the
MPTCP scheduler context when the scheduler picks this subflow to send
data.
'''
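
With this layout, a scheduler only needs to mark the chosen entries; for
example, the bpf_first scheduler (patch 5) becomes:

 void BPF_STRUCT_OPS(bpf_first_get_subflow, const struct mptcp_sock *msk,
 		    struct mptcp_sched_data *data)
 {
 	data->subflows[0].is_scheduled = 1;
 }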

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 include/net/mptcp.h                           | 12 +++++++++---
 tools/testing/selftests/bpf/bpf_tcp_helpers.h | 16 +++++++++++++---
 2 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/include/net/mptcp.h b/include/net/mptcp.h
index 6456ea26e4c7..8a53583a9745 100644
--- a/include/net/mptcp.h
+++ b/include/net/mptcp.h
@@ -97,14 +97,20 @@ struct mptcp_out_options {
 };
 
 #define MPTCP_SCHED_NAME_MAX	16
+#define MPTCP_SUBFLOWS_MAX	8
+
+struct mptcp_sched_subflow {
+	struct mptcp_subflow_context *context;
+	bool	is_scheduled;
+};
 
 struct mptcp_sched_data {
-	struct sock	*sock;
-	bool		call_again;
+	bool	reinject;
+	struct mptcp_sched_subflow subflows[MPTCP_SUBFLOWS_MAX];
 };
 
 struct mptcp_sched_ops {
-	void (*get_subflow)(const struct mptcp_sock *msk, bool reinject,
+	void (*get_subflow)(const struct mptcp_sock *msk,
 			    struct mptcp_sched_data *data);
 
 	char			name[MPTCP_SCHED_NAME_MAX];
diff --git a/tools/testing/selftests/bpf/bpf_tcp_helpers.h b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
index aca4e3c6ac48..91900f2f047a 100644
--- a/tools/testing/selftests/bpf/bpf_tcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
@@ -231,10 +231,20 @@ extern __u32 tcp_slow_start(struct tcp_sock *tp, __u32 acked) __ksym;
 extern void tcp_cong_avoid_ai(struct tcp_sock *tp, __u32 w, __u32 acked) __ksym;
 
 #define MPTCP_SCHED_NAME_MAX	16
+#define MPTCP_SUBFLOWS_MAX	8
+
+struct mptcp_subflow_context {
+	__u32	token;
+} __attribute__((preserve_access_index));
+
+struct mptcp_sched_subflow {
+	struct mptcp_subflow_context *context;
+	bool	is_scheduled;
+};
 
 struct mptcp_sched_data {
-	struct sock	*sock;
-	bool		call_again;
+	bool	reinject;
+	struct mptcp_sched_subflow subflows[MPTCP_SUBFLOWS_MAX];
 };
 
 struct mptcp_sched_ops {
@@ -243,7 +253,7 @@ struct mptcp_sched_ops {
 	void (*init)(const struct mptcp_sock *msk);
 	void (*release)(const struct mptcp_sock *msk);
 
-	void (*get_subflow)(const struct mptcp_sock *msk, bool reinject,
+	void (*get_subflow)(const struct mptcp_sock *msk,
 			    struct mptcp_sched_data *data);
 	void *owner;
 };
-- 
2.34.1



* [PATCH mptcp-next v3 02/10] Squash to "mptcp: add sched in mptcp_sock"
  2022-05-28 15:11 [PATCH mptcp-next v3 00/10] BPF packet scheduler Geliang Tang
  2022-05-28 15:11 ` [PATCH mptcp-next v3 01/10] Squash to "mptcp: add struct mptcp_sched_ops" Geliang Tang
@ 2022-05-28 15:11 ` Geliang Tang
  2022-05-28 15:11 ` [PATCH mptcp-next v3 03/10] Squash to "mptcp: add get_subflow wrappers" Geliang Tang
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-28 15:11 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

No need to export 'sched' in bpf_tcp_helpers.h; drop it.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 tools/testing/selftests/bpf/bpf_tcp_helpers.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/bpf_tcp_helpers.h b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
index 91900f2f047a..988dbeeac040 100644
--- a/tools/testing/selftests/bpf/bpf_tcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
@@ -263,7 +263,6 @@ struct mptcp_sock {
 
 	__u32		token;
 	struct sock	*first;
-	struct mptcp_sched_ops	*sched;
 	char		ca_name[TCP_CA_NAME_MAX];
 } __attribute__((preserve_access_index));
 
-- 
2.34.1



* [PATCH mptcp-next v3 03/10] Squash to "mptcp: add get_subflow wrappers"
  2022-05-28 15:11 [PATCH mptcp-next v3 00/10] BPF packet scheduler Geliang Tang
  2022-05-28 15:11 ` [PATCH mptcp-next v3 01/10] Squash to "mptcp: add struct mptcp_sched_ops" Geliang Tang
  2022-05-28 15:11 ` [PATCH mptcp-next v3 02/10] Squash to "mptcp: add sched in mptcp_sock" Geliang Tang
@ 2022-05-28 15:11 ` Geliang Tang
  2022-05-28 15:11 ` [PATCH mptcp-next v3 04/10] Squash to "mptcp: add bpf_mptcp_sched_ops" Geliang Tang
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-28 15:11 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

Please update the commit log:

'''
This patch defines two new wrappers, mptcp_sched_get_send() and
mptcp_sched_get_retrans(), which invoke get_subflow() of msk->sched.
Use them instead of calling mptcp_subflow_get_send() or
mptcp_subflow_get_retrans() directly.

Set the subflow pointers in the struct mptcp_sched_data array before
invoking get_subflow(), so that they can be used from get_subflow() in
the BPF context.

Check the is_scheduled flags to find out which subflow or subflows were
picked by the scheduler.
'''

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 net/mptcp/sched.c | 59 ++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 48 insertions(+), 11 deletions(-)

diff --git a/net/mptcp/sched.c b/net/mptcp/sched.c
index 3ceb721e6489..46396eed62d0 100644
--- a/net/mptcp/sched.c
+++ b/net/mptcp/sched.c
@@ -88,11 +88,28 @@ void mptcp_release_sched(struct mptcp_sock *msk)
 	bpf_module_put(sched, sched->owner);
 }
 
-static int mptcp_sched_data_init(struct mptcp_sock *msk,
+static int mptcp_sched_data_init(struct mptcp_sock *msk, bool reinject,
 				 struct mptcp_sched_data *data)
 {
-	data->sock = NULL;
-	data->call_again = 0;
+	struct mptcp_subflow_context *subflow;
+	int i = 0;
+
+	data->reinject = reinject;
+
+	mptcp_for_each_subflow(msk, subflow) {
+		if (i == MPTCP_SUBFLOWS_MAX) {
+			pr_warn_once("too many subflows");
+			break;
+		}
+		data->subflows[i].context = subflow;
+		data->subflows[i].is_scheduled = 0;
+		i++;
+	}
+
+	for (; i < MPTCP_SUBFLOWS_MAX; i++) {
+		data->subflows[i].context = NULL;
+		data->subflows[i].is_scheduled = 0;
+	}
 
 	return 0;
 }
@@ -100,6 +117,8 @@ static int mptcp_sched_data_init(struct mptcp_sock *msk,
 struct sock *mptcp_sched_get_send(struct mptcp_sock *msk)
 {
 	struct mptcp_sched_data data;
+	struct sock *ssk = NULL;
+	int i;
 
 	sock_owned_by_me((struct sock *)msk);
 
@@ -113,16 +132,26 @@ struct sock *mptcp_sched_get_send(struct mptcp_sock *msk)
 	if (!msk->sched)
 		return mptcp_subflow_get_send(msk);
 
-	mptcp_sched_data_init(msk, &data);
-	msk->sched->get_subflow(msk, false, &data);
+	mptcp_sched_data_init(msk, false, &data);
+	msk->sched->get_subflow(msk, &data);
 
-	msk->last_snd = data.sock;
-	return data.sock;
+	for (i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
+		if (data.subflows[i].is_scheduled &&
+		    data.subflows[i].context) {
+			ssk = data.subflows[i].context->tcp_sock;
+			msk->last_snd = ssk;
+			break;
+		}
+	}
+
+	return ssk;
 }
 
 struct sock *mptcp_sched_get_retrans(struct mptcp_sock *msk)
 {
 	struct mptcp_sched_data data;
+	struct sock *ssk = NULL;
+	int i;
 
 	sock_owned_by_me((const struct sock *)msk);
 
@@ -133,9 +162,17 @@ struct sock *mptcp_sched_get_retrans(struct mptcp_sock *msk)
 	if (!msk->sched)
 		return mptcp_subflow_get_retrans(msk);
 
-	mptcp_sched_data_init(msk, &data);
-	msk->sched->get_subflow(msk, true, &data);
+	mptcp_sched_data_init(msk, true, &data);
+	msk->sched->get_subflow(msk, &data);
+
+	for (i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
+		if (data.subflows[i].is_scheduled &&
+		    data.subflows[i].context) {
+			ssk = data.subflows[i].context->tcp_sock;
+			msk->last_snd = ssk;
+			break;
+		}
+	}
 
-	msk->last_snd = data.sock;
-	return data.sock;
+	return ssk;
 }
-- 
2.34.1



* [PATCH mptcp-next v3 04/10] Squash to "mptcp: add bpf_mptcp_sched_ops"
  2022-05-28 15:11 [PATCH mptcp-next v3 00/10] BPF packet scheduler Geliang Tang
                   ` (2 preceding siblings ...)
  2022-05-28 15:11 ` [PATCH mptcp-next v3 03/10] Squash to "mptcp: add get_subflow wrappers" Geliang Tang
@ 2022-05-28 15:11 ` Geliang Tang
  2022-05-28 15:11 ` [PATCH mptcp-next v3 05/10] Squash to "selftests/bpf: add bpf_first scheduler" Geliang Tang
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-28 15:11 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

Rework the write access check for mptcp_sched_data: drop the old
sock/call_again offsets and only allow writes to the is_scheduled member
of each subflow entry.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 net/mptcp/bpf.c | 36 ++++++++++++++++++------------------
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/net/mptcp/bpf.c b/net/mptcp/bpf.c
index 338146d173f4..4f4559d37f9e 100644
--- a/net/mptcp/bpf.c
+++ b/net/mptcp/bpf.c
@@ -33,6 +33,12 @@ bpf_mptcp_sched_get_func_proto(enum bpf_func_id func_id,
 	return bpf_base_func_proto(func_id);
 }
 
+static size_t subflow_offset(int i)
+{
+	return offsetof(struct mptcp_sched_data, subflows) +
+	       i * sizeof(struct mptcp_sched_subflow);
+}
+
 static int bpf_mptcp_sched_btf_struct_access(struct bpf_verifier_log *log,
 					     const struct btf *btf,
 					     const struct btf_type *t, int off,
@@ -40,36 +46,30 @@ static int bpf_mptcp_sched_btf_struct_access(struct bpf_verifier_log *log,
 					     u32 *next_btf_id,
 					     enum bpf_type_flag *flag)
 {
-	size_t end;
+	size_t start, end, soff;
+	int i;
 
-	if (atype == BPF_READ)
+	if (atype == BPF_READ) {
 		return btf_struct_access(log, btf, t, off, size, atype,
 					 next_btf_id, flag);
+	}
 
 	if (t != mptcp_sched_type) {
 		bpf_log(log, "only access to mptcp_sched_data is supported\n");
 		return -EACCES;
 	}
 
-	switch (off) {
-	case offsetof(struct mptcp_sched_data, sock):
-		end = offsetofend(struct mptcp_sched_data, sock);
-		break;
-	case offsetof(struct mptcp_sched_data, call_again):
-		end = offsetofend(struct mptcp_sched_data, call_again);
-		break;
-	default:
-		bpf_log(log, "no write support to mptcp_sched_data at off %d\n", off);
-		return -EACCES;
-	}
+	start = offsetof(struct mptcp_sched_subflow, is_scheduled);
+	end = offsetofend(struct mptcp_sched_subflow, is_scheduled);
 
-	if (off + size > end) {
-		bpf_log(log, "access beyond mptcp_sched_data at off %u size %u ended at %zu",
-			off, size, end);
-		return -EACCES;
+	for (i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
+		soff = subflow_offset(i);
+		if (off == soff + start && off + size <= soff + end)
+			return NOT_INIT; /* offsets match up with is_scheduled */
 	}
 
-	return NOT_INIT;
+	bpf_log(log, "no write support to mptcp_sched_data at off %d\n", off);
+	return -EACCES;
 }
 
 static const struct bpf_verifier_ops bpf_mptcp_sched_verifier_ops = {
-- 
2.34.1



* [PATCH mptcp-next v3 05/10] Squash to "selftests/bpf: add bpf_first scheduler"
  2022-05-28 15:11 [PATCH mptcp-next v3 00/10] BPF packet scheduler Geliang Tang
                   ` (3 preceding siblings ...)
  2022-05-28 15:11 ` [PATCH mptcp-next v3 04/10] Squash to "mptcp: add bpf_mptcp_sched_ops" Geliang Tang
@ 2022-05-28 15:11 ` Geliang Tang
  2022-05-28 15:11 ` [PATCH mptcp-next v3 06/10] Squash to "selftests/bpf: add bpf_first test" Geliang Tang
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-28 15:11 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

Use the new get_subflow() API.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 tools/testing/selftests/bpf/progs/mptcp_bpf_first.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
index fd67b5f42964..0baacd8b6426 100644
--- a/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
@@ -17,10 +17,9 @@ void BPF_PROG(mptcp_sched_first_release, const struct mptcp_sock *msk)
 }
 
 void BPF_STRUCT_OPS(bpf_first_get_subflow, const struct mptcp_sock *msk,
-		    bool reinject, struct mptcp_sched_data *data)
+		    struct mptcp_sched_data *data)
 {
-	data->sock = msk->first;
-	data->call_again = 0;
+	data->subflows[0].is_scheduled = 1;
 }
 
 SEC(".struct_ops")
-- 
2.34.1



* [PATCH mptcp-next v3 06/10] Squash to "selftests/bpf: add bpf_first test"
  2022-05-28 15:11 [PATCH mptcp-next v3 00/10] BPF packet scheduler Geliang Tang
                   ` (4 preceding siblings ...)
  2022-05-28 15:11 ` [PATCH mptcp-next v3 05/10] Squash to "selftests/bpf: add bpf_first scheduler" Geliang Tang
@ 2022-05-28 15:11 ` Geliang Tang
  2022-05-28 15:11 ` [PATCH mptcp-next v3 07/10] selftests/bpf: add bpf_backup scheduler Geliang Tang
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-28 15:11 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

Add two helpers, add_veth() and cleanup().

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 .../testing/selftests/bpf/prog_tests/mptcp.c  | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
index 8e9764275b07..eaea4105728d 100644
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
@@ -250,6 +250,20 @@ static void send_data(int lfd, int fd)
 	      PTR_ERR(thread_ret));
 }
 
+static void add_veth(void)
+{
+	system("ip link add veth1 type veth");
+	system("ip addr add 10.0.1.1/24 dev veth1");
+	system("ip link set veth1 up");
+}
+
+static void cleanup(void)
+{
+	system("sysctl -qw net.mptcp.scheduler=default");
+	system("ip mptcp endpoint flush");
+	system("ip link del veth1");
+}
+
 static void test_first(void)
 {
 	struct mptcp_bpf_first *first_skel;
@@ -266,15 +280,18 @@ static void test_first(void)
 		return;
 	}
 
+	add_veth();
+	system("ip mptcp endpoint add 10.0.1.1 subflow");
 	system("sysctl -qw net.mptcp.scheduler=bpf_first");
 	server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
 	client_fd = connect_to_fd(server_fd, 0);
 
 	send_data(server_fd, client_fd);
+	ASSERT_GT(system("ss -MOenita | grep '10.0.1.1' | grep 'bytes_sent:'"), 0, "ss");
 
 	close(client_fd);
 	close(server_fd);
-	system("sysctl -qw net.mptcp.scheduler=default");
+	cleanup();
 	bpf_link__destroy(link);
 	mptcp_bpf_first__destroy(first_skel);
 }
-- 
2.34.1



* [PATCH mptcp-next v3 07/10] selftests/bpf: add bpf_backup scheduler
  2022-05-28 15:11 [PATCH mptcp-next v3 00/10] BPF packet scheduler Geliang Tang
                   ` (5 preceding siblings ...)
  2022-05-28 15:11 ` [PATCH mptcp-next v3 06/10] Squash to "selftests/bpf: add bpf_first test" Geliang Tang
@ 2022-05-28 15:11 ` Geliang Tang
  2022-05-28 15:11 ` [PATCH mptcp-next v3 08/10] selftests/bpf: add bpf_backup test Geliang Tang
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-28 15:11 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch implements a test scheduler for the backup flag, named
bpf_backup, which picks the first non-backup subflow to send data, or the
first subflow if they are all backup ones.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 tools/testing/selftests/bpf/bpf_tcp_helpers.h |  2 +
 .../selftests/bpf/progs/mptcp_bpf_backup.c    | 43 +++++++++++++++++++
 2 files changed, 45 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_backup.c

diff --git a/tools/testing/selftests/bpf/bpf_tcp_helpers.h b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
index 988dbeeac040..1b9dd8865ae2 100644
--- a/tools/testing/selftests/bpf/bpf_tcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
@@ -235,6 +235,8 @@ extern void tcp_cong_avoid_ai(struct tcp_sock *tp, __u32 w, __u32 acked) __ksym;
 
 struct mptcp_subflow_context {
 	__u32	token;
+	__u32	padding : 12,
+		backup : 1;
 } __attribute__((preserve_access_index));
 
 struct mptcp_sched_subflow {
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_backup.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_backup.c
new file mode 100644
index 000000000000..4f394e971c03
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_backup.c
@@ -0,0 +1,43 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022, SUSE. */
+
+#include <linux/bpf.h>
+#include "bpf_tcp_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+SEC("struct_ops/mptcp_sched_backup_init")
+void BPF_PROG(mptcp_sched_backup_init, const struct mptcp_sock *msk)
+{
+}
+
+SEC("struct_ops/mptcp_sched_backup_release")
+void BPF_PROG(mptcp_sched_backup_release, const struct mptcp_sock *msk)
+{
+}
+
+void BPF_STRUCT_OPS(bpf_backup_get_subflow, const struct mptcp_sock *msk,
+		    struct mptcp_sched_data *data)
+{
+	int nr = 0;
+
+	for (int i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
+		if (!data->subflows[i].context)
+			break;
+
+		if (!data->subflows[i].context->backup) {
+			nr = i;
+			break;
+		}
+	}
+
+	data->subflows[nr].is_scheduled = 1;
+}
+
+SEC(".struct_ops")
+struct mptcp_sched_ops backup = {
+	.init		= (void *)mptcp_sched_backup_init,
+	.release	= (void *)mptcp_sched_backup_release,
+	.get_subflow	= (void *)bpf_backup_get_subflow,
+	.name		= "bpf_backup",
+};
-- 
2.34.1



* [PATCH mptcp-next v3 08/10] selftests/bpf: add bpf_backup test
  2022-05-28 15:11 [PATCH mptcp-next v3 00/10] BPF packet scheduler Geliang Tang
                   ` (6 preceding siblings ...)
  2022-05-28 15:11 ` [PATCH mptcp-next v3 07/10] selftests/bpf: add bpf_backup scheduler Geliang Tang
@ 2022-05-28 15:11 ` Geliang Tang
  2022-05-28 15:11 ` [PATCH mptcp-next v3 09/10] selftests/bpf: add bpf_rr scheduler Geliang Tang
  2022-05-28 15:11 ` [PATCH mptcp-next v3 10/10] selftests/bpf: add bpf_rr test Geliang Tang
  9 siblings, 0 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-28 15:11 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch adds the backup BPF MPTCP scheduler test. Use sysctl to set
net.mptcp.scheduler to this scheduler. Add a veth net device to simulate
the multiple-address case. Use the 'ip mptcp endpoint' command to add the
new endpoint to the PM netlink with the backup flag.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 .../testing/selftests/bpf/prog_tests/mptcp.c  | 35 +++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
index eaea4105728d..1f92d251a9ba 100644
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
@@ -7,6 +7,7 @@
 #include "network_helpers.h"
 #include "mptcp_sock.skel.h"
 #include "mptcp_bpf_first.skel.h"
+#include "mptcp_bpf_backup.skel.h"
 
 #ifndef TCP_CA_NAME_MAX
 #define TCP_CA_NAME_MAX	16
@@ -296,10 +297,44 @@ static void test_first(void)
 	mptcp_bpf_first__destroy(first_skel);
 }
 
+static void test_backup(void)
+{
+	struct mptcp_bpf_backup *backup_skel;
+	int server_fd, client_fd;
+	struct bpf_link *link;
+
+	backup_skel = mptcp_bpf_backup__open_and_load();
+	if (!ASSERT_OK_PTR(backup_skel, "bpf_backup__open_and_load"))
+		return;
+
+	link = bpf_map__attach_struct_ops(backup_skel->maps.backup);
+	if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops")) {
+		mptcp_bpf_backup__destroy(backup_skel);
+		return;
+	}
+
+	add_veth();
+	system("ip mptcp endpoint add 10.0.1.1 subflow backup");
+	system("sysctl -qw net.mptcp.scheduler=bpf_backup");
+	server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
+	client_fd = connect_to_fd(server_fd, 0);
+
+	send_data(server_fd, client_fd);
+	ASSERT_GT(system("ss -MOenita | grep '10.0.1.1' | grep 'bytes_sent:'"), 0, "ss");
+
+	close(client_fd);
+	close(server_fd);
+	cleanup();
+	bpf_link__destroy(link);
+	mptcp_bpf_backup__destroy(backup_skel);
+}
+
 void test_mptcp(void)
 {
 	if (test__start_subtest("base"))
 		test_base();
 	if (test__start_subtest("first"))
 		test_first();
+	if (test__start_subtest("backup"))
+		test_backup();
 }
-- 
2.34.1



* [PATCH mptcp-next v3 09/10] selftests/bpf: add bpf_rr scheduler
  2022-05-28 15:11 [PATCH mptcp-next v3 00/10] BPF packet scheduler Geliang Tang
                   ` (7 preceding siblings ...)
  2022-05-28 15:11 ` [PATCH mptcp-next v3 08/10] selftests/bpf: add bpf_backup test Geliang Tang
@ 2022-05-28 15:11 ` Geliang Tang
  2022-05-28 15:11 ` [PATCH mptcp-next v3 10/10] selftests/bpf: add bpf_rr test Geliang Tang
  9 siblings, 0 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-28 15:11 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch implements the round-robin BPF MPTCP scheduler, named bpf_rr,
which always picks the next available subflow to send data. If no such
subflow is available, it picks the first one.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 tools/testing/selftests/bpf/bpf_tcp_helpers.h |  2 +
 .../selftests/bpf/progs/mptcp_bpf_rr.c        | 46 +++++++++++++++++++
 2 files changed, 48 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c

diff --git a/tools/testing/selftests/bpf/bpf_tcp_helpers.h b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
index 1b9dd8865ae2..480be2ea7d59 100644
--- a/tools/testing/selftests/bpf/bpf_tcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
@@ -237,6 +237,7 @@ struct mptcp_subflow_context {
 	__u32	token;
 	__u32	padding : 12,
 		backup : 1;
+	struct	sock *tcp_sock;	    /* tcp sk backpointer */
 } __attribute__((preserve_access_index));
 
 struct mptcp_sched_subflow {
@@ -263,6 +264,7 @@ struct mptcp_sched_ops {
 struct mptcp_sock {
 	struct inet_connection_sock	sk;
 
+	struct sock	*last_snd;
 	__u32		token;
 	struct sock	*first;
 	char		ca_name[TCP_CA_NAME_MAX];
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
new file mode 100644
index 000000000000..de0d893e08b4
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
@@ -0,0 +1,46 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022, SUSE. */
+
+#include <linux/bpf.h>
+#include "bpf_tcp_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+SEC("struct_ops/mptcp_sched_rr_init")
+void BPF_PROG(mptcp_sched_rr_init, const struct mptcp_sock *msk)
+{
+}
+
+SEC("struct_ops/mptcp_sched_rr_release")
+void BPF_PROG(mptcp_sched_rr_release, const struct mptcp_sock *msk)
+{
+}
+
+void BPF_STRUCT_OPS(bpf_rr_get_subflow, const struct mptcp_sock *msk,
+		    struct mptcp_sched_data *data)
+{
+	int nr = 0;
+
+	for (int i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
+		if (!msk->last_snd || !data->subflows[i].context)
+			break;
+
+		if (data->subflows[i].context->tcp_sock == msk->last_snd) {
+			if (i + 1 == MPTCP_SUBFLOWS_MAX || !data->subflows[i + 1].context)
+				break;
+
+			nr = i + 1;
+			break;
+		}
+	}
+
+	data->subflows[nr].is_scheduled = 1;
+}
+
+SEC(".struct_ops")
+struct mptcp_sched_ops rr = {
+	.init		= (void *)mptcp_sched_rr_init,
+	.release	= (void *)mptcp_sched_rr_release,
+	.get_subflow	= (void *)bpf_rr_get_subflow,
+	.name		= "bpf_rr",
+};
-- 
2.34.1



* [PATCH mptcp-next v3 10/10] selftests/bpf: add bpf_rr test
  2022-05-28 15:11 [PATCH mptcp-next v3 00/10] BPF packet scheduler Geliang Tang
                   ` (8 preceding siblings ...)
  2022-05-28 15:11 ` [PATCH mptcp-next v3 09/10] selftests/bpf: add bpf_rr scheduler Geliang Tang
@ 2022-05-28 15:11 ` Geliang Tang
  2022-05-28 15:21   ` selftests/bpf: add bpf_rr test: Build Failure MPTCP CI
  2022-05-28 17:05   ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
  9 siblings, 2 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-28 15:11 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch adds the round-robin BPF MPTCP scheduler test. Use sysctl to
set net.mptcp.scheduler to this scheduler. Add a veth net device to
simulate the multiple-address case. Use the 'ip mptcp endpoint' command
to add the new endpoint to the PM netlink.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 .../testing/selftests/bpf/prog_tests/mptcp.c  | 35 +++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
index 1f92d251a9ba..17536ea8dbab 100644
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
@@ -8,6 +8,7 @@
 #include "mptcp_sock.skel.h"
 #include "mptcp_bpf_first.skel.h"
 #include "mptcp_bpf_backup.skel.h"
+#include "mptcp_bpf_rr.skel.h"
 
 #ifndef TCP_CA_NAME_MAX
 #define TCP_CA_NAME_MAX	16
@@ -329,6 +330,38 @@ static void test_backup(void)
 	mptcp_bpf_backup__destroy(backup_skel);
 }
 
+static void test_rr(void)
+{
+	struct mptcp_bpf_rr *rr_skel;
+	int server_fd, client_fd;
+	struct bpf_link *link;
+
+	rr_skel = mptcp_bpf_rr__open_and_load();
+	if (!ASSERT_OK_PTR(rr_skel, "bpf_rr__open_and_load"))
+		return;
+
+	link = bpf_map__attach_struct_ops(rr_skel->maps.rr);
+	if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops")) {
+		mptcp_bpf_rr__destroy(rr_skel);
+		return;
+	}
+
+	add_veth();
+	system("ip mptcp endpoint add 10.0.1.1 subflow");
+	system("sysctl -qw net.mptcp.scheduler=bpf_rr");
+	server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
+	client_fd = connect_to_fd(server_fd, 0);
+
+	send_data(server_fd, client_fd);
+	ASSERT_OK(system("ss -MOenita | grep '10.0.1.1' | grep -q 'bytes_sent:'"), "ss");
+
+	close(client_fd);
+	close(server_fd);
+	cleanup();
+	bpf_link__destroy(link);
+	mptcp_bpf_rr__destroy(rr_skel);
+}
+
 void test_mptcp(void)
 {
 	if (test__start_subtest("base"))
@@ -337,4 +370,6 @@ void test_mptcp(void)
 		test_first();
 	if (test__start_subtest("backup"))
 		test_backup();
+	if (test__start_subtest("rr"))
+		test_rr();
 }
-- 
2.34.1



* Re: selftests/bpf: add bpf_rr test: Build Failure
  2022-05-28 15:11 ` [PATCH mptcp-next v3 10/10] selftests/bpf: add bpf_rr test Geliang Tang
@ 2022-05-28 15:21   ` MPTCP CI
  2022-05-28 17:05   ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
  1 sibling, 0 replies; 13+ messages in thread
From: MPTCP CI @ 2022-05-28 15:21 UTC (permalink / raw)
  To: Geliang Tang; +Cc: mptcp

Hi Geliang,

Thank you for your modifications, that's great!

But sadly, our CI spotted some issues with it when trying to build it.

You can find more details there:

  https://patchwork.kernel.org/project/mptcp/patch/1e2b27869e72af5e2928450ff6216f0d78d609da.1653750351.git.geliang.tang@suse.com/
  https://github.com/multipath-tcp/mptcp_net-next/actions/runs/2401426843

Status: failure
Initiator: MPTCPimporter
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/982d66320a49

Feel free to reply to this email if you cannot access logs, if you need
some support to fix the error, if this doesn't seem to be caused by your
modifications or if the error is a false positive one.

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


* Re: selftests/bpf: add bpf_rr test: Tests Results
  2022-05-28 15:11 ` [PATCH mptcp-next v3 10/10] selftests/bpf: add bpf_rr test Geliang Tang
  2022-05-28 15:21   ` selftests/bpf: add bpf_rr test: Build Failure MPTCP CI
@ 2022-05-28 17:05   ` MPTCP CI
  1 sibling, 0 replies; 13+ messages in thread
From: MPTCP CI @ 2022-05-28 17:05 UTC (permalink / raw)
  To: Geliang Tang; +Cc: mptcp

Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Unstable: 1 failed test(s): selftest_simult_flows 🔴:
  - Task: https://cirrus-ci.com/task/5604026279526400
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5604026279526400/summary/summary.txt

- KVM Validation: debug:
  - Unstable: 2 failed test(s): selftest_diag selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/6729926186369024
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6729926186369024/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/982d66320a49


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


