mptcp.lists.linux.dev archive mirror
* [PATCH mptcp-next v5 00/11] BPF packet scheduler
@ 2022-06-01  6:45 Geliang Tang
  2022-06-01  6:45 ` [PATCH mptcp-next v5 01/11] Squash to "mptcp: add struct mptcp_sched_ops" Geliang Tang
                   ` (10 more replies)
  0 siblings, 11 replies; 20+ messages in thread
From: Geliang Tang @ 2022-06-01  6:45 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

v5:
 - add bpf_mptcp_subflow_set_scheduled helper.
 - drop padding bits before backup, use BPF_CORE_READ_BITFIELD_PROBED()
   instead.
 - The new patch "mptcp: add bpf set scheduled helper" should be inserted
   between the commits "mptcp: add bpf_mptcp_sched_ops" and
   "selftests/bpf: add bpf_first scheduler"

v4:
 - merge "mptcp: move is_scheduled into mptcp_subflow_context"
 - rename bpf_backup to bpf_bkup
 - full patches of this series: https://github.com/geliangtang/mptcp_net-next

v3:
 - use the new BPF scheduler API
 - add backup scheduler
 - add round-robin scheduler
 - check bytes_sent of 'ss' output.

v2:
 - Use the new BPF scheduler API:
 unsigned long (*get_subflow)(const struct mptcp_sock *msk, bool reinject,
                              struct mptcp_sched_data *data);

Geliang Tang (11):
  Squash to "mptcp: add struct mptcp_sched_ops"
  Squash to "mptcp: add sched in mptcp_sock"
  Squash to "mptcp: add get_subflow wrappers"
  Squash to "mptcp: add bpf_mptcp_sched_ops"
  mptcp: add bpf set scheduled helper
  Squash to "selftests/bpf: add bpf_first scheduler"
  Squash to "selftests/bpf: add bpf_first test"
  selftests/bpf: add bpf_bkup scheduler
  selftests/bpf: add bpf_bkup test
  selftests/bpf: add bpf_rr scheduler
  selftests/bpf: add bpf_rr test

 include/net/mptcp.h                           |  7 +-
 net/mptcp/bpf.c                               | 40 ++++++---
 net/mptcp/protocol.h                          |  1 +
 net/mptcp/sched.c                             | 54 ++++++++---
 tools/testing/selftests/bpf/bpf_tcp_helpers.h | 16 +++-
 .../testing/selftests/bpf/prog_tests/mptcp.c  | 89 ++++++++++++++++++-
 .../selftests/bpf/progs/mptcp_bpf_bkup.c      | 43 +++++++++
 .../selftests/bpf/progs/mptcp_bpf_first.c     |  5 +-
 .../selftests/bpf/progs/mptcp_bpf_rr.c        | 46 ++++++++++
 9 files changed, 269 insertions(+), 32 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c
 create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c

-- 
2.34.1


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH mptcp-next v5 01/11] Squash to "mptcp: add struct mptcp_sched_ops"
  2022-06-01  6:45 [PATCH mptcp-next v5 00/11] BPF packet scheduler Geliang Tang
@ 2022-06-01  6:45 ` Geliang Tang
  2022-06-01  6:45 ` [PATCH mptcp-next v5 02/11] Squash to "mptcp: add sched in mptcp_sock" Geliang Tang
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Geliang Tang @ 2022-06-01  6:45 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

Use new BPF API.

Please update the commit log:

'''
This patch defines struct mptcp_sched_ops, which has three struct members
(name, owner and list) and three function pointers (init(), release() and
get_subflow()).

The scheduler function get_subflow() takes a struct mptcp_sched_data
parameter, which contains a reinject flag and an mptcp_subflow_context
array.

Add the scheduler registering, unregistering and finding functions to add,
delete and find a packet scheduler on the global list mptcp_sched_list.

Add a new member scheduled in struct mptcp_subflow_context, which will be
set in the MPTCP scheduler context when the scheduler picks this subflow
to send data.
'''
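
As an aside for readers of this archive, the register/find logic described
above can be sketched in plain userspace C. This is an editor's illustrative
model only, not the kernel code: the global list is simplified to a singly
linked list, and the function names/return values are stand-ins for the
kernel helpers built around mptcp_sched_list.

```c
#include <stddef.h>
#include <string.h>

#define MPTCP_SCHED_NAME_MAX	16

struct mptcp_sched_ops {
	char name[MPTCP_SCHED_NAME_MAX];
	struct mptcp_sched_ops *next;	/* stand-in for the kernel's list_head */
};

/* stand-in for the global mptcp_sched_list */
static struct mptcp_sched_ops *mptcp_sched_list;

/* find a registered scheduler by name, or return NULL */
static struct mptcp_sched_ops *mptcp_sched_find(const char *name)
{
	struct mptcp_sched_ops *sched;

	for (sched = mptcp_sched_list; sched; sched = sched->next) {
		if (!strcmp(sched->name, name))
			return sched;
	}
	return NULL;
}

/* register a scheduler; reject empty or duplicate names */
static int mptcp_register_scheduler(struct mptcp_sched_ops *sched)
{
	if (!sched->name[0] || mptcp_sched_find(sched->name))
		return -1;	/* the kernel returns -EINVAL/-EEXIST here */

	sched->next = mptcp_sched_list;
	mptcp_sched_list = sched;
	return 0;
}
```

A lookup failure simply returns NULL, which is what lets the wrappers fall
back to the built-in scheduler when no BPF scheduler is set.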

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 include/net/mptcp.h                           |  7 ++++---
 net/mptcp/protocol.h                          |  1 +
 tools/testing/selftests/bpf/bpf_tcp_helpers.h | 11 ++++++++---
 3 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/include/net/mptcp.h b/include/net/mptcp.h
index 6456ea26e4c7..7af7fd48acc7 100644
--- a/include/net/mptcp.h
+++ b/include/net/mptcp.h
@@ -97,14 +97,15 @@ struct mptcp_out_options {
 };
 
 #define MPTCP_SCHED_NAME_MAX	16
+#define MPTCP_SUBFLOWS_MAX	8
 
 struct mptcp_sched_data {
-	struct sock	*sock;
-	bool		call_again;
+	bool	reinject;
+	struct mptcp_subflow_context *contexts[MPTCP_SUBFLOWS_MAX];
 };
 
 struct mptcp_sched_ops {
-	void (*get_subflow)(const struct mptcp_sock *msk, bool reinject,
+	void (*get_subflow)(const struct mptcp_sock *msk,
 			    struct mptcp_sched_data *data);
 
 	char			name[MPTCP_SCHED_NAME_MAX];
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 8739794166d8..48c5261b7b15 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -469,6 +469,7 @@ struct mptcp_subflow_context {
 		valid_csum_seen : 1;        /* at least one csum validated */
 	enum mptcp_data_avail data_avail;
 	bool	mp_fail_response_expect;
+	bool	scheduled;
 	u32	remote_nonce;
 	u64	thmac;
 	u32	local_nonce;
diff --git a/tools/testing/selftests/bpf/bpf_tcp_helpers.h b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
index aca4e3c6ac48..8338c7b31f87 100644
--- a/tools/testing/selftests/bpf/bpf_tcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
@@ -231,10 +231,15 @@ extern __u32 tcp_slow_start(struct tcp_sock *tp, __u32 acked) __ksym;
 extern void tcp_cong_avoid_ai(struct tcp_sock *tp, __u32 w, __u32 acked) __ksym;
 
 #define MPTCP_SCHED_NAME_MAX	16
+#define MPTCP_SUBFLOWS_MAX	8
+
+struct mptcp_subflow_context {
+	struct	sock *tcp_sock;	    /* tcp sk backpointer */
+} __attribute__((preserve_access_index));
 
 struct mptcp_sched_data {
-	struct sock	*sock;
-	bool		call_again;
+	bool	reinject;
+	struct mptcp_subflow_context *contexts[MPTCP_SUBFLOWS_MAX];
 };
 
 struct mptcp_sched_ops {
@@ -243,7 +248,7 @@ struct mptcp_sched_ops {
 	void (*init)(const struct mptcp_sock *msk);
 	void (*release)(const struct mptcp_sock *msk);
 
-	void (*get_subflow)(const struct mptcp_sock *msk, bool reinject,
+	void (*get_subflow)(const struct mptcp_sock *msk,
 			    struct mptcp_sched_data *data);
 	void *owner;
 };
-- 
2.34.1



* [PATCH mptcp-next v5 02/11] Squash to "mptcp: add sched in mptcp_sock"
  2022-06-01  6:45 [PATCH mptcp-next v5 00/11] BPF packet scheduler Geliang Tang
  2022-06-01  6:45 ` [PATCH mptcp-next v5 01/11] Squash to "mptcp: add struct mptcp_sched_ops" Geliang Tang
@ 2022-06-01  6:45 ` Geliang Tang
  2022-06-01  6:45 ` [PATCH mptcp-next v5 03/11] Squash to "mptcp: add get_subflow wrappers" Geliang Tang
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Geliang Tang @ 2022-06-01  6:45 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

There is no need to export 'sched' in bpf_tcp_helpers.h, so drop it.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 tools/testing/selftests/bpf/bpf_tcp_helpers.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/bpf_tcp_helpers.h b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
index 8338c7b31f87..5f6a7fa2269b 100644
--- a/tools/testing/selftests/bpf/bpf_tcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
@@ -258,7 +258,6 @@ struct mptcp_sock {
 
 	__u32		token;
 	struct sock	*first;
-	struct mptcp_sched_ops	*sched;
 	char		ca_name[TCP_CA_NAME_MAX];
 } __attribute__((preserve_access_index));
 
-- 
2.34.1



* [PATCH mptcp-next v5 03/11] Squash to "mptcp: add get_subflow wrappers"
  2022-06-01  6:45 [PATCH mptcp-next v5 00/11] BPF packet scheduler Geliang Tang
  2022-06-01  6:45 ` [PATCH mptcp-next v5 01/11] Squash to "mptcp: add struct mptcp_sched_ops" Geliang Tang
  2022-06-01  6:45 ` [PATCH mptcp-next v5 02/11] Squash to "mptcp: add sched in mptcp_sock" Geliang Tang
@ 2022-06-01  6:45 ` Geliang Tang
  2022-06-01  6:45 ` [PATCH mptcp-next v5 04/11] Squash to "mptcp: add bpf_mptcp_sched_ops" Geliang Tang
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Geliang Tang @ 2022-06-01  6:45 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

Please update the commit log:

'''
This patch defines two new wrappers, mptcp_sched_get_send() and
mptcp_sched_get_retrans(), which invoke get_subflow() of msk->sched. Use
them instead of calling mptcp_subflow_get_send() or
mptcp_subflow_get_retrans() directly.

Set the subflow pointers array in struct mptcp_sched_data before invoking
get_subflow(), so that it can be used in get_subflow() in the BPF context.

Check the subflows' scheduled flags to see which subflow or subflows were
picked by the scheduler.
'''
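
The fill-then-scan contract between the wrappers and get_subflow() can be
modeled in userspace C as follows. This is an illustrative sketch only: the
struct names and fields are simplified stand-ins for the kernel's
mptcp_sched_data and mptcp_subflow_context, and the subflow list is a plain
array instead of mptcp_for_each_subflow().

```c
#include <stdbool.h>
#include <stddef.h>

#define MPTCP_SUBFLOWS_MAX	8

struct subflow_ctx {
	bool scheduled;
	int id;			/* stand-in for the subflow's tcp_sock */
};

struct sched_data {
	bool reinject;
	struct subflow_ctx *contexts[MPTCP_SUBFLOWS_MAX];
};

/* mirror of mptcp_sched_data_init(): fill the contexts array, clear
 * every scheduled flag, and NULL-terminate the tail of the array */
static void sched_data_init(struct subflow_ctx *subflows, int count,
			    bool reinject, struct sched_data *data)
{
	int i;

	data->reinject = reinject;
	for (i = 0; i < count && i < MPTCP_SUBFLOWS_MAX; i++) {
		subflows[i].scheduled = false;
		data->contexts[i] = &subflows[i];
	}
	for (; i < MPTCP_SUBFLOWS_MAX; i++)
		data->contexts[i] = NULL;
}

/* mirror of the scan in mptcp_sched_get_send(): return the first
 * context the scheduler marked as scheduled, or NULL if none */
static struct subflow_ctx *pick_scheduled(const struct sched_data *data)
{
	for (int i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
		if (data->contexts[i] && data->contexts[i]->scheduled)
			return data->contexts[i];
	}
	return NULL;
}
```

In the kernel the scheduler sets the flag between these two steps; here a
caller would set contexts[n]->scheduled by hand to play that role.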

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 net/mptcp/sched.c | 54 +++++++++++++++++++++++++++++++++++++----------
 1 file changed, 43 insertions(+), 11 deletions(-)

diff --git a/net/mptcp/sched.c b/net/mptcp/sched.c
index 3ceb721e6489..613b7005938c 100644
--- a/net/mptcp/sched.c
+++ b/net/mptcp/sched.c
@@ -88,11 +88,25 @@ void mptcp_release_sched(struct mptcp_sock *msk)
 	bpf_module_put(sched, sched->owner);
 }
 
-static int mptcp_sched_data_init(struct mptcp_sock *msk,
+static int mptcp_sched_data_init(struct mptcp_sock *msk, bool reinject,
 				 struct mptcp_sched_data *data)
 {
-	data->sock = NULL;
-	data->call_again = 0;
+	struct mptcp_subflow_context *subflow;
+	int i = 0;
+
+	data->reinject = reinject;
+
+	mptcp_for_each_subflow(msk, subflow) {
+		if (i == MPTCP_SUBFLOWS_MAX) {
+			pr_warn_once("too many subflows");
+			break;
+		}
+		WRITE_ONCE(subflow->scheduled, false);
+		data->contexts[i++] = subflow;
+	}
+
+	for (; i < MPTCP_SUBFLOWS_MAX; i++)
+		data->contexts[i] = NULL;
 
 	return 0;
 }
@@ -100,6 +114,8 @@ static int mptcp_sched_data_init(struct mptcp_sock *msk,
 struct sock *mptcp_sched_get_send(struct mptcp_sock *msk)
 {
 	struct mptcp_sched_data data;
+	struct sock *ssk = NULL;
+	int i;
 
 	sock_owned_by_me((struct sock *)msk);
 
@@ -113,16 +129,25 @@ struct sock *mptcp_sched_get_send(struct mptcp_sock *msk)
 	if (!msk->sched)
 		return mptcp_subflow_get_send(msk);
 
-	mptcp_sched_data_init(msk, &data);
-	msk->sched->get_subflow(msk, false, &data);
+	mptcp_sched_data_init(msk, false, &data);
+	msk->sched->get_subflow(msk, &data);
+
+	for (i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
+		if (data.contexts[i] && READ_ONCE(data.contexts[i]->scheduled)) {
+			ssk = data.contexts[i]->tcp_sock;
+			msk->last_snd = ssk;
+			break;
+		}
+	}
 
-	msk->last_snd = data.sock;
-	return data.sock;
+	return ssk;
 }
 
 struct sock *mptcp_sched_get_retrans(struct mptcp_sock *msk)
 {
 	struct mptcp_sched_data data;
+	struct sock *ssk = NULL;
+	int i;
 
 	sock_owned_by_me((const struct sock *)msk);
 
@@ -133,9 +158,16 @@ struct sock *mptcp_sched_get_retrans(struct mptcp_sock *msk)
 	if (!msk->sched)
 		return mptcp_subflow_get_retrans(msk);
 
-	mptcp_sched_data_init(msk, &data);
-	msk->sched->get_subflow(msk, true, &data);
+	mptcp_sched_data_init(msk, true, &data);
+	msk->sched->get_subflow(msk, &data);
+
+	for (i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
+		if (data.contexts[i] && READ_ONCE(data.contexts[i]->scheduled)) {
+			ssk = data.contexts[i]->tcp_sock;
+			msk->last_snd = ssk;
+			break;
+		}
+	}
 
-	msk->last_snd = data.sock;
-	return data.sock;
+	return ssk;
 }
-- 
2.34.1



* [PATCH mptcp-next v5 04/11] Squash to "mptcp: add bpf_mptcp_sched_ops"
  2022-06-01  6:45 [PATCH mptcp-next v5 00/11] BPF packet scheduler Geliang Tang
                   ` (2 preceding siblings ...)
  2022-06-01  6:45 ` [PATCH mptcp-next v5 03/11] Squash to "mptcp: add get_subflow wrappers" Geliang Tang
@ 2022-06-01  6:45 ` Geliang Tang
  2022-06-01  6:45 ` [PATCH mptcp-next v5 05/11] mptcp: add bpf set scheduled helper Geliang Tang
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Geliang Tang @ 2022-06-01  6:45 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

Change the access code from mptcp_sched_data to mptcp_subflow_context.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 net/mptcp/bpf.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/net/mptcp/bpf.c b/net/mptcp/bpf.c
index 338146d173f4..0529e70d53b1 100644
--- a/net/mptcp/bpf.c
+++ b/net/mptcp/bpf.c
@@ -42,29 +42,27 @@ static int bpf_mptcp_sched_btf_struct_access(struct bpf_verifier_log *log,
 {
 	size_t end;
 
-	if (atype == BPF_READ)
+	if (atype == BPF_READ) {
 		return btf_struct_access(log, btf, t, off, size, atype,
 					 next_btf_id, flag);
+	}
 
 	if (t != mptcp_sched_type) {
-		bpf_log(log, "only access to mptcp_sched_data is supported\n");
+		bpf_log(log, "only access to mptcp_subflow_context is supported\n");
 		return -EACCES;
 	}
 
 	switch (off) {
-	case offsetof(struct mptcp_sched_data, sock):
-		end = offsetofend(struct mptcp_sched_data, sock);
-		break;
-	case offsetof(struct mptcp_sched_data, call_again):
-		end = offsetofend(struct mptcp_sched_data, call_again);
+	case offsetof(struct mptcp_subflow_context, scheduled):
+		end = offsetofend(struct mptcp_subflow_context, scheduled);
 		break;
 	default:
-		bpf_log(log, "no write support to mptcp_sched_data at off %d\n", off);
+		bpf_log(log, "no write support to mptcp_subflow_context at off %d\n", off);
 		return -EACCES;
 	}
 
 	if (off + size > end) {
-		bpf_log(log, "access beyond mptcp_sched_data at off %u size %u ended at %zu",
+		bpf_log(log, "access beyond mptcp_subflow_context at off %u size %u ended at %zu",
 			off, size, end);
 		return -EACCES;
 	}
@@ -144,7 +142,7 @@ static int bpf_mptcp_sched_init(struct btf *btf)
 {
 	s32 type_id;
 
-	type_id = btf_find_by_name_kind(btf, "mptcp_sched_data",
+	type_id = btf_find_by_name_kind(btf, "mptcp_subflow_context",
 					BTF_KIND_STRUCT);
 	if (type_id < 0)
 		return -EINVAL;
-- 
2.34.1



* [PATCH mptcp-next v5 05/11] mptcp: add bpf set scheduled helper
  2022-06-01  6:45 [PATCH mptcp-next v5 00/11] BPF packet scheduler Geliang Tang
                   ` (3 preceding siblings ...)
  2022-06-01  6:45 ` [PATCH mptcp-next v5 04/11] Squash to "mptcp: add bpf_mptcp_sched_ops" Geliang Tang
@ 2022-06-01  6:45 ` Geliang Tang
  2022-06-01  9:25   ` kernel test robot
  2022-06-01  6:45 ` [PATCH mptcp-next v5 06/11] Squash to "selftests/bpf: add bpf_first scheduler" Geliang Tang
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 20+ messages in thread
From: Geliang Tang @ 2022-06-01  6:45 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch adds a new helper bpf_mptcp_subflow_set_scheduled() to set the
scheduled flag of struct mptcp_subflow_context using WRITE_ONCE().
Register this helper in bpf_mptcp_sched_kfunc_init() to make sure it can
be accessed from the BPF context.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 net/mptcp/bpf.c                               | 22 +++++++++++++++++++
 tools/testing/selftests/bpf/bpf_tcp_helpers.h |  2 ++
 2 files changed, 24 insertions(+)

diff --git a/net/mptcp/bpf.c b/net/mptcp/bpf.c
index 0529e70d53b1..23c2776bb989 100644
--- a/net/mptcp/bpf.c
+++ b/net/mptcp/bpf.c
@@ -161,6 +161,28 @@ struct bpf_struct_ops bpf_mptcp_sched_ops = {
 	.init		= bpf_mptcp_sched_init,
 	.name		= "mptcp_sched_ops",
 };
+
+void bpf_mptcp_subflow_set_scheduled(struct mptcp_subflow_context *subflow)
+{
+	WRITE_ONCE(subflow->scheduled, true);
+}
+EXPORT_SYMBOL(bpf_mptcp_subflow_set_scheduled);
+
+BTF_SET_START(bpf_mptcp_sched_kfunc_ids)
+BTF_ID(func, bpf_mptcp_subflow_set_scheduled)
+BTF_SET_END(bpf_mptcp_sched_kfunc_ids)
+
+static const struct btf_kfunc_id_set bpf_mptcp_sched_kfunc_set = {
+	.owner		= THIS_MODULE,
+	.check_set	= &bpf_mptcp_sched_kfunc_ids,
+};
+
+static int __init bpf_mptcp_sched_kfunc_init(void)
+{
+	return register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS,
+					 &bpf_mptcp_sched_kfunc_set);
+}
+late_initcall(bpf_mptcp_sched_kfunc_init);
 #endif /* CONFIG_BPF_JIT */
 
 struct mptcp_sock *bpf_mptcp_sock_from_subflow(struct sock *sk)
diff --git a/tools/testing/selftests/bpf/bpf_tcp_helpers.h b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
index 5f6a7fa2269b..488de4b920ef 100644
--- a/tools/testing/selftests/bpf/bpf_tcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
@@ -261,4 +261,6 @@ struct mptcp_sock {
 	char		ca_name[TCP_CA_NAME_MAX];
 } __attribute__((preserve_access_index));
 
+extern void bpf_mptcp_subflow_set_scheduled(struct mptcp_subflow_context *subflow) __ksym;
+
 #endif
-- 
2.34.1



* [PATCH mptcp-next v5 06/11] Squash to "selftests/bpf: add bpf_first scheduler"
  2022-06-01  6:45 [PATCH mptcp-next v5 00/11] BPF packet scheduler Geliang Tang
                   ` (4 preceding siblings ...)
  2022-06-01  6:45 ` [PATCH mptcp-next v5 05/11] mptcp: add bpf set scheduled helper Geliang Tang
@ 2022-06-01  6:45 ` Geliang Tang
  2022-06-01  6:45 ` [PATCH mptcp-next v5 07/11] Squash to "selftests/bpf: add bpf_first test" Geliang Tang
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Geliang Tang @ 2022-06-01  6:45 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

Use new get_subflow API.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 tools/testing/selftests/bpf/progs/mptcp_bpf_first.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
index fd67b5f42964..4217d0e6445b 100644
--- a/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
@@ -17,10 +17,9 @@ void BPF_PROG(mptcp_sched_first_release, const struct mptcp_sock *msk)
 }
 
 void BPF_STRUCT_OPS(bpf_first_get_subflow, const struct mptcp_sock *msk,
-		    bool reinject, struct mptcp_sched_data *data)
+		    struct mptcp_sched_data *data)
 {
-	data->sock = msk->first;
-	data->call_again = 0;
+	bpf_mptcp_subflow_set_scheduled(data->contexts[0]);
 }
 
 SEC(".struct_ops")
-- 
2.34.1



* [PATCH mptcp-next v5 07/11] Squash to "selftests/bpf: add bpf_first test"
  2022-06-01  6:45 [PATCH mptcp-next v5 00/11] BPF packet scheduler Geliang Tang
                   ` (5 preceding siblings ...)
  2022-06-01  6:45 ` [PATCH mptcp-next v5 06/11] Squash to "selftests/bpf: add bpf_first scheduler" Geliang Tang
@ 2022-06-01  6:45 ` Geliang Tang
  2022-06-01  6:45 ` [PATCH mptcp-next v5 08/11] selftests/bpf: add bpf_bkup scheduler Geliang Tang
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Geliang Tang @ 2022-06-01  6:45 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

Add two helpers, add_veth() and cleanup().

Please update the commit log:

'''
This patch extends the MPTCP test base to support MPTCP packet scheduler
tests, and adds the bpf_first scheduler test to it.

Use sysctl to select this scheduler via net.mptcp.scheduler. The new helper
add_veth() adds a veth net device to simulate the multiple-addresses case.
Use the 'ip mptcp endpoint' command to add this new endpoint to PM netlink.

Some code in send_data() is from prog_tests/bpf_tcp_ca.c.

Check bytes_sent in the 'ss' output after send_data() to make sure no data
has been sent on the new veth net device; all data should have been sent
on the first subflow.

Invoke the new helper cleanup() to reset net.mptcp.scheduler to the
default, flush all mptcp endpoints, and delete the veth net device.
'''

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 .../testing/selftests/bpf/prog_tests/mptcp.c  | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
index 8e9764275b07..eaea4105728d 100644
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
@@ -250,6 +250,20 @@ static void send_data(int lfd, int fd)
 	      PTR_ERR(thread_ret));
 }
 
+static void add_veth(void)
+{
+	system("ip link add veth1 type veth");
+	system("ip addr add 10.0.1.1/24 dev veth1");
+	system("ip link set veth1 up");
+}
+
+static void cleanup(void)
+{
+	system("sysctl -qw net.mptcp.scheduler=default");
+	system("ip mptcp endpoint flush");
+	system("ip link del veth1");
+}
+
 static void test_first(void)
 {
 	struct mptcp_bpf_first *first_skel;
@@ -266,15 +280,18 @@ static void test_first(void)
 		return;
 	}
 
+	add_veth();
+	system("ip mptcp endpoint add 10.0.1.1 subflow");
 	system("sysctl -qw net.mptcp.scheduler=bpf_first");
 	server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
 	client_fd = connect_to_fd(server_fd, 0);
 
 	send_data(server_fd, client_fd);
+	ASSERT_GT(system("ss -MOenita | grep '10.0.1.1' | grep 'bytes_sent:'"), 0, "ss");
 
 	close(client_fd);
 	close(server_fd);
-	system("sysctl -qw net.mptcp.scheduler=default");
+	cleanup();
 	bpf_link__destroy(link);
 	mptcp_bpf_first__destroy(first_skel);
 }
-- 
2.34.1



* [PATCH mptcp-next v5 08/11] selftests/bpf: add bpf_bkup scheduler
  2022-06-01  6:45 [PATCH mptcp-next v5 00/11] BPF packet scheduler Geliang Tang
                   ` (6 preceding siblings ...)
  2022-06-01  6:45 ` [PATCH mptcp-next v5 07/11] Squash to "selftests/bpf: add bpf_first test" Geliang Tang
@ 2022-06-01  6:45 ` Geliang Tang
  2022-06-01  6:45 ` [PATCH mptcp-next v5 09/11] selftests/bpf: add bpf_bkup test Geliang Tang
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Geliang Tang @ 2022-06-01  6:45 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch implements the backup flag test scheduler, named bpf_bkup,
which picks the first non-backup subflow to send data.
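
The selection rule ("first non-backup subflow, else fall back to the first
one") can be modeled outside BPF as a plain C function. This is an
illustrative sketch with simplified types, mirroring the loop in
bpf_bkup_get_subflow() below; in the BPF program the backup bit is read via
BPF_CORE_READ_BITFIELD_PROBED() instead of a direct field access.

```c
#include <stdbool.h>
#include <stddef.h>

#define MPTCP_SUBFLOWS_MAX	8

struct subflow_ctx {
	bool backup;	/* stand-in for the backup:1 bitfield */
};

/* index of the first non-backup context; falls back to 0 when every
 * present subflow has the backup flag set */
static int bkup_pick(struct subflow_ctx *const contexts[MPTCP_SUBFLOWS_MAX])
{
	int nr = 0;

	for (int i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
		if (!contexts[i])
			break;		/* array is NULL-terminated */

		if (!contexts[i]->backup) {
			nr = i;
			break;
		}
	}
	return nr;
}
```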

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 tools/testing/selftests/bpf/bpf_tcp_helpers.h |  1 +
 .../selftests/bpf/progs/mptcp_bpf_bkup.c      | 43 +++++++++++++++++++
 2 files changed, 44 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c

diff --git a/tools/testing/selftests/bpf/bpf_tcp_helpers.h b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
index 488de4b920ef..1f8addd61f14 100644
--- a/tools/testing/selftests/bpf/bpf_tcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
@@ -234,6 +234,7 @@ extern void tcp_cong_avoid_ai(struct tcp_sock *tp, __u32 w, __u32 acked) __ksym;
 #define MPTCP_SUBFLOWS_MAX	8
 
 struct mptcp_subflow_context {
+	__u32	backup : 1;
 	struct	sock *tcp_sock;	    /* tcp sk backpointer */
 } __attribute__((preserve_access_index));
 
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c
new file mode 100644
index 000000000000..2431d5f89150
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_bkup.c
@@ -0,0 +1,43 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022, SUSE. */
+
+#include <linux/bpf.h>
+#include "bpf_tcp_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+SEC("struct_ops/mptcp_sched_bkup_init")
+void BPF_PROG(mptcp_sched_bkup_init, const struct mptcp_sock *msk)
+{
+}
+
+SEC("struct_ops/mptcp_sched_bkup_release")
+void BPF_PROG(mptcp_sched_bkup_release, const struct mptcp_sock *msk)
+{
+}
+
+void BPF_STRUCT_OPS(bpf_bkup_get_subflow, const struct mptcp_sock *msk,
+		    struct mptcp_sched_data *data)
+{
+	int nr = 0;
+
+	for (int i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
+		if (!data->contexts[i])
+			break;
+
+		if (!BPF_CORE_READ_BITFIELD_PROBED(data->contexts[i], backup)) {
+			nr = i;
+			break;
+		}
+	}
+
+	bpf_mptcp_subflow_set_scheduled(data->contexts[nr]);
+}
+
+SEC(".struct_ops")
+struct mptcp_sched_ops bkup = {
+	.init		= (void *)mptcp_sched_bkup_init,
+	.release	= (void *)mptcp_sched_bkup_release,
+	.get_subflow	= (void *)bpf_bkup_get_subflow,
+	.name		= "bpf_bkup",
+};
-- 
2.34.1



* [PATCH mptcp-next v5 09/11] selftests/bpf: add bpf_bkup test
  2022-06-01  6:45 [PATCH mptcp-next v5 00/11] BPF packet scheduler Geliang Tang
                   ` (7 preceding siblings ...)
  2022-06-01  6:45 ` [PATCH mptcp-next v5 08/11] selftests/bpf: add bpf_bkup scheduler Geliang Tang
@ 2022-06-01  6:45 ` Geliang Tang
  2022-06-01  6:45 ` [PATCH mptcp-next v5 10/11] selftests/bpf: add bpf_rr scheduler Geliang Tang
  2022-06-01  6:46 ` [PATCH mptcp-next v5 11/11] selftests/bpf: add bpf_rr test Geliang Tang
  10 siblings, 0 replies; 20+ messages in thread
From: Geliang Tang @ 2022-06-01  6:45 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch adds the backup BPF MPTCP scheduler test. Use sysctl to
select this scheduler via net.mptcp.scheduler. Add a veth net device to
simulate the multiple-addresses case. Use the 'ip mptcp endpoint' command
to add this new endpoint to PM netlink with the backup flag. Send data,
check bytes_sent in the 'ss' output, and do some cleanups.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 .../testing/selftests/bpf/prog_tests/mptcp.c  | 35 +++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
index eaea4105728d..f3c73cd2c786 100644
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
@@ -7,6 +7,7 @@
 #include "network_helpers.h"
 #include "mptcp_sock.skel.h"
 #include "mptcp_bpf_first.skel.h"
+#include "mptcp_bpf_bkup.skel.h"
 
 #ifndef TCP_CA_NAME_MAX
 #define TCP_CA_NAME_MAX	16
@@ -296,10 +297,44 @@ static void test_first(void)
 	mptcp_bpf_first__destroy(first_skel);
 }
 
+static void test_bkup(void)
+{
+	struct mptcp_bpf_bkup *bkup_skel;
+	int server_fd, client_fd;
+	struct bpf_link *link;
+
+	bkup_skel = mptcp_bpf_bkup__open_and_load();
+	if (!ASSERT_OK_PTR(bkup_skel, "bpf_bkup__open_and_load"))
+		return;
+
+	link = bpf_map__attach_struct_ops(bkup_skel->maps.bkup);
+	if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops")) {
+		mptcp_bpf_bkup__destroy(bkup_skel);
+		return;
+	}
+
+	add_veth();
+	system("ip mptcp endpoint add 10.0.1.1 subflow backup");
+	system("sysctl -qw net.mptcp.scheduler=bpf_bkup");
+	server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
+	client_fd = connect_to_fd(server_fd, 0);
+
+	send_data(server_fd, client_fd);
+	ASSERT_GT(system("ss -MOenita | grep '10.0.1.1' | grep 'bytes_sent:'"), 0, "ss");
+
+	close(client_fd);
+	close(server_fd);
+	cleanup();
+	bpf_link__destroy(link);
+	mptcp_bpf_bkup__destroy(bkup_skel);
+}
+
 void test_mptcp(void)
 {
 	if (test__start_subtest("base"))
 		test_base();
 	if (test__start_subtest("first"))
 		test_first();
+	if (test__start_subtest("bkup"))
+		test_bkup();
 }
-- 
2.34.1



* [PATCH mptcp-next v5 10/11] selftests/bpf: add bpf_rr scheduler
  2022-06-01  6:45 [PATCH mptcp-next v5 00/11] BPF packet scheduler Geliang Tang
                   ` (8 preceding siblings ...)
  2022-06-01  6:45 ` [PATCH mptcp-next v5 09/11] selftests/bpf: add bpf_bkup test Geliang Tang
@ 2022-06-01  6:45 ` Geliang Tang
  2022-06-01  6:46 ` [PATCH mptcp-next v5 11/11] selftests/bpf: add bpf_rr test Geliang Tang
  10 siblings, 0 replies; 20+ messages in thread
From: Geliang Tang @ 2022-06-01  6:45 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch implements the round-robin BPF MPTCP scheduler, named bpf_rr,
which always picks the next available subflow to send data. If no such
next subflow is available, it picks the first one.
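
The pick-next-after-last_snd rule can be modeled as a plain C function.
This is an illustrative sketch with simplified types, mirroring the loop in
bpf_rr_get_subflow() below: last_snd stands in for msk->last_snd, and the
contexts array is NULL-terminated as the wrappers guarantee.

```c
#include <stddef.h>

#define MPTCP_SUBFLOWS_MAX	8

struct subflow_ctx {
	const void *tcp_sock;	/* stand-in for struct sock * */
};

/* pick the context after the one that sent last; fall back to index 0
 * when nothing was sent yet or the last sender has no successor */
static int rr_pick(struct subflow_ctx *const contexts[MPTCP_SUBFLOWS_MAX],
		   const void *last_snd)
{
	int nr = 0;

	for (int i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
		if (!last_snd || !contexts[i])
			break;

		if (contexts[i]->tcp_sock == last_snd) {
			/* no successor: keep the default, index 0 */
			if (i + 1 == MPTCP_SUBFLOWS_MAX || !contexts[i + 1])
				break;

			nr = i + 1;
			break;
		}
	}
	return nr;
}
```

Note the wrap-around is implicit: when last_snd matches the final present
subflow, nr is left at its default of 0, i.e. the first subflow.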

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 tools/testing/selftests/bpf/bpf_tcp_helpers.h |  1 +
 .../selftests/bpf/progs/mptcp_bpf_rr.c        | 46 +++++++++++++++++++
 2 files changed, 47 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c

diff --git a/tools/testing/selftests/bpf/bpf_tcp_helpers.h b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
index 1f8addd61f14..928f34cce4ce 100644
--- a/tools/testing/selftests/bpf/bpf_tcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
@@ -257,6 +257,7 @@ struct mptcp_sched_ops {
 struct mptcp_sock {
 	struct inet_connection_sock	sk;
 
+	struct sock	*last_snd;
 	__u32		token;
 	struct sock	*first;
 	char		ca_name[TCP_CA_NAME_MAX];
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
new file mode 100644
index 000000000000..c59fe2eff420
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
@@ -0,0 +1,46 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022, SUSE. */
+
+#include <linux/bpf.h>
+#include "bpf_tcp_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+SEC("struct_ops/mptcp_sched_rr_init")
+void BPF_PROG(mptcp_sched_rr_init, const struct mptcp_sock *msk)
+{
+}
+
+SEC("struct_ops/mptcp_sched_rr_release")
+void BPF_PROG(mptcp_sched_rr_release, const struct mptcp_sock *msk)
+{
+}
+
+void BPF_STRUCT_OPS(bpf_rr_get_subflow, const struct mptcp_sock *msk,
+		    struct mptcp_sched_data *data)
+{
+	int nr = 0;
+
+	for (int i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
+		if (!msk->last_snd || !data->contexts[i])
+			break;
+
+		if (data->contexts[i]->tcp_sock == msk->last_snd) {
+			if (i + 1 == MPTCP_SUBFLOWS_MAX || !data->contexts[i + 1])
+				break;
+
+			nr = i + 1;
+			break;
+		}
+	}
+
+	bpf_mptcp_subflow_set_scheduled(data->contexts[nr]);
+}
+
+SEC(".struct_ops")
+struct mptcp_sched_ops rr = {
+	.init		= (void *)mptcp_sched_rr_init,
+	.release	= (void *)mptcp_sched_rr_release,
+	.get_subflow	= (void *)bpf_rr_get_subflow,
+	.name		= "bpf_rr",
+};
-- 
2.34.1



* [PATCH mptcp-next v5 11/11] selftests/bpf: add bpf_rr test
  2022-06-01  6:45 [PATCH mptcp-next v5 00/11] BPF packet scheduler Geliang Tang
                   ` (9 preceding siblings ...)
  2022-06-01  6:45 ` [PATCH mptcp-next v5 10/11] selftests/bpf: add bpf_rr scheduler Geliang Tang
@ 2022-06-01  6:46 ` Geliang Tang
  2022-06-01  6:57   ` selftests/bpf: add bpf_rr test: Build Failure MPTCP CI
  2022-06-01  8:16   ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
  10 siblings, 2 replies; 20+ messages in thread
From: Geliang Tang @ 2022-06-01  6:46 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch adds the round-robin BPF MPTCP scheduler test. Use sysctl to
select this scheduler via net.mptcp.scheduler. Add a veth net device to
simulate the multiple-addresses case. Use the 'ip mptcp endpoint' command
to add this new endpoint to PM netlink. Send data, then check bytes_sent
in the 'ss' output to make sure the data has been sent on the new veth
net device.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 .../testing/selftests/bpf/prog_tests/mptcp.c  | 35 +++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
index f3c73cd2c786..1ecc8a2b76b6 100644
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
@@ -8,6 +8,7 @@
 #include "mptcp_sock.skel.h"
 #include "mptcp_bpf_first.skel.h"
 #include "mptcp_bpf_bkup.skel.h"
+#include "mptcp_bpf_rr.skel.h"
 
 #ifndef TCP_CA_NAME_MAX
 #define TCP_CA_NAME_MAX	16
@@ -329,6 +330,38 @@ static void test_bkup(void)
 	mptcp_bpf_bkup__destroy(bkup_skel);
 }
 
+static void test_rr(void)
+{
+	struct mptcp_bpf_rr *rr_skel;
+	int server_fd, client_fd;
+	struct bpf_link *link;
+
+	rr_skel = mptcp_bpf_rr__open_and_load();
+	if (!ASSERT_OK_PTR(rr_skel, "bpf_rr__open_and_load"))
+		return;
+
+	link = bpf_map__attach_struct_ops(rr_skel->maps.rr);
+	if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops")) {
+		mptcp_bpf_rr__destroy(rr_skel);
+		return;
+	}
+
+	add_veth();
+	system("ip mptcp endpoint add 10.0.1.1 subflow");
+	system("sysctl -qw net.mptcp.scheduler=bpf_rr");
+	server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
+	client_fd = connect_to_fd(server_fd, 0);
+
+	send_data(server_fd, client_fd);
+	ASSERT_OK(system("ss -MOenita | grep '10.0.1.1' | grep -q 'bytes_sent:'"), "ss");
+
+	close(client_fd);
+	close(server_fd);
+	cleanup();
+	bpf_link__destroy(link);
+	mptcp_bpf_rr__destroy(rr_skel);
+}
+
 void test_mptcp(void)
 {
 	if (test__start_subtest("base"))
@@ -337,4 +370,6 @@ void test_mptcp(void)
 		test_first();
 	if (test__start_subtest("bkup"))
 		test_bkup();
+	if (test__start_subtest("rr"))
+		test_rr();
 }
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: selftests/bpf: add bpf_rr test: Build Failure
  2022-06-01  6:46 ` [PATCH mptcp-next v5 11/11] selftests/bpf: add bpf_rr test Geliang Tang
@ 2022-06-01  6:57   ` MPTCP CI
  2022-06-01  8:16   ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
  1 sibling, 0 replies; 20+ messages in thread
From: MPTCP CI @ 2022-06-01  6:57 UTC (permalink / raw)
  To: Geliang Tang; +Cc: mptcp

Hi Geliang,

Thank you for your modifications, that's great!

But sadly, our CI spotted some issues with it when trying to build it.

You can find more details there:

  https://patchwork.kernel.org/project/mptcp/patch/076285ed7f05ec809cfb5866c6ff2d6b8cd04e95.1654065674.git.geliang.tang@suse.com/
  https://github.com/multipath-tcp/mptcp_net-next/actions/runs/2419771412

Status: failure
Initiator: MPTCPimporter
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/f2e8aeb9dc5b

Feel free to reply to this email if you cannot access the logs, if you
need support to fix the error, if this doesn't seem to be caused by your
modifications, or if the error appears to be a false positive.

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: selftests/bpf: add bpf_rr test: Tests Results
  2022-06-01  6:46 ` [PATCH mptcp-next v5 11/11] selftests/bpf: add bpf_rr test Geliang Tang
  2022-06-01  6:57   ` selftests/bpf: add bpf_rr test: Build Failure MPTCP CI
@ 2022-06-01  8:16   ` MPTCP CI
  1 sibling, 0 replies; 20+ messages in thread
From: MPTCP CI @ 2022-06-01  8:16 UTC (permalink / raw)
  To: Geliang Tang; +Cc: mptcp

Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6448318602543104
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6448318602543104/summary/summary.txt

- KVM Validation: debug:
  - Unstable: 3 failed test(s): packetdrill_add_addr selftest_diag selftest_simult_flows 🔴:
  - Task: https://cirrus-ci.com/task/5040943718989824
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5040943718989824/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/f2e8aeb9dc5b


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to keep the test
suite stable when executed on a public CI like this one, it is possible
that some reported issues are not due to your modifications. Still, do
not hesitate to help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH mptcp-next v5 05/11] mptcp: add bpf set scheduled helper
  2022-06-01  6:45 ` [PATCH mptcp-next v5 05/11] mptcp: add bpf set scheduled helper Geliang Tang
@ 2022-06-01  9:25   ` kernel test robot
  0 siblings, 0 replies; 20+ messages in thread
From: kernel test robot @ 2022-06-01  9:25 UTC (permalink / raw)
  To: Geliang Tang, mptcp; +Cc: kbuild-all, Geliang Tang

Hi Geliang,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on mptcp/export]
[cannot apply to bpf-next/master bpf/master linus/master v5.18 next-20220601]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Geliang-Tang/BPF-packet-scheduler/20220601-144812
base:   https://github.com/multipath-tcp/mptcp_net-next.git export
config: x86_64-randconfig-a006 (https://download.01.org/0day-ci/archive/20220601/202206011710.y7OIrPON-lkp@intel.com/config)
compiler: gcc-11 (Debian 11.3.0-1) 11.3.0
reproduce (this is a W=1 build):
        # https://github.com/intel-lab-lkp/linux/commit/4bfb6a3d6d5ea1744a301d97642ff0077eb280f6
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Geliang-Tang/BPF-packet-scheduler/20220601-144812
        git checkout 4bfb6a3d6d5ea1744a301d97642ff0077eb280f6
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash net/mptcp/

If you fix the issue, kindly add the following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> net/mptcp/bpf.c:165:6: warning: no previous prototype for 'bpf_mptcp_subflow_set_scheduled' [-Wmissing-prototypes]
     165 | void bpf_mptcp_subflow_set_scheduled(struct mptcp_subflow_context *subflow)
         |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


vim +/bpf_mptcp_subflow_set_scheduled +165 net/mptcp/bpf.c

   164	
 > 165	void bpf_mptcp_subflow_set_scheduled(struct mptcp_subflow_context *subflow)
   166	{
   167		WRITE_ONCE(subflow->scheduled, true);
   168	}
   169	EXPORT_SYMBOL(bpf_mptcp_subflow_set_scheduled);
   170	

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: selftests/bpf: Add bpf_rr test: Tests Results
  2022-06-02  4:53 [PATCH mptcp-next v7 13/13] selftests/bpf: Add bpf_rr test Geliang Tang
@ 2022-06-02  6:36 ` MPTCP CI
  0 siblings, 0 replies; 20+ messages in thread
From: MPTCP CI @ 2022-06-02  6:36 UTC (permalink / raw)
  To: Geliang Tang; +Cc: mptcp

Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Unstable: 1 failed test(s): selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/6186355024723968
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6186355024723968/summary/summary.txt

- KVM Validation: debug:
  - Unstable: 3 failed test(s): packetdrill_add_addr selftest_diag selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/5623405071302656
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5623405071302656/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/09f1dd4b0a46



Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: selftests/bpf: add bpf_rr test: Tests Results
  2022-06-01 14:08 [PATCH mptcp-next v6 11/11] selftests/bpf: add bpf_rr test Geliang Tang
@ 2022-06-01 16:02 ` MPTCP CI
  0 siblings, 0 replies; 20+ messages in thread
From: MPTCP CI @ 2022-06-01 16:02 UTC (permalink / raw)
  To: Geliang Tang; +Cc: mptcp

Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Unstable: 1 failed test(s): selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/6191871876661248
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6191871876661248/summary/summary.txt

- KVM Validation: debug:
  - Unstable: 1 failed test(s): selftest_diag 🔴:
  - Task: https://cirrus-ci.com/task/4951362948562944
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4951362948562944/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/4d61a8b660c4



Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: selftests/bpf: add bpf_rr test: Tests Results
  2022-05-31  9:09 [PATCH mptcp-next v4 10/10] selftests/bpf: add bpf_rr test Geliang Tang
@ 2022-05-31 10:39 ` MPTCP CI
  0 siblings, 0 replies; 20+ messages in thread
From: MPTCP CI @ 2022-05-31 10:39 UTC (permalink / raw)
  To: Geliang Tang; +Cc: mptcp

Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Unstable: 1 failed test(s): selftest_simult_flows 🔴:
  - Task: https://cirrus-ci.com/task/4799661683769344
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4799661683769344/summary/summary.txt

- KVM Validation: debug:
  - Unstable: 3 failed test(s): selftest_diag selftest_mptcp_join selftest_simult_flows 🔴:
  - Task: https://cirrus-ci.com/task/5925561590611968
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5925561590611968/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/c91606f830c4



Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: selftests/bpf: add bpf_rr test: Tests Results
  2022-05-28 15:11 [PATCH mptcp-next v3 10/10] selftests/bpf: add bpf_rr test Geliang Tang
@ 2022-05-28 17:05 ` MPTCP CI
  0 siblings, 0 replies; 20+ messages in thread
From: MPTCP CI @ 2022-05-28 17:05 UTC (permalink / raw)
  To: Geliang Tang; +Cc: mptcp

Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Unstable: 1 failed test(s): selftest_simult_flows 🔴:
  - Task: https://cirrus-ci.com/task/5604026279526400
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5604026279526400/summary/summary.txt

- KVM Validation: debug:
  - Unstable: 2 failed test(s): selftest_diag selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/6729926186369024
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6729926186369024/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/982d66320a49



Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: selftests/bpf: add bpf_rr test: Tests Results
  2022-05-09 14:56 [PATCH mptcp-next v13 3/3] selftests/bpf: add bpf_rr test Geliang Tang
@ 2022-05-09 16:48 ` MPTCP CI
  0 siblings, 0 replies; 20+ messages in thread
From: MPTCP CI @ 2022-05-09 16:48 UTC (permalink / raw)
  To: Geliang Tang; +Cc: mptcp

Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6702307382394880
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6702307382394880/summary/summary.txt

- KVM Validation: debug:
  - Unstable: 1 failed test(s): selftest_diag 🔴:
  - Task: https://cirrus-ci.com/task/4556060684976128
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4556060684976128/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/4c681daf367a



Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2022-06-02  6:36 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-06-01  6:45 [PATCH mptcp-next v5 00/11] BPF packet scheduler Geliang Tang
2022-06-01  6:45 ` [PATCH mptcp-next v5 01/11] Squash to "mptcp: add struct mptcp_sched_ops" Geliang Tang
2022-06-01  6:45 ` [PATCH mptcp-next v5 02/11] Squash to "mptcp: add sched in mptcp_sock" Geliang Tang
2022-06-01  6:45 ` [PATCH mptcp-next v5 03/11] Squash to "mptcp: add get_subflow wrappers" Geliang Tang
2022-06-01  6:45 ` [PATCH mptcp-next v5 04/11] Squash to "mptcp: add bpf_mptcp_sched_ops" Geliang Tang
2022-06-01  6:45 ` [PATCH mptcp-next v5 05/11] mptcp: add bpf set scheduled helper Geliang Tang
2022-06-01  9:25   ` kernel test robot
2022-06-01  6:45 ` [PATCH mptcp-next v5 06/11] Squash to "selftests/bpf: add bpf_first scheduler" Geliang Tang
2022-06-01  6:45 ` [PATCH mptcp-next v5 07/11] Squash to "selftests/bpf: add bpf_first test" Geliang Tang
2022-06-01  6:45 ` [PATCH mptcp-next v5 08/11] selftests/bpf: add bpf_bkup scheduler Geliang Tang
2022-06-01  6:45 ` [PATCH mptcp-next v5 09/11] selftests/bpf: add bpf_bkup test Geliang Tang
2022-06-01  6:45 ` [PATCH mptcp-next v5 10/11] selftests/bpf: add bpf_rr scheduler Geliang Tang
2022-06-01  6:46 ` [PATCH mptcp-next v5 11/11] selftests/bpf: add bpf_rr test Geliang Tang
2022-06-01  6:57   ` selftests/bpf: add bpf_rr test: Build Failure MPTCP CI
2022-06-01  8:16   ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
  -- strict thread matches above, loose matches on Subject: below --
2022-06-02  4:53 [PATCH mptcp-next v7 13/13] selftests/bpf: Add bpf_rr test Geliang Tang
2022-06-02  6:36 ` selftests/bpf: Add bpf_rr test: Tests Results MPTCP CI
2022-06-01 14:08 [PATCH mptcp-next v6 11/11] selftests/bpf: add bpf_rr test Geliang Tang
2022-06-01 16:02 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
2022-05-31  9:09 [PATCH mptcp-next v4 10/10] selftests/bpf: add bpf_rr test Geliang Tang
2022-05-31 10:39 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
2022-05-28 15:11 [PATCH mptcp-next v3 10/10] selftests/bpf: add bpf_rr test Geliang Tang
2022-05-28 17:05 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
2022-05-09 14:56 [PATCH mptcp-next v13 3/3] selftests/bpf: add bpf_rr test Geliang Tang
2022-05-09 16:48 ` selftests/bpf: add bpf_rr test: Tests Results MPTCP CI
