* [PATCH mptcp-next 00/10] BPF packet scheduler
@ 2022-05-20  8:04 Geliang Tang
  2022-05-20  8:04 ` [PATCH mptcp-next 01/10] Squash to "mptcp: add struct mptcp_sched_ops" Geliang Tang
                   ` (9 more replies)
  0 siblings, 10 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-20  8:04 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

- Use the new BPF scheduler API
- Merged "BPF round-robin scheduler" v14
- Merged "BPF redundant scheduler" v2
- base-commit: export/20220519T160645
- This series breaks two link-failure selftests, shown below:

Created /tmp/tmp.bTGaRxJupX (size 1 KB) containing data sent by client
Created /tmp/tmp.K2Llsy3W03 (size 1 KB) containing data sent by server
Created /tmp/tmp.4uLEQcKbee (size 23622 KB) containing data sent by client
001 multiple flows, signal, link failure syn[ ok ] - synack[ ok ] - ack[ ok ]
                                         add[ ok ] - echo  [ ok ]
                                         stale             [ ok ]
Created /tmp/tmp.hV4OKPkqoH (size 12288 KB) containing data sent by server
002 multi flows, signal, bidi, link fail syn[ ok ] - synack[ ok ] - ack[ ok ]
                                         add[ ok ] - echo  [ ok ]
                                         stale             [ ok ]
003 backup subflow unused, link failure  syn[ ok ] - synack[ ok ] - ack[ ok ]
                                         add[ ok ] - echo  [ ok ]
                                         link usage        [ ok ]
004 backup flow used, multi links fail   syn[ ok ] - synack[ ok ] - ack[ ok ]
                                         add[ ok ] - echo  [ ok ]
                                         stale             [ ok ]
                                         link usage        [fail] got 11% usage, expected 50%
005 backup flow used, bidi, link failure syn[ ok ] - synack[ ok ] - ack[ ok ]
                                         add[ ok ] - echo  [ ok ]
                                         stale             [ ok ]
                                         link usage        [fail] got 10% usage, expected 50%

2 failure(s) has(ve) been detected:
	- 4: backup flow used, multi links fail
	- 5: backup flow used, bidi, link failure

Geliang Tang (10):
  Squash to "mptcp: add struct mptcp_sched_ops"
  mptcp: reflect first flag in subflow_push_pending
  Squash to "mptcp: add get_subflow wrappers"
  Squash to "mptcp: add bpf_mptcp_sched_ops"
  mptcp: add subflows array in sched data
  Squash to "selftests/bpf: add bpf_first scheduler"
  selftests/bpf: add bpf_rr scheduler
  selftests/bpf: add bpf_rr test
  selftests/bpf: add bpf_red scheduler
  selftests/bpf: add bpf_red test

 include/net/mptcp.h                           |   5 +-
 net/mptcp/bpf.c                               |   7 +-
 net/mptcp/protocol.c                          | 201 ++++++++++++------
 net/mptcp/protocol.h                          |   5 +-
 net/mptcp/sched.c                             | 107 ++++++++--
 tools/testing/selftests/bpf/bpf_tcp_helpers.h |  22 +-
 .../testing/selftests/bpf/prog_tests/mptcp.c  |  76 +++++++
 .../selftests/bpf/progs/mptcp_bpf_first.c     |   6 +-
 .../selftests/bpf/progs/mptcp_bpf_red.c       |  39 ++++
 .../selftests/bpf/progs/mptcp_bpf_rr.c        |  48 +++++
 10 files changed, 419 insertions(+), 97 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_red.c
 create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c

-- 
2.34.1


* [PATCH mptcp-next 01/10] Squash to "mptcp: add struct mptcp_sched_ops"
  2022-05-20  8:04 [PATCH mptcp-next 00/10] BPF packet scheduler Geliang Tang
@ 2022-05-20  8:04 ` Geliang Tang
  2022-05-20  8:04 ` [PATCH mptcp-next 02/10] mptcp: reflect first flag in subflow_push_pending Geliang Tang
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-20  8:04 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

Use bitmap instead of sock in struct mptcp_sched_data.
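
With this change, a scheduler no longer returns a single socket through
data->sock: it marks the chosen subflow(s) by position in a bitmap. A
minimal sketch of the new usage inside get_subflow(), assuming the
set_bit() helper added later in this series:

	/* schedule only the first subflow: set bit 0 */
	unsigned long bitmap = 0;

	set_bit(0, &bitmap);
	data->bitmap = bitmap;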

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 include/net/mptcp.h                           | 3 +--
 tools/testing/selftests/bpf/bpf_tcp_helpers.h | 3 +--
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/include/net/mptcp.h b/include/net/mptcp.h
index 6456ea26e4c7..33a44ec21701 100644
--- a/include/net/mptcp.h
+++ b/include/net/mptcp.h
@@ -99,8 +99,7 @@ struct mptcp_out_options {
 #define MPTCP_SCHED_NAME_MAX	16
 
 struct mptcp_sched_data {
-	struct sock	*sock;
-	bool		call_again;
+	unsigned long	bitmap;
 };
 
 struct mptcp_sched_ops {
diff --git a/tools/testing/selftests/bpf/bpf_tcp_helpers.h b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
index aca4e3c6ac48..60c8239f95ff 100644
--- a/tools/testing/selftests/bpf/bpf_tcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
@@ -233,8 +233,7 @@ extern void tcp_cong_avoid_ai(struct tcp_sock *tp, __u32 w, __u32 acked) __ksym;
 #define MPTCP_SCHED_NAME_MAX	16
 
 struct mptcp_sched_data {
-	struct sock	*sock;
-	bool		call_again;
+	unsigned long	bitmap;
 };
 
 struct mptcp_sched_ops {
-- 
2.34.1


* [PATCH mptcp-next 02/10] mptcp: reflect first flag in subflow_push_pending
  2022-05-20  8:04 [PATCH mptcp-next 00/10] BPF packet scheduler Geliang Tang
  2022-05-20  8:04 ` [PATCH mptcp-next 01/10] Squash to "mptcp: add struct mptcp_sched_ops" Geliang Tang
@ 2022-05-20  8:04 ` Geliang Tang
  2022-05-20  8:04 ` [PATCH mptcp-next 03/10] Squash to "mptcp: add get_subflow wrappers" Geliang Tang
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-20  8:04 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch restructures __mptcp_subflow_push_pending() around the 'first'
flag: the first chunk keeps using the caller-provided ssk, while later
chunks invoke the packet scheduler in a separate branch, preparing for
the redundant-scheduler changes later in this series.
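
A condensed sketch of the resulting loop body (NULL checks and error
handling elided; all helpers are the ones visible in the diff below):

	if (first) {
		/* keep using the ssk the caller already scheduled */
		ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
		first = false;
	} else {
		/* ask the packet scheduler again for later chunks */
		xmit_ssk = mptcp_sched_get_send(mptcp_sk(sk));
		if (xmit_ssk != ssk) {
			/* a different subflow was picked: delegate */
			mptcp_subflow_delegate(mptcp_subflow_ctx(xmit_ssk),
					       MPTCP_DELEGATE_SEND);
			goto out;
		}
		ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
	}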

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 net/mptcp/protocol.c | 35 +++++++++++++++++++++++------------
 1 file changed, 23 insertions(+), 12 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index d6aef4b13b8a..96cf1620348b 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -1634,23 +1634,34 @@ static void __mptcp_subflow_push_pending(struct sock *sk, struct sock *ssk)
 			 * check for a different subflow usage only after
 			 * spooling the first chunk of data
 			 */
-			xmit_ssk = first ? ssk : mptcp_sched_get_send(mptcp_sk(sk));
-			if (!xmit_ssk)
-				goto out;
-			if (xmit_ssk != ssk) {
-				mptcp_subflow_delegate(mptcp_subflow_ctx(xmit_ssk),
-						       MPTCP_DELEGATE_SEND);
-				goto out;
+			if (first) {
+				xmit_ssk = ssk;
+
+				if (!xmit_ssk)
+					goto out;
+				ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
+				if (ret <= 0)
+					goto out;
+				first = false;
+			} else {
+				xmit_ssk = mptcp_sched_get_send(mptcp_sk(sk));
+
+				if (!xmit_ssk)
+					goto out;
+				if (xmit_ssk != ssk) {
+					mptcp_subflow_delegate(mptcp_subflow_ctx(xmit_ssk),
+							       MPTCP_DELEGATE_SEND);
+					goto out;
+				}
+
+				ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
+				if (ret <= 0)
+					goto out;
 			}
 
-			ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
-			if (ret <= 0)
-				goto out;
-
 			info.sent += ret;
 			copied += ret;
 			len -= ret;
-			first = false;
 
 			mptcp_update_post_push(msk, dfrag, ret);
 		}
-- 
2.34.1


* [PATCH mptcp-next 03/10] Squash to "mptcp: add get_subflow wrappers"
  2022-05-20  8:04 [PATCH mptcp-next 00/10] BPF packet scheduler Geliang Tang
  2022-05-20  8:04 ` [PATCH mptcp-next 01/10] Squash to "mptcp: add struct mptcp_sched_ops" Geliang Tang
  2022-05-20  8:04 ` [PATCH mptcp-next 02/10] mptcp: reflect first flag in subflow_push_pending Geliang Tang
@ 2022-05-20  8:04 ` Geliang Tang
  2022-05-20  8:04 ` [PATCH mptcp-next 04/10] Squash to "mptcp: add bpf_mptcp_sched_ops" Geliang Tang
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-20  8:04 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch adds support for redundant subflows: the scheduler may now
mark several subflows as scheduled at once, and the data path sends each
chunk on every scheduled subflow, as shown in the sketch below.
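
Callers move from "ask the scheduler for one ssk" to "iterate every
subflow the scheduler marked". A condensed sketch of the new calling
convention (locking and error handling elided):

	if (mptcp_sched_get_send(msk))
		goto out;

	mptcp_for_each_subflow(msk, subflow) {
		if (subflow->scheduled) {
			ssk = mptcp_subflow_tcp_sock(subflow);
			ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
			/* the same data goes out on every scheduled
			 * subflow, so advance by the largest amount
			 * sent on any of them
			 */
			if (ret > max)
				max = ret;
		}
	}
	info.sent += max;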

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 net/mptcp/protocol.c | 192 +++++++++++++++++++++++++++----------------
 net/mptcp/protocol.h |   5 +-
 net/mptcp/sched.c    |  78 ++++++++++++++----
 3 files changed, 188 insertions(+), 87 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 96cf1620348b..274818480a36 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -1564,37 +1564,50 @@ void __mptcp_push_pending(struct sock *sk, unsigned int flags)
 		info.limit = dfrag->data_len;
 		len = dfrag->data_len - dfrag->already_sent;
 		while (len > 0) {
+			struct mptcp_subflow_context *subflow;
 			int ret = 0;
+			int max = 0;
+			int err;
 
-			prev_ssk = ssk;
-			ssk = mptcp_sched_get_send(msk);
-
-			/* First check. If the ssk has changed since
-			 * the last round, release prev_ssk
-			 */
-			if (ssk != prev_ssk && prev_ssk)
-				mptcp_push_release(prev_ssk, &info);
-			if (!ssk)
-				goto out;
-
-			/* Need to lock the new subflow only if different
-			 * from the previous one, otherwise we are still
-			 * helding the relevant lock
-			 */
-			if (ssk != prev_ssk)
-				lock_sock(ssk);
-
-			ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
-			if (ret <= 0) {
-				mptcp_push_release(ssk, &info);
+			err = mptcp_sched_get_send(msk);
+			if (err)
 				goto out;
+			mptcp_for_each_subflow(msk, subflow) {
+				if (subflow->scheduled) {
+					prev_ssk = ssk;
+					ssk = mptcp_subflow_tcp_sock(subflow);
+
+					/* First check. If the ssk has changed since
+					 * the last round, release prev_ssk
+					 */
+					if (ssk != prev_ssk && prev_ssk)
+						mptcp_push_release(prev_ssk, &info);
+					if (!ssk)
+						goto out;
+
+					/* Need to lock the new subflow only if different
+					 * from the previous one, otherwise we are still
+					 * helding the relevant lock
+					 */
+					if (ssk != prev_ssk)
+						lock_sock(ssk);
+
+					ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
+					if (ret <= 0) {
+						mptcp_push_release(ssk, &info);
+						goto out;
+					}
+
+					if (ret > max)
+						max = ret;
+				}
 			}
 
-			info.sent += ret;
-			copied += ret;
-			len -= ret;
+			info.sent += max;
+			copied += max;
+			len -= max;
 
-			mptcp_update_post_push(msk, dfrag, ret);
+			mptcp_update_post_push(msk, dfrag, max);
 		}
 		WRITE_ONCE(msk->first_pending, mptcp_send_next(sk));
 	}
@@ -1628,7 +1641,10 @@ static void __mptcp_subflow_push_pending(struct sock *sk, struct sock *ssk)
 		info.limit = dfrag->data_len;
 		len = dfrag->data_len - dfrag->already_sent;
 		while (len > 0) {
+			struct mptcp_subflow_context *subflow;
 			int ret = 0;
+			int max = 0;
+			int err;
 
 			/* the caller already invoked the packet scheduler,
 			 * check for a different subflow usage only after
@@ -1639,31 +1655,41 @@ static void __mptcp_subflow_push_pending(struct sock *sk, struct sock *ssk)
 
 				if (!xmit_ssk)
 					goto out;
-				ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
-				if (ret <= 0)
+				max = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
+				if (max <= 0)
 					goto out;
 				first = false;
 			} else {
-				xmit_ssk = mptcp_sched_get_send(mptcp_sk(sk));
-
-				if (!xmit_ssk)
-					goto out;
-				if (xmit_ssk != ssk) {
-					mptcp_subflow_delegate(mptcp_subflow_ctx(xmit_ssk),
-							       MPTCP_DELEGATE_SEND);
+				err = mptcp_sched_get_send(mptcp_sk(sk));
+				if (err)
 					goto out;
+				mptcp_for_each_subflow(msk, subflow) {
+					if (subflow->scheduled) {
+						xmit_ssk = mptcp_subflow_tcp_sock(subflow);
+
+						if (!xmit_ssk)
+							goto out;
+						if (xmit_ssk != ssk) {
+							mptcp_subflow_delegate(mptcp_subflow_ctx(xmit_ssk),
+									       MPTCP_DELEGATE_SEND);
+							goto out;
+						}
+
+						ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
+						if (ret <= 0)
+							goto out;
+
+						if (ret > max)
+							max = ret;
+					}
 				}
-
-				ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
-				if (ret <= 0)
-					goto out;
 			}
 
-			info.sent += ret;
-			copied += ret;
-			len -= ret;
+			info.sent += max;
+			copied += max;
+			len -= max;
 
-			mptcp_update_post_push(msk, dfrag, ret);
+			mptcp_update_post_push(msk, dfrag, max);
 		}
 		WRITE_ONCE(msk->first_pending, mptcp_send_next(sk));
 	}
@@ -2455,16 +2481,17 @@ static void mptcp_check_fastclose(struct mptcp_sock *msk)
 static void __mptcp_retrans(struct sock *sk)
 {
 	struct mptcp_sock *msk = mptcp_sk(sk);
+	struct mptcp_subflow_context *subflow;
 	struct mptcp_sendmsg_info info = {};
 	struct mptcp_data_frag *dfrag;
 	size_t copied = 0;
 	struct sock *ssk;
-	int ret;
+	int err;
 
 	mptcp_clean_una_wakeup(sk);
 
 	/* first check ssk: need to kick "stale" logic */
-	ssk = mptcp_sched_get_retrans(msk);
+	err = mptcp_sched_get_retrans(msk);
 	dfrag = mptcp_rtx_head(sk);
 	if (!dfrag) {
 		if (mptcp_data_fin_enabled(msk)) {
@@ -2483,31 +2510,45 @@ static void __mptcp_retrans(struct sock *sk)
 		goto reset_timer;
 	}
 
-	if (!ssk)
+	if (err)
 		goto reset_timer;
 
-	lock_sock(ssk);
+	mptcp_for_each_subflow(msk, subflow) {
+		if (subflow->scheduled) {
+			int ret = 0;
+			int max = 0;
 
-	/* limit retransmission to the bytes already sent on some subflows */
-	info.sent = 0;
-	info.limit = READ_ONCE(msk->csum_enabled) ? dfrag->data_len : dfrag->already_sent;
-	while (info.sent < info.limit) {
-		ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
-		if (ret <= 0)
-			break;
+			ssk = mptcp_subflow_tcp_sock(subflow);
+			if (!ssk)
+				goto reset_timer;
 
-		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RETRANSSEGS);
-		copied += ret;
-		info.sent += ret;
-	}
-	if (copied) {
-		dfrag->already_sent = max(dfrag->already_sent, info.sent);
-		tcp_push(ssk, 0, info.mss_now, tcp_sk(ssk)->nonagle,
-			 info.size_goal);
-		WRITE_ONCE(msk->allow_infinite_fallback, false);
-	}
+			lock_sock(ssk);
 
-	release_sock(ssk);
+			/* limit retransmission to the bytes already sent on some subflows */
+			info.sent = 0;
+			info.limit = READ_ONCE(msk->csum_enabled) ? dfrag->data_len : dfrag->already_sent;
+			while (info.sent < info.limit) {
+				ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
+				if (ret <= 0)
+					break;
+
+				if (ret > max)
+					max = ret;
+
+				MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RETRANSSEGS);
+				copied += ret;
+				info.sent += ret;
+			}
+			if (copied) {
+				dfrag->already_sent = max(dfrag->already_sent, info.sent);
+				tcp_push(ssk, 0, info.mss_now, tcp_sk(ssk)->nonagle,
+					 info.size_goal);
+				WRITE_ONCE(msk->allow_infinite_fallback, false);
+			}
+
+			release_sock(ssk);
+		}
+	}
 
 reset_timer:
 	mptcp_check_and_set_pending(sk);
@@ -3118,12 +3159,25 @@ void __mptcp_check_push(struct sock *sk, struct sock *ssk)
 		return;
 
 	if (!sock_owned_by_user(sk)) {
-		struct sock *xmit_ssk = mptcp_sched_get_send(mptcp_sk(sk));
+		struct mptcp_sock *msk = mptcp_sk(sk);
+		struct mptcp_subflow_context *subflow;
+		struct sock *xmit_ssk;
+		int err;
 
-		if (xmit_ssk == ssk)
-			__mptcp_subflow_push_pending(sk, ssk);
-		else if (xmit_ssk)
-			mptcp_subflow_delegate(mptcp_subflow_ctx(xmit_ssk), MPTCP_DELEGATE_SEND);
+		pr_debug("%s", __func__);
+		err = mptcp_sched_get_send(msk);
+		if (err)
+			return;
+		mptcp_for_each_subflow(msk, subflow) {
+			if (subflow->scheduled) {
+				xmit_ssk = mptcp_subflow_tcp_sock(subflow);
+
+				if (xmit_ssk == ssk)
+					__mptcp_subflow_push_pending(sk, ssk);
+				else if (xmit_ssk)
+					mptcp_subflow_delegate(mptcp_subflow_ctx(xmit_ssk), MPTCP_DELEGATE_SEND);
+			}
+		}
 	} else {
 		__set_bit(MPTCP_PUSH_PENDING, &mptcp_sk(sk)->cb_flags);
 	}
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 8739794166d8..1ce01db60b7e 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -461,6 +461,7 @@ struct mptcp_subflow_context {
 		send_mp_fail : 1,
 		send_fastclose : 1,
 		send_infinite_map : 1,
+		scheduled : 1,
 		rx_eof : 1,
 		can_ack : 1,        /* only after processing the remote a key */
 		disposable : 1,	    /* ctx can be free at ulp release time */
@@ -630,8 +631,8 @@ int mptcp_init_sched(struct mptcp_sock *msk,
 void mptcp_release_sched(struct mptcp_sock *msk);
 struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk);
 struct sock *mptcp_subflow_get_retrans(struct mptcp_sock *msk);
-struct sock *mptcp_sched_get_send(struct mptcp_sock *msk);
-struct sock *mptcp_sched_get_retrans(struct mptcp_sock *msk);
+int mptcp_sched_get_send(struct mptcp_sock *msk);
+int mptcp_sched_get_retrans(struct mptcp_sock *msk);
 
 static inline bool __mptcp_subflow_active(struct mptcp_subflow_context *subflow)
 {
diff --git a/net/mptcp/sched.c b/net/mptcp/sched.c
index 3ceb721e6489..207ab422ac5d 100644
--- a/net/mptcp/sched.c
+++ b/net/mptcp/sched.c
@@ -91,51 +91,97 @@ void mptcp_release_sched(struct mptcp_sock *msk)
 static int mptcp_sched_data_init(struct mptcp_sock *msk,
 				 struct mptcp_sched_data *data)
 {
-	data->sock = NULL;
-	data->call_again = 0;
+	data->bitmap = 0;
 
 	return 0;
 }
 
-struct sock *mptcp_sched_get_send(struct mptcp_sock *msk)
+int mptcp_sched_get_send(struct mptcp_sock *msk)
 {
+	struct mptcp_subflow_context *subflow;
 	struct mptcp_sched_data data;
+	struct sock *ssk;
 
 	sock_owned_by_me((struct sock *)msk);
 
+	mptcp_for_each_subflow(msk, subflow)
+		subflow->scheduled = 0;
+
 	/* the following check is moved out of mptcp_subflow_get_send */
 	if (__mptcp_check_fallback(msk)) {
-		if (!msk->first)
-			return NULL;
-		return sk_stream_memory_free(msk->first) ? msk->first : NULL;
+		if (msk->first && sk_stream_memory_free(msk->first)) {
+			subflow = mptcp_subflow_ctx(msk->first);
+			subflow->scheduled = 1;
+			return 0;
+		}
+		return -EINVAL;
 	}
 
-	if (!msk->sched)
-		return mptcp_subflow_get_send(msk);
+	if (!msk->sched) {
+		ssk = mptcp_subflow_get_send(msk);
+		if (!ssk)
+			goto err;
+
+		subflow = mptcp_subflow_ctx(ssk);
+		if (!subflow)
+			goto err;
+
+		subflow->scheduled = 1;
+		return 0;
+	}
 
 	mptcp_sched_data_init(msk, &data);
 	msk->sched->get_subflow(msk, false, &data);
 
-	msk->last_snd = data.sock;
-	return data.sock;
+	return 0;
+
+err:
+	if (msk->first) {
+		subflow = mptcp_subflow_ctx(msk->first);
+		subflow->scheduled = 1;
+		return 0;
+	}
+	return -EINVAL;
 }
 
-struct sock *mptcp_sched_get_retrans(struct mptcp_sock *msk)
+int mptcp_sched_get_retrans(struct mptcp_sock *msk)
 {
+	struct mptcp_subflow_context *subflow;
 	struct mptcp_sched_data data;
+	struct sock *ssk;
 
 	sock_owned_by_me((const struct sock *)msk);
 
+	mptcp_for_each_subflow(msk, subflow)
+		subflow->scheduled = 0;
+
 	/* the following check is moved out of mptcp_subflow_get_retrans */
 	if (__mptcp_check_fallback(msk))
-		return NULL;
+		goto err;
+
+	if (!msk->sched) {
+		ssk = mptcp_subflow_get_retrans(msk);
+		if (!ssk)
+			goto err;
+
+		subflow = mptcp_subflow_ctx(ssk);
+		if (!subflow)
+			goto err;
 
-	if (!msk->sched)
-		return mptcp_subflow_get_retrans(msk);
+		subflow->scheduled = 1;
+		return 0;
+	}
 
 	mptcp_sched_data_init(msk, &data);
 	msk->sched->get_subflow(msk, true, &data);
 
-	msk->last_snd = data.sock;
-	return data.sock;
+	return 0;
+
+err:
+	if (msk->first) {
+		subflow = mptcp_subflow_ctx(msk->first);
+		subflow->scheduled = 1;
+		return 0;
+	}
+	return -EINVAL;
 }
-- 
2.34.1


* [PATCH mptcp-next 04/10] Squash to "mptcp: add bpf_mptcp_sched_ops"
  2022-05-20  8:04 [PATCH mptcp-next 00/10] BPF packet scheduler Geliang Tang
                   ` (2 preceding siblings ...)
  2022-05-20  8:04 ` [PATCH mptcp-next 03/10] Squash to "mptcp: add get_subflow wrappers" Geliang Tang
@ 2022-05-20  8:04 ` Geliang Tang
  2022-05-20  8:04 ` [PATCH mptcp-next 05/10] mptcp: add subflows array in sched data Geliang Tang
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-20  8:04 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

Update the write-access check in bpf_mptcp_sched_btf_struct_access() to
cover the new bitmap field of struct mptcp_sched_data, replacing the
removed sock and call_again fields.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 net/mptcp/bpf.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/net/mptcp/bpf.c b/net/mptcp/bpf.c
index 338146d173f4..8a19ee98c3ad 100644
--- a/net/mptcp/bpf.c
+++ b/net/mptcp/bpf.c
@@ -52,11 +52,8 @@ static int bpf_mptcp_sched_btf_struct_access(struct bpf_verifier_log *log,
 	}
 
 	switch (off) {
-	case offsetof(struct mptcp_sched_data, sock):
-		end = offsetofend(struct mptcp_sched_data, sock);
-		break;
-	case offsetof(struct mptcp_sched_data, call_again):
-		end = offsetofend(struct mptcp_sched_data, call_again);
+	case offsetof(struct mptcp_sched_data, bitmap):
+		end = offsetofend(struct mptcp_sched_data, bitmap);
 		break;
 	default:
 		bpf_log(log, "no write support to mptcp_sched_data at off %d\n", off);
-- 
2.34.1


* [PATCH mptcp-next 05/10] mptcp: add subflows array in sched data
  2022-05-20  8:04 [PATCH mptcp-next 00/10] BPF packet scheduler Geliang Tang
                   ` (3 preceding siblings ...)
  2022-05-20  8:04 ` [PATCH mptcp-next 04/10] Squash to "mptcp: add bpf_mptcp_sched_ops" Geliang Tang
@ 2022-05-20  8:04 ` Geliang Tang
  2022-05-20  8:04 ` [PATCH mptcp-next 06/10] Squash to "selftests/bpf: add bpf_first scheduler" Geliang Tang
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-20  8:04 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch adds an array of subflow context pointers to struct
mptcp_sched_data. The array is populated before get_subflow() is
invoked, so BPF schedulers can access the subflow contexts from within
get_subflow().
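
A BPF scheduler can then walk the populated array and select entries by
index; a hedged sketch of a next-after-last-used lookup, mirroring the
bpf_rr program added later in this series:

	unsigned long bitmap = 0;
	int nr = 0;

	for (int i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
		if (!data->contexts[i])
			break;	/* unused slots are NULL-filled */

		if (data->contexts[i]->tcp_sock == msk->last_snd &&
		    i + 1 < MPTCP_SUBFLOWS_MAX && data->contexts[i + 1]) {
			nr = i + 1;
			break;
		}
	}
	set_bit(nr, &bitmap);
	data->bitmap = bitmap;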

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 include/net/mptcp.h                           |  2 ++
 net/mptcp/sched.c                             | 29 +++++++++++++++++++
 tools/testing/selftests/bpf/bpf_tcp_helpers.h |  7 +++++
 3 files changed, 38 insertions(+)

diff --git a/include/net/mptcp.h b/include/net/mptcp.h
index 33a44ec21701..10eb96ed2ecc 100644
--- a/include/net/mptcp.h
+++ b/include/net/mptcp.h
@@ -97,9 +97,11 @@ struct mptcp_out_options {
 };
 
 #define MPTCP_SCHED_NAME_MAX	16
+#define MPTCP_SUBFLOWS_MAX	8
 
 struct mptcp_sched_data {
 	unsigned long	bitmap;
+	struct mptcp_subflow_context *contexts[MPTCP_SUBFLOWS_MAX];
 };
 
 struct mptcp_sched_ops {
diff --git a/net/mptcp/sched.c b/net/mptcp/sched.c
index 207ab422ac5d..5e224e3a17e8 100644
--- a/net/mptcp/sched.c
+++ b/net/mptcp/sched.c
@@ -91,8 +91,22 @@ void mptcp_release_sched(struct mptcp_sock *msk)
 static int mptcp_sched_data_init(struct mptcp_sock *msk,
 				 struct mptcp_sched_data *data)
 {
+	struct mptcp_subflow_context *subflow;
+	int i = 0;
+
 	data->bitmap = 0;
 
+	mptcp_for_each_subflow(msk, subflow) {
+		if (i == MPTCP_SUBFLOWS_MAX) {
+			pr_warn_once("too many subflows");
+			break;
+		}
+		data->contexts[i++] = subflow;
+	}
+
+	for (; i < MPTCP_SUBFLOWS_MAX; i++)
+		data->contexts[i] = NULL;
+
 	return 0;
 }
 
@@ -101,6 +115,7 @@ int mptcp_sched_get_send(struct mptcp_sock *msk)
 	struct mptcp_subflow_context *subflow;
 	struct mptcp_sched_data data;
 	struct sock *ssk;
+	int i;
 
 	sock_owned_by_me((struct sock *)msk);
 
@@ -133,6 +148,12 @@ int mptcp_sched_get_send(struct mptcp_sock *msk)
 	mptcp_sched_data_init(msk, &data);
 	msk->sched->get_subflow(msk, false, &data);
 
+	for (i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
+		if (test_bit(i, &data.bitmap) && data.contexts[i]) {
+			data.contexts[i]->scheduled = 1;
+			msk->last_snd = data.contexts[i]->tcp_sock;
+		}
+	}
 	return 0;
 
 err:
@@ -149,6 +170,7 @@ int mptcp_sched_get_retrans(struct mptcp_sock *msk)
 	struct mptcp_subflow_context *subflow;
 	struct mptcp_sched_data data;
 	struct sock *ssk;
+	int i;
 
 	sock_owned_by_me((const struct sock *)msk);
 
@@ -175,6 +197,13 @@ int mptcp_sched_get_retrans(struct mptcp_sock *msk)
 	mptcp_sched_data_init(msk, &data);
 	msk->sched->get_subflow(msk, true, &data);
 
+	for (i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
+		if (test_bit(i, &data.bitmap) && data.contexts[i]) {
+			data.contexts[i]->scheduled = 1;
+			msk->last_snd = data.contexts[i]->tcp_sock;
+		}
+	}
+
 	return 0;
 
 err:
diff --git a/tools/testing/selftests/bpf/bpf_tcp_helpers.h b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
index 60c8239f95ff..4c7192cb6134 100644
--- a/tools/testing/selftests/bpf/bpf_tcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
@@ -231,9 +231,16 @@ extern __u32 tcp_slow_start(struct tcp_sock *tp, __u32 acked) __ksym;
 extern void tcp_cong_avoid_ai(struct tcp_sock *tp, __u32 w, __u32 acked) __ksym;
 
 #define MPTCP_SCHED_NAME_MAX	16
+#define MPTCP_SUBFLOWS_MAX	8
+
+struct mptcp_subflow_context {
+	__u32	token;
+	struct	sock *tcp_sock;	    /* tcp sk backpointer */
+} __attribute__((preserve_access_index));
 
 struct mptcp_sched_data {
 	unsigned long	bitmap;
+	struct mptcp_subflow_context *contexts[MPTCP_SUBFLOWS_MAX];
 };
 
 struct mptcp_sched_ops {
-- 
2.34.1


* [PATCH mptcp-next 06/10] Squash to "selftests/bpf: add bpf_first scheduler"
  2022-05-20  8:04 [PATCH mptcp-next 00/10] BPF packet scheduler Geliang Tang
                   ` (4 preceding siblings ...)
  2022-05-20  8:04 ` [PATCH mptcp-next 05/10] mptcp: add subflows array in sched data Geliang Tang
@ 2022-05-20  8:04 ` Geliang Tang
  2022-05-20  8:04 ` [PATCH mptcp-next 07/10] selftests/bpf: add bpf_rr scheduler Geliang Tang
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-20  8:04 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

Add a set_bit() helper in bpf_tcp_helpers.h and use it in the bpf_first
scheduler to mark the scheduled subflow in the bitmap, as sketched below.
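
Bit n of data->bitmap corresponds to data->contexts[n], so a scheduler
selecting several subflows simply sets several bits. A small sketch:

	unsigned long bitmap = 0;

	set_bit(0, &bitmap);	/* schedule contexts[0] */
	set_bit(1, &bitmap);	/* ... and contexts[1] */
	data->bitmap = bitmap;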

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 tools/testing/selftests/bpf/bpf_tcp_helpers.h       | 11 +++++++++++
 tools/testing/selftests/bpf/progs/mptcp_bpf_first.c |  6 ++++--
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/bpf_tcp_helpers.h b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
index 4c7192cb6134..97407c02dc48 100644
--- a/tools/testing/selftests/bpf/bpf_tcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
@@ -263,4 +263,15 @@ struct mptcp_sock {
 	char		ca_name[TCP_CA_NAME_MAX];
 } __attribute__((preserve_access_index));
 
+#define _AC(X,Y)	(X##Y)
+#define UL(x)		(_AC(x, UL))
+
+static inline void set_bit(unsigned int nr, volatile unsigned long *addr)
+{
+	unsigned long *p = ((unsigned long *)addr) + (nr / (8 * sizeof(unsigned long)));
+	unsigned long mask = UL(1) << (nr % (8 * sizeof(unsigned long)));
+
+	*p |= mask;
+}
+
 #endif
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
index fd67b5f42964..5a938249dfdd 100644
--- a/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_first.c
@@ -19,8 +19,10 @@ void BPF_PROG(mptcp_sched_first_release, const struct mptcp_sock *msk)
 void BPF_STRUCT_OPS(bpf_first_get_subflow, const struct mptcp_sock *msk,
 		    bool reinject, struct mptcp_sched_data *data)
 {
-	data->sock = msk->first;
-	data->call_again = 0;
+	unsigned long bitmap = 0;
+
+	set_bit(0, &bitmap);
+	data->bitmap = bitmap;
 }
 
 SEC(".struct_ops")
-- 
2.34.1


* [PATCH mptcp-next 07/10] selftests/bpf: add bpf_rr scheduler
  2022-05-20  8:04 [PATCH mptcp-next 00/10] BPF packet scheduler Geliang Tang
                   ` (5 preceding siblings ...)
  2022-05-20  8:04 ` [PATCH mptcp-next 06/10] Squash to "selftests/bpf: add bpf_first scheduler" Geliang Tang
@ 2022-05-20  8:04 ` Geliang Tang
  2022-05-20  8:04 ` [PATCH mptcp-next 08/10] selftests/bpf: add bpf_rr test Geliang Tang
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-20  8:04 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch implements the round-robin BPF MPTCP scheduler, named bpf_rr,
which picks the subflow after the most recently used one (msk->last_snd)
to send data. If there is no such next subflow, it falls back to the
first one: with two subflows, for example, the scheduler sets bit 1 when
last_snd matches contexts[0], and bit 0 when it matches contexts[1].

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 tools/testing/selftests/bpf/bpf_tcp_helpers.h |  1 +
 .../selftests/bpf/progs/mptcp_bpf_rr.c        | 48 +++++++++++++++++++
 2 files changed, 49 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c

diff --git a/tools/testing/selftests/bpf/bpf_tcp_helpers.h b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
index 97407c02dc48..a95eda06fe7f 100644
--- a/tools/testing/selftests/bpf/bpf_tcp_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_tcp_helpers.h
@@ -257,6 +257,7 @@ struct mptcp_sched_ops {
 struct mptcp_sock {
 	struct inet_connection_sock	sk;
 
+	struct sock	*last_snd;
 	__u32		token;
 	struct sock	*first;
 	struct mptcp_sched_ops	*sched;
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
new file mode 100644
index 000000000000..74a98f5bb06a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_rr.c
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022, SUSE. */
+
+#include <linux/bpf.h>
+#include "bpf_tcp_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+SEC("struct_ops/mptcp_sched_rr_init")
+void BPF_PROG(mptcp_sched_rr_init, const struct mptcp_sock *msk)
+{
+}
+
+SEC("struct_ops/mptcp_sched_rr_release")
+void BPF_PROG(mptcp_sched_rr_release, const struct mptcp_sock *msk)
+{
+}
+
+void BPF_STRUCT_OPS(bpf_rr_get_subflow, const struct mptcp_sock *msk,
+		    bool reinject, struct mptcp_sched_data *data)
+{
+	unsigned long bitmap = 0;
+	int nr = 0;
+
+	for (int i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
+		if (!msk->last_snd || !data->contexts[i])
+			break;
+
+		if (data->contexts[i]->tcp_sock == msk->last_snd) {
+			if (i + 1 == MPTCP_SUBFLOWS_MAX || !data->contexts[i + 1])
+				break;
+
+			nr = i + 1;
+			break;
+		}
+	}
+
+	set_bit(nr, &bitmap);
+	data->bitmap = bitmap;
+}
+
+SEC(".struct_ops")
+struct mptcp_sched_ops rr = {
+	.init		= (void *)mptcp_sched_rr_init,
+	.release	= (void *)mptcp_sched_rr_release,
+	.get_subflow	= (void *)bpf_rr_get_subflow,
+	.name		= "bpf_rr",
+};
-- 
2.34.1


* [PATCH mptcp-next 08/10] selftests/bpf: add bpf_rr test
  2022-05-20  8:04 [PATCH mptcp-next 00/10] BPF packet scheduler Geliang Tang
                   ` (6 preceding siblings ...)
  2022-05-20  8:04 ` [PATCH mptcp-next 07/10] selftests/bpf: add bpf_rr scheduler Geliang Tang
@ 2022-05-20  8:04 ` Geliang Tang
  2022-05-20  8:04 ` [PATCH mptcp-next 09/10] selftests/bpf: add bpf_red scheduler Geliang Tang
  2022-05-20  8:04 ` [PATCH mptcp-next 10/10] selftests/bpf: add bpf_red test Geliang Tang
  9 siblings, 0 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-20  8:04 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch adds a test for the round-robin BPF MPTCP scheduler. It uses
sysctl to set net.mptcp.scheduler to bpf_rr, adds a veth net device to
simulate the multiple-address case, and registers the new endpoint with
the PM netlink via the 'ip mptcp endpoint' command.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 .../testing/selftests/bpf/prog_tests/mptcp.c  | 38 +++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
index 9d37c509d3ce..afa4de991f1e 100644
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
@@ -7,6 +7,7 @@
 #include "network_helpers.h"
 #include "mptcp_sock.skel.h"
 #include "mptcp_bpf_first.skel.h"
+#include "mptcp_bpf_rr.skel.h"
 
 #ifndef TCP_CA_NAME_MAX
 #define TCP_CA_NAME_MAX	16
@@ -279,10 +280,47 @@ static void test_first(void)
 	mptcp_bpf_first__destroy(first_skel);
 }
 
+static void test_rr(void)
+{
+	struct mptcp_bpf_rr *rr_skel;
+	int server_fd, client_fd;
+	struct bpf_link *link;
+
+	rr_skel = mptcp_bpf_rr__open_and_load();
+	if (!ASSERT_OK_PTR(rr_skel, "bpf_rr__open_and_load"))
+		return;
+
+	link = bpf_map__attach_struct_ops(rr_skel->maps.rr);
+	if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops")) {
+		mptcp_bpf_rr__destroy(rr_skel);
+		return;
+	}
+
+	system("ip link add veth1 type veth");
+	system("ip addr add 10.0.1.1/24 dev veth1");
+	system("ip link set veth1 up");
+	system("ip mptcp endpoint add 10.0.1.1 subflow");
+	system("sysctl -qw net.mptcp.scheduler=bpf_rr");
+	server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
+	client_fd = connect_to_fd(server_fd, 0);
+
+	send_data(server_fd, client_fd);
+
+	close(client_fd);
+	close(server_fd);
+	system("sysctl -qw net.mptcp.scheduler=default");
+	system("ip mptcp endpoint flush");
+	system("ip link del veth1");
+	bpf_link__destroy(link);
+	mptcp_bpf_rr__destroy(rr_skel);
+}
+
 void test_mptcp(void)
 {
 	if (test__start_subtest("base"))
 		test_base();
 	if (test__start_subtest("first"))
 		test_first();
+	if (test__start_subtest("rr"))
+		test_rr();
 }
-- 
2.34.1


* [PATCH mptcp-next 09/10] selftests/bpf: add bpf_red scheduler
  2022-05-20  8:04 [PATCH mptcp-next 00/10] BPF packet scheduler Geliang Tang
                   ` (7 preceding siblings ...)
  2022-05-20  8:04 ` [PATCH mptcp-next 08/10] selftests/bpf: add bpf_rr test Geliang Tang
@ 2022-05-20  8:04 ` Geliang Tang
  2022-05-20  8:04 ` [PATCH mptcp-next 10/10] selftests/bpf: add bpf_red test Geliang Tang
  9 siblings, 0 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-20  8:04 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch implements the redundant BPF MPTCP scheduler, named bpf_red,
which sends all packets redundantly on all available subflows.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 .../selftests/bpf/progs/mptcp_bpf_red.c       | 39 +++++++++++++++++++
 1 file changed, 39 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_red.c

diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_red.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_red.c
new file mode 100644
index 000000000000..e82181d23931
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_red.c
@@ -0,0 +1,39 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022, SUSE. */
+
+#include <linux/bpf.h>
+#include "bpf_tcp_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+SEC("struct_ops/mptcp_sched_red_init")
+void BPF_PROG(mptcp_sched_red_init, const struct mptcp_sock *msk)
+{
+}
+
+SEC("struct_ops/mptcp_sched_red_release")
+void BPF_PROG(mptcp_sched_red_release, const struct mptcp_sock *msk)
+{
+}
+
+void BPF_STRUCT_OPS(bpf_red_get_subflow, const struct mptcp_sock *msk,
+		    bool reinject, struct mptcp_sched_data *data)
+{
+	unsigned long bitmap = 0;
+
+	for (int i = 0; i < MPTCP_SUBFLOWS_MAX; i++) {
+		if (!data->contexts[i])
+			break;
+
+		set_bit(i, &bitmap);
+	}
+	data->bitmap = bitmap;
+}
+
+SEC(".struct_ops")
+struct mptcp_sched_ops red = {
+	.init		= (void *)mptcp_sched_red_init,
+	.release	= (void *)mptcp_sched_red_release,
+	.get_subflow	= (void *)bpf_red_get_subflow,
+	.name		= "bpf_red",
+};
-- 
2.34.1


* [PATCH mptcp-next 10/10] selftests/bpf: add bpf_red test
  2022-05-20  8:04 [PATCH mptcp-next 00/10] BPF packet scheduler Geliang Tang
                   ` (8 preceding siblings ...)
  2022-05-20  8:04 ` [PATCH mptcp-next 09/10] selftests/bpf: add bpf_red scheduler Geliang Tang
@ 2022-05-20  8:04 ` Geliang Tang
  2022-05-20  8:17   ` selftests/bpf: add bpf_red test: Build Failure MPTCP CI
  2022-05-20  9:52   ` selftests/bpf: add bpf_red test: Tests Results MPTCP CI
  9 siblings, 2 replies; 13+ messages in thread
From: Geliang Tang @ 2022-05-20  8:04 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

This patch adds a test for the redundant BPF MPTCP scheduler. It uses
sysctl to set net.mptcp.scheduler to bpf_red.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 .../testing/selftests/bpf/prog_tests/mptcp.c  | 38 +++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
index afa4de991f1e..f9feddace825 100644
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
@@ -8,6 +8,7 @@
 #include "mptcp_sock.skel.h"
 #include "mptcp_bpf_first.skel.h"
 #include "mptcp_bpf_rr.skel.h"
+#include "mptcp_bpf_red.skel.h"
 
 #ifndef TCP_CA_NAME_MAX
 #define TCP_CA_NAME_MAX	16
@@ -315,6 +316,41 @@ static void test_rr(void)
 	mptcp_bpf_rr__destroy(rr_skel);
 }
 
+static void test_red(void)
+{
+	struct mptcp_bpf_red *red_skel;
+	int server_fd, client_fd;
+	struct bpf_link *link;
+
+	red_skel = mptcp_bpf_red__open_and_load();
+	if (!ASSERT_OK_PTR(red_skel, "bpf_red__open_and_load"))
+		return;
+
+	link = bpf_map__attach_struct_ops(red_skel->maps.red);
+	if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops")) {
+		mptcp_bpf_red__destroy(red_skel);
+		return;
+	}
+
+	system("ip link add veth1 type veth");
+	system("ip addr add 10.0.1.1/24 dev veth1");
+	system("ip link set veth1 up");
+	system("ip mptcp endpoint add 10.0.1.1 subflow");
+	system("sysctl -qw net.mptcp.scheduler=bpf_red");
+	server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
+	client_fd = connect_to_fd(server_fd, 0);
+
+	send_data(server_fd, client_fd);
+
+	close(client_fd);
+	close(server_fd);
+	system("sysctl -qw net.mptcp.scheduler=default");
+	system("ip mptcp endpoint flush");
+	system("ip link del veth1");
+	bpf_link__destroy(link);
+	mptcp_bpf_red__destroy(red_skel);
+}
+
 void test_mptcp(void)
 {
 	if (test__start_subtest("base"))
@@ -323,4 +359,6 @@ void test_mptcp(void)
 		test_first();
 	if (test__start_subtest("rr"))
 		test_rr();
+	if (test__start_subtest("red"))
+		test_red();
 }
-- 
2.34.1


* Re: selftests/bpf: add bpf_red test: Build Failure
  2022-05-20  8:04 ` [PATCH mptcp-next 10/10] selftests/bpf: add bpf_red test Geliang Tang
@ 2022-05-20  8:17   ` MPTCP CI
  2022-05-20  9:52   ` selftests/bpf: add bpf_red test: Tests Results MPTCP CI
  1 sibling, 0 replies; 13+ messages in thread
From: MPTCP CI @ 2022-05-20  8:17 UTC (permalink / raw)
  To: Geliang Tang; +Cc: mptcp

Hi Geliang,

Thank you for your modifications, that's great!

But sadly, our CI spotted some issues with it when trying to build it.

You can find more details there:

  https://patchwork.kernel.org/project/mptcp/patch/d232fd9eb44b9e665bfb03dcda7863f428cad7d8.1653033459.git.geliang.tang@suse.com/
  https://github.com/multipath-tcp/mptcp_net-next/actions/runs/2356992779

Status: failure
Initiator: MPTCPimporter
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/806258f333e3

Feel free to reply to this email if you cannot access logs, if you need
some support to fix the error, if this doesn't seem to be caused by your
modifications or if the error is a false positive one.

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)

* Re: selftests/bpf: add bpf_red test: Tests Results
  2022-05-20  8:04 ` [PATCH mptcp-next 10/10] selftests/bpf: add bpf_red test Geliang Tang
  2022-05-20  8:17   ` selftests/bpf: add bpf_red test: Build Failure MPTCP CI
@ 2022-05-20  9:52   ` MPTCP CI
  1 sibling, 0 replies; 13+ messages in thread
From: MPTCP CI @ 2022-05-20  9:52 UTC (permalink / raw)
  To: Geliang Tang; +Cc: mptcp

Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Unstable: 2 failed test(s): packetdrill_sockopts selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/5160997752143872
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5160997752143872/summary/summary.txt

- KVM Validation: debug:
  - Unstable: 2 failed test(s): selftest_diag selftest_mptcp_join - Critical: 11 Call Trace(s) - Critical: Global Timeout ❌:
  - Task: https://cirrus-ci.com/task/6286897658986496
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6286897658986496/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/806258f333e3


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)
