All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH mptcp-next v10 0/6] mptcp: add statistics for mptcp socket in use
@ 2022-12-19 10:23 menglong8.dong
  2022-12-19 10:23 ` [PATCH mptcp-next v10 1/6] mptcp: introduce 'sk' to replace 'sock->sk' in mptcp_listen() menglong8.dong
                   ` (7 more replies)
  0 siblings, 8 replies; 22+ messages in thread
From: menglong8.dong @ 2022-12-19 10:23 UTC (permalink / raw)
  To: pabeni, mathew.j.martineau, matthieu.baerts; +Cc: mptcp, Menglong Dong

From: Menglong Dong <imagedong@tencent.com>

In the first patch, we do some code cleanup, replacing 'sock->sk' with
'sk' in mptcp_listen(). In the second patch, we init the 'sk_prot' field
with tcp_prot in build_msk(). In the third patch, we rename 'sk' to
'ssk' in mptcp_token_new_connect(). In the fourth patch, we add
statistics for MPTCP sockets in use. In the fifth patch, we make
mptcp_connect exit on 'SIGUSR1' when the '-r' flag is set. And in the
sixth patch, we add the selftest for this feature.

With commit e8695e504942 ("mptcp: don't orphan ssk in mptcp_close()"),
I believe the diag.sh tests can pass now. In the fallback and
simultaneous close cases, the msk sometimes could not be released
normally without that commit, which made the tests fail. Anyway, let's
see what the CI says.

Changes since v9:
- rename 'sk' to 'ssk' in the kernel-doc comment of mptcp_token_new_connect()

Changes since v8:
- fix the panic caused by the mptcp token KUNIT tests by initializing
  sk->sk_prot in build_msk()
- check the state of the msk instead of the ssk for listening sockets

Changes since v7:
- remove the MPTCP_INUSE flag and do the statistics based on the
  creation and destruction of the token in the second patch

Changes since v6:
- check that all processes have exited in flush_pids() in the fourth patch

Changes since v5:
- introduce the MPTCP_INUSE flag to store whether the msk is in use, as
  I found that it's not correct to check whether an msk is in use via
  !sk_unhashed(sk) in mptcp_destroy_common(), because the token
  can be released in mptcp_check_fastclose()
- add the third patch
- reuse __chk_nr in the fourth patch

Changes since v4:
- rebase to solve merge conflict

Changes since v3:
- rename MPTCP_DESTROIED to MPTCP_DESTROYED in the second patch

Changes since v2:
- add testing

Changes since v1:
- split the code cleanup into the first patch
- decrease the in-use statistics for listening mptcp sockets with
  mptcp_listen_inuse_dec()
- add the MPTCP_DESTROIED flag to store whether mptcp_destroy_common()
  was called on the msk. For the fallback case, we need to decrease the
  statistics only once, but mptcp_destroy_common() can be called
  more than once.

Menglong Dong (6):
  mptcp: introduce 'sk' to replace 'sock->sk' in mptcp_listen()
  mptcp: init sk->sk_prot in build_msk()
  mptcp: rename 'sk' to 'ssk' in mptcp_token_new_connect()
  mptcp: add statistics for mptcp socket in use
  selftest: mptcp: exit from copyfd_io_poll() when receive SIGUSR1
  selftest: mptcp: add test for mptcp socket in use

 net/mptcp/protocol.c                          | 25 ++++++---
 net/mptcp/protocol.h                          |  2 +-
 net/mptcp/token.c                             | 14 +++--
 net/mptcp/token_test.c                        |  3 +
 tools/testing/selftests/net/mptcp/diag.sh     | 56 +++++++++++++++++--
 .../selftests/net/mptcp/mptcp_connect.c       |  4 +-
 6 files changed, 85 insertions(+), 19 deletions(-)

-- 
2.37.2


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH mptcp-next v10 1/6] mptcp: introduce 'sk' to replace 'sock->sk' in mptcp_listen()
  2022-12-19 10:23 [PATCH mptcp-next v10 0/6] mptcp: add statistics for mptcp socket in use menglong8.dong
@ 2022-12-19 10:23 ` menglong8.dong
  2022-12-19 10:23 ` [PATCH mptcp-next v10 2/6] mptcp: init sk->sk_prot in build_msk() menglong8.dong
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 22+ messages in thread
From: menglong8.dong @ 2022-12-19 10:23 UTC (permalink / raw)
  To: pabeni, mathew.j.martineau, matthieu.baerts; +Cc: mptcp, Menglong Dong

From: Menglong Dong <imagedong@tencent.com>

'sock->sk' is used frequently in mptcp_listen(). Introduce a local
variable 'sk' and replace 'sock->sk' with it.

Signed-off-by: Menglong Dong <imagedong@tencent.com>
---
 net/mptcp/protocol.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 5e813580c394..3de45444557d 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -3655,12 +3655,13 @@ static int mptcp_stream_connect(struct socket *sock, struct sockaddr *uaddr,
 static int mptcp_listen(struct socket *sock, int backlog)
 {
 	struct mptcp_sock *msk = mptcp_sk(sock->sk);
+	struct sock *sk = sock->sk;
 	struct socket *ssock;
 	int err;
 
 	pr_debug("msk=%p", msk);
 
-	lock_sock(sock->sk);
+	lock_sock(sk);
 	ssock = __mptcp_nmpc_socket(msk);
 	if (!ssock) {
 		err = -EINVAL;
@@ -3668,18 +3669,18 @@ static int mptcp_listen(struct socket *sock, int backlog)
 	}
 
 	mptcp_token_destroy(msk);
-	inet_sk_state_store(sock->sk, TCP_LISTEN);
-	sock_set_flag(sock->sk, SOCK_RCU_FREE);
+	inet_sk_state_store(sk, TCP_LISTEN);
+	sock_set_flag(sk, SOCK_RCU_FREE);
 
 	err = ssock->ops->listen(ssock, backlog);
-	inet_sk_state_store(sock->sk, inet_sk_state_load(ssock->sk));
+	inet_sk_state_store(sk, inet_sk_state_load(ssock->sk));
 	if (!err)
-		mptcp_copy_inaddrs(sock->sk, ssock->sk);
+		mptcp_copy_inaddrs(sk, ssock->sk);
 
 	mptcp_event_pm_listener(ssock->sk, MPTCP_EVENT_LISTENER_CREATED);
 
 unlock:
-	release_sock(sock->sk);
+	release_sock(sk);
 	return err;
 }
 
-- 
2.37.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH mptcp-next v10 2/6] mptcp: init sk->sk_prot in build_msk()
  2022-12-19 10:23 [PATCH mptcp-next v10 0/6] mptcp: add statistics for mptcp socket in use menglong8.dong
  2022-12-19 10:23 ` [PATCH mptcp-next v10 1/6] mptcp: introduce 'sk' to replace 'sock->sk' in mptcp_listen() menglong8.dong
@ 2022-12-19 10:23 ` menglong8.dong
  2022-12-19 10:23 ` [PATCH mptcp-next v10 3/6] mptcp: rename 'sk' to 'ssk' in mptcp_token_new_connect() menglong8.dong
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 22+ messages in thread
From: menglong8.dong @ 2022-12-19 10:23 UTC (permalink / raw)
  To: pabeni, mathew.j.martineau, matthieu.baerts; +Cc: mptcp, Menglong Dong

From: Menglong Dong <imagedong@tencent.com>

The 'sk_prot' field of the sockets created by the token KUNIT
self-tests will be dereferenced in mptcp_token_new_connect(). Therefore,
initialize it with tcp_prot.

Signed-off-by: Menglong Dong <imagedong@tencent.com>
---
 net/mptcp/token_test.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/net/mptcp/token_test.c b/net/mptcp/token_test.c
index 5d984bec1cd8..0758865ab658 100644
--- a/net/mptcp/token_test.c
+++ b/net/mptcp/token_test.c
@@ -57,6 +57,9 @@ static struct mptcp_sock *build_msk(struct kunit *test)
 	KUNIT_EXPECT_NOT_ERR_OR_NULL(test, msk);
 	refcount_set(&((struct sock *)msk)->sk_refcnt, 1);
 	sock_net_set((struct sock *)msk, &init_net);
+
+	/* be sure the token helpers can dereference sk->sk_prot */
+	((struct sock *)msk)->sk_prot = &tcp_prot;
 	return msk;
 }
 
-- 
2.37.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH mptcp-next v10 3/6] mptcp: rename 'sk' to 'ssk' in mptcp_token_new_connect()
  2022-12-19 10:23 [PATCH mptcp-next v10 0/6] mptcp: add statistics for mptcp socket in use menglong8.dong
  2022-12-19 10:23 ` [PATCH mptcp-next v10 1/6] mptcp: introduce 'sk' to replace 'sock->sk' in mptcp_listen() menglong8.dong
  2022-12-19 10:23 ` [PATCH mptcp-next v10 2/6] mptcp: init sk->sk_prot in build_msk() menglong8.dong
@ 2022-12-19 10:23 ` menglong8.dong
  2022-12-19 10:23 ` [PATCH mptcp-next v10 4/6] mptcp: add statistics for mptcp socket in use menglong8.dong
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 22+ messages in thread
From: menglong8.dong @ 2022-12-19 10:23 UTC (permalink / raw)
  To: pabeni, mathew.j.martineau, matthieu.baerts; +Cc: mptcp, Menglong Dong

From: Menglong Dong <imagedong@tencent.com>

'ssk' is a more appropriate name for the first argument of
mptcp_token_new_connect(), as it is a subflow socket.

Signed-off-by: Menglong Dong <imagedong@tencent.com>
---
 net/mptcp/protocol.h | 2 +-
 net/mptcp/token.c    | 8 ++++----
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 3cd3270407b0..de7d901a79c6 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -766,7 +766,7 @@ static inline void mptcp_token_init_request(struct request_sock *req)
 
 int mptcp_token_new_request(struct request_sock *req);
 void mptcp_token_destroy_request(struct request_sock *req);
-int mptcp_token_new_connect(struct sock *sk);
+int mptcp_token_new_connect(struct sock *ssk);
 void mptcp_token_accept(struct mptcp_subflow_request_sock *r,
 			struct mptcp_sock *msk);
 bool mptcp_token_exists(u32 token);
diff --git a/net/mptcp/token.c b/net/mptcp/token.c
index 65430f314a68..3af502a374bc 100644
--- a/net/mptcp/token.c
+++ b/net/mptcp/token.c
@@ -134,7 +134,7 @@ int mptcp_token_new_request(struct request_sock *req)
 
 /**
  * mptcp_token_new_connect - create new key/idsn/token for subflow
- * @sk: the socket that will initiate a connection
+ * @ssk: the socket that will initiate a connection
  *
  * This function is called when a new outgoing mptcp connection is
  * initiated.
@@ -148,9 +148,9 @@ int mptcp_token_new_request(struct request_sock *req)
  *
  * returns 0 on success.
  */
-int mptcp_token_new_connect(struct sock *sk)
+int mptcp_token_new_connect(struct sock *ssk)
 {
-	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
+	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
 	struct mptcp_sock *msk = mptcp_sk(subflow->conn);
 	int retries = MPTCP_TOKEN_MAX_RETRIES;
 	struct token_bucket *bucket;
@@ -169,7 +169,7 @@ int mptcp_token_new_connect(struct sock *sk)
 	}
 
 	pr_debug("ssk=%p, local_key=%llu, token=%u, idsn=%llu\n",
-		 sk, subflow->local_key, subflow->token, subflow->idsn);
+		 ssk, subflow->local_key, subflow->token, subflow->idsn);
 
 	WRITE_ONCE(msk->token, subflow->token);
 	__sk_nulls_add_node_rcu((struct sock *)msk, &bucket->msk_chain);
-- 
2.37.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH mptcp-next v10 4/6] mptcp: add statistics for mptcp socket in use
  2022-12-19 10:23 [PATCH mptcp-next v10 0/6] mptcp: add statistics for mptcp socket in use menglong8.dong
                   ` (2 preceding siblings ...)
  2022-12-19 10:23 ` [PATCH mptcp-next v10 3/6] mptcp: rename 'sk' to 'ssk' in mptcp_token_new_connect() menglong8.dong
@ 2022-12-19 10:23 ` menglong8.dong
  2022-12-19 10:23 ` [PATCH mptcp-next v10 5/6] selftest: mptcp: exit from copyfd_io_poll() when receive SIGUSR1 menglong8.dong
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 22+ messages in thread
From: menglong8.dong @ 2022-12-19 10:23 UTC (permalink / raw)
  To: pabeni, mathew.j.martineau, matthieu.baerts; +Cc: mptcp, Menglong Dong

From: Menglong Dong <imagedong@tencent.com>

Account MPTCP sockets in use via sock_prot_inuse_add(), so that the
count of MPTCP sockets in use can be read from /proc/net/protocols:

$ cat /proc/net/protocols
protocol  size sockets  memory press maxhdr  slab module     cl co di ac io in de sh ss gs se re sp bi br ha uh gp em
MPTCPv6   2048      0       0   no       0   yes  kernel      y  n  y  y  y  y  y  y  y  y  y  y  n  n  n  y  y  y  n
MPTCP     1896      1       0   no       0   yes  kernel      y  n  y  y  y  y  y  y  y  y  y  y  n  n  n  y  y  y  n
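For illustration, the in-use counter can be extracted from that output
with a short awk filter, the same approach the diag.sh selftest takes;
the sample below is a fabricated copy of the output above, not live
kernel data:

```shell
# Fabricated sample of /proc/net/protocols (copied from the commit
# message); the third column, "sockets", is the in-use counter that
# this patch updates.
sample='protocol  size sockets  memory press maxhdr  slab module
MPTCPv6   2048      0       0   no       0   yes  kernel
MPTCP     1896      1       0   no       0   yes  kernel'

# Select the MPTCP (not MPTCPv6) row and print its "sockets" column;
# the anchored regex keeps MPTCPv6 from matching.
inuse=$(printf '%s\n' "$sample" | awk '$1 ~ /^MPTCP$/ { print $3 }')
echo "$inuse"
```

A real check would replace the here-string with
`ip netns exec $ns cat /proc/net/protocols`, as the selftest does.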

Signed-off-by: Menglong Dong <imagedong@tencent.com>
---
v9:
- check the state of msk instead of ssk for listening socket

v8:
- remove the MPTCP_INUSE flag and do the statistics according to the
  creation and destruction of the token

v6:
- introduce the 'MPTCP_INUSE' flag and check if msk is in use by it

v5:
- rebase to solve merge conflict

v4:
- rename MPTCP_DESTROIED to MPTCP_DESTROYED

v2:
- decrease the in-use statistics for listening mptcp sockets with
  mptcp_listen_inuse_dec()
- add the MPTCP_DESTROIED flag to store whether mptcp_destroy_common()
  was called on the msk. For the fallback case, we need to decrease the
  statistics only once, but mptcp_destroy_common() can be called more
  than once.
---
 net/mptcp/protocol.c | 12 +++++++++++-
 net/mptcp/token.c    |  6 ++++++
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 3de45444557d..44d66bd0f51e 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2907,6 +2907,12 @@ static __poll_t mptcp_check_readable(struct mptcp_sock *msk)
 	return EPOLLIN | EPOLLRDNORM;
 }
 
+static void mptcp_listen_inuse_dec(struct sock *sk)
+{
+	if (inet_sk_state_load(sk) == TCP_LISTEN)
+		sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
+}
+
 bool __mptcp_close(struct sock *sk, long timeout)
 {
 	struct mptcp_subflow_context *subflow;
@@ -2916,6 +2922,7 @@ bool __mptcp_close(struct sock *sk, long timeout)
 	sk->sk_shutdown = SHUTDOWN_MASK;
 
 	if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE)) {
+		mptcp_listen_inuse_dec(sk);
 		inet_sk_state_store(sk, TCP_CLOSE);
 		goto cleanup;
 	}
@@ -3016,6 +3023,7 @@ static int mptcp_disconnect(struct sock *sk, int flags)
 	if (msk->fastopening)
 		return 0;
 
+	mptcp_listen_inuse_dec(sk);
 	inet_sk_state_store(sk, TCP_CLOSE);
 
 	mptcp_stop_timer(sk);
@@ -3674,8 +3682,10 @@ static int mptcp_listen(struct socket *sock, int backlog)
 
 	err = ssock->ops->listen(ssock, backlog);
 	inet_sk_state_store(sk, inet_sk_state_load(ssock->sk));
-	if (!err)
+	if (!err) {
+		sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
 		mptcp_copy_inaddrs(sk, ssock->sk);
+	}
 
 	mptcp_event_pm_listener(ssock->sk, MPTCP_EVENT_LISTENER_CREATED);
 
diff --git a/net/mptcp/token.c b/net/mptcp/token.c
index 3af502a374bc..5bb924534387 100644
--- a/net/mptcp/token.c
+++ b/net/mptcp/token.c
@@ -153,6 +153,7 @@ int mptcp_token_new_connect(struct sock *ssk)
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
 	struct mptcp_sock *msk = mptcp_sk(subflow->conn);
 	int retries = MPTCP_TOKEN_MAX_RETRIES;
+	struct sock *sk = subflow->conn;
 	struct token_bucket *bucket;
 
 again:
@@ -175,6 +176,7 @@ int mptcp_token_new_connect(struct sock *ssk)
 	__sk_nulls_add_node_rcu((struct sock *)msk, &bucket->msk_chain);
 	bucket->chain_len++;
 	spin_unlock_bh(&bucket->lock);
+	sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
 	return 0;
 }
 
@@ -190,8 +192,10 @@ void mptcp_token_accept(struct mptcp_subflow_request_sock *req,
 			struct mptcp_sock *msk)
 {
 	struct mptcp_subflow_request_sock *pos;
+	struct sock *sk = (struct sock *)msk;
 	struct token_bucket *bucket;
 
+	sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
 	bucket = token_bucket(req->token);
 	spin_lock_bh(&bucket->lock);
 
@@ -370,12 +374,14 @@ void mptcp_token_destroy_request(struct request_sock *req)
  */
 void mptcp_token_destroy(struct mptcp_sock *msk)
 {
+	struct sock *sk = (struct sock *)msk;
 	struct token_bucket *bucket;
 	struct mptcp_sock *pos;
 
 	if (sk_unhashed((struct sock *)msk))
 		return;
 
+	sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
 	bucket = token_bucket(msk->token);
 	spin_lock_bh(&bucket->lock);
 	pos = __token_lookup_msk(bucket, msk->token);
-- 
2.37.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH mptcp-next v10 5/6] selftest: mptcp: exit from copyfd_io_poll() when receive SIGUSR1
  2022-12-19 10:23 [PATCH mptcp-next v10 0/6] mptcp: add statistics for mptcp socket in use menglong8.dong
                   ` (3 preceding siblings ...)
  2022-12-19 10:23 ` [PATCH mptcp-next v10 4/6] mptcp: add statistics for mptcp socket in use menglong8.dong
@ 2022-12-19 10:23 ` menglong8.dong
  2022-12-19 10:23 ` [PATCH mptcp-next v10 6/6] selftest: mptcp: add test for mptcp socket in use menglong8.dong
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 22+ messages in thread
From: menglong8.dong @ 2022-12-19 10:23 UTC (permalink / raw)
  To: pabeni, mathew.j.martineau, matthieu.baerts; +Cc: mptcp, Menglong Dong

From: Menglong Dong <imagedong@tencent.com>

Currently, mptcp_connect won't exit after receiving the 'SIGUSR1'
signal when '-r' is set. Fix this by skipping the poll and sleep in
copyfd_io_poll() when 'quit' is set.

Signed-off-by: Menglong Dong <imagedong@tencent.com>
---
 tools/testing/selftests/net/mptcp/mptcp_connect.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.c b/tools/testing/selftests/net/mptcp/mptcp_connect.c
index 8a8266957bc5..b25a31445ded 100644
--- a/tools/testing/selftests/net/mptcp/mptcp_connect.c
+++ b/tools/testing/selftests/net/mptcp/mptcp_connect.c
@@ -627,7 +627,7 @@ static int copyfd_io_poll(int infd, int peerfd, int outfd,
 		char rbuf[8192];
 		ssize_t len;
 
-		if (fds.events == 0)
+		if (fds.events == 0 || quit)
 			break;
 
 		switch (poll(&fds, 1, poll_timeout)) {
@@ -733,7 +733,7 @@ static int copyfd_io_poll(int infd, int peerfd, int outfd,
 	}
 
 	/* leave some time for late join/announce */
-	if (cfg_remove)
+	if (cfg_remove && !quit)
 		usleep(cfg_wait);
 
 	return 0;
-- 
2.37.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH mptcp-next v10 6/6] selftest: mptcp: add test for mptcp socket in use
  2022-12-19 10:23 [PATCH mptcp-next v10 0/6] mptcp: add statistics for mptcp socket in use menglong8.dong
                   ` (4 preceding siblings ...)
  2022-12-19 10:23 ` [PATCH mptcp-next v10 5/6] selftest: mptcp: exit from copyfd_io_poll() when receive SIGUSR1 menglong8.dong
@ 2022-12-19 10:23 ` menglong8.dong
  2022-12-19 12:01   ` selftest: mptcp: add test for mptcp socket in use: Tests Results MPTCP CI
  2022-12-19 13:50   ` MPTCP CI
  2022-12-19 12:44 ` [PATCH mptcp-next v10 0/6] mptcp: add statistics for mptcp socket in use Paolo Abeni
  2022-12-20 12:13 ` Matthieu Baerts
  7 siblings, 2 replies; 22+ messages in thread
From: menglong8.dong @ 2022-12-19 10:23 UTC (permalink / raw)
  To: pabeni, mathew.j.martineau, matthieu.baerts; +Cc: mptcp, Menglong Dong

From: Menglong Dong <imagedong@tencent.com>

Add the function chk_msk_inuse() to diag.sh, which checks the in-use
statistics of MPTCP sockets. As listening MPTCP sockets are closed at a
random time after 'accept', we need to get the count of listening MPTCP
sockets through the 'ss' command.
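The listening-socket adjustment can be sketched as below; the `ss`
output here is fabricated for illustration (a real run would use
`ss -N "${ns}" -Ml` inside the test namespace, as the patch does):

```shell
# Fabricated 'ss -Ml' output with two listening MPTCP sockets.
ss_output='State  Recv-Q Send-Q Local Address:Port  Peer Address:Port
LISTEN 0      20     127.0.0.1:10000     0.0.0.0:*
LISTEN 0      20     127.0.0.1:10001     0.0.0.0:*'

# chk_msk_inuse() adds the current number of listeners to the expected
# in-use count, since listeners may still be open when the check runs.
expected=2
listen_nr=$(printf '%s\n' "$ss_output" | grep -c LISTEN)
expected=$((expected + listen_nr))
echo "$expected"
```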

All tests pass.

Signed-off-by: Menglong Dong <imagedong@tencent.com>
---
v7:
- check all processes exit in flush_pids()
---
 tools/testing/selftests/net/mptcp/diag.sh | 56 +++++++++++++++++++++--
 1 file changed, 51 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/net/mptcp/diag.sh b/tools/testing/selftests/net/mptcp/diag.sh
index 24bcd7b9bdb2..ef628b16fe9b 100755
--- a/tools/testing/selftests/net/mptcp/diag.sh
+++ b/tools/testing/selftests/net/mptcp/diag.sh
@@ -17,6 +17,11 @@ flush_pids()
 	sleep 1.1
 
 	ip netns pids "${ns}" | xargs --no-run-if-empty kill -SIGUSR1 &>/dev/null
+
+	for _ in $(seq 10); do
+		[ -z "$(ip netns pids "${ns}")" ] && break
+		sleep 0.1
+	done
 }
 
 cleanup()
@@ -37,15 +42,20 @@ if [ $? -ne 0 ];then
 	exit $ksft_skip
 fi
 
+get_msk_inuse()
+{
+	ip netns exec $ns cat /proc/net/protocols | awk '$1~/^MPTCP$/{print $3}'
+}
+
 __chk_nr()
 {
-	local condition="$1"
+	local command="$1"
 	local expected=$2
 	local msg nr
 
 	shift 2
 	msg=$*
-	nr=$(ss -inmHMN $ns | $condition)
+	nr=$(eval $command)
 
 	printf "%-50s" "$msg"
 	if [ $nr != $expected ]; then
@@ -57,9 +67,17 @@ __chk_nr()
 	test_cnt=$((test_cnt+1))
 }
 
+__chk_msk_nr()
+{
+	local condition=$1
+	shift 1
+
+	__chk_nr "ss -inmHMN $ns | $condition" $*
+}
+
 chk_msk_nr()
 {
-	__chk_nr "grep -c token:" $*
+	__chk_msk_nr "grep -c token:" $*
 }
 
 wait_msk_nr()
@@ -97,12 +115,12 @@ wait_msk_nr()
 
 chk_msk_fallback_nr()
 {
-		__chk_nr "grep -c fallback" $*
+		__chk_msk_nr "grep -c fallback" $*
 }
 
 chk_msk_remote_key_nr()
 {
-		__chk_nr "grep -c remote_key" $*
+		__chk_msk_nr "grep -c remote_key" $*
 }
 
 __chk_listen()
@@ -142,6 +160,26 @@ chk_msk_listen()
 	nr=$(ss -Ml $filter | wc -l)
 }
 
+chk_msk_inuse()
+{
+	local expected=$1
+	local listen_nr
+
+	shift 1
+
+	listen_nr=$(ss -N "${ns}" -Ml | grep -c LISTEN)
+	expected=$((expected + listen_nr))
+
+	for _ in $(seq 10); do
+		if [ $(get_msk_inuse) -eq $expected ];then
+			break
+		fi
+		sleep 0.1
+	done
+
+	__chk_nr get_msk_inuse $expected $*
+}
+
 # $1: ns, $2: port
 wait_local_port_listen()
 {
@@ -195,8 +233,10 @@ wait_connected $ns 10000
 chk_msk_nr 2 "after MPC handshake "
 chk_msk_remote_key_nr 2 "....chk remote_key"
 chk_msk_fallback_nr 0 "....chk no fallback"
+chk_msk_inuse 2 "....chk 2 msk in use"
 flush_pids
 
+chk_msk_inuse 0 "....chk 0 msk in use after flush"
 
 echo "a" | \
 	timeout ${timeout_test} \
@@ -211,8 +251,11 @@ echo "b" | \
 				127.0.0.1 >/dev/null &
 wait_connected $ns 10001
 chk_msk_fallback_nr 1 "check fallback"
+chk_msk_inuse 1 "....chk 1 msk in use"
 flush_pids
 
+chk_msk_inuse 0 "....chk 0 msk in use after flush"
+
 NR_CLIENTS=100
 for I in `seq 1 $NR_CLIENTS`; do
 	echo "a" | \
@@ -232,6 +275,9 @@ for I in `seq 1 $NR_CLIENTS`; do
 done
 
 wait_msk_nr $((NR_CLIENTS*2)) "many msk socket present"
+chk_msk_inuse $((NR_CLIENTS*2)) "....chk many msk in use"
 flush_pids
 
+chk_msk_inuse 0 "....chk 0 msk in use after flush"
+
 exit $ret
-- 
2.37.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: selftest: mptcp: add test for mptcp socket in use: Tests Results
  2022-12-19 10:23 ` [PATCH mptcp-next v10 6/6] selftest: mptcp: add test for mptcp socket in use menglong8.dong
@ 2022-12-19 12:01   ` MPTCP CI
  2022-12-19 13:50   ` MPTCP CI
  1 sibling, 0 replies; 22+ messages in thread
From: MPTCP CI @ 2022-12-19 12:01 UTC (permalink / raw)
  To: Menglong Dong; +Cc: mptcp

Hi Menglong,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/5688095572492288
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5688095572492288/summary/summary.txt

- KVM Validation: normal (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/5125145619070976
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5125145619070976/summary/summary.txt

- KVM Validation: debug (only selftest_mptcp_join):
  - Unstable: 1 failed test(s): selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/5447656726593536
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5447656726593536/summary/summary.txt

- KVM Validation: debug (except selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6251045525913600
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6251045525913600/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/c88d4cb39174


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH mptcp-next v10 0/6] mptcp: add statistics for mptcp socket in use
  2022-12-19 10:23 [PATCH mptcp-next v10 0/6] mptcp: add statistics for mptcp socket in use menglong8.dong
                   ` (5 preceding siblings ...)
  2022-12-19 10:23 ` [PATCH mptcp-next v10 6/6] selftest: mptcp: add test for mptcp socket in use menglong8.dong
@ 2022-12-19 12:44 ` Paolo Abeni
  2022-12-20 12:13 ` Matthieu Baerts
  7 siblings, 0 replies; 22+ messages in thread
From: Paolo Abeni @ 2022-12-19 12:44 UTC (permalink / raw)
  To: menglong8.dong, mathew.j.martineau, matthieu.baerts; +Cc: mptcp, Menglong Dong

On Mon, 2022-12-19 at 18:23 +0800, menglong8.dong@gmail.com wrote:
> From: Menglong Dong <imagedong@tencent.com>
> 
> In the first patch, we do some code cleanup, replacing 'sock->sk' with
> 'sk' in mptcp_listen(). In the second patch, we init the 'sk_prot' field
> with tcp_prot in build_msk(). In the third patch, we rename 'sk' to
> 'ssk' in mptcp_token_new_connect(). In the fourth patch, we add
> statistics for MPTCP sockets in use. In the fifth patch, we make
> mptcp_connect exit on 'SIGUSR1' when the '-r' flag is set. And in the
> sixth patch, we add the selftest for this feature.
> 
> With commit e8695e504942 ("mptcp: don't orphan ssk in mptcp_close()"),
> I believe the diag.sh tests can pass now. In the fallback and
> simultaneous close cases, the msk sometimes could not be released
> normally without that commit, which made the tests fail. Anyway, let's
> see what the CI says.
> 
> Changes since v9:
> - rename 'sk' to 'ssk' in the kernel-doc comment of mptcp_token_new_connect()
> 
> Changes since v8:
> - fix the panic caused by the mptcp token KUNIT tests by initializing
>   sk->sk_prot in build_msk()
> - check the state of the msk instead of the ssk for listening sockets
> 
> Changes since v7:
> - remove the MPTCP_INUSE flag and do the statistics based on the
>   creation and destruction of the token in the second patch
> 
> Changes since v6:
> - check that all processes have exited in flush_pids() in the fourth patch
> 
> Changes since v5:
> - introduce the MPTCP_INUSE flag to store whether the msk is in use, as
>   I found that it's not correct to check whether an msk is in use via
>   !sk_unhashed(sk) in mptcp_destroy_common(), because the token
>   can be released in mptcp_check_fastclose()
> - add the third patch
> - reuse __chk_nr in the fourth patch
> 
> Changes since v4:
> - rebase to solve merge conflict
> 
> Changes since v3:
> - rename MPTCP_DESTROIED to MPTCP_DESTROYED in the second patch
> 
> Changes since v2:
> - add testing
> 
> Changes since v1:
> - split the code cleanup into the first patch
> - decrease the in-use statistics for listening mptcp sockets with
>   mptcp_listen_inuse_dec()
> - add the MPTCP_DESTROIED flag to store whether mptcp_destroy_common()
>   was called on the msk. For the fallback case, we need to decrease the
>   statistics only once, but mptcp_destroy_common() can be called
>   more than once.
>   more than once.
> 
> Menglong Dong (6):
>   mptcp: introduce 'sk' to replace 'sock->sk' in mptcp_listen()
>   mptcp: init sk->sk_prot in build_msk()
>   mptcp: rename 'sk' to 'ssk' in mptcp_token_new_connect()
>   mptcp: add statistics for mptcp socket in use
>   selftest: mptcp: exit from copyfd_io_poll() when receive SIGUSR1
>   selftest: mptcp: add test for mptcp socket in use
> 
>  net/mptcp/protocol.c                          | 25 ++++++---
>  net/mptcp/protocol.h                          |  2 +-
>  net/mptcp/token.c                             | 14 +++--
>  net/mptcp/token_test.c                        |  3 +
>  tools/testing/selftests/net/mptcp/diag.sh     | 56 +++++++++++++++++--
>  .../selftests/net/mptcp/mptcp_connect.c       |  4 +-
>  6 files changed, 85 insertions(+), 19 deletions(-)
> 

LGTM, thanks for all the hard work!

Acked-by: Paolo Abeni <pabeni@redhat.com>


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: selftest: mptcp: add test for mptcp socket in use: Tests Results
  2022-12-19 10:23 ` [PATCH mptcp-next v10 6/6] selftest: mptcp: add test for mptcp socket in use menglong8.dong
  2022-12-19 12:01   ` selftest: mptcp: add test for mptcp socket in use: Tests Results MPTCP CI
@ 2022-12-19 13:50   ` MPTCP CI
  1 sibling, 0 replies; 22+ messages in thread
From: MPTCP CI @ 2022-12-19 13:50 UTC (permalink / raw)
  To: Menglong Dong; +Cc: mptcp

Hi Menglong,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join):
  - Unstable: 1 failed test(s): selftest_simult_flows 🔴:
  - Task: https://cirrus-ci.com/task/5235989799501824
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5235989799501824/summary/summary.txt

- KVM Validation: debug (except selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/4954514822791168
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4954514822791168/summary/summary.txt

- KVM Validation: normal (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6361889706344448
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6361889706344448/summary/summary.txt

- KVM Validation: debug (only selftest_mptcp_join):
  - Unstable: 1 failed test(s): selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/6080414729633792
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6080414729633792/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/7683ddc9684a


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH mptcp-next v10 0/6] mptcp: add statistics for mptcp socket in use
  2022-12-19 10:23 [PATCH mptcp-next v10 0/6] mptcp: add statistics for mptcp socket in use menglong8.dong
                   ` (6 preceding siblings ...)
  2022-12-19 12:44 ` [PATCH mptcp-next v10 0/6] mptcp: add statistics for mptcp socket in use Paolo Abeni
@ 2022-12-20 12:13 ` Matthieu Baerts
  7 siblings, 0 replies; 22+ messages in thread
From: Matthieu Baerts @ 2022-12-20 12:13 UTC (permalink / raw)
  To: menglong8.dong, pabeni, mathew.j.martineau; +Cc: mptcp, Menglong Dong

Hi Menglong Dong, Paolo, Mat,

On 19/12/2022 11:23, menglong8.dong@gmail.com wrote:
> From: Menglong Dong <imagedong@tencent.com>
> 
> In the first patch, we do some code cleanup, replacing 'sock->sk' with
> 'sk' in mptcp_listen(). In the second patch, we init the 'sk_prot' field
> with tcp_prot in build_msk(). In the third patch, we rename 'sk' to
> 'ssk' in mptcp_token_new_connect(). In the fourth patch, we add
> statistics for MPTCP sockets in use. In the fifth patch, we make
> mptcp_connect exit on 'SIGUSR1' when the '-r' flag is set. And in the
> sixth patch, we add the selftest for this feature.

Thank you for the patches and the reviews!

Now in our tree (feat. for net-next) with Paolo's ACK:

New patches for t/upstream:
- 3e81eef55e91: mptcp: introduce 'sk' to replace 'sock->sk' in
mptcp_listen()
- f5581c182734: mptcp: init sk->sk_prot in build_msk()
- ab505b464266: mptcp: rename 'sk' to 'ssk' in mptcp_token_new_connect()
- d0633acb8e61: mptcp: add statistics for mptcp socket in use
- b1bf2f9ebe13: selftest: mptcp: exit from copyfd_io_poll() when receive
SIGUSR1
- 1398fc057f98: selftest: mptcp: add test for mptcp socket in use
- Results: c9ea6d26632d..63450d7ce1cd (export)

Tests are now in progress:

https://cirrus-ci.com/github/multipath-tcp/mptcp_net-next/export/20221220T121253

Cheers,
Matt
-- 
Tessares | Belgium | Hybrid Access Solutions
www.tessares.net

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: selftest: mptcp: add test for mptcp socket in use: Tests Results
  2022-12-19  7:50 [PATCH mptcp-next v9 6/6] selftest: mptcp: add test " menglong8.dong
@ 2022-12-19  8:57 ` MPTCP CI
  0 siblings, 0 replies; 22+ messages in thread
From: MPTCP CI @ 2022-12-19  8:57 UTC (permalink / raw)
  To: Menglong Dong; +Cc: mptcp

Hi Menglong,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6683962794311680
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6683962794311680/summary/summary.txt

- KVM Validation: normal (only selftest_mptcp_join):
  - Unstable: 1 failed test(s): selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/4643269213159424
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4643269213159424/summary/summary.txt

- KVM Validation: debug (except selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/5769169120002048
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5769169120002048/summary/summary.txt

- KVM Validation: debug (only selftest_mptcp_join):
  - Unstable: 1 failed test(s): selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/5206219166580736
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5206219166580736/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/93db6277a7d9


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to keep the test
suite stable when executed on a public CI like this one, some reported
issues may not be due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


* Re: selftest: mptcp: add test for mptcp socket in use: Tests Results
  2022-12-08  2:45 [PATCH mptcp-next v8 4/4] selftest: mptcp: add test for mptcp socket in use menglong8.dong
@ 2022-12-08  4:51 ` MPTCP CI
  0 siblings, 0 replies; 22+ messages in thread
From: MPTCP CI @ 2022-12-08  4:51 UTC (permalink / raw)
  To: Menglong Dong; +Cc: mptcp

Hi Menglong,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join):
  - Unstable: 1 failed test(s): insmod - Critical: 3 Call Trace(s) ❌:
  - Task: https://cirrus-ci.com/task/5270512411082752
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5270512411082752/summary/summary.txt

- KVM Validation: normal (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6396412317925376
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6396412317925376/summary/summary.txt

- {"code":404,"message":"Can't find artifacts containing file conclusion.txt"}:
  - Task: https://cirrus-ci.com/task/4989037434372096
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4989037434372096/summary/summary.txt

- KVM Validation: debug (only selftest_mptcp_join):
  - Unstable: 1 failed test(s): selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/6114937341214720
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6114937341214720/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/27b157db80ef



* Re: selftest: mptcp: add test for mptcp socket in use: Tests Results
  2022-11-22  3:49 [PATCH mptcp-next v7 4/4] selftest: mptcp: add test for mptcp socket in use menglong8.dong
  2022-11-23 17:27 ` selftest: mptcp: add test for mptcp socket in use: Tests Results MPTCP CI
@ 2022-11-23 19:14 ` MPTCP CI
  1 sibling, 0 replies; 22+ messages in thread
From: MPTCP CI @ 2022-11-23 19:14 UTC (permalink / raw)
  To: Menglong Dong; +Cc: mptcp

Hi Menglong,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Unstable: 1 failed test(s): selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/6563194622181376
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6563194622181376/summary/summary.txt

- KVM Validation: debug:
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/4733607273562112
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4733607273562112/summary/summary.txt

Initiator: Matthieu Baerts
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/5afcc5ab13e3



* Re: selftest: mptcp: add test for mptcp socket in use: Tests Results
  2022-11-22  3:49 [PATCH mptcp-next v7 4/4] selftest: mptcp: add test for mptcp socket in use menglong8.dong
@ 2022-11-23 17:27 ` MPTCP CI
  2022-11-23 19:14 ` MPTCP CI
  1 sibling, 0 replies; 22+ messages in thread
From: MPTCP CI @ 2022-11-23 17:27 UTC (permalink / raw)
  To: Menglong Dong; +Cc: mptcp

Hi Menglong,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6201775137488896
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6201775137488896/summary/summary.txt

- KVM Validation: debug:
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/4794400253935616
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4794400253935616/summary/summary.txt

Initiator: Matthieu Baerts
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/afd026792ca1



* Re: selftest: mptcp: add test for mptcp socket in use: Tests Results
  2022-11-03 17:34   ` Matthieu Baerts
@ 2022-11-04  0:04     ` Mat Martineau
  0 siblings, 0 replies; 22+ messages in thread
From: Mat Martineau @ 2022-11-04  0:04 UTC (permalink / raw)
  To: Menglong Dong; +Cc: Matthieu Baerts, mptcp


On Thu, 3 Nov 2022, Matthieu Baerts wrote:

> Hi Menglong,
>
> Thank you for the v6!
>
> It looks like the CI is not happy with it:
>
> On 03/11/2022 14:21, MPTCP CI wrote:
>> Hi Menglong,
>>
>> Thank you for your modifications, that's great!
>>
>> Our CI did some validations and here is its report:
>>
>> - KVM Validation: normal:
>>   - Success! ✅:
>>   - Task: https://cirrus-ci.com/task/4909154062565376
>>   - Summary: https://api.cirrus-ci.com/v1/artifact/task/4909154062565376/summary/summary.txt
>>
>> - KVM Validation: debug:
>>   - Unstable: 1 failed test(s): selftest_diag 🔴:
>>   - Task: https://cirrus-ci.com/task/6035053969408000
>>   - Summary: https://api.cirrus-ci.com/v1/artifact/task/6035053969408000/summary/summary.txt
>
> As you can see:
>
> ----------------------------
> (...)
> # all listen sockets                                [  ok  ]
> # after MPC handshake                               [  ok  ]
> # ....chk remote_key                                [  ok  ]
> # ....chk no fallback                               [  ok  ]
> # chk 2 msk in use                                  [  ok  ]
> # chk 0 msk in use after flush                      [  ok  ]
> # check fallback                                    [  ok  ]
> # many msk socket present                           [ fail ] timeout
> while expecting 200 max 201 last 1
> # chk many msk in use                               [ fail ] expected
> 200 found 0
> # chk 0 msk in use after flush                      [  ok  ]
> ----------------------------
>
> I guess one socket is still present after the 'check fallback': you
> probably need to modify flush_pids() to wait for the processes to be
> over, as suggested in a comment on your v5, no?
>
> https://lore.kernel.org/all/b3f3c01e-4010-d5ce-970d-394711bcd0e1@tessares.net/
>
> I don't think it is a good idea to wait for >= 200 unless it takes a
> very long time for the previous socket to terminate. If it does, maybe
> we should re-order the test or re-create the netns instead of re-using it.
>
>
> About patch 3/4, note that SIGUSR1 probably stops the test earlier
> than expected, because the interrupt will cause some blocking calls to
> return early; still, it is good to check the 'quit' variable.
>
>
> Also, one small detail for patch 4/4: can you add "...." at the
> beginning of the new lines you print in the selftest, similar to
> "....chk no fallback"?
>

Menglong -

Thanks for the updated patches. The test ran OK on my local system, but 
the CI is slow on the debug build, which makes the timing trickier. I 
don't have anything to add to Matthieu's comments above; it seems like 
his suggestions will resolve the CI issue.

--
Mat Martineau
Intel


* Re: selftest: mptcp: add test for mptcp socket in use: Tests Results
  2022-11-03 13:21 ` selftest: mptcp: add test for mptcp socket in use: Tests Results MPTCP CI
@ 2022-11-03 17:34   ` Matthieu Baerts
  2022-11-04  0:04     ` Mat Martineau
  0 siblings, 1 reply; 22+ messages in thread
From: Matthieu Baerts @ 2022-11-03 17:34 UTC (permalink / raw)
  To: Menglong Dong; +Cc: mptcp

Hi Menglong,

Thank you for the v6!

It looks like the CI is not happy with it:

On 03/11/2022 14:21, MPTCP CI wrote:
> Hi Menglong,
> 
> Thank you for your modifications, that's great!
> 
> Our CI did some validations and here is its report:
> 
> - KVM Validation: normal:
>   - Success! ✅:
>   - Task: https://cirrus-ci.com/task/4909154062565376
>   - Summary: https://api.cirrus-ci.com/v1/artifact/task/4909154062565376/summary/summary.txt
> 
> - KVM Validation: debug:
>   - Unstable: 1 failed test(s): selftest_diag 🔴:
>   - Task: https://cirrus-ci.com/task/6035053969408000
>   - Summary: https://api.cirrus-ci.com/v1/artifact/task/6035053969408000/summary/summary.txt

As you can see:

----------------------------
(...)
# all listen sockets                                [  ok  ]
# after MPC handshake                               [  ok  ]
# ....chk remote_key                                [  ok  ]
# ....chk no fallback                               [  ok  ]
# chk 2 msk in use                                  [  ok  ]
# chk 0 msk in use after flush                      [  ok  ]
# check fallback                                    [  ok  ]
# many msk socket present                           [ fail ] timeout
while expecting 200 max 201 last 1
# chk many msk in use                               [ fail ] expected
200 found 0
# chk 0 msk in use after flush                      [  ok  ]
----------------------------

I guess one socket is still present after the 'check fallback': you
probably need to modify flush_pids() to wait for the processes to be
over, as suggested in a comment on your v5, no?

https://lore.kernel.org/all/b3f3c01e-4010-d5ce-970d-394711bcd0e1@tessares.net/

I don't think it is a good idea to wait for >= 200 unless it takes a
very long time for the previous socket to terminate. If it does, maybe
we should re-order the test or re-create the netns instead of re-using it.


About patch 3/4, note that SIGUSR1 probably stops the test earlier
than expected, because the interrupt will cause some blocking calls to
return early; still, it is good to check the 'quit' variable.


Also, one small detail for patch 4/4: can you add "...." at the
beginning of the new lines you print in the selftest, similar to
"....chk no fallback"?

Cheers,
Matt
-- 
Tessares | Belgium | Hybrid Access Solutions
www.tessares.net


* Re: selftest: mptcp: add test for mptcp socket in use: Tests Results
  2022-11-03 11:06 [PATCH mptcp-next v6 4/4] selftest: mptcp: add test for mptcp socket in use menglong8.dong
@ 2022-11-03 13:21 ` MPTCP CI
  2022-11-03 17:34   ` Matthieu Baerts
  0 siblings, 1 reply; 22+ messages in thread
From: MPTCP CI @ 2022-11-03 13:21 UTC (permalink / raw)
  To: Menglong Dong; +Cc: mptcp

Hi Menglong,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/4909154062565376
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4909154062565376/summary/summary.txt

- KVM Validation: debug:
  - Unstable: 1 failed test(s): selftest_diag 🔴:
  - Task: https://cirrus-ci.com/task/6035053969408000
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6035053969408000/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/aa327877c6f9



* Re: selftest: mptcp: add test for mptcp socket in use: Tests Results
  2022-10-07  9:29 [PATCH mptcp-next v5 3/3] selftest: mptcp: add test for mptcp socket in use menglong8.dong
  2022-10-07 10:57 ` selftest: mptcp: add test for mptcp socket in use: Tests Results MPTCP CI
@ 2022-10-12  2:24 ` MPTCP CI
  1 sibling, 0 replies; 22+ messages in thread
From: MPTCP CI @ 2022-10-12  2:24 UTC (permalink / raw)
  To: Menglong Dong; +Cc: mptcp

Hi Menglong,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Unstable: 1 failed test(s): packetdrill_add_addr 🔴:
  - Task: https://cirrus-ci.com/task/5412647641284608
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5412647641284608/summary/summary.txt

- KVM Validation: debug:
  - Unstable: 2 failed test(s): selftest_diag selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/6538547548127232
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6538547548127232/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/ebf2e7e28f93



* Re: selftest: mptcp: add test for mptcp socket in use: Tests Results
  2022-10-07  9:29 [PATCH mptcp-next v5 3/3] selftest: mptcp: add test for mptcp socket in use menglong8.dong
@ 2022-10-07 10:57 ` MPTCP CI
  2022-10-12  2:24 ` MPTCP CI
  1 sibling, 0 replies; 22+ messages in thread
From: MPTCP CI @ 2022-10-07 10:57 UTC (permalink / raw)
  To: Menglong Dong; +Cc: mptcp

Hi Menglong,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Unstable: 2 failed test(s): selftest_mptcp_join selftest_simult_flows 🔴:
  - Task: https://cirrus-ci.com/task/6562201610747904
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6562201610747904/summary/summary.txt

- KVM Validation: debug:
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/4732614262128640
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4732614262128640/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/83a72b913f3a



* Re: selftest: mptcp: add test for mptcp socket in use: Tests Results
  2022-09-30  2:46 [PATCH mptcp-next v3 3/3] selftest: mptcp: add test for mptcp socket in use menglong8.dong
@ 2022-10-06 17:25 ` MPTCP CI
  0 siblings, 0 replies; 22+ messages in thread
From: MPTCP CI @ 2022-10-06 17:25 UTC (permalink / raw)
  To: Menglong Dong; +Cc: mptcp

Hi Menglong,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Unstable: 1 failed test(s): selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/4902994138365952
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4902994138365952/summary/summary.txt

- KVM Validation: debug:
  - Unstable: 2 failed test(s): selftest_diag selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/6028894045208576
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6028894045208576/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/9efac8bc4497



* Re: selftest: mptcp: add test for mptcp socket in use: Tests Results
  2022-10-06  6:11 [PATCH mptcp-next v4 3/3] selftest: mptcp: add test for mptcp socket in use menglong8.dong
@ 2022-10-06 13:59 ` MPTCP CI
  0 siblings, 0 replies; 22+ messages in thread
From: MPTCP CI @ 2022-10-06 13:59 UTC (permalink / raw)
  To: Menglong Dong; +Cc: mptcp

Hi Menglong,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Unstable: 1 failed test(s): selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/5793338778451968
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5793338778451968/summary/summary.txt

- KVM Validation: debug:
  - Unstable: 1 failed test(s): selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/5230388825030656
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5230388825030656/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/abf5029a32b9



end of thread, other threads:[~2022-12-20 12:13 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-12-19 10:23 [PATCH mptcp-next v10 0/6] mptcp: add statistics for mptcp socket in use menglong8.dong
2022-12-19 10:23 ` [PATCH mptcp-next v10 1/6] mptcp: introduce 'sk' to replace 'sock->sk' in mptcp_listen() menglong8.dong
2022-12-19 10:23 ` [PATCH mptcp-next v10 2/6] mptcp: init sk->sk_prot in build_msk() menglong8.dong
2022-12-19 10:23 ` [PATCH mptcp-next v10 3/6] mptcp: rename 'sk' to 'ssk' in mptcp_token_new_connect() menglong8.dong
2022-12-19 10:23 ` [PATCH mptcp-next v10 4/6] mptcp: add statistics for mptcp socket in use menglong8.dong
2022-12-19 10:23 ` [PATCH mptcp-next v10 5/6] selftest: mptcp: exit from copyfd_io_poll() when receive SIGUSR1 menglong8.dong
2022-12-19 10:23 ` [PATCH mptcp-next v10 6/6] selftest: mptcp: add test for mptcp socket in use menglong8.dong
2022-12-19 12:01   ` selftest: mptcp: add test for mptcp socket in use: Tests Results MPTCP CI
2022-12-19 13:50   ` MPTCP CI
2022-12-19 12:44 ` [PATCH mptcp-next v10 0/6] mptcp: add statistics for mptcp socket in use Paolo Abeni
2022-12-20 12:13 ` Matthieu Baerts
  -- strict thread matches above, loose matches on Subject: below --
2022-12-19  7:50 [PATCH mptcp-next v9 6/6] selftest: mptcp: add test " menglong8.dong
2022-12-19  8:57 ` selftest: mptcp: add test for mptcp socket in use: Tests Results MPTCP CI
2022-12-08  2:45 [PATCH mptcp-next v8 4/4] selftest: mptcp: add test for mptcp socket in use menglong8.dong
2022-12-08  4:51 ` selftest: mptcp: add test for mptcp socket in use: Tests Results MPTCP CI
2022-11-22  3:49 [PATCH mptcp-next v7 4/4] selftest: mptcp: add test for mptcp socket in use menglong8.dong
2022-11-23 17:27 ` selftest: mptcp: add test for mptcp socket in use: Tests Results MPTCP CI
2022-11-23 19:14 ` MPTCP CI
2022-11-03 11:06 [PATCH mptcp-next v6 4/4] selftest: mptcp: add test for mptcp socket in use menglong8.dong
2022-11-03 13:21 ` selftest: mptcp: add test for mptcp socket in use: Tests Results MPTCP CI
2022-11-03 17:34   ` Matthieu Baerts
2022-11-04  0:04     ` Mat Martineau
2022-10-07  9:29 [PATCH mptcp-next v5 3/3] selftest: mptcp: add test for mptcp socket in use menglong8.dong
2022-10-07 10:57 ` selftest: mptcp: add test for mptcp socket in use: Tests Results MPTCP CI
2022-10-12  2:24 ` MPTCP CI
2022-10-06  6:11 [PATCH mptcp-next v4 3/3] selftest: mptcp: add test for mptcp socket in use menglong8.dong
2022-10-06 13:59 ` selftest: mptcp: add test for mptcp socket in use: Tests Results MPTCP CI
2022-09-30  2:46 [PATCH mptcp-next v3 3/3] selftest: mptcp: add test for mptcp socket in use menglong8.dong
2022-10-06 17:25 ` selftest: mptcp: add test for mptcp socket in use: Tests Results MPTCP CI
