* [PATCH v4 mptcp-next 0/6] mptcp: add some more diag info
@ 2023-05-24 13:50 Paolo Abeni
  2023-05-24 13:50 ` [PATCH v4 mptcp-next 1/6] mptcp: add subflow unique id Paolo Abeni
                   ` (5 more replies)
  0 siblings, 6 replies; 17+ messages in thread
From: Paolo Abeni @ 2023-05-24 13:50 UTC (permalink / raw)
  To: mptcp; +Cc: Florian Westphal

This is a follow-up to the topic discussed in the recent public meeting.

This series introduces a unique id for accurate subflow stats tracking
and aggregate MPTCP counters, plus some minimal self-tests.

The tests themselves do not take into account support for running on
older kernels.

This is on top of "mptcp: a bunch of data race fixes".

There should be non-trivial conflicts with:

"mptcp: use get_retrans wrapper".

v3 -> v4:
 - change binary layout for MPTCP_FULL_INFO structs (Florian)

v2 -> v3:
 - address Matttbe's comments on patches 1, 2 and 5; see the individual
   patches' changelogs for the details

v1 -> v2:
 - introduce MPTCP_FULL_INFO instead of overloading a tcp_info field
 - add related self-tests
 - fix a couple of subflow_id initialization bugs

Paolo Abeni (6):
  mptcp: add subflow unique id
  mptcp: introduce MPTCP_FULL_INFO getsockopt
  mptcp: move snd_una update earlier for fallback socket.
  mptcp: track some aggregate data counters.
  selftests: mptcp: explicitly tests aggregate counters
  selftests: mptcp: add MPTCP_FULL_INFO testcase

 include/uapi/linux/mptcp.h                    |  30 +++
 net/mptcp/options.c                           |  14 +-
 net/mptcp/protocol.c                          |  24 ++-
 net/mptcp/protocol.h                          |   9 +-
 net/mptcp/sockopt.c                           | 187 ++++++++++++++++--
 net/mptcp/subflow.c                           |   2 +
 .../selftests/net/mptcp/mptcp_sockopt.c       | 123 +++++++++++-
 7 files changed, 356 insertions(+), 33 deletions(-)

-- 
2.40.1



* [PATCH v4 mptcp-next 1/6] mptcp: add subflow unique id
  2023-05-24 13:50 [PATCH v4 mptcp-next 0/6] mptcp: add some more diag info Paolo Abeni
@ 2023-05-24 13:50 ` Paolo Abeni
  2023-05-24 13:50 ` [PATCH v4 mptcp-next 2/6] mptcp: introduce MPTCP_FULL_INFO getsockopt Paolo Abeni
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 17+ messages in thread
From: Paolo Abeni @ 2023-05-24 13:50 UTC (permalink / raw)
  To: mptcp; +Cc: Florian Westphal

User-space needs to properly account the data received/sent by
individual subflows. When additional subflows are created and/or
closed during the MPTCP socket lifetime, the information currently
exposed via MPTCP_TCPINFO is not enough: subflows are identified only
by their sequential position inside the info dumps, and that position
changes with the above mentioned events.

To solve the above problem, this patch introduces a new subflow
identifier that is unique within the given MPTCP socket scope.

The initial subflow gets id 1 and the other subflows get incremental
values at join time.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
v2 -> v3:
 - fix msk subflow_id init (Matttbe)

v1 -> v2:
 - properly set subflow_id for the first passive subflow and active subflows, too
 - drop the tcpi_fackets overload
---
 net/mptcp/protocol.c | 6 ++++++
 net/mptcp/protocol.h | 5 ++++-
 net/mptcp/subflow.c  | 2 ++
 3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 28da6a9fe8fd..9998b2dd150e 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -96,6 +96,7 @@ static int __mptcp_socket_create(struct mptcp_sock *msk)
 	list_add(&subflow->node, &msk->conn_list);
 	sock_hold(ssock->sk);
 	subflow->request_mptcp = 1;
+	subflow->subflow_id = msk->subflow_id++;
 
 	/* This is the first subflow, always with id 0 */
 	subflow->local_id_valid = 1;
@@ -845,6 +846,7 @@ static bool __mptcp_finish_join(struct mptcp_sock *msk, struct sock *ssk)
 	if (sk->sk_socket && !ssk->sk_socket)
 		mptcp_sock_graft(ssk, sk->sk_socket);
 
+	mptcp_subflow_ctx(ssk)->subflow_id = msk->subflow_id++;
 	mptcp_sockopt_sync_locked(msk, ssk);
 	mptcp_subflow_joined(msk, ssk);
 	return true;
@@ -2775,6 +2777,7 @@ static int __mptcp_init_sock(struct sock *sk)
 	WRITE_ONCE(msk->csum_enabled, mptcp_is_checksum_enabled(sock_net(sk)));
 	WRITE_ONCE(msk->allow_infinite_fallback, true);
 	msk->recovery = false;
+	msk->subflow_id = 1;
 
 	mptcp_pm_data_init(msk);
 
@@ -3206,6 +3209,9 @@ struct sock *mptcp_sk_clone_init(const struct sock *sk,
 	msk->setsockopt_seq = mptcp_sk(sk)->setsockopt_seq;
 	mptcp_init_sched(msk, mptcp_sk(sk)->sched);
 
+	/* passive msk is created after the first/MPC subflow */
+	msk->subflow_id = 2;
+
 	sock_reset_flag(nsk, SOCK_RCU_FREE);
 	security_inet_csk_clone(nsk, req);
 
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index de94c01746dc..f9180ecce5e4 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -319,7 +319,8 @@ struct mptcp_sock {
 		u64	rtt_us; /* last maximum rtt of subflows */
 	} rcvq_space;
 
-	u32 setsockopt_seq;
+	u32		subflow_id;
+	u32		setsockopt_seq;
 	char		ca_name[TCP_CA_NAME_MAX];
 	struct mptcp_sock	*dl_next;
 };
@@ -501,6 +502,8 @@ struct mptcp_subflow_context {
 	u8	reset_reason:4;
 	u8	stale_count;
 
+	u32	subflow_id;
+
 	long	delegated_status;
 	unsigned long	fail_tout;
 
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index 63ac4dc621d4..c7001a23550a 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -819,6 +819,7 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
 			if (!ctx->conn)
 				goto fallback;
 
+			ctx->subflow_id = 1;
 			owner = mptcp_sk(ctx->conn);
 			mptcp_pm_new_connection(owner, child, 1);
 
@@ -1574,6 +1575,7 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
 	subflow->remote_id = remote_id;
 	subflow->request_join = 1;
 	subflow->request_bkup = !!(flags & MPTCP_PM_ADDR_FLAG_BACKUP);
+	subflow->subflow_id = msk->subflow_id++;
 	mptcp_info2sockaddr(remote, &addr, ssk->sk_family);
 
 	sock_hold(ssk);
-- 
2.40.1



* [PATCH v4 mptcp-next 2/6] mptcp: introduce MPTCP_FULL_INFO getsockopt
  2023-05-24 13:50 [PATCH v4 mptcp-next 0/6] mptcp: add some more diag info Paolo Abeni
  2023-05-24 13:50 ` [PATCH v4 mptcp-next 1/6] mptcp: add subflow unique id Paolo Abeni
@ 2023-05-24 13:50 ` Paolo Abeni
  2023-05-24 15:19   ` Matthieu Baerts
  2023-05-24 15:26   ` Matthieu Baerts
  2023-05-24 13:50 ` [PATCH v4 mptcp-next 3/6] mptcp: move snd_una update earlier for fallback socket Paolo Abeni
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 17+ messages in thread
From: Paolo Abeni @ 2023-05-24 13:50 UTC (permalink / raw)
  To: mptcp; +Cc: Florian Westphal

Some user-space applications want to monitor the subflows' utilization.

Dumping the per-subflow tcp_info is not enough, as the PM could close
and re-create subflows under the hood, fooling the accounting.
Even checking the src/dst addresses used by each subflow may not be
enough, because a new subflow could re-use the same address/port as
the just-closed one.

This patch introduces a new socket option that allows dumping all the
relevant information at once (everything, everywhere...), in a
consistent manner.

To reuse the existing helper to manipulate the new struct, keep the binary
layout of the initial few fields the same as mptcp_subflow_data.
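
A minimal user-space usage sketch, assuming 'fd' is a connected MPTCP
socket, with error handling omitted and no extra room provided for the
optional trailing mptcp_info dump, could look like:

	struct mptcp_subflow_info sf_info[2] = {};
	struct tcp_info tcp_info[2] = {};
	struct mptcp_subflow_full_info sffi = {
		.size_subflow_full_info = sizeof(sffi),
		.size_tcpinfo_user = sizeof(struct tcp_info),
		.size_sfinfo_user = sizeof(struct mptcp_subflow_info),
		.num_subflows_user = 2,
		.subflow_info_addr = (unsigned long)sf_info,
		.tcp_info_addr = (unsigned long)tcp_info,
	};
	socklen_t olen = sizeof(sffi);

	/* on return, num_subflows_kern carries the real subflow count and
	 * sf_info[i].id pairs each tcp_info[i] entry with a stable subflow id
	 */
	getsockopt(fd, SOL_MPTCP, MPTCP_FULL_INFO, &sffi, &olen);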

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
v3 -> v4:
 - full_info struct re-design (Florian)

v2 -> v3:
 - added missing changelog (oops)
---
 include/uapi/linux/mptcp.h |  25 ++++++
 net/mptcp/sockopt.c        | 165 +++++++++++++++++++++++++++++++++----
 2 files changed, 174 insertions(+), 16 deletions(-)

diff --git a/include/uapi/linux/mptcp.h b/include/uapi/linux/mptcp.h
index 32af2d278cb4..37c46cf05795 100644
--- a/include/uapi/linux/mptcp.h
+++ b/include/uapi/linux/mptcp.h
@@ -12,6 +12,7 @@
 #include <linux/in.h>		/* for sockaddr_in			*/
 #include <linux/in6.h>		/* for sockaddr_in6			*/
 #include <linux/socket.h>	/* for sockaddr_storage and sa_family	*/
+#include <linux/tcp.h>		/* for tcp_info				*/
 
 #define MPTCP_SUBFLOW_FLAG_MCAP_REM		_BITUL(0)
 #define MPTCP_SUBFLOW_FLAG_MCAP_LOC		_BITUL(1)
@@ -244,9 +245,33 @@ struct mptcp_subflow_addrs {
 	};
 };
 
+struct mptcp_subflow_info {
+	__u32				id;
+	struct mptcp_subflow_addrs	addrs;
+};
+
+struct mptcp_subflow_full_info {
+	__u32		size_subflow_full_info;	/* size of this structure in userspace */
+	__u32		num_subflows_kern;	/* must be 0, set by kernel (real subflow count) */
+	__u32		size_tcpinfo_kernel;	/* must be 0, set by kernel */
+	__u32		size_tcpinfo_user;
+	__u32		size_sfinfo_kernel;	/* must be 0, set by kernel */
+	__u32		size_sfinfo_user;
+	__u32		num_subflows_user;	/* max subflows that userspace is interested in;
+						 * the buffers at subflow_info_addr/tcp_info_addr
+						 * are respectively at least:
+						 *  num_subflows_user * size_sfinfo_user
+						 *  num_subflows_user * size_tcpinfo_user
+						 * bytes wide
+						 */
+	__aligned_u64	subflow_info_addr;
+	__aligned_u64	tcp_info_addr;
+} __attribute__((aligned(8)));
+
 /* MPTCP socket options */
 #define MPTCP_INFO		1
 #define MPTCP_TCPINFO		2
 #define MPTCP_SUBFLOW_ADDRS	3
+#define MPTCP_FULL_INFO		4
 
 #endif /* _UAPI_MPTCP_H */
diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
index d4258869ac48..59a174ee1d54 100644
--- a/net/mptcp/sockopt.c
+++ b/net/mptcp/sockopt.c
@@ -14,7 +14,8 @@
 #include <net/mptcp.h>
 #include "protocol.h"
 
-#define MIN_INFO_OPTLEN_SIZE	16
+#define MIN_INFO_OPTLEN_SIZE		16
+#define MIN_FULL_INFO_OPTLEN_SIZE	48
 
 static struct sock *__mptcp_tcp_fallback(struct mptcp_sock *msk)
 {
@@ -943,12 +944,13 @@ static int mptcp_getsockopt_info(struct mptcp_sock *msk, char __user *optval, in
 	return 0;
 }
 
-static int mptcp_put_subflow_data(struct mptcp_subflow_data *sfd,
-				  char __user *optval,
-				  u32 copied,
-				  int __user *optlen)
+static int __mptcp_put_subflow_data(struct mptcp_subflow_data *sfd,
+				    int size_subflow_data_kern,
+				    char __user *optval,
+				    u32 copied,
+				    int __user *optlen)
 {
-	u32 copylen = min_t(u32, sfd->size_subflow_data, sizeof(*sfd));
+	u32 copylen = min_t(u32, sfd->size_subflow_data, size_subflow_data_kern);
 
 	if (copied)
 		copied += sfd->size_subflow_data;
@@ -964,25 +966,30 @@ static int mptcp_put_subflow_data(struct mptcp_subflow_data *sfd,
 	return 0;
 }
 
-static int mptcp_get_subflow_data(struct mptcp_subflow_data *sfd,
-				  char __user *optval, int __user *optlen)
+static int mptcp_put_subflow_data(struct mptcp_subflow_data *sfd,
+				  char __user *optval,
+				  u32 copied,
+				  int __user *optlen)
+{
+	return __mptcp_put_subflow_data(sfd, sizeof(*sfd), optval, copied, optlen);
+}
+
+static int __mptcp_get_subflow_data(struct mptcp_subflow_data *sfd,
+				    int min_info_optlen_size,
+				    char __user *optval,
+				    int __user *optlen)
 {
 	int len, copylen;
 
 	if (get_user(len, optlen))
 		return -EFAULT;
 
-	/* if mptcp_subflow_data size is changed, need to adjust
-	 * this function to deal with programs using old version.
-	 */
-	BUILD_BUG_ON(sizeof(*sfd) != MIN_INFO_OPTLEN_SIZE);
-
-	if (len < MIN_INFO_OPTLEN_SIZE)
+	if (len < min_info_optlen_size)
 		return -EINVAL;
 
 	memset(sfd, 0, sizeof(*sfd));
 
-	copylen = min_t(unsigned int, len, sizeof(*sfd));
+	copylen = min_t(unsigned int, len, min_info_optlen_size);
 	if (copy_from_user(sfd, optval, copylen))
 		return -EFAULT;
 
@@ -991,7 +998,7 @@ static int mptcp_get_subflow_data(struct mptcp_subflow_data *sfd,
 	    sfd->size_user > INT_MAX)
 		return -EINVAL;
 
-	if (sfd->size_subflow_data < MIN_INFO_OPTLEN_SIZE ||
+	if (sfd->size_subflow_data < min_info_optlen_size ||
 	    sfd->size_subflow_data > len)
 		return -EINVAL;
 
@@ -1001,6 +1008,19 @@ static int mptcp_get_subflow_data(struct mptcp_subflow_data *sfd,
 	return len - sfd->size_subflow_data;
 }
 
+static int mptcp_get_subflow_data(struct mptcp_subflow_data *sfd,
+				  char __user *optval,
+				  int __user *optlen)
+{
+	/* if mptcp_subflow_data size is changed, need to adjust
+	 * this function to deal with programs using old version.
+	 */
+	BUILD_BUG_ON(sizeof(*sfd) != MIN_INFO_OPTLEN_SIZE);
+
+	return __mptcp_get_subflow_data(sfd, MIN_INFO_OPTLEN_SIZE,
+					optval, optlen);
+}
+
 static int mptcp_getsockopt_tcpinfo(struct mptcp_sock *msk, char __user *optval,
 				    int __user *optlen)
 {
@@ -1146,6 +1166,117 @@ static int mptcp_getsockopt_subflow_addrs(struct mptcp_sock *msk, char __user *o
 	return 0;
 }
 
+static int mptcp_get_full_subflow_info(struct mptcp_subflow_full_info *sffi,
+				       char __user *optval,
+				       int __user *optlen)
+{
+	struct mptcp_subflow_data *sfd = (struct mptcp_subflow_data *)sffi;
+	int len;
+
+	BUILD_BUG_ON(sizeof(*sffi) != MIN_FULL_INFO_OPTLEN_SIZE);
+
+	len = __mptcp_get_subflow_data(sfd, MIN_FULL_INFO_OPTLEN_SIZE,
+				       optval, optlen);
+	if (len < 0)
+		return len;
+
+	if (sffi->size_tcpinfo_kernel)
+		return -EINVAL;
+
+	if (sffi->size_sfinfo_user > INT_MAX)
+		return -EINVAL;
+
+	return len;
+}
+
+static int mptcp_put_subflow_full_info(struct mptcp_subflow_full_info *sffi,
+				       char __user *optval,
+				       u32 copied,
+				       int __user *optlen)
+{
+	struct mptcp_subflow_data *sfd = (struct mptcp_subflow_data *)sffi;
+
+	return __mptcp_put_subflow_data(sfd, sizeof(*sffi), optval, copied, optlen);
+}
+
+static int mptcp_getsockopt_full_info(struct mptcp_sock *msk, char __user *optval,
+				      int __user *optlen)
+{
+	struct mptcp_subflow_context *subflow;
+	struct sock *sk = (struct sock *)msk;
+	unsigned int sfcount = 0, copied = 0;
+	struct mptcp_subflow_full_info sffi;
+	void __user *tcpinfoptr, *sfinfoptr;
+	int len;
+
+	len = mptcp_get_full_subflow_info(&sffi, optval, optlen);
+	if (len < 0)
+		return len;
+
+	/* don't bother filling the mptcp info if there is not enough
+	 * user-space-provided storage
+	 */
+	if (len > 0) {
+		struct mptcp_info mptcp_info;
+		char __user *infoptr;
+		int mptcp_info_len;
+
+		infoptr = optval + sffi.size_subflow_full_info;
+		memset(&mptcp_info, 0, sizeof(mptcp_info));
+		mptcp_info_len = min_t(unsigned int, len, sizeof(struct mptcp_info));
+
+		mptcp_diag_fill_info(msk, &mptcp_info);
+
+		if (copy_to_user(infoptr, &mptcp_info, mptcp_info_len))
+			return -EFAULT;
+
+		copied += mptcp_info_len;
+	}
+
+	sffi.size_tcpinfo_kernel = sizeof(struct tcp_info);
+	sffi.size_tcpinfo_user = min_t(unsigned int, sffi.size_tcpinfo_user,
+				       sizeof(struct tcp_info));
+	sfinfoptr = (void __force __user *)sffi.subflow_info_addr;
+	sffi.size_sfinfo_kernel = sizeof(struct mptcp_subflow_info);
+	sffi.size_sfinfo_user = min_t(unsigned int, sffi.size_sfinfo_user,
+				      sizeof(struct mptcp_subflow_info));
+	tcpinfoptr = (void __force __user *)sffi.tcp_info_addr;
+
+	lock_sock(sk);
+	mptcp_for_each_subflow(msk, subflow) {
+		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+
+		if (sfcount++ < sffi.num_subflows_user) {
+			struct mptcp_subflow_info sfinfo;
+			struct tcp_info tcp_info;
+
+			memset(&sfinfo, 0, sizeof(sfinfo));
+			sfinfo.id = subflow->subflow_id;
+			mptcp_get_sub_addrs(ssk, &sfinfo.addrs);
+			if (copy_to_user(sfinfoptr, &sfinfo, sffi.size_sfinfo_user))
+				goto fail_release;
+
+			tcp_get_info(ssk, &tcp_info);
+			if (copy_to_user(tcpinfoptr, &tcp_info, sffi.size_tcpinfo_user))
+				goto fail_release;
+
+			tcpinfoptr += sffi.size_tcpinfo_user;
+			sfinfoptr += sffi.size_sfinfo_user;
+		}
+	}
+	release_sock(sk);
+
+	sffi.num_subflows_kern = sfcount;
+	if (mptcp_put_subflow_full_info(&sffi, optval, copied, optlen))
+		return -EFAULT;
+
+	return 0;
+
+fail_release:
+	release_sock(sk);
+	return -EFAULT;
+}
+
 static int mptcp_put_int_option(struct mptcp_sock *msk, char __user *optval,
 				int __user *optlen, int val)
 {
@@ -1219,6 +1350,8 @@ static int mptcp_getsockopt_sol_mptcp(struct mptcp_sock *msk, int optname,
 	switch (optname) {
 	case MPTCP_INFO:
 		return mptcp_getsockopt_info(msk, optval, optlen);
+	case MPTCP_FULL_INFO:
+		return mptcp_getsockopt_full_info(msk, optval, optlen);
 	case MPTCP_TCPINFO:
 		return mptcp_getsockopt_tcpinfo(msk, optval, optlen);
 	case MPTCP_SUBFLOW_ADDRS:
-- 
2.40.1



* [PATCH v4 mptcp-next 3/6] mptcp: move snd_una update earlier for fallback socket.
  2023-05-24 13:50 [PATCH v4 mptcp-next 0/6] mptcp: add some more diag info Paolo Abeni
  2023-05-24 13:50 ` [PATCH v4 mptcp-next 1/6] mptcp: add subflow unique id Paolo Abeni
  2023-05-24 13:50 ` [PATCH v4 mptcp-next 2/6] mptcp: introduce MPTCP_FULL_INFO getsockopt Paolo Abeni
@ 2023-05-24 13:50 ` Paolo Abeni
  2023-05-24 13:50 ` [PATCH v4 mptcp-next 4/6] mptcp: track some aggregate data counters Paolo Abeni
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 17+ messages in thread
From: Paolo Abeni @ 2023-05-24 13:50 UTC (permalink / raw)
  To: mptcp; +Cc: Florian Westphal

That avoids an unneeded conditional in both the fast path and the
fallback case, and simplifies the next patch a bit.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 net/mptcp/options.c  | 6 ++++++
 net/mptcp/protocol.c | 6 ------
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/net/mptcp/options.c b/net/mptcp/options.c
index 8a8083207be4..4bdcd2b326bd 100644
--- a/net/mptcp/options.c
+++ b/net/mptcp/options.c
@@ -1119,6 +1119,12 @@ bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
 		mptcp_data_lock(subflow->conn);
 		if (sk_stream_memory_free(sk))
 			__mptcp_check_push(subflow->conn, sk);
+
+		/* on fallback we just need to ignore the msk-level snd_una, as
+		 * this is really plain TCP
+		 */
+		msk->snd_una = READ_ONCE(msk->snd_nxt);
+
 		__mptcp_data_acked(subflow->conn);
 		mptcp_data_unlock(subflow->conn);
 		return true;
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 9998b2dd150e..89fee2ac84e2 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -1006,12 +1006,6 @@ static void __mptcp_clean_una(struct sock *sk)
 	struct mptcp_data_frag *dtmp, *dfrag;
 	u64 snd_una;
 
-	/* on fallback we just need to ignore snd_una, as this is really
-	 * plain TCP
-	 */
-	if (__mptcp_check_fallback(msk))
-		msk->snd_una = READ_ONCE(msk->snd_nxt);
-
 	snd_una = msk->snd_una;
 	list_for_each_entry_safe(dfrag, dtmp, &msk->rtx_queue, list) {
 		if (after64(dfrag->data_seq + dfrag->data_len, snd_una))
-- 
2.40.1



* [PATCH v4 mptcp-next 4/6] mptcp: track some aggregate data counters.
  2023-05-24 13:50 [PATCH v4 mptcp-next 0/6] mptcp: add some more diag info Paolo Abeni
                   ` (2 preceding siblings ...)
  2023-05-24 13:50 ` [PATCH v4 mptcp-next 3/6] mptcp: move snd_una update earlier for fallback socket Paolo Abeni
@ 2023-05-24 13:50 ` Paolo Abeni
  2023-05-24 13:50 ` [PATCH v4 mptcp-next 5/6] selftests: mptcp: explicitly tests aggregate counters Paolo Abeni
  2023-05-24 13:50 ` [PATCH v4 mptcp-next 6/6] selftests: mptcp: add MPTCP_FULL_INFO testcase Paolo Abeni
  5 siblings, 0 replies; 17+ messages in thread
From: Paolo Abeni @ 2023-05-24 13:50 UTC (permalink / raw)
  To: mptcp; +Cc: Florian Westphal

Currently there are no data transfer counters accounting for all
the subflows used by a given MPTCP socket. User-space can compute
such figures by aggregating the subflow info, but that is inaccurate
if any subflow is closed before the MPTCP socket itself.

Add the new counters to the MPTCP socket itself and expose them via
the existing diag and sockopt interfaces. While touching
mptcp_diag_fill_info(), acquire the relevant locks before fetching
the msk data, to ensure better data consistency.
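
As a quick illustration (not part of this patch), assuming 'fd' is a
connected MPTCP socket, user-space can then fetch the aggregate figures
via the existing MPTCP_INFO getsockopt; the olen check keeps this
tolerant of older kernels returning a shorter struct:

	struct mptcp_info mi = {};
	socklen_t olen = sizeof(mi);

	if (!getsockopt(fd, SOL_MPTCP, MPTCP_INFO, &mi, &olen) &&
	    olen >= sizeof(mi))
		printf("sent %llu retrans %llu received %llu acked %llu\n",
		       mi.mptcpi_bytes_sent, mi.mptcpi_bytes_retrans,
		       mi.mptcpi_bytes_received, mi.mptcpi_bytes_acked);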

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 include/uapi/linux/mptcp.h |  5 +++++
 net/mptcp/options.c        | 10 ++++++++--
 net/mptcp/protocol.c       | 12 +++++++++++-
 net/mptcp/protocol.h       |  4 ++++
 net/mptcp/sockopt.c        | 22 +++++++++++++++++-----
 5 files changed, 45 insertions(+), 8 deletions(-)

diff --git a/include/uapi/linux/mptcp.h b/include/uapi/linux/mptcp.h
index 37c46cf05795..9445c5c06053 100644
--- a/include/uapi/linux/mptcp.h
+++ b/include/uapi/linux/mptcp.h
@@ -124,6 +124,11 @@ struct mptcp_info {
 	__u8	mptcpi_local_addr_used;
 	__u8	mptcpi_local_addr_max;
 	__u8	mptcpi_csum_enabled;
+	__u32	mptcpi_retransmits;
+	__u64	mptcpi_bytes_retrans;
+	__u64	mptcpi_bytes_sent;
+	__u64	mptcpi_bytes_received;
+	__u64	mptcpi_bytes_acked;
 };
 
 /*
diff --git a/net/mptcp/options.c b/net/mptcp/options.c
index 4bdcd2b326bd..c254accb14de 100644
--- a/net/mptcp/options.c
+++ b/net/mptcp/options.c
@@ -1026,6 +1026,12 @@ u64 __mptcp_expand_seq(u64 old_seq, u64 cur_seq)
 	return cur_seq;
 }
 
+static void __mptcp_snd_una_update(struct mptcp_sock *msk, u64 new_snd_una)
+{
+	msk->bytes_acked += new_snd_una - msk->snd_una;
+	msk->snd_una = new_snd_una;
+}
+
 static void ack_update_msk(struct mptcp_sock *msk,
 			   struct sock *ssk,
 			   struct mptcp_options_received *mp_opt)
@@ -1057,7 +1063,7 @@ static void ack_update_msk(struct mptcp_sock *msk,
 		__mptcp_check_push(sk, ssk);
 
 	if (after64(new_snd_una, old_snd_una)) {
-		msk->snd_una = new_snd_una;
+		__mptcp_snd_una_update(msk, new_snd_una);
 		__mptcp_data_acked(sk);
 	}
 	mptcp_data_unlock(sk);
@@ -1123,7 +1129,7 @@ bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
 		/* on fallback we just need to ignore the msk-level snd_una, as
 		 * this is really plain TCP
 		 */
-		msk->snd_una = READ_ONCE(msk->snd_nxt);
+		__mptcp_snd_una_update(msk, READ_ONCE(msk->snd_nxt));
 
 		__mptcp_data_acked(subflow->conn);
 		mptcp_data_unlock(subflow->conn);
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 89fee2ac84e2..adf26b991c1e 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -378,6 +378,7 @@ static bool __mptcp_move_skb(struct mptcp_sock *msk, struct sock *ssk,
 
 	if (MPTCP_SKB_CB(skb)->map_seq == msk->ack_seq) {
 		/* in sequence */
+		msk->bytes_received += copy_len;
 		WRITE_ONCE(msk->ack_seq, msk->ack_seq + copy_len);
 		tail = skb_peek_tail(&sk->sk_receive_queue);
 		if (tail && mptcp_try_coalesce(sk, tail, skb))
@@ -761,6 +762,7 @@ static bool __mptcp_ofo_queue(struct mptcp_sock *msk)
 			MPTCP_SKB_CB(skb)->map_seq += delta;
 			__skb_queue_tail(&sk->sk_receive_queue, skb);
 		}
+		msk->bytes_received += end_seq - msk->ack_seq;
 		msk->ack_seq = end_seq;
 		moved = true;
 	}
@@ -1513,8 +1515,10 @@ static void mptcp_update_post_push(struct mptcp_sock *msk,
 	 * that has been handed to the subflow for transmission
 	 * and skip update in case it was old dfrag.
 	 */
-	if (likely(after64(snd_nxt_new, msk->snd_nxt)))
+	if (likely(after64(snd_nxt_new, msk->snd_nxt))) {
+		msk->bytes_sent += snd_nxt_new - msk->snd_nxt;
 		msk->snd_nxt = snd_nxt_new;
+	}
 }
 
 void mptcp_check_and_set_pending(struct sock *sk)
@@ -2639,6 +2643,8 @@ static void __mptcp_retrans(struct sock *sk)
 			msk->last_snd = ssk;
 		}
 	}
+
+	msk->bytes_retrans += len;
 	dfrag->already_sent = max(dfrag->already_sent, len);
 
 reset_timer:
@@ -3153,6 +3159,10 @@ static int mptcp_disconnect(struct sock *sk, int flags)
 	WRITE_ONCE(msk->csum_enabled, mptcp_is_checksum_enabled(sock_net(sk)));
 	mptcp_pm_data_reset(msk);
 	mptcp_ca_reset(sk);
+	msk->bytes_acked = 0;
+	msk->bytes_received = 0;
+	msk->bytes_sent = 0;
+	msk->bytes_retrans = 0;
 
 	WRITE_ONCE(sk->sk_shutdown, 0);
 	sk_error_report(sk);
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index f9180ecce5e4..0283383a09f4 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -261,10 +261,13 @@ struct mptcp_sock {
 	u64		local_key;
 	u64		remote_key;
 	u64		write_seq;
+	u64		bytes_sent;
 	u64		snd_nxt;
+	u64		bytes_received;
 	u64		ack_seq;
 	atomic64_t	rcv_wnd_sent;
 	u64		rcv_data_fin_seq;
+	u64		bytes_retrans;
 	int		rmem_fwd_alloc;
 	struct sock	*last_snd;
 	int		snd_burst;
@@ -273,6 +276,7 @@ struct mptcp_sock {
 						 * recovery related fields are under data_lock
 						 * protection
 						 */
+	u64		bytes_acked;
 	u64		snd_una;
 	u64		wnd_end;
 	unsigned long	timer_ival;
diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
index 59a174ee1d54..a22e2463251b 100644
--- a/net/mptcp/sockopt.c
+++ b/net/mptcp/sockopt.c
@@ -889,7 +889,9 @@ static int mptcp_getsockopt_first_sf_only(struct mptcp_sock *msk, int level, int
 
 void mptcp_diag_fill_info(struct mptcp_sock *msk, struct mptcp_info *info)
 {
+	struct sock *sk = (struct sock *)msk;
 	u32 flags = 0;
+	bool slow;
 
 	memset(info, 0, sizeof(*info));
 
@@ -915,11 +917,21 @@ void mptcp_diag_fill_info(struct mptcp_sock *msk, struct mptcp_info *info)
 	if (READ_ONCE(msk->can_ack))
 		flags |= MPTCP_INFO_FLAG_REMOTE_KEY_RECEIVED;
 	info->mptcpi_flags = flags;
-	info->mptcpi_token = READ_ONCE(msk->token);
-	info->mptcpi_write_seq = READ_ONCE(msk->write_seq);
-	info->mptcpi_snd_una = READ_ONCE(msk->snd_una);
-	info->mptcpi_rcv_nxt = READ_ONCE(msk->ack_seq);
-	info->mptcpi_csum_enabled = READ_ONCE(msk->csum_enabled);
+	mptcp_data_lock(sk);
+	info->mptcpi_snd_una = msk->snd_una;
+	info->mptcpi_rcv_nxt = msk->ack_seq;
+	info->mptcpi_bytes_acked = msk->bytes_acked;
+	mptcp_data_unlock(sk);
+
+	slow = lock_sock_fast(sk);
+	info->mptcpi_csum_enabled = msk->csum_enabled;
+	info->mptcpi_token = msk->token;
+	info->mptcpi_write_seq = msk->write_seq;
+	info->mptcpi_retransmits = inet_csk(sk)->icsk_retransmits;
+	info->mptcpi_bytes_sent = msk->bytes_sent;
+	info->mptcpi_bytes_received = msk->bytes_received;
+	info->mptcpi_bytes_retrans = msk->bytes_retrans;
+	unlock_sock_fast(sk, slow);
 }
 EXPORT_SYMBOL_GPL(mptcp_diag_fill_info);
 
-- 
2.40.1



* [PATCH v4 mptcp-next 5/6] selftests: mptcp: explicitly tests aggregate counters
  2023-05-24 13:50 [PATCH v4 mptcp-next 0/6] mptcp: add some more diag info Paolo Abeni
                   ` (3 preceding siblings ...)
  2023-05-24 13:50 ` [PATCH v4 mptcp-next 4/6] mptcp: track some aggregate data counters Paolo Abeni
@ 2023-05-24 13:50 ` Paolo Abeni
  2023-05-24 13:50 ` [PATCH v4 mptcp-next 6/6] selftests: mptcp: add MPTCP_FULL_INFO testcase Paolo Abeni
  5 siblings, 0 replies; 17+ messages in thread
From: Paolo Abeni @ 2023-05-24 13:50 UTC (permalink / raw)
  To: mptcp; +Cc: Florian Westphal

Update the existing sockopt test-case to do some basic checks
on the newly added counters.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
v2 -> v3:
 - be more kind with older kernel (Matttbe)
---
 .../selftests/net/mptcp/mptcp_sockopt.c       | 27 ++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/net/mptcp/mptcp_sockopt.c b/tools/testing/selftests/net/mptcp/mptcp_sockopt.c
index ae61f39556ca..ff8fcdfccf76 100644
--- a/tools/testing/selftests/net/mptcp/mptcp_sockopt.c
+++ b/tools/testing/selftests/net/mptcp/mptcp_sockopt.c
@@ -51,6 +51,11 @@ struct mptcp_info {
 	__u8	mptcpi_local_addr_used;
 	__u8	mptcpi_local_addr_max;
 	__u8	mptcpi_csum_enabled;
+	__u32	mptcpi_retransmits;
+	__u64	mptcpi_bytes_retrans;
+	__u64	mptcpi_bytes_sent;
+	__u64	mptcpi_bytes_received;
+	__u64	mptcpi_bytes_acked;
 };
 
 struct mptcp_subflow_data {
@@ -83,8 +88,10 @@ struct mptcp_subflow_addrs {
 
 struct so_state {
 	struct mptcp_info mi;
+	struct mptcp_info last_sample;
 	uint64_t mptcpi_rcv_delta;
 	uint64_t tcpi_rcv_delta;
+	bool pkt_stats_avail;
 };
 
 static void die_perror(const char *msg)
@@ -318,8 +325,9 @@ static void do_getsockopt_mptcp_info(struct so_state *s, int fd, size_t w)
 	if (ret < 0)
 		die_perror("getsockopt MPTCP_INFO");
 
-	assert(olen == sizeof(i));
+	s->pkt_stats_avail = olen >= sizeof(i);
 
+	s->last_sample = i;
 	if (s->mi.mptcpi_write_seq == 0)
 		s->mi = i;
 
@@ -556,6 +564,23 @@ static void process_one_client(int fd, int pipefd)
 	do_getsockopts(&s, fd, ret, ret2);
 	if (s.mptcpi_rcv_delta != (uint64_t)ret + 1)
 		xerror("mptcpi_rcv_delta %" PRIu64 ", expect %" PRIu64, s.mptcpi_rcv_delta, ret + 1, s.mptcpi_rcv_delta - ret);
+
+	/* be nice when running on top of older kernel */
+	if (s.pkt_stats_avail) {
+		if (s.last_sample.mptcpi_bytes_sent != ret2)
+			xerror("mptcpi_bytes_sent %" PRIu64 ", expect %" PRIu64,
+			       s.last_sample.mptcpi_bytes_sent, ret2,
+			       s.last_sample.mptcpi_bytes_sent - ret2);
+		if (s.last_sample.mptcpi_bytes_received != ret)
+			xerror("mptcpi_bytes_received %" PRIu64 ", expect %" PRIu64,
+			       s.last_sample.mptcpi_bytes_received, ret,
+			       s.last_sample.mptcpi_bytes_received - ret);
+		if (s.last_sample.mptcpi_bytes_acked != ret)
+			xerror("mptcpi_bytes_acked %" PRIu64 ", expect %" PRIu64,
+			       s.last_sample.mptcpi_bytes_acked, ret2,
+			       s.last_sample.mptcpi_bytes_acked - ret2);
+	}
+
 	close(fd);
 }
 
-- 
2.40.1



* [PATCH v4 mptcp-next 6/6] selftests: mptcp: add MPTCP_FULL_INFO testcase
  2023-05-24 13:50 [PATCH v4 mptcp-next 0/6] mptcp: add some more diag info Paolo Abeni
                   ` (4 preceding siblings ...)
  2023-05-24 13:50 ` [PATCH v4 mptcp-next 5/6] selftests: mptcp: explicitly tests aggregate counters Paolo Abeni
@ 2023-05-24 13:50 ` Paolo Abeni
  2023-05-24 14:17   ` selftests: mptcp: add MPTCP_FULL_INFO testcase: Build Failure MPTCP CI
                     ` (3 more replies)
  5 siblings, 4 replies; 17+ messages in thread
From: Paolo Abeni @ 2023-05-24 13:50 UTC (permalink / raw)
  To: mptcp; +Cc: Florian Westphal

Add a testcase explicitly triggering the newly introduced
MPTCP_FULL_INFO getsockopt.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 .../selftests/net/mptcp/mptcp_sockopt.c       | 96 ++++++++++++++++++-
 1 file changed, 94 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/net/mptcp/mptcp_sockopt.c b/tools/testing/selftests/net/mptcp/mptcp_sockopt.c
index ff8fcdfccf76..e74790c10558 100644
--- a/tools/testing/selftests/net/mptcp/mptcp_sockopt.c
+++ b/tools/testing/selftests/net/mptcp/mptcp_sockopt.c
@@ -86,9 +86,38 @@ struct mptcp_subflow_addrs {
 #define MPTCP_SUBFLOW_ADDRS	3
 #endif
 
+#ifndef MPTCP_FULL_INFO
+struct mptcp_subflow_info {
+	__u32				id;
+	struct mptcp_subflow_addrs	addrs;
+};
+
+struct mptcp_subflow_full_info {
+	__u32		size_subflow_full_info;	/* size of this structure in userspace */
+	__u32		num_subflows_kern;	/* must be 0, set by kernel (real subflow count) */
+	__u32		size_tcpinfo_kernel;	/* must be 0, set by kernel */
+	__u32		size_tcpinfo_user;
+	__u32		size_sfinfo_kernel;	/* must be 0, set by kernel */
+	__u32		size_sfinfo_user;
+	__u32		num_subflows_user;	/* max subflows that userspace is interested in;
+						 * the buffers at subflow_info_addr/tcp_info_addr
+						 * are respectively at least:
+						 *  num_subflows_user * size_sfinfo_user
+						 *  num_subflows_user * size_tcpinfo_user
+						 * bytes wide
+						 */
+	__aligned_u64	subflow_info_addr;
+	__aligned_u64	tcp_info_addr;
+};
+
+#define MPTCP_FULL_INFO		4
+#endif
+
 struct so_state {
 	struct mptcp_info mi;
 	struct mptcp_info last_sample;
+	struct tcp_info tcp_info;
+	struct mptcp_subflow_addrs addrs;
 	uint64_t mptcpi_rcv_delta;
 	uint64_t tcpi_rcv_delta;
 	bool pkt_stats_avail;
@@ -365,6 +394,8 @@ static void do_getsockopt_tcp_info(struct so_state *s, int fd, size_t r, size_t
 		olen -= sizeof(struct mptcp_subflow_data);
 		assert(olen == sizeof(struct tcp_info));
 
+		s->tcp_info = ti.ti[0];
+
 		if (ti.ti[0].tcpi_bytes_sent == w &&
 		    ti.ti[0].tcpi_bytes_received == r)
 			goto done;
@@ -386,7 +417,7 @@ static void do_getsockopt_tcp_info(struct so_state *s, int fd, size_t r, size_t
 	do_getsockopt_bogus_sf_data(fd, MPTCP_TCPINFO);
 }
 
-static void do_getsockopt_subflow_addrs(int fd)
+static void do_getsockopt_subflow_addrs(struct so_state *s, int fd)
 {
 	struct sockaddr_storage remote, local;
 	socklen_t olen, rlen, llen;
@@ -433,6 +464,7 @@ static void do_getsockopt_subflow_addrs(int fd)
 
 	assert(memcmp(&local, &addrs.addr[0].ss_local, sizeof(local)) == 0);
 	assert(memcmp(&remote, &addrs.addr[0].ss_remote, sizeof(remote)) == 0);
+	s->addrs = addrs.addr[0];
 
 	memset(&addrs, 0, sizeof(addrs));
 
@@ -453,13 +485,73 @@ static void do_getsockopt_subflow_addrs(int fd)
 	do_getsockopt_bogus_sf_data(fd, MPTCP_SUBFLOW_ADDRS);
 }
 
+struct my_mptcp_full_info {
+	struct mptcp_subflow_full_info i;
+	struct mptcp_info mi;
+};
+static void do_getsockopt_mptcp_full_info(struct so_state *s, int fd)
+{
+	size_t data_size = sizeof(struct my_mptcp_full_info);
+	struct mptcp_subflow_info sfinfo[2];
+	struct my_mptcp_full_info mmfi;
+	struct tcp_info tcp_info[2];
+	socklen_t olen;
+	int ret;
+
+	memset(&mmfi, 0, data_size);
+	memset(tcp_info, 0, sizeof(tcp_info));
+	memset(sfinfo, 0, sizeof(sfinfo));
+
+	mmfi.i.size_subflow_full_info = sizeof(struct mptcp_subflow_full_info);
+	mmfi.i.size_tcpinfo_user = sizeof(struct tcp_info);
+	mmfi.i.size_sfinfo_user = sizeof(struct mptcp_subflow_info);
+	mmfi.i.num_subflows_user = 2;
+	mmfi.i.subflow_info_addr = (unsigned long) &sfinfo[0];
+	mmfi.i.tcp_info_addr = (unsigned long) &tcp_info[0];
+	olen = data_size;
+
+	ret = getsockopt(fd, SOL_MPTCP, MPTCP_FULL_INFO, &mmfi, &olen);
+	if (ret < 0) {
+		if (errno == EOPNOTSUPP) {
+			fprintf(stderr, "\tMPTCP_FULL_INFO test skipped due to lack of kernel support\n");
+			return;
+		}
+		xerror("getsockopt MPTCP_FULL_INFO");
+	}
+
+	assert(olen <= data_size);
+	assert(mmfi.i.size_tcpinfo_user == mmfi.i.size_tcpinfo_kernel);
+	assert(mmfi.i.size_tcpinfo_user == sizeof(struct tcp_info));
+	assert(mmfi.i.size_sfinfo_user == mmfi.i.size_sfinfo_kernel);
+	assert(mmfi.i.size_sfinfo_user == sizeof(struct mptcp_subflow_info));
+	assert(mmfi.i.num_subflows_kern == 1);
+
+	/* Tolerate future extension to mptcp_info struct and running newer
+	 * test on top of older kernel.
+	 * Anyway any kernel supporting MPTCP_FULL_INFO must at least include
+	 * the following in mptcp_info.
+	 */
+	assert(olen > (socklen_t)sizeof(struct mptcp_subflow_full_info));
+	assert(mmfi.mi.mptcpi_subflows == 0);
+	assert(mmfi.mi.mptcpi_bytes_sent == s->last_sample.mptcpi_bytes_sent);
+	assert(mmfi.mi.mptcpi_bytes_received == s->last_sample.mptcpi_bytes_received);
+
+	assert(sfinfo[0].id == 1);
+	assert(tcp_info[0].tcpi_bytes_sent == s->tcp_info.tcpi_bytes_sent);
+	assert(tcp_info[0].tcpi_bytes_received == s->tcp_info.tcpi_bytes_received);
+	assert(!memcmp(&sfinfo->addrs, &s->addrs, sizeof(struct mptcp_subflow_addrs)));
+}
+
 static void do_getsockopts(struct so_state *s, int fd, size_t r, size_t w)
 {
 	do_getsockopt_mptcp_info(s, fd, w);
 
 	do_getsockopt_tcp_info(s, fd, r, w);
 
-	do_getsockopt_subflow_addrs(fd);
+	do_getsockopt_subflow_addrs(s, fd);
+
+	if (r)
+		do_getsockopt_mptcp_full_info(s, fd);
 }
 
 static void connect_one_server(int fd, int pipefd)
-- 
2.40.1



* Re: selftests: mptcp: add MPTCP_FULL_INFO testcase: Build Failure
  2023-05-24 13:50 ` [PATCH v4 mptcp-next 6/6] selftests: mptcp: add MPTCP_FULL_INFO testcase Paolo Abeni
@ 2023-05-24 14:17   ` MPTCP CI
  2023-05-24 14:50     ` Paolo Abeni
  2023-05-24 15:22   ` [PATCH v4 mptcp-next 6/6] selftests: mptcp: add MPTCP_FULL_INFO testcase Matthieu Baerts
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 17+ messages in thread
From: MPTCP CI @ 2023-05-24 14:17 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thank you for your modifications, that's great!

But sadly, our CI spotted some issues with it when trying to build it.

You can find more details there:

  https://patchwork.kernel.org/project/mptcp/patch/930270fbc3358d0c423357f770b679745f8dc4a8.1684935727.git.pabeni@redhat.com/
  https://github.com/multipath-tcp/mptcp_net-next/actions/runs/5069697633

Status: failure
Initiator: MPTCPimporter
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/63022ec27b06

Feel free to reply to this email if you cannot access logs, if you need
some support to fix the error, if this doesn't seem to be caused by your
modifications or if the error is a false positive one.

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


* Re: selftests: mptcp: add MPTCP_FULL_INFO testcase: Build Failure
  2023-05-24 14:17   ` selftests: mptcp: add MPTCP_FULL_INFO testcase: Build Failure MPTCP CI
@ 2023-05-24 14:50     ` Paolo Abeni
  0 siblings, 0 replies; 17+ messages in thread
From: Paolo Abeni @ 2023-05-24 14:50 UTC (permalink / raw)
  To: mptcp

On Wed, 2023-05-24 at 14:17 +0000, MPTCP CI wrote:
> Hi Paolo,
> 
> Thank you for your modifications, that's great!
> 
> But sadly, our CI spotted some issues with it when trying to build it.
> 
> You can find more details there:
> 
>   https://patchwork.kernel.org/project/mptcp/patch/930270fbc3358d0c423357f770b679745f8dc4a8.1684935727.git.pabeni@redhat.com/
>   https://github.com/multipath-tcp/mptcp_net-next/actions/runs/5069697633

sigh, a 32-bit issue with:

	sfinfoptr = (void __force __user *)sffi.subflow_info_addr;

I need to use u64_to_user_ptr(); the fix should be something like:
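
	sfinfoptr = u64_to_user_ptr(sffi.subflow_info_addr);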

I'll repost in a little time (hopefully we don't have the 24h ban
period ;)

/P



* Re: [PATCH v4 mptcp-next 2/6] mptcp: introduce MPTCP_FULL_INFO getsockopt
  2023-05-24 13:50 ` [PATCH v4 mptcp-next 2/6] mptcp: introduce MPTCP_FULL_INFO getsockopt Paolo Abeni
@ 2023-05-24 15:19   ` Matthieu Baerts
  2023-05-24 15:26   ` Matthieu Baerts
  1 sibling, 0 replies; 17+ messages in thread
From: Matthieu Baerts @ 2023-05-24 15:19 UTC (permalink / raw)
  To: Paolo Abeni, mptcp; +Cc: Florian Westphal

Hi Paolo,

On 24/05/2023 15:50, Paolo Abeni wrote:
> Some user-space applications want to monitor the subflows' utilization.
> 
> Dumping the per-subflow tcp_info is not enough, as the PM could close
> and re-create subflows under the hood, fooling the accounting.
> Even checking the src/dst addresses used by each subflow may not be
> enough, because a new subflow could re-use the same address/port as
> the just-closed one.
> 
> This patch introduces a new socket option that allows dumping all the
> relevant information at once (everything, everywhere...), in a
> consistent manner.
> 
> To reuse the existing helper to manipulate the new struct, keep the binary
> layout of the initial few fields the same as mptcp_subflow_data.
> 
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
> v3 -> v4:
>  - full_info struct re-design (Florian)
> 
> v2 -> v3:
>  - added missing changelog (oops)
> ---
>  include/uapi/linux/mptcp.h |  25 ++++++
>  net/mptcp/sockopt.c        | 165 +++++++++++++++++++++++++++++++++----
>  2 files changed, 174 insertions(+), 16 deletions(-)
> 
> diff --git a/include/uapi/linux/mptcp.h b/include/uapi/linux/mptcp.h
> index 32af2d278cb4..37c46cf05795 100644
> --- a/include/uapi/linux/mptcp.h
> +++ b/include/uapi/linux/mptcp.h
> @@ -12,6 +12,7 @@
>  #include <linux/in.h>		/* for sockaddr_in			*/
>  #include <linux/in6.h>		/* for sockaddr_in6			*/
>  #include <linux/socket.h>	/* for sockaddr_storage and sa_family	*/
> +#include <linux/tcp.h>		/* for tcp_info				*/

(we don't directly use tcp_info here below, no?)

>  
>  #define MPTCP_SUBFLOW_FLAG_MCAP_REM		_BITUL(0)
>  #define MPTCP_SUBFLOW_FLAG_MCAP_LOC		_BITUL(1)
> @@ -244,9 +245,33 @@ struct mptcp_subflow_addrs {
>  	};
>  };
>  
> +struct mptcp_subflow_info {
> +	__u32				id;
> +	struct mptcp_subflow_addrs	addrs;
> +};
> +
> +struct mptcp_subflow_full_info {
> +	__u32		size_subflow_full_info;	/* size of this structure in userspace */
> +	__u32		num_subflows_kern;	/* must be 0, set by kernel (real subflow count) */

(maybe better to call it num_subflows if it is the "real" number of
subflows?)

> +	__u32		size_tcpinfo_kernel;	/* must be 0, set by kernel */
> +	__u32		size_tcpinfo_user;
> +	__u32		size_sfinfo_kernel;	/* must be 0, set by kernel */
> +	__u32		size_sfinfo_user;
> +	__u32		num_subflows_user;	/* max subflows that userspace is interested in;
> +						 * the buffers at subflow_info_addr/tcp_info_addr
> +						 * are respectively at least:
> +						 *  num_subflows_user * size_sfinfo_user
> +						 *  num_subflows_user * size_tcpinfo_user
> +						 * bytes wide

The "longish" comment seems to suggest the name is not appropriated,
maybe: size_arrays(_user) or similar?

> +						 */
> +	__aligned_u64	subflow_info_addr;
> +	__aligned_u64	tcp_info_addr;

(the "_addr" suffix is confusing, I thought it was linked to subflow
addresses :) why not "_ptr" instead? or put "addr_" as a prefix?)

> +} __attribute__((aligned(8)));

Out of curiosity, why not include "struct mptcp_info" in "struct
mptcp_subflow_full_info" (and strip "_subflow": "mptcp_full_info")?

It looks like it is a bit hidden that this getsockopt() also writes
mptcp_info in the buffer if there is space. The mptcp_info size could
be given explicitly, making it clear that it can be set to 0 if we
don't want it (instead of not knowing we can give a bigger size to
have it included).

Plus the userspace has to create a custom structure to get it:

  struct my_mptcp_full_info {
  	struct mptcp_subflow_full_info i;
  	struct mptcp_info mi;
  };

(but maybe that is common and fine?)

And while at it, why not do the same as subflow_info_addr and
tcp_info_addr and use a pointer to a structure? I don't know if it
would be better; I just want to know why, just because it is not an
array I suppose :)

Cheers,
Matt
-- 
Tessares | Belgium | Hybrid Access Solutions
www.tessares.net


* Re: [PATCH v4 mptcp-next 6/6] selftests: mptcp: add MPTCP_FULL_INFO testcase
  2023-05-24 13:50 ` [PATCH v4 mptcp-next 6/6] selftests: mptcp: add MPTCP_FULL_INFO testcase Paolo Abeni
  2023-05-24 14:17   ` selftests: mptcp: add MPTCP_FULL_INFO testcase: Build Failure MPTCP CI
@ 2023-05-24 15:22   ` Matthieu Baerts
  2023-05-24 17:04   ` selftests: mptcp: add MPTCP_FULL_INFO testcase: Tests Results MPTCP CI
  2023-05-24 19:56   ` MPTCP CI
  3 siblings, 0 replies; 17+ messages in thread
From: Matthieu Baerts @ 2023-05-24 15:22 UTC (permalink / raw)
  To: Paolo Abeni, mptcp; +Cc: Florian Westphal

Hi Paolo,

On 24/05/2023 15:50, Paolo Abeni wrote:
> Add a testcase explicitly triggering the newly introduced
> MPTCP_FULL_INFO getsockopt.

If you need to send a v5, maybe better to move this patch after the 2nd
one, no?

Patches 3-5 look good to me and only concern MPTCP_INFO, while 1-2 and 6
are linked to the new MPTCP_FULL_INFO.

Cheers,
Matt
-- 
Tessares | Belgium | Hybrid Access Solutions
www.tessares.net


* Re: [PATCH v4 mptcp-next 2/6] mptcp: introduce MPTCP_FULL_INFO getsockopt
  2023-05-24 13:50 ` [PATCH v4 mptcp-next 2/6] mptcp: introduce MPTCP_FULL_INFO getsockopt Paolo Abeni
  2023-05-24 15:19   ` Matthieu Baerts
@ 2023-05-24 15:26   ` Matthieu Baerts
  1 sibling, 0 replies; 17+ messages in thread
From: Matthieu Baerts @ 2023-05-24 15:26 UTC (permalink / raw)
  To: Paolo Abeni, mptcp; +Cc: Florian Westphal

Hi Paolo,

On 24/05/2023 15:50, Paolo Abeni wrote:
> Some user-space applications want to monitor the subflows' utilization.
> 
> Dumping the per-subflow tcp_info is not enough, as the PM could close
> and re-create subflows under the hood, fooling the accounting.
> Even checking the src/dst addresses used by each subflow may not be
> enough, because a new subflow could re-use the same address/port as
> the just-closed one.
> 
> This patch introduces a new socket option that allows dumping all the
> relevant information at once (everything, everywhere...), in a
> consistent manner.
> 
> To reuse the existing helper to manipulate the new struct, keep the binary
> layout of the initial few fields the same as mptcp_subflow_data.

(...)

> +	lock_sock(sk);
> +	mptcp_for_each_subflow(msk, subflow) {
> +		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
> +
> +		if (sfcount++ < sffi.num_subflows_user) {
> +			struct mptcp_subflow_info sfinfo;
> +			struct tcp_info tcp_info;
> +
> +			memset(&sfinfo, 0, sizeof(sfinfo));
> +			sfinfo.id = subflow->subflow_id;
> +			mptcp_get_sub_addrs(ssk, &sfinfo.addrs);

Would it be worth it to first check size_sfinfo_user > sizeof(sfinfo.id)
before retrieving the addresses?

> +			if (copy_to_user(sfinfoptr, &sfinfo, sffi.size_sfinfo_user))
> +				goto fail_release;
> +
> +			tcp_get_info(ssk, &tcp_info);

Same here: just check if sffi.size_tcpinfo_user > 0, just in case
userspace doesn't want one and/or the other.

Cheers,
Matt
-- 
Tessares | Belgium | Hybrid Access Solutions
www.tessares.net


* Re: selftests: mptcp: add MPTCP_FULL_INFO testcase: Tests Results
  2023-05-24 13:50 ` [PATCH v4 mptcp-next 6/6] selftests: mptcp: add MPTCP_FULL_INFO testcase Paolo Abeni
  2023-05-24 14:17   ` selftests: mptcp: add MPTCP_FULL_INFO testcase: Build Failure MPTCP CI
  2023-05-24 15:22   ` [PATCH v4 mptcp-next 6/6] selftests: mptcp: add MPTCP_FULL_INFO testcase Matthieu Baerts
@ 2023-05-24 17:04   ` MPTCP CI
  2023-05-24 19:56   ` MPTCP CI
  3 siblings, 0 replies; 17+ messages in thread
From: MPTCP CI @ 2023-05-24 17:04 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join):
  - Unstable: 1 failed test(s): packetdrill_fastopen 🔴:
  - Task: https://cirrus-ci.com/task/4503925617852416
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4503925617852416/summary/summary.txt

- KVM Validation: normal (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/5629825524695040
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5629825524695040/summary/summary.txt

- KVM Validation: debug (except selftest_mptcp_join):
  - Unstable: 4 failed test(s): packetdrill_add_addr packetdrill_dss packetdrill_syscalls selftest_diag 🔴:
  - Task: https://cirrus-ci.com/task/5066875571273728
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5066875571273728/summary/summary.txt

- {"code":404,"message":
  - "Can't find artifacts containing file conclusion.txt"}:
  - Task: https://cirrus-ci.com/task/6192775478116352
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6192775478116352/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/63022ec27b06


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


* Re: selftests: mptcp: add MPTCP_FULL_INFO testcase: Tests Results
  2023-05-24 13:50 ` [PATCH v4 mptcp-next 6/6] selftests: mptcp: add MPTCP_FULL_INFO testcase Paolo Abeni
                     ` (2 preceding siblings ...)
  2023-05-24 17:04   ` selftests: mptcp: add MPTCP_FULL_INFO testcase: Tests Results MPTCP CI
@ 2023-05-24 19:56   ` MPTCP CI
  3 siblings, 0 replies; 17+ messages in thread
From: MPTCP CI @ 2023-05-24 19:56 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join):
  - Unstable: 1 failed test(s): packetdrill_fastopen 🔴:
  - Task: https://cirrus-ci.com/task/4503925617852416
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4503925617852416/summary/summary.txt

- KVM Validation: normal (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/5629825524695040
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5629825524695040/summary/summary.txt

- KVM Validation: debug (except selftest_mptcp_join):
  - Unstable: 4 failed test(s): packetdrill_add_addr packetdrill_dss packetdrill_syscalls selftest_diag 🔴:
  - Task: https://cirrus-ci.com/task/5066875571273728
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5066875571273728/summary/summary.txt

- KVM Validation: debug (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6036925308469248
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6036925308469248/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/63022ec27b06


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


* Re: selftests: mptcp: add MPTCP_FULL_INFO testcase: Tests Results
  2023-05-25  7:17 [PATCH v6 mptcp-next 6/6] selftests: mptcp: add MPTCP_FULL_INFO testcase Paolo Abeni
  2023-05-25  8:35 ` selftests: mptcp: add MPTCP_FULL_INFO testcase: Tests Results MPTCP CI
@ 2023-05-25 16:51 ` MPTCP CI
  1 sibling, 0 replies; 17+ messages in thread
From: MPTCP CI @ 2023-05-25 16:51 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/4764400553295872
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4764400553295872/summary/summary.txt

- KVM Validation: normal (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/5890300460138496
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5890300460138496/summary/summary.txt

- KVM Validation: debug (except selftest_mptcp_join):
  - Unstable: 2 failed test(s): packetdrill_fastopen selftest_diag 🔴:
  - Task: https://cirrus-ci.com/task/5327350506717184
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5327350506717184/summary/summary.txt

- KVM Validation: debug (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6453250413559808
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6453250413559808/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/de1f158647f1


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


* Re: selftests: mptcp: add MPTCP_FULL_INFO testcase: Tests Results
  2023-05-25  7:17 [PATCH v6 mptcp-next 6/6] selftests: mptcp: add MPTCP_FULL_INFO testcase Paolo Abeni
@ 2023-05-25  8:35 ` MPTCP CI
  2023-05-25 16:51 ` MPTCP CI
  1 sibling, 0 replies; 17+ messages in thread
From: MPTCP CI @ 2023-05-25  8:35 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/5296261184618496
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5296261184618496/summary/summary.txt

- KVM Validation: debug (except selftest_mptcp_join):
  - Unstable: 4 failed test(s): packetdrill_add_addr packetdrill_dss packetdrill_fastopen selftest_diag 🔴:
  - Task: https://cirrus-ci.com/task/5014786207907840
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5014786207907840/summary/summary.txt

- KVM Validation: normal (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6422161091461120
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6422161091461120/summary/summary.txt

- KVM Validation: debug (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6140686114750464
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6140686114750464/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/0ae31b1c5534


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


* Re: selftests: mptcp: add MPTCP_FULL_INFO testcase: Tests Results
  2023-05-24 21:29 [PATCH v5 mptcp-next 6/6] selftests: mptcp: add MPTCP_FULL_INFO testcase Paolo Abeni
@ 2023-05-24 22:55 ` MPTCP CI
  0 siblings, 0 replies; 17+ messages in thread
From: MPTCP CI @ 2023-05-24 22:55 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join):
  - Unstable: 1 failed test(s): packetdrill_fastopen 🔴:
  - Task: https://cirrus-ci.com/task/5986916420550656
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5986916420550656/summary/summary.txt

- KVM Validation: normal (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/5423966467129344
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5423966467129344/summary/summary.txt

- KVM Validation: debug (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/4720279025352704
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4720279025352704/summary/summary.txt

- KVM Validation: debug (except selftest_mptcp_join):
  - Unstable: 4 failed test(s): packetdrill_fastopen packetdrill_mp_capable packetdrill_syscalls selftest_diag 🔴:
  - Task: https://cirrus-ci.com/task/6549866373971968
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6549866373971968/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/d241aa733adf


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


