* [MPTCP] [RFC PATCH mptcp-next 0/9] initial SOL_SOCKET support
@ 2021-03-17 16:38 Florian Westphal
  2021-03-17 16:38 ` [MPTCP] [PATCH mptcp-next 1/9] mptcp: add skeleton to sync msk socket options to subflows Florian Westphal
                   ` (4 more replies)
  0 siblings, 5 replies; 16+ messages in thread
From: Florian Westphal @ 2021-03-17 16:38 UTC (permalink / raw)
  To: mptcp

This patch set improves support for several SOL_SOCKET tuneables.

First patch adds skeleton synchronization functions to copy mptcp socket
settings to a subflow socket.

For the incoming connection case a work queue based synchronization
point is added.

A sequence number tells which subflows are out-of-sync and need
updating.

setsockopt SOL_SOCKET gets extended to handle several
tuneables.

Some of these get copied to all subflows:
- keepalive
- packet mark (influences routing decision)
- SO_LINGER

Others are only set on the initial socket, e.g. SO_BINDTODEVICE.

Some of these are debatable, for instance SO_INCOMING_CPU.
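
For illustration, all of the above are consumed through the normal
setsockopt() interface on the MPTCP socket itself.  A minimal,
hypothetical userspace sketch (socket assumed to be created with
IPPROTO_MPTCP; SO_MARK needs CAP_NET_ADMIN):

#include <sys/socket.h>
#include <netinet/in.h>

static int tune_mptcp_socket(int fd)
{
	int mark = 0x2a;	/* example fwmark value */
	int one = 1;

	/* copied to all current and future subflows */
	if (setsockopt(fd, SOL_SOCKET, SO_MARK, &mark, sizeof(mark)) < 0)
		return -1;

	/* likewise mirrored to every subflow */
	return setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &one, sizeof(one));
}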

Last two patches add support for two TCP sockopts, TCP_CONGESTION
and TCP_INFO.

The former is only set on the initial subflow, since that is the one
the userspace program knows about (bind/connect address).

It seems better to allow mptcpd/iproute2 to augment subflows
with TCP congestion control settings instead.

TCP_INFO is another item open for debate.
The patch here will take the TCP info from the subflow that was
last used to send data.

This may not be what is expected, however.
Since the struct format is fixed, there are not many alternatives:
1. - always fail with -EOPNOTSUPP
2. - try to artificially synthesize TCP_INFO record based on mptcp
level sequence numbers, etc.
3. - always follow the initial subflow only.
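
Whichever variant is chosen, the userspace call itself stays the
standard TCP one; a small, hypothetical sketch of the caller side,
for reference only:

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <string.h>

static int query_tcp_info(int mptcp_fd, struct tcp_info *ti)
{
	socklen_t len = sizeof(*ti);

	memset(ti, 0, sizeof(*ti));
	/* which subflow the counters describe is the open question above */
	return getsockopt(mptcp_fd, IPPROTO_TCP, TCP_INFO, ti, &len);
}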

Comments welcome.

Furthermore, support for various SO_TIMESTAMP* settings will need
cmsg support in mptcp to be useful.  Let me know if anyone is already
looking at that.

Florian Westphal (9):
  mptcp: add skeleton to sync msk socket options to subflows
  mptcp: setsockopt: handle SO_KEEPALIVE and SO_PRIORITY
  mptcp: setsockopt: handle receive/send buffer and device bind
  mptcp: setsockopt: support SO_LINGER
  mptcp: setsockopt: add SO_MARK support
  mptcp: setsockopt: add SO_INCOMING_CPU
  mptcp: setsockopt: SO_DEBUG and no-op options
  mptcp: sockopt: add TCP_CONGESTION
  mptcp: sockopt: handle TCP_INFO

 net/mptcp/protocol.c |  31 +++-
 net/mptcp/protocol.h |  12 ++
 net/mptcp/sockopt.c  | 431 +++++++++++++++++++++++++++++++++++++++++++
 net/mptcp/subflow.c  |   1 +
 4 files changed, 472 insertions(+), 3 deletions(-)

-- 
2.26.2


* [MPTCP] [PATCH mptcp-next 1/9] mptcp: add skeleton to sync msk socket options to subflows
@ 2021-03-17 16:38 ` Florian Westphal
  2021-03-19 11:33   ` [MPTCP] " Paolo Abeni
  0 siblings, 1 reply; 16+ messages in thread
From: Florian Westphal @ 2021-03-17 16:38 UTC (permalink / raw)
  To: mptcp

Handle the following cases:
1. setsockopt is called with multiple subflows.
   Change might have to be mirrored to all of them.
2. Outgoing subflow is created after one or several setsockopt()
   calls have been made.  Old setsockopt changes should be
   synced to the new socket.
3. Like 2, but for incoming subflow.
   This needs the work queue because socket lock (and in the future
   possibly rtnl mutex) might be needed.

Add sequence numbers to subflow context and mptcp socket so
synchronization functions know which subflow is already updated
and which ones are not.

Signed-off-by: Florian Westphal <fw(a)strlen.de>
---
 net/mptcp/protocol.c | 31 ++++++++++++++++++++++++++++---
 net/mptcp/protocol.h | 12 ++++++++++++
 net/mptcp/sockopt.c  | 27 +++++++++++++++++++++++++++
 net/mptcp/subflow.c  |  1 +
 4 files changed, 68 insertions(+), 3 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 9d7e7e13fba8..0b9ef8ddff55 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -730,18 +730,42 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
 		sk->sk_data_ready(sk);
 }
 
-void __mptcp_flush_join_list(struct mptcp_sock *msk)
+static bool mptcp_do_flush_join_list(struct mptcp_sock *msk)
 {
 	struct mptcp_subflow_context *subflow;
 
 	if (likely(list_empty(&msk->join_list)))
-		return;
+		return false;
 
 	spin_lock_bh(&msk->join_list_lock);
 	list_for_each_entry(subflow, &msk->join_list, node)
 		mptcp_propagate_sndbuf((struct sock *)msk, mptcp_subflow_tcp_sock(subflow));
 	list_splice_tail_init(&msk->join_list, &msk->conn_list);
 	spin_unlock_bh(&msk->join_list_lock);
+
+	return true;
+}
+
+static void mptcp_work_flush_join_list(struct mptcp_sock *msk)
+{
+	bool sync_needed = test_and_clear_bit(MPTCP_WORK_SYNC_SETSOCKOPT, &msk->flags);
+
+	if (!mptcp_do_flush_join_list(msk) && !sync_needed)
+		return;
+
+	mptcp_sockopt_sync_all(msk);
+}
+
+void __mptcp_flush_join_list(struct mptcp_sock *msk)
+{
+	if (likely(!mptcp_do_flush_join_list(msk)))
+		return;
+
+	if (msk->setsockopt_seq == msk->setsockopt_seq_old)
+		return;
+
+	if (!test_and_set_bit(MPTCP_WORK_SYNC_SETSOCKOPT, &msk->flags))
+		mptcp_schedule_work((struct sock *)msk);
 }
 
 static bool mptcp_timer_pending(struct sock *sk)
@@ -2307,7 +2331,7 @@ static void mptcp_worker(struct work_struct *work)
 		goto unlock;
 
 	mptcp_check_data_fin_ack(sk);
-	__mptcp_flush_join_list(msk);
+	mptcp_work_flush_join_list(msk);
 
 	mptcp_check_fastclose(msk);
 
@@ -2569,6 +2593,7 @@ static void __mptcp_destroy_sock(struct sock *sk)
 	xfrm_sk_free_policy(sk);
 	sk_refcnt_debug_release(sk);
 	mptcp_dispose_initial_subflow(msk);
+
 	sock_put(sk);
 }
 
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index a0e301ae56b0..ca938a799f66 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -111,6 +111,7 @@
 #define MPTCP_CLEAN_UNA		7
 #define MPTCP_ERROR_REPORT	8
 #define MPTCP_RETRANSMIT	9
+#define MPTCP_WORK_SYNC_SETSOCKOPT 10
 
 static inline bool before64(__u64 seq1, __u64 seq2)
 {
@@ -280,6 +281,9 @@ struct mptcp_sock {
 		u64	time;	/* start time of measurement window */
 		u64	rtt_us; /* last maximum rtt of subflows */
 	} rcvq_space;
+
+	u32 setsockopt_seq;
+	u32 setsockopt_seq_old;
 };
 
 #define mptcp_lock_sock(___sk, cb) do {					\
@@ -438,6 +442,8 @@ struct mptcp_subflow_context {
 	long	delegated_status;
 	struct	list_head delegated_node;   /* link into delegated_action, protected by local BH */
 
+	u32 setsockopt_seq;
+
 	struct	sock *tcp_sock;	    /* tcp sk backpointer */
 	struct	sock *conn;	    /* parent mptcp_sock */
 	const	struct inet_connection_sock_af_ops *icsk_af_ops;
@@ -758,6 +764,12 @@ unsigned int mptcp_pm_get_add_addr_accept_max(struct mptcp_sock *msk);
 unsigned int mptcp_pm_get_subflows_max(struct mptcp_sock *msk);
 unsigned int mptcp_pm_get_local_addr_max(struct mptcp_sock *msk);
 
+int mptcp_setsockopt(struct sock *sk, int level, int optname,
+		     sockptr_t optval, unsigned int optlen);
+
+void mptcp_sockopt_sync(struct mptcp_sock *msk, struct sock *ssk);
+void mptcp_sockopt_sync_all(struct mptcp_sock *msk);
+
 static inline struct mptcp_ext *mptcp_get_ext(const struct sk_buff *skb)
 {
 	return (struct mptcp_ext *)skb_ext_find(skb, SKB_EXT_MPTCP);
diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
index fb98fab252df..fa71216e11a3 100644
--- a/net/mptcp/sockopt.c
+++ b/net/mptcp/sockopt.c
@@ -350,3 +350,30 @@ int mptcp_getsockopt(struct sock *sk, int level, int optname,
 	return -EOPNOTSUPP;
 }
 
+void mptcp_sockopt_sync(struct mptcp_sock *msk, struct sock *ssk)
+{
+	struct mptcp_subflow_context *subflow;
+
+	msk_owned_by_me(msk);
+
+	subflow = mptcp_subflow_ctx(ssk);
+	if (subflow->setsockopt_seq == msk->setsockopt_seq)
+		return;
+
+	subflow->setsockopt_seq = msk->setsockopt_seq;
+}
+
+void mptcp_sockopt_sync_all(struct mptcp_sock *msk)
+{
+	struct mptcp_subflow_context *subflow;
+
+	msk_owned_by_me(msk);
+
+	mptcp_for_each_subflow(msk, subflow) {
+		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+
+		mptcp_sockopt_sync(msk, ssk);
+	}
+
+	msk->setsockopt_seq_old = msk->setsockopt_seq;
+}
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index 6af443a18bac..af18f9041673 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -1311,6 +1311,7 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
 	mptcp_info2sockaddr(remote, &addr, ssk->sk_family);
 
 	mptcp_add_pending_subflow(msk, subflow);
+	mptcp_sockopt_sync(msk, ssk);
 	err = kernel_connect(sf, (struct sockaddr *)&addr, addrlen, O_NONBLOCK);
 	if (err && err != -EINPROGRESS)
 		goto failed_unlink;
-- 
2.26.2


* [MPTCP] [PATCH mptcp-next 2/9] mptcp: setsockopt: handle SO_KEEPALIVE and SO_PRIORITY
@ 2021-03-17 16:38 ` Florian Westphal
  2021-03-19 11:43   ` [MPTCP] " Paolo Abeni
  0 siblings, 1 reply; 16+ messages in thread
From: Florian Westphal @ 2021-03-17 16:38 UTC (permalink / raw)
  To: mptcp

start with something simple: both take an integer value, both
need to be mirrored to all subflows.

Signed-off-by: Florian Westphal <fw(a)strlen.de>
---
 net/mptcp/sockopt.c | 95 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 95 insertions(+)

diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
index fa71216e11a3..33ca67e99f8f 100644
--- a/net/mptcp/sockopt.c
+++ b/net/mptcp/sockopt.c
@@ -24,6 +24,81 @@ static struct sock *__mptcp_tcp_fallback(struct mptcp_sock *msk)
 	return msk->first;
 }
 
+static int mptcp_get_int_option(struct mptcp_sock *msk, sockptr_t optval, unsigned int optlen, int *val)
+{
+	if (optlen < sizeof(int))
+		return -EINVAL;
+
+	if (copy_from_sockptr(val, optval, sizeof(*val)))
+		return -EFAULT;
+
+	return 0;
+}
+
+static void mptcp_so_keepalive(struct mptcp_sock *msk, int val)
+{
+	struct mptcp_subflow_context *subflow;
+	struct sock *sk = (struct sock *)msk;
+
+	lock_sock(sk);
+
+	mptcp_for_each_subflow(msk, subflow) {
+		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+		bool slow = lock_sock_fast(ssk);
+
+		if (ssk->sk_prot->keepalive)
+			ssk->sk_prot->keepalive(ssk, !!val);
+		sock_valbool_flag(ssk, SOCK_KEEPOPEN, !!val);
+		unlock_sock_fast(ssk, slow);
+	}
+
+	sock_valbool_flag(sk, SOCK_KEEPOPEN, !!val);
+	release_sock(sk);
+}
+
+static int mptcp_so_priority(struct mptcp_sock *msk, int val)
+{
+	sockptr_t optval = KERNEL_SOCKPTR(&val);
+	struct mptcp_subflow_context *subflow;
+	struct sock *sk = (struct sock *)msk;
+	int ret;
+
+	ret = sock_setsockopt(sk->sk_socket, SOL_SOCKET, SO_PRIORITY,
+			      optval, sizeof(val));
+	if (ret)
+		return ret;
+
+	lock_sock(sk);
+	mptcp_for_each_subflow(msk, subflow) {
+		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+
+		WRITE_ONCE(ssk->sk_priority, val);
+	}
+	release_sock(sk);
+
+	return 0;
+}
+
+static int mptcp_setsockopt_sol_socket_int(struct mptcp_sock *msk, int optname,
+					   sockptr_t optval, unsigned int optlen)
+{
+	int val, ret;
+
+	ret = mptcp_get_int_option(msk, optval, optlen, &val);
+	if (ret)
+		return ret;
+
+	switch (optname) {
+	case SO_KEEPALIVE:
+		mptcp_so_keepalive(msk, val);
+		return 0;
+	case SO_PRIORITY:
+		return mptcp_so_priority(msk, val);
+	}
+
+	return -ENOPROTOOPT;
+}
+
 static int mptcp_setsockopt_sol_socket(struct mptcp_sock *msk, int optname,
 				       sockptr_t optval, unsigned int optlen)
 {
@@ -50,6 +125,9 @@ static int mptcp_setsockopt_sol_socket(struct mptcp_sock *msk, int optname,
 		}
 		release_sock(sk);
 		return ret;
+	case SO_KEEPALIVE:
+	case SO_PRIORITY:
+		return mptcp_setsockopt_sol_socket_int(msk, optname, optval, optlen);
 	}
 
 	return sock_setsockopt(sk->sk_socket, SOL_SOCKET, optname, optval, optlen);
@@ -350,6 +428,22 @@ int mptcp_getsockopt(struct sock *sk, int level, int optname,
 	return -EOPNOTSUPP;
 }
 
+static void sync_socket_options(struct mptcp_sock *msk, struct sock *ssk)
+{
+	struct sock *sk = (struct sock *)msk;
+	bool slow = lock_sock_fast(ssk);
+
+	if (ssk->sk_prot->keepalive) {
+		if (sock_flag(sk, SOCK_KEEPOPEN))
+			ssk->sk_prot->keepalive(ssk, 1);
+		else
+			ssk->sk_prot->keepalive(ssk, 0);
+	}
+
+	ssk->sk_priority = sk->sk_priority;
+	unlock_sock_fast(ssk, slow);
+}
+
 void mptcp_sockopt_sync(struct mptcp_sock *msk, struct sock *ssk)
 {
 	struct mptcp_subflow_context *subflow;
@@ -360,6 +454,7 @@ void mptcp_sockopt_sync(struct mptcp_sock *msk, struct sock *ssk)
 	if (subflow->setsockopt_seq == msk->setsockopt_seq)
 		return;
 
+	sync_socket_options(msk, ssk);
 	subflow->setsockopt_seq = msk->setsockopt_seq;
 }
 
-- 
2.26.2


* [MPTCP] [PATCH mptcp-next 3/9] mptcp: setsockopt: handle receive/send buffer and device bind
@ 2021-03-17 16:38 ` Florian Westphal
  2021-03-19 11:44   ` [MPTCP] " Paolo Abeni
  0 siblings, 1 reply; 16+ messages in thread
From: Florian Westphal @ 2021-03-17 16:38 UTC (permalink / raw)
  To: mptcp

Similar to the previous patch: this needs to be mirrored to all subflows.

Device bind is simpler: it is only done on the initial (listener) sk.

Signed-off-by: Florian Westphal <fw(a)strlen.de>
---
 net/mptcp/sockopt.c | 51 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)

diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
index 33ca67e99f8f..9a87c50e21a4 100644
--- a/net/mptcp/sockopt.c
+++ b/net/mptcp/sockopt.c
@@ -79,6 +79,38 @@ static int mptcp_so_priority(struct mptcp_sock *msk, int val)
 	return 0;
 }
 
+static int mptcp_so_sndrcvbuf(struct mptcp_sock *msk, int optname, int val)
+{
+	sockptr_t optval = KERNEL_SOCKPTR(&val);
+	struct mptcp_subflow_context *subflow;
+	struct sock *sk = (struct sock *)msk;
+	int ret;
+
+	ret = sock_setsockopt(sk->sk_socket, SOL_SOCKET, optname,
+			      optval, sizeof(val));
+	if (ret)
+		return ret;
+
+	lock_sock(sk);
+	mptcp_for_each_subflow(msk, subflow) {
+		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+		bool slow = lock_sock_fast(ssk);
+		unsigned int ulock;
+
+		ulock = sk->sk_userlocks;
+		ulock &= SOCK_SNDBUF_LOCK | SOCK_RCVBUF_LOCK;
+
+		ssk->sk_priority = val;
+		ssk->sk_userlocks |= ulock;
+		WRITE_ONCE(ssk->sk_sndbuf, sk->sk_sndbuf);
+		WRITE_ONCE(ssk->sk_rcvbuf, sk->sk_rcvbuf);
+		unlock_sock_fast(ssk, slow);
+	}
+
+	release_sock(sk);
+	return 0;
+}
+
 static int mptcp_setsockopt_sol_socket_int(struct mptcp_sock *msk, int optname,
 					   sockptr_t optval, unsigned int optlen)
 {
@@ -94,6 +126,11 @@ static int mptcp_setsockopt_sol_socket_int(struct mptcp_sock *msk, int optname,
 		return 0;
 	case SO_PRIORITY:
 		return mptcp_so_priority(msk, val);
+	case SO_SNDBUF:
+	case SO_SNDBUFFORCE:
+	case SO_RCVBUF:
+	case SO_RCVBUFFORCE:
+		return mptcp_so_sndrcvbuf(msk, optname, val);
 	}
 
 	return -ENOPROTOOPT;
@@ -109,6 +146,8 @@ static int mptcp_setsockopt_sol_socket(struct mptcp_sock *msk, int optname,
 	switch (optname) {
 	case SO_REUSEPORT:
 	case SO_REUSEADDR:
+	case SO_BINDTODEVICE:
+	case SO_BINDTOIFINDEX:
 		lock_sock(sk);
 		ssock = __mptcp_nmpc_socket(msk);
 		if (!ssock) {
@@ -122,11 +161,19 @@ static int mptcp_setsockopt_sol_socket(struct mptcp_sock *msk, int optname,
 				sk->sk_reuseport = ssock->sk->sk_reuseport;
 			else if (optname == SO_REUSEADDR)
 				sk->sk_reuse = ssock->sk->sk_reuse;
+			else if (optname == SO_BINDTODEVICE)
+				sk->sk_bound_dev_if = ssock->sk->sk_bound_dev_if;
+			else if (optname == SO_BINDTOIFINDEX)
+				sk->sk_bound_dev_if = ssock->sk->sk_bound_dev_if;
 		}
 		release_sock(sk);
 		return ret;
 	case SO_KEEPALIVE:
 	case SO_PRIORITY:
+	case SO_SNDBUF:
+	case SO_SNDBUFFORCE:
+	case SO_RCVBUF:
+	case SO_RCVBUFFORCE:
 		return mptcp_setsockopt_sol_socket_int(msk, optname, optval, optlen);
 	}
 
@@ -441,6 +488,10 @@ static void sync_socket_options(struct mptcp_sock *msk, struct sock *ssk)
 	}
 
 	ssk->sk_priority = sk->sk_priority;
+	ssk->sk_bound_dev_if = sk->sk_bound_dev_if;
+
+	WRITE_ONCE(ssk->sk_sndbuf, sk->sk_sndbuf);
+	WRITE_ONCE(ssk->sk_rcvbuf, sk->sk_rcvbuf);
 	unlock_sock_fast(ssk, slow);
 }
 
-- 
2.26.2


* [MPTCP] [PATCH mptcp-next 8/9] mptcp: sockopt: add TCP_CONGESTION
@ 2021-03-17 16:38 ` Florian Westphal
  2021-03-19 11:54   ` [MPTCP] " Paolo Abeni
  2021-03-19 12:19   ` Paolo Abeni
  0 siblings, 2 replies; 16+ messages in thread
From: Florian Westphal @ 2021-03-17 16:38 UTC (permalink / raw)
  To: mptcp

Make this interact with the initial subflow only, as that is the one
exposed to userspace (connect, bind).

Signed-off-by: Florian Westphal <fw(a)strlen.de>
---
 net/mptcp/sockopt.c | 83 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)

diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
index 3ba31f4a73e3..c1ffc3603b4f 100644
--- a/net/mptcp/sockopt.c
+++ b/net/mptcp/sockopt.c
@@ -556,6 +556,46 @@ static bool mptcp_supported_sockopt(int level, int optname)
 	return false;
 }
 
+static int mptcp_setsockopt_first_sf_only(struct mptcp_sock *msk, int level, int optname,
+					  sockptr_t optval, unsigned int optlen)
+{
+	struct sock *sk = (struct sock *)msk;
+	struct socket *ssock;
+	int ret = -EINVAL;
+	struct sock *ssk;
+
+	lock_sock(sk);
+	ssk = msk->first;
+	if (ssk)
+		goto do_sockopt;
+
+	ssock = __mptcp_nmpc_socket(msk);
+	if (!ssock)
+		goto out;
+
+	ssk = ssock->sk;
+do_sockopt:
+	ret = tcp_setsockopt(ssk, level, optname, optval, optlen);
+out:
+
+	release_sock(sk);
+	return ret;
+}
+
+static int mptcp_setsockopt_sol_tcp(struct mptcp_sock *msk, int optname,
+				    sockptr_t optval, unsigned int optlen)
+{
+	switch (optname) {
+	case TCP_ULP:
+		return -EOPNOTSUPP;
+	case TCP_CONGESTION:
+		return mptcp_setsockopt_first_sf_only(msk, SOL_TCP, optname,
+						      optval, optlen);
+	}
+
+	return -EOPNOTSUPP;
+}
+
 int mptcp_setsockopt(struct sock *sk, int level, int optname,
 		     sockptr_t optval, unsigned int optlen)
 {
@@ -585,6 +625,47 @@ int mptcp_setsockopt(struct sock *sk, int level, int optname,
 	if (level == SOL_IPV6)
 		return mptcp_setsockopt_v6(msk, optname, optval, optlen);
 
+	if (level == SOL_TCP)
+		return mptcp_setsockopt_sol_tcp(msk, optname, optval, optlen);
+
+	return -EOPNOTSUPP;
+}
+
+static int mptcp_getsockopt_first_sf_only(struct mptcp_sock *msk, int level, int optname,
+					  char __user *optval, int __user *optlen)
+{
+	struct sock *sk = (struct sock *)msk;
+	struct socket *ssock;
+	int ret = -EINVAL;
+	struct sock *ssk;
+
+	lock_sock(sk);
+	ssk = msk->first;
+	if (ssk) {
+		ret = tcp_getsockopt(ssk, level, optname, optval, optlen);
+		goto out;
+	}
+
+	ssock = __mptcp_nmpc_socket(msk);
+	if (!ssock)
+		goto out;
+
+	ret = tcp_getsockopt(ssock->sk, level, optname, optval, optlen);
+
+out:
+	release_sock(sk);
+	return ret;
+}
+
+static int mptcp_getsockopt_sol_tcp(struct mptcp_sock *msk, int optname,
+				    char __user *optval, int __user *optlen)
+{
+	switch (optname) {
+	case TCP_ULP:
+	case TCP_CONGESTION:
+		return mptcp_getsockopt_first_sf_only(msk, SOL_TCP, optname,
+						      optval, optlen);
+	}
 	return -EOPNOTSUPP;
 }
 
@@ -608,6 +689,8 @@ int mptcp_getsockopt(struct sock *sk, int level, int optname,
 	if (ssk)
 		return tcp_getsockopt(ssk, level, optname, optval, option);
 
+	if (level == SOL_TCP)
+		return mptcp_getsockopt_sol_tcp(msk, optname, optval, option);
 	return -EOPNOTSUPP;
 }
 
-- 
2.26.2


* [MPTCP] [PATCH mptcp-next 9/9] mptcp: sockopt: handle TCP_INFO
@ 2021-03-17 16:38 ` Florian Westphal
  2021-03-19 12:00   ` [MPTCP] " Paolo Abeni
  0 siblings, 1 reply; 16+ messages in thread
From: Florian Westphal @ 2021-03-17 16:38 UTC (permalink / raw)
  To: mptcp

Tries to follow the subflow most recently used for transmitting data.

Signed-off-by: Florian Westphal <fw(a)strlen.de>
---
 net/mptcp/sockopt.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
index c1ffc3603b4f..27ea5b1ea57e 100644
--- a/net/mptcp/sockopt.c
+++ b/net/mptcp/sockopt.c
@@ -657,6 +657,28 @@ static int mptcp_getsockopt_first_sf_only(struct mptcp_sock *msk, int level, int
 	return ret;
 }
 
+static int mptcp_getsockopt_tcp_info(struct mptcp_sock *msk, char __user *optval, int __user *optlen)
+{
+	struct mptcp_subflow_context *subflow;
+	struct sock *sk = (struct sock *)msk;
+	struct sock *ssk = NULL;
+	int ret = -EOPNOTSUPP;
+
+	lock_sock(sk);
+	mptcp_for_each_subflow(msk, subflow) {
+		ssk = mptcp_subflow_tcp_sock(subflow);
+
+		if (ssk == msk->last_snd)
+			break;
+	}
+
+	if (ssk)
+		ret = tcp_getsockopt(ssk, SOL_TCP, TCP_INFO, optval, optlen);
+
+	release_sock(sk);
+	return ret;
+}
+
 static int mptcp_getsockopt_sol_tcp(struct mptcp_sock *msk, int optname,
 				    char __user *optval, int __user *optlen)
 {
@@ -665,6 +687,8 @@ static int mptcp_getsockopt_sol_tcp(struct mptcp_sock *msk, int optname,
 	case TCP_CONGESTION:
 		return mptcp_getsockopt_first_sf_only(msk, SOL_TCP, optname,
 						      optval, optlen);
+	case TCP_INFO:
+		return mptcp_getsockopt_tcp_info(msk, optval, optlen);
 	}
 	return -EOPNOTSUPP;
 }
-- 
2.26.2


* [MPTCP] Re: [PATCH mptcp-next 1/9] mptcp: add skeleton to sync msk socket options to subflows
  2021-03-17 16:38 ` [MPTCP] [PATCH mptcp-next 1/9] mptcp: add skeleton to sync msk socket options to subflows Florian Westphal
@ 2021-03-19 11:33   ` Paolo Abeni
  2021-03-19 12:34     ` Florian Westphal
  0 siblings, 1 reply; 16+ messages in thread
From: Paolo Abeni @ 2021-03-19 11:33 UTC (permalink / raw)
  To: Florian Westphal, mptcp

Hello,

First things first, thank you for this great effort!

On Wed, 2021-03-17 at 17:38 +0100, Florian Westphal wrote:
> Handle the following cases:
> 1. setsockopt is called with multiple subflows.
>    Change might have to be mirrored to all of them.
> 2. Outgoing subflow is created after one or several setsockopt()
>    calls have been made.  Old setsockopt changes should be
>    synced to the new socket.
> 3. Like 2, but for incoming subflow.
>    This needs the work queue because socket lock (and in the future
>    possibly rtnl mutex) might be needed.
> 
> Add sequence numbers to subflow context and mptcp socket so
> synchronization functions know which subflow is already updated
> and which ones are not.
> 
> Signed-off-by: Florian Westphal <fw@strlen.de>
> ---
>  net/mptcp/protocol.c | 31 ++++++++++++++++++++++++++++---
>  net/mptcp/protocol.h | 12 ++++++++++++
>  net/mptcp/sockopt.c  | 27 +++++++++++++++++++++++++++
>  net/mptcp/subflow.c  |  1 +
>  4 files changed, 68 insertions(+), 3 deletions(-)
> 
> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
> index 9d7e7e13fba8..0b9ef8ddff55 100644
> --- a/net/mptcp/protocol.c
> +++ b/net/mptcp/protocol.c
> @@ -730,18 +730,42 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
>  		sk->sk_data_ready(sk);
>  }
>  
> -void __mptcp_flush_join_list(struct mptcp_sock *msk)
> +static bool mptcp_do_flush_join_list(struct mptcp_sock *msk)
>  {
>  	struct mptcp_subflow_context *subflow;
>  
>  	if (likely(list_empty(&msk->join_list)))
> -		return;
> +		return false;
>  
>  	spin_lock_bh(&msk->join_list_lock);
>  	list_for_each_entry(subflow, &msk->join_list, node)
>  		mptcp_propagate_sndbuf((struct sock *)msk, mptcp_subflow_tcp_sock(subflow));
>  	list_splice_tail_init(&msk->join_list, &msk->conn_list);
>  	spin_unlock_bh(&msk->join_list_lock);
> +
> +	return true;
> +}
> +
> +static void mptcp_work_flush_join_list(struct mptcp_sock *msk)
> +{
> +	bool sync_needed = test_and_clear_bit(MPTCP_WORK_SYNC_SETSOCKOPT, &msk->flags);
> +
> +	if (!mptcp_do_flush_join_list(msk) && !sync_needed)
> +		return;
> +
> +	mptcp_sockopt_sync_all(msk);
> +}
> +
> +void __mptcp_flush_join_list(struct mptcp_sock *msk)

There are a few __mptcp_flush_join_list() call-sites which are already
in process context, e.g.:

mptcp_stream_accept()
__mptcp_push_pending()
__mptcp_move_skbs()
mptcp_disconnect()

What about renaming mptcp_work_flush_join_list() to mptcp_flush_join_list() -
or mptcp_work_flush_join_list_lock() or some better name - and using it
in the above places?

> @@ -2569,6 +2593,7 @@ static void __mptcp_destroy_sock(struct sock *sk)
>  	xfrm_sk_free_policy(sk);
>  	sk_refcnt_debug_release(sk);
>  	mptcp_dispose_initial_subflow(msk);
> +

I guess the added empty line is not intentional? ;)

Thanks!

Paolo


* [MPTCP] Re: [PATCH mptcp-next 2/9] mptcp: setsockopt: handle SO_KEEPALIVE and SO_PRIORITY
  2021-03-17 16:38 ` [MPTCP] [PATCH mptcp-next 2/9] mptcp: setsockopt: handle SO_KEEPALIVE and SO_PRIORITY Florian Westphal
@ 2021-03-19 11:43   ` Paolo Abeni
  2021-03-19 12:35     ` Florian Westphal
  0 siblings, 1 reply; 16+ messages in thread
From: Paolo Abeni @ 2021-03-19 11:43 UTC (permalink / raw)
  To: Florian Westphal, mptcp

On Wed, 2021-03-17 at 17:38 +0100, Florian Westphal wrote:
> start with something simple: both take an integer value, both
> need to be mirrored to all subflows.
> 
> Signed-off-by: Florian Westphal <fw@strlen.de>
> ---
>  net/mptcp/sockopt.c | 95 +++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 95 insertions(+)
> 
> diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
> index fa71216e11a3..33ca67e99f8f 100644
> --- a/net/mptcp/sockopt.c
> +++ b/net/mptcp/sockopt.c
> @@ -24,6 +24,81 @@ static struct sock *__mptcp_tcp_fallback(struct mptcp_sock *msk)
>  	return msk->first;
>  }
>  
> +static int mptcp_get_int_option(struct mptcp_sock *msk, sockptr_t optval, unsigned int optlen, int *val)
> +{
> +	if (optlen < sizeof(int))
> +		return -EINVAL;
> +
> +	if (copy_from_sockptr(val, optval, sizeof(*val)))
> +		return -EFAULT;
> +
> +	return 0;
> +}
> +
> +static void mptcp_so_keepalive(struct mptcp_sock *msk, int val)
> +{
> +	struct mptcp_subflow_context *subflow;
> +	struct sock *sk = (struct sock *)msk;
> +
> +	lock_sock(sk);
> +
> +	mptcp_for_each_subflow(msk, subflow) {
> +		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
> +		bool slow = lock_sock_fast(ssk);

Can we reduce the code duplication by moving this chunk - msk lock, loop
across the subflows - into mptcp_setsockopt_sol_socket_int()? The main
switch would then be inside the for-each loop.

We will end up acquiring the ssk sock lock more than once/more than
strictly required, but perhaps it is a bunch of lines less? WDYT?

(I think this can apply also to a few next patches)
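
A rough, untested sketch of the shape I have in mind (hypothetical;
msk-level handling and error paths omitted):

static int mptcp_setsockopt_sol_socket_int(struct mptcp_sock *msk, int optname,
					   sockptr_t optval, unsigned int optlen)
{
	struct mptcp_subflow_context *subflow;
	struct sock *sk = (struct sock *)msk;
	int val, ret;

	ret = mptcp_get_int_option(msk, optval, optlen, &val);
	if (ret)
		return ret;

	lock_sock(sk);
	mptcp_for_each_subflow(msk, subflow) {
		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
		bool slow = lock_sock_fast(ssk);

		switch (optname) {
		case SO_KEEPALIVE:
			if (ssk->sk_prot->keepalive)
				ssk->sk_prot->keepalive(ssk, !!val);
			sock_valbool_flag(ssk, SOCK_KEEPOPEN, !!val);
			break;
		case SO_PRIORITY:
			WRITE_ONCE(ssk->sk_priority, val);
			break;
		}

		unlock_sock_fast(ssk, slow);
	}
	release_sock(sk);

	return 0;
}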

/P


* [MPTCP] Re: [PATCH mptcp-next 3/9] mptcp: setsockopt: handle receive/send buffer and device bind
  2021-03-17 16:38 ` [MPTCP] [PATCH mptcp-next 3/9] mptcp: setsockopt: handle receive/send buffer and device bind Florian Westphal
@ 2021-03-19 11:44   ` Paolo Abeni
  2021-03-19 12:37     ` Florian Westphal
  0 siblings, 1 reply; 16+ messages in thread
From: Paolo Abeni @ 2021-03-19 11:44 UTC (permalink / raw)
  To: Florian Westphal, mptcp

On Wed, 2021-03-17 at 17:38 +0100, Florian Westphal wrote:
> Similar to the previous patch: this needs to be mirrored to all subflows.
> 
> Device bind is simpler: it is only done on the initial (listener) sk.
> 
> Signed-off-by: Florian Westphal <fw@strlen.de>
> ---
>  net/mptcp/sockopt.c | 51 +++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 51 insertions(+)
> 
> diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
> index 33ca67e99f8f..9a87c50e21a4 100644
> --- a/net/mptcp/sockopt.c
> +++ b/net/mptcp/sockopt.c
> @@ -79,6 +79,38 @@ static int mptcp_so_priority(struct mptcp_sock *msk, int val)
>  	return 0;
>  }
>  
> +static int mptcp_so_sndrcvbuf(struct mptcp_sock *msk, int optname, int val)
> +{
> +	sockptr_t optval = KERNEL_SOCKPTR(&val);
> +	struct mptcp_subflow_context *subflow;
> +	struct sock *sk = (struct sock *)msk;
> +	int ret;
> +
> +	ret = sock_setsockopt(sk->sk_socket, SOL_SOCKET, optname,
> +			      optval, sizeof(val));
> +	if (ret)
> +		return ret;
> +
> +	lock_sock(sk);
> +	mptcp_for_each_subflow(msk, subflow) {
> +		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
> +		bool slow = lock_sock_fast(ssk);
> +		unsigned int ulock;
> +
> +		ulock = sk->sk_userlocks;
> +		ulock &= SOCK_SNDBUF_LOCK | SOCK_RCVBUF_LOCK;
> +
> +		ssk->sk_priority = val;
> +		ssk->sk_userlocks |= ulock;
> +		WRITE_ONCE(ssk->sk_sndbuf, sk->sk_sndbuf);
> +		WRITE_ONCE(ssk->sk_rcvbuf, sk->sk_rcvbuf);

I guess we should overwrite the snd/rcv buf size only if the
corresponding LOCK is set ?!? The same below in the synchronization
part.

Cheers,

Paolo


* [MPTCP] Re: [PATCH mptcp-next 8/9] mptcp: sockopt: add TCP_CONGESTION
  2021-03-17 16:38 ` [MPTCP] [PATCH mptcp-next 8/9] mptcp: sockopt: add TCP_CONGESTION Florian Westphal
@ 2021-03-19 11:54   ` Paolo Abeni
  2021-03-19 12:19   ` Paolo Abeni
  1 sibling, 0 replies; 16+ messages in thread
From: Paolo Abeni @ 2021-03-19 11:54 UTC (permalink / raw)
  To: Florian Westphal, mptcp

On Wed, 2021-03-17 at 17:38 +0100, Florian Westphal wrote:
> Make this interact with the initial subflow only, as that is the one
> exposed to userspace (connect, bind).

I'm wondering if we should instead propagate the CC algo to all
subflows?!  I see this is possibly subject to personal taste - and
mine is notoriously not good ;) Do you have a strong preference here?
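
For concreteness, propagating it would look roughly like this
(hypothetical, untested sketch):

static int mptcp_setsockopt_tcp_congestion_all(struct mptcp_sock *msk,
					       sockptr_t optval,
					       unsigned int optlen)
{
	struct mptcp_subflow_context *subflow;
	struct sock *sk = (struct sock *)msk;
	int ret = 0;

	lock_sock(sk);
	mptcp_for_each_subflow(msk, subflow) {
		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);

		/* tcp_setsockopt() takes the subflow socket lock itself */
		ret = tcp_setsockopt(ssk, SOL_TCP, TCP_CONGESTION, optval, optlen);
		if (ret)
			break;
	}
	release_sock(sk);

	return ret;
}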

Thanks!

Paolo




* [MPTCP] Re: [PATCH mptcp-next 9/9] mptcp: sockopt: handle TCP_INFO
  2021-03-17 16:38 ` [MPTCP] [PATCH mptcp-next 9/9] mptcp: sockopt: handle TCP_INFO Florian Westphal
@ 2021-03-19 12:00   ` Paolo Abeni
  0 siblings, 0 replies; 16+ messages in thread
From: Paolo Abeni @ 2021-03-19 12:00 UTC (permalink / raw)
  To: Florian Westphal, mptcp

On Wed, 2021-03-17 at 17:38 +0100, Florian Westphal wrote:
> Tries to follow the subflow most recently used for transmitting data.

I guess this one is the most tricky so far ;)

One downside of picking the most recent one is that different
subflows could be picked in successive getsockopt() calls, possibly
fooling userspace with e.g. decreasing seg_in/seg_out.

What if we always pick the first subflow?

/P


* [MPTCP] Re: [PATCH mptcp-next 8/9] mptcp: sockopt: add TCP_CONGESTION
  2021-03-17 16:38 ` [MPTCP] [PATCH mptcp-next 8/9] mptcp: sockopt: add TCP_CONGESTION Florian Westphal
  2021-03-19 11:54   ` [MPTCP] " Paolo Abeni
@ 2021-03-19 12:19   ` Paolo Abeni
  1 sibling, 0 replies; 16+ messages in thread
From: Paolo Abeni @ 2021-03-19 12:19 UTC (permalink / raw)
  To: Florian Westphal, mptcp

On Wed, 2021-03-17 at 17:38 +0100, Florian Westphal wrote:
> @@ -585,6 +625,47 @@ int mptcp_setsockopt(struct sock *sk, int level, int optname,
>  	if (level == SOL_IPV6)
>  		return mptcp_setsockopt_v6(msk, optname, optval, optlen);
>  
> +	if (level == SOL_TCP)
> +		return mptcp_setsockopt_sol_tcp(msk, optname, optval, optlen);
> +
> +	return -EOPNOTSUPP;
> +}

Side notes not really related to this patch, but to the code near
here ;)

I think we could call __mptcp_check_fallback() before acquiring the
lock, so that in the non-fallback case we could avoid acquiring the msk
lock.

I think we also need to sync sk_reuseport/sk_reuse/sk_ipv6only ?!?

I don't see where msk->setsockopt_seq is incremented. I expected to
find that on a successful mptcp_setsockopt(), but perhaps I missed the
relevant code ?!?
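
(Purely as a hypothetical sketch, I would have expected something like
the following once an option has been handled successfully:)

	msk->setsockopt_seq++;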

/P


* [MPTCP] Re: [PATCH mptcp-next 1/9] mptcp: add skeleton to sync msk socket options to subflows
  2021-03-19 11:33   ` [MPTCP] " Paolo Abeni
@ 2021-03-19 12:34     ` Florian Westphal
  2021-03-19 14:36       ` Paolo Abeni
  0 siblings, 1 reply; 16+ messages in thread
From: Florian Westphal @ 2021-03-19 12:34 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Paolo Abeni <pabeni@redhat.com> wrote:
> There are a few __mptcp_flush_join_list() call-sites which are already
> in process context, e.g.:
> 
> mptcp_stream_accept()
> __mptcp_push_pending()
> __mptcp_move_skbs()
> mptcp_disconnect()
> 
> What about renaming mptcp_work_flush_join_list() to mptcp_flush_join_list() -
> or mptcp_work_flush_join_list_lock() or some better name - and using it
> in the above places?

What would be the advantage?

> >  	xfrm_sk_free_policy(sk);
> >  	sk_refcnt_debug_release(sk);
> >  	mptcp_dispose_initial_subflow(msk);
> > +
> 
> I guess the added empty line is not intentional? ;)

No, I will zap it.


* [MPTCP] Re: [PATCH mptcp-next 2/9] mptcp: setsockopt: handle SO_KEEPALIVE and SO_PRIORITY
  2021-03-19 11:43   ` [MPTCP] " Paolo Abeni
@ 2021-03-19 12:35     ` Florian Westphal
  0 siblings, 0 replies; 16+ messages in thread
From: Florian Westphal @ 2021-03-19 12:35 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Paolo Abeni <pabeni@redhat.com> wrote:
> > +static void mptcp_so_keepalive(struct mptcp_sock *msk, int val)
> > +{
> > +	struct mptcp_subflow_context *subflow;
> > +	struct sock *sk = (struct sock *)msk;
> > +
> > +	lock_sock(sk);
> > +
> > +	mptcp_for_each_subflow(msk, subflow) {
> > +		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
> > +		bool slow = lock_sock_fast(ssk);
> 
> Can we reduce the code duplication by moving this chunk - msk lock, loop
> across the subflows - into mptcp_setsockopt_sol_socket_int()? The main
> switch would then be inside the for-each loop.
> 
> We will end up acquiring the ssk sock lock more than once/more than
> strictly required, but perhaps it is a bunch of lines less? WDYT?

No idea.  I tried it first and thought it was ugly.


* [MPTCP] Re: [PATCH mptcp-next 3/9] mptcp: setsockopt: handle receive/send buffer and device bind
  2021-03-19 11:44   ` [MPTCP] " Paolo Abeni
@ 2021-03-19 12:37     ` Florian Westphal
  0 siblings, 0 replies; 16+ messages in thread
From: Florian Westphal @ 2021-03-19 12:37 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Paolo Abeni <pabeni@redhat.com> wrote:
> > +	lock_sock(sk);
> > +	mptcp_for_each_subflow(msk, subflow) {
> > +		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
> > +		bool slow = lock_sock_fast(ssk);
> > +		unsigned int ulock;
> > +
> > +		ulock = sk->sk_userlocks;
> > +		ulock &= SOCK_SNDBUF_LOCK | SOCK_RCVBUF_LOCK;
> > +
> > +		ssk->sk_priority = val;
> > +		ssk->sk_userlocks |= ulock;
> > +		WRITE_ONCE(ssk->sk_sndbuf, sk->sk_sndbuf);
> > +		WRITE_ONCE(ssk->sk_rcvbuf, sk->sk_rcvbuf);
> 
> I guess we should overwrite the snd/rcv buf size only if the
> corresponding LOCK is set ?!? The same below in the synchronization
> part.

Right, the user might lock RCVBUF but not SNDBUF; I'll add a test.
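
Roughly like this (untested sketch of what I have in mind):

	if (sk->sk_userlocks & SOCK_SNDBUF_LOCK) {
		ssk->sk_userlocks |= SOCK_SNDBUF_LOCK;
		WRITE_ONCE(ssk->sk_sndbuf, sk->sk_sndbuf);
	}

	if (sk->sk_userlocks & SOCK_RCVBUF_LOCK) {
		ssk->sk_userlocks |= SOCK_RCVBUF_LOCK;
		WRITE_ONCE(ssk->sk_rcvbuf, sk->sk_rcvbuf);
	}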


* [MPTCP] Re: [PATCH mptcp-next 1/9] mptcp: add skeleton to sync msk socket options to subflows
  2021-03-19 12:34     ` Florian Westphal
@ 2021-03-19 14:36       ` Paolo Abeni
  0 siblings, 0 replies; 16+ messages in thread
From: Paolo Abeni @ 2021-03-19 14:36 UTC (permalink / raw)
  To: Florian Westphal; +Cc: mptcp

On Fri, 2021-03-19 at 13:34 +0100, Florian Westphal wrote:
> Paolo Abeni <pabeni@redhat.com> wrote:
> > There are a few __mptcp_flush_join_list() call-sites which are already
> > in process context, e.g.:
> > 
> > mptcp_stream_accept()
> > __mptcp_push_pending()
> > __mptcp_move_skbs()
> > mptcp_disconnect()
> > 
> > What about renaming mptcp_work_flush_join_list() to mptcp_flush_join_list() -
> > or mptcp_work_flush_join_list_lock() or some better name - and using it
> > in the above places?
> 
> What would be the advantage?

Uhmm... the need for synchronization should be quite a rare event,
right? I guess we can afford calling the workqueue on such occasions,
but it's a bit strange to me to schedule the workqueue to get process
context when already in process context.

BTW, there are a few scenarios which are not 110% clear to me:

1)
listen(s0)
s1 = accept() // main msk

setsockopt(s1, a)

[2nd subflow is accepted]
// s1->setsockopt_seq == 1, s1->setsockopt_seq_old == 0

mptcp_sockopt_sync_all()
// s1->setsockopt_seq_old == 1

[3rd subflow is accepted]
// s1->setsockopt_seq == 1, s1->setsockopt_seq_old == 1, no
// synchronization ?!?
// but the 3rd subflow will inherit its sockopts from s0, so
// it should need that?

2)
listen(s0)
s1 = accept() // main msk

setsockopt(s0, a)

[2nd subflow is accepted, after the above sockopt]
// s1->setsockopt_seq == 0, s1->setsockopt_seq_old == 0, no
// synchronization ?!?
// but the 2nd subflow has different options inherited from s0?!?

3)
subflow generated by port-based endpoint

I think we need some additional handling for the above ?!? e.g.
fetching setsockopt_seq_old from the listener sockets into the subflow
at syn_recv time, and in _flush_join_list() doing/scheduling the
synchronization if at least one subflow has
setsockopt_seq != msk->setsockopt_seq.

please let me know if the above makes any sense ;)

/P



Thread overview: 16+ messages
2021-03-17 16:38 [MPTCP] [RFC PATCH mptcp-next 0/9] initial SOL_SOCKET support Florian Westphal
2021-03-17 16:38 ` [MPTCP] [PATCH mptcp-next 1/9] mptcp: add skeleton to sync msk socket options to subflows Florian Westphal
2021-03-19 11:33   ` [MPTCP] " Paolo Abeni
2021-03-19 12:34     ` Florian Westphal
2021-03-19 14:36       ` Paolo Abeni
2021-03-17 16:38 ` [MPTCP] [PATCH mptcp-next 2/9] mptcp: setsockopt: handle SO_KEEPALIVE and SO_PRIORITY Florian Westphal
2021-03-19 11:43   ` [MPTCP] " Paolo Abeni
2021-03-19 12:35     ` Florian Westphal
2021-03-17 16:38 ` [MPTCP] [PATCH mptcp-next 3/9] mptcp: setsockopt: handle receive/send buffer and device bind Florian Westphal
2021-03-19 11:44   ` [MPTCP] " Paolo Abeni
2021-03-19 12:37     ` Florian Westphal
2021-03-17 16:38 ` [MPTCP] [PATCH mptcp-next 8/9] mptcp: sockopt: add TCP_CONGESTION Florian Westphal
2021-03-19 11:54   ` [MPTCP] " Paolo Abeni
2021-03-19 12:19   ` Paolo Abeni
2021-03-17 16:38 ` [MPTCP] [PATCH mptcp-next 9/9] mptcp: sockopt: handle TCP_INFO Florian Westphal
2021-03-19 12:00   ` [MPTCP] " Paolo Abeni
