* [PATCH v3 mptcp-next 0/3] mptcp: more self-tests improvements
@ 2022-02-10 18:49 Paolo Abeni
  2022-02-10 18:49 ` [PATCH v3 mptcp-next 1/3] Squash-to: "mptcp: constify a bunch of of helpers" Paolo Abeni
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Paolo Abeni @ 2022-02-10 18:49 UTC (permalink / raw)
  To: mptcp

This is the new iteration of "mptcp: strict local address ID selection";
it should hopefully fix the last self-test failure.

patch 1/3 is a somewhat related cleanup (to be squashed into an existing
patch)
patch 2/3 should address the self-test failure
patch 3/3 avoids the RFC breakage reported in the previous iteration,
leveraging the previous one

Paolo Abeni (3):
  Squash-to: "mptcp: constify a bunch of of helpers"
  mptcp: more careful RM_ADDR generation
  mptcp: strict local address ID selection.

 include/uapi/linux/mptcp.h |  1 +
 net/mptcp/pm_netlink.c     | 44 ++++++++++++-------------
 net/mptcp/protocol.c       |  3 ++
 net/mptcp/protocol.h       |  3 +-
 net/mptcp/subflow.c        | 67 +++++++++++++++++++++++++++++++++-----
 5 files changed, 84 insertions(+), 34 deletions(-)

-- 
2.34.1


* [PATCH v3 mptcp-next 1/3] Squash-to: "mptcp: constify a bunch of of helpers"
  2022-02-10 18:49 [PATCH v3 mptcp-next 0/3] mptcp: more self-tests improvements Paolo Abeni
@ 2022-02-10 18:49 ` Paolo Abeni
  2022-02-11 23:11   ` Mat Martineau
  2022-02-10 18:49 ` [PATCH v3 mptcp-next 2/3] mptcp: more careful RM_ADDR generation Paolo Abeni
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: Paolo Abeni @ 2022-02-10 18:49 UTC (permalink / raw)
  To: mptcp

There are a few more helpers that can be const-ified and that I missed
in the previous patch; just squash these changes in there.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 net/mptcp/pm_netlink.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
index 46346f009485..56f5603c10f2 100644
--- a/net/mptcp/pm_netlink.c
+++ b/net/mptcp/pm_netlink.c
@@ -1289,7 +1289,7 @@ int mptcp_pm_get_flags_and_ifindex_by_id(struct net *net, unsigned int id,
 }
 
 static bool remove_anno_list_by_saddr(struct mptcp_sock *msk,
-				      struct mptcp_addr_info *addr)
+				      const struct mptcp_addr_info *addr)
 {
 	struct mptcp_pm_add_entry *entry;
 
@@ -1304,7 +1304,7 @@ static bool remove_anno_list_by_saddr(struct mptcp_sock *msk,
 }
 
 static bool mptcp_pm_remove_anno_addr(struct mptcp_sock *msk,
-				      struct mptcp_addr_info *addr,
+				      const struct mptcp_addr_info *addr,
 				      bool force)
 {
 	struct mptcp_rm_list list = { .nr = 0 };
-- 
2.34.1


* [PATCH v3 mptcp-next 2/3] mptcp: more careful RM_ADDR generation
  2022-02-10 18:49 [PATCH v3 mptcp-next 0/3] mptcp: more self-tests improvements Paolo Abeni
  2022-02-10 18:49 ` [PATCH v3 mptcp-next 1/3] Squash-to: "mptcp: constify a bunch of of helpers" Paolo Abeni
@ 2022-02-10 18:49 ` Paolo Abeni
  2022-02-11 23:10   ` Mat Martineau
  2022-02-10 18:49 ` [PATCH v3 mptcp-next 3/3] mptcp: strict local address ID selection Paolo Abeni
  2022-02-11 10:33 ` [PATCH v3 mptcp-next 0/3] mptcp: more self-tests improvements Matthieu Baerts
  3 siblings, 1 reply; 10+ messages in thread
From: Paolo Abeni @ 2022-02-10 18:49 UTC (permalink / raw)
  To: mptcp

In some edge scenarios, an MPTCP subflow can use a local address
mapped by a "dummy" endpoint created by the in-kernel path manager.

When such an endpoint is deleted, the in-kernel PM sends an RM_ADDR MPTCP
suboption. That is somewhat unexpected, as an MPTCP listener will keep
accepting incoming subflows targeting such an address, and the unexpected
options can confuse some self-tests.

Be more conservative about RM_ADDR generation: do it only if the
relevant endpoint has either the SIGNAL or SUBFLOW flag set - that is,
it's not a dummy one.
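
In pseudo-code, the new policy boils down to the following predicate -
just an illustrative sketch, the helper below does not exist and the
patch open-codes the equivalent check at the RM_ADDR generation sites:

    static bool rm_addr_allowed(const struct mptcp_pm_addr_entry *entry)
    {
            /* endpoints implicitly created by the in-kernel PM carry the
             * new DUMMY flag and must never generate RM_ADDR
             */
            return !(entry->flags & MPTCP_PM_ADDR_FLAG_DUMMY);
    }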

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 include/uapi/linux/mptcp.h |  1 +
 net/mptcp/pm_netlink.c     | 25 +++++++++++++++++--------
 2 files changed, 18 insertions(+), 8 deletions(-)

diff --git a/include/uapi/linux/mptcp.h b/include/uapi/linux/mptcp.h
index f106a3941cdf..34ca8c04f64e 100644
--- a/include/uapi/linux/mptcp.h
+++ b/include/uapi/linux/mptcp.h
@@ -81,6 +81,7 @@ enum {
 #define MPTCP_PM_ADDR_FLAG_SUBFLOW			(1 << 1)
 #define MPTCP_PM_ADDR_FLAG_BACKUP			(1 << 2)
 #define MPTCP_PM_ADDR_FLAG_FULLMESH			(1 << 3)
+#define MPTCP_PM_ADDR_FLAG_DUMMY			(1 << 4)
 
 enum {
 	MPTCP_PM_CMD_UNSPEC,
diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
index 56f5603c10f2..928ebe4949e9 100644
--- a/net/mptcp/pm_netlink.c
+++ b/net/mptcp/pm_netlink.c
@@ -1036,7 +1036,7 @@ int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct sock_common *skc)
 	entry->addr.id = 0;
 	entry->addr.port = 0;
 	entry->ifindex = 0;
-	entry->flags = 0;
+	entry->flags = MPTCP_PM_ADDR_FLAG_DUMMY;
 	entry->lsk = NULL;
 	ret = mptcp_pm_nl_append_new_local_addr(pernet, entry);
 	if (ret < 0)
@@ -1238,6 +1238,11 @@ static int mptcp_nl_cmd_add_addr(struct sk_buff *skb, struct genl_info *info)
 		return -EINVAL;
 	}
 
+	if (addr.flags & MPTCP_PM_ADDR_FLAG_DUMMY) {
+		GENL_SET_ERR_MSG(info, "can't create DUMMY endpoint");
+		return -EINVAL;
+	}
+
 	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
 	if (!entry) {
 		GENL_SET_ERR_MSG(info, "can't allocate addr");
@@ -1322,11 +1327,12 @@ static bool mptcp_pm_remove_anno_addr(struct mptcp_sock *msk,
 }
 
 static int mptcp_nl_remove_subflow_and_signal_addr(struct net *net,
-						   struct mptcp_addr_info *addr)
+						   const struct mptcp_pm_addr_entry *entry)
 {
-	struct mptcp_sock *msk;
-	long s_slot = 0, s_num = 0;
+	const struct mptcp_addr_info *addr = &entry->addr;
 	struct mptcp_rm_list list = { .nr = 0 };
+	long s_slot = 0, s_num = 0;
+	struct mptcp_sock *msk;
 
 	pr_debug("remove_id=%d", addr->id);
 
@@ -1346,7 +1352,8 @@ static int mptcp_nl_remove_subflow_and_signal_addr(struct net *net,
 
 		lock_sock(sk);
 		remove_subflow = lookup_subflow_by_saddr(&msk->conn_list, addr);
-		mptcp_pm_remove_anno_addr(msk, addr, remove_subflow);
+		mptcp_pm_remove_anno_addr(msk, addr, remove_subflow &&
+						     !(entry->flags & MPTCP_PM_ADDR_FLAG_DUMMY));
 		if (remove_subflow)
 			mptcp_pm_remove_subflow(msk, &list);
 		release_sock(sk);
@@ -1443,7 +1450,7 @@ static int mptcp_nl_cmd_del_addr(struct sk_buff *skb, struct genl_info *info)
 	__clear_bit(entry->addr.id, pernet->id_bitmap);
 	spin_unlock_bh(&pernet->lock);
 
-	mptcp_nl_remove_subflow_and_signal_addr(sock_net(skb->sk), &entry->addr);
+	mptcp_nl_remove_subflow_and_signal_addr(sock_net(skb->sk), entry);
 	synchronize_rcu();
 	__mptcp_pm_release_addr_entry(entry);
 
@@ -1458,9 +1465,11 @@ static void mptcp_pm_remove_addrs_and_subflows(struct mptcp_sock *msk,
 
 	list_for_each_entry(entry, rm_list, list) {
 		if (lookup_subflow_by_saddr(&msk->conn_list, &entry->addr) &&
-		    alist.nr < MPTCP_RM_IDS_MAX &&
 		    slist.nr < MPTCP_RM_IDS_MAX) {
-			alist.ids[alist.nr++] = entry->addr.id;
+			/* skip RM_ADDR for dummy endpoints */
+			if (!(entry->flags & MPTCP_PM_ADDR_FLAG_DUMMY) &&
+			    alist.nr < MPTCP_RM_IDS_MAX)
+				alist.ids[alist.nr++] = entry->addr.id;
 			slist.ids[slist.nr++] = entry->addr.id;
 		} else if (remove_anno_list_by_saddr(msk, &entry->addr) &&
 			 alist.nr < MPTCP_RM_IDS_MAX) {
-- 
2.34.1


* [PATCH v3 mptcp-next 3/3] mptcp: strict local address ID selection.
  2022-02-10 18:49 [PATCH v3 mptcp-next 0/3] mptcp: more self-tests improvements Paolo Abeni
  2022-02-10 18:49 ` [PATCH v3 mptcp-next 1/3] Squash-to: "mptcp: constify a bunch of of helpers" Paolo Abeni
  2022-02-10 18:49 ` [PATCH v3 mptcp-next 2/3] mptcp: more careful RM_ADDR generation Paolo Abeni
@ 2022-02-10 18:49 ` Paolo Abeni
  2022-02-11 23:04   ` Mat Martineau
  2022-02-11 10:33 ` [PATCH v3 mptcp-next 0/3] mptcp: more self-tests improvements Matthieu Baerts
  3 siblings, 1 reply; 10+ messages in thread
From: Paolo Abeni @ 2022-02-10 18:49 UTC (permalink / raw)
  To: mptcp

The address ID selection for MPJ subflows created in response
to an incoming ADD_ADDR option is currently unreliable: it happens
at MPJ socket creation time, when the local address could be
unknown.

Additionally, if no local endpoint is available for the local
address, a new dummy endpoint is created, confusing user-land.

This change refactors the code to move the address ID selection inside
the rebuild_header() helper, when the local address eventually
selected by the route lookup is finally known. If the address used
is not mapped by any endpoint - and thus can't be advertised/removed -
pick the ID 0 instead of allocating a new endpoint.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 v2 -> v3:
- keep creating dummy endpoint

 v1 -> v2:
- hopefully fix build issue with ipv6 disabled
- avoid looking up the local_id multiple times for req sockets
- factor out a helper for local_id initialization

RFC -> v1:
- don't bail if ID lookup fails, use 0 instead
---
 net/mptcp/pm_netlink.c | 15 +---------
 net/mptcp/protocol.c   |  3 ++
 net/mptcp/protocol.h   |  3 +-
 net/mptcp/subflow.c    | 67 ++++++++++++++++++++++++++++++++++++------
 4 files changed, 64 insertions(+), 24 deletions(-)

diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
index 928ebe4949e9..ca0fb2ab1204 100644
--- a/net/mptcp/pm_netlink.c
+++ b/net/mptcp/pm_netlink.c
@@ -83,16 +83,6 @@ static bool addresses_equal(const struct mptcp_addr_info *a,
 	return a->port == b->port;
 }
 
-static bool address_zero(const struct mptcp_addr_info *addr)
-{
-	struct mptcp_addr_info zero;
-
-	memset(&zero, 0, sizeof(zero));
-	zero.family = addr->family;
-
-	return addresses_equal(addr, &zero, true);
-}
-
 static void local_address(const struct sock_common *skc,
 			  struct mptcp_addr_info *addr)
 {
@@ -998,7 +988,7 @@ int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct sock_common *skc)
 	struct mptcp_addr_info skc_local;
 	struct mptcp_addr_info msk_local;
 	struct pm_nl_pernet *pernet;
-	int ret = -1;
+	int ret = 0;
 
 	if (WARN_ON_ONCE(!msk))
 		return -1;
@@ -1011,9 +1001,6 @@ int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct sock_common *skc)
 	if (addresses_equal(&msk_local, &skc_local, false))
 		return 0;
 
-	if (address_zero(&skc_local))
-		return 0;
-
 	pernet = net_generic(sock_net((struct sock *)msk), pm_nl_pernet_id);
 
 	rcu_read_lock();
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 3324e1c61576..57caf470e500 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -117,6 +117,9 @@ static int __mptcp_socket_create(struct mptcp_sock *msk)
 	list_add(&subflow->node, &msk->conn_list);
 	sock_hold(ssock->sk);
 	subflow->request_mptcp = 1;
+
+	/* This is the first subflow, always with id 0 */
+	subflow->local_id_valid = 1;
 	mptcp_sock_graft(msk->first, sk->sk_socket);
 
 	return 0;
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index a1ce1fd005ab..663b8d83154e 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -453,7 +453,8 @@ struct mptcp_subflow_context {
 		rx_eof : 1,
 		can_ack : 1,        /* only after processing the remote a key */
 		disposable : 1,	    /* ctx can be free at ulp release time */
-		stale : 1;	    /* unable to snd/rcv data, do not use for xmit */
+		stale : 1,	    /* unable to snd/rcv data, do not use for xmit */
+		local_id_valid : 1; /* local_id is correctly initialized */
 	enum mptcp_data_avail data_avail;
 	u32	remote_nonce;
 	u64	thmac;
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index b53b392dd280..283e5d57e003 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -481,6 +481,51 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
 	mptcp_subflow_reset(sk);
 }
 
+static void subflow_set_local_id(struct mptcp_subflow_context *subflow, int local_id)
+{
+	subflow->local_id = local_id;
+	subflow->local_id_valid = 1;
+}
+
+static int subflow_chk_local_id(struct sock *sk)
+{
+	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
+	struct mptcp_sock *msk = mptcp_sk(subflow->conn);
+	int err;
+
+	if (likely(subflow->local_id_valid))
+		return 0;
+
+	err = mptcp_pm_get_local_id(msk, (struct sock_common *)sk);
+	if (err < 0)
+		return err;
+
+	subflow_set_local_id(subflow, err);
+	return 0;
+}
+
+static int subflow_rebuild_header(struct sock *sk)
+{
+	int err = subflow_chk_local_id(sk);
+
+	if (unlikely(err < 0))
+		return err;
+
+	return inet_sk_rebuild_header(sk);
+}
+
+#if IS_ENABLED(CONFIG_MPTCP_IPV6)
+static int subflow_v6_rebuild_header(struct sock *sk)
+{
+	int err = subflow_chk_local_id(sk);
+
+	if (unlikely(err < 0))
+		return err;
+
+	return inet6_sk_rebuild_header(sk);
+}
+#endif
+
 struct request_sock_ops mptcp_subflow_request_sock_ops;
 static struct tcp_request_sock_ops subflow_request_sock_ipv4_ops __ro_after_init;
 
@@ -1403,13 +1448,8 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
 		get_random_bytes(&subflow->local_nonce, sizeof(u32));
 	} while (!subflow->local_nonce);
 
-	if (!local_id) {
-		err = mptcp_pm_get_local_id(msk, (struct sock_common *)ssk);
-		if (err < 0)
-			goto failed;
-
-		local_id = err;
-	}
+	if (local_id)
+		subflow_set_local_id(subflow, local_id);
 
 	mptcp_pm_get_flags_and_ifindex_by_id(sock_net(sk), local_id,
 					     &flags, &ifindex);
@@ -1434,7 +1474,6 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
 	pr_debug("msk=%p remote_token=%u local_id=%d remote_id=%d", msk,
 		 remote_token, local_id, remote_id);
 	subflow->remote_token = remote_token;
-	subflow->local_id = local_id;
 	subflow->remote_id = remote_id;
 	subflow->request_join = 1;
 	subflow->request_bkup = !!(flags & MPTCP_PM_ADDR_FLAG_BACKUP);
@@ -1734,15 +1773,22 @@ static void subflow_ulp_clone(const struct request_sock *req,
 		new_ctx->token = subflow_req->token;
 		new_ctx->ssn_offset = subflow_req->ssn_offset;
 		new_ctx->idsn = subflow_req->idsn;
+
+		/* this is the first subflow, id is always 0 */
+		new_ctx->local_id_valid = 1;
 	} else if (subflow_req->mp_join) {
 		new_ctx->ssn_offset = subflow_req->ssn_offset;
 		new_ctx->mp_join = 1;
 		new_ctx->fully_established = 1;
 		new_ctx->backup = subflow_req->backup;
-		new_ctx->local_id = subflow_req->local_id;
 		new_ctx->remote_id = subflow_req->remote_id;
 		new_ctx->token = subflow_req->token;
 		new_ctx->thmac = subflow_req->thmac;
+
+		/* the subflow req id is valid, fetched via subflow_check_req()
+		 * and subflow_token_join_request()
+		 */
+		subflow_set_local_id(new_ctx, subflow_req->local_id);
 	}
 }
 
@@ -1795,6 +1841,7 @@ void __init mptcp_subflow_init(void)
 	subflow_specific.conn_request = subflow_v4_conn_request;
 	subflow_specific.syn_recv_sock = subflow_syn_recv_sock;
 	subflow_specific.sk_rx_dst_set = subflow_finish_connect;
+	subflow_specific.rebuild_header = subflow_rebuild_header;
 
 	tcp_prot_override = tcp_prot;
 	tcp_prot_override.release_cb = tcp_release_cb_override;
@@ -1807,6 +1854,7 @@ void __init mptcp_subflow_init(void)
 	subflow_v6_specific.conn_request = subflow_v6_conn_request;
 	subflow_v6_specific.syn_recv_sock = subflow_syn_recv_sock;
 	subflow_v6_specific.sk_rx_dst_set = subflow_finish_connect;
+	subflow_v6_specific.rebuild_header = subflow_v6_rebuild_header;
 
 	subflow_v6m_specific = subflow_v6_specific;
 	subflow_v6m_specific.queue_xmit = ipv4_specific.queue_xmit;
@@ -1814,6 +1862,7 @@ void __init mptcp_subflow_init(void)
 	subflow_v6m_specific.net_header_len = ipv4_specific.net_header_len;
 	subflow_v6m_specific.mtu_reduced = ipv4_specific.mtu_reduced;
 	subflow_v6m_specific.net_frag_header_len = 0;
+	subflow_v6m_specific.rebuild_header = subflow_rebuild_header;
 
 	tcpv6_prot_override = tcpv6_prot;
 	tcpv6_prot_override.release_cb = tcp_release_cb_override;
-- 
2.34.1


* Re: [PATCH v3 mptcp-next 0/3] mptcp: more self-tests improvements
  2022-02-10 18:49 [PATCH v3 mptcp-next 0/3] mptcp: more self-tests improvements Paolo Abeni
                   ` (2 preceding siblings ...)
  2022-02-10 18:49 ` [PATCH v3 mptcp-next 3/3] mptcp: strict local address ID selection Paolo Abeni
@ 2022-02-11 10:33 ` Matthieu Baerts
  2022-02-11 11:44   ` Paolo Abeni
  3 siblings, 1 reply; 10+ messages in thread
From: Matthieu Baerts @ 2022-02-11 10:33 UTC (permalink / raw)
  To: Paolo Abeni, mptcp

Hi Paolo,

On 10/02/2022 19:49, Paolo Abeni wrote:
> This is the new iteration of "mptcp: strict local address ID selection";
> it should hopefully fix the last self-test failure.

Thank you for this v3!

I don't know if it is related, but the CI seems to take longer than
usual with these 3 patches.

With a debug kernel config, it even reached the timeout of 55min I set
to start the VM and run all tests:

- KVM Validation: debug:
  - Critical: Global Timeout ❌:
  - Task: https://cirrus-ci.com/task/6256933542821888
  - Summary:
https://api.cirrus-ci.com/v1/artifact/task/6256933542821888/summary/summary.txt

So I restarted it and I didn't reach the timeout:

- KVM Validation: debug:
  - Unstable: 1 failed test(s): selftest_mptcp_connect 🔴:
  - Task: https://cirrus-ci.com/task/5212186606829568
  - Summary:
https://api.cirrus-ci.com/v1/artifact/task/5212186606829568/summary/summary.txt

Still taking longer than usual, I think. But it's hard to be sure about
that with these public CIs, as we don't control where they are executed and
which other tasks are run in parallel.
Do you observe the same on your side?


BTW, you can see that on this last build, it detected an issue with
mptcp_connect.sh. It didn't detect that with the previous job. That's
probably not related to your modifications. I just noticed it is not the
first time we got the issue (I didn't see it due to the other issues):

- https://cirrus-ci.com/task/6348125144088576
- https://cirrus-ci.com/task/6565756707012608
- https://cirrus-ci.com/task/5501584690905088
- https://cirrus-ci.com/task/6734301743022080
- https://cirrus-ci.com/task/6064315366113280
- etc.

I'm going to create a new issue for that.

Cheers,
Matt
-- 
Tessares | Belgium | Hybrid Access Solutions
www.tessares.net

* Re: [PATCH v3 mptcp-next 0/3] mptcp: more self-tests improvements
  2022-02-11 10:33 ` [PATCH v3 mptcp-next 0/3] mptcp: more self-tests improvements Matthieu Baerts
@ 2022-02-11 11:44   ` Paolo Abeni
  0 siblings, 0 replies; 10+ messages in thread
From: Paolo Abeni @ 2022-02-11 11:44 UTC (permalink / raw)
  To: Matthieu Baerts, mptcp

On Fri, 2022-02-11 at 11:33 +0100, Matthieu Baerts wrote:
> Hi Paolo,
> 
> On 10/02/2022 19:49, Paolo Abeni wrote:
> > This is the new iteration of "mptcp: strict local address ID selection";
> > it should hopefully fix the last self-test failure.
> 
> Thank you for this v3!
> 
> I don't know if it is related, but the CI seems to take longer than
> usual with these 3 patches.
> 
> With a debug kernel config, it even reached the timeout of 55min I set
> to start the VM and run all tests:
> 
> - KVM Validation: debug:
>   - Critical: Global Timeout ❌:
>   - Task: https://cirrus-ci.com/task/6256933542821888
>   - Summary:
> https://api.cirrus-ci.com/v1/artifact/task/6256933542821888/summary/summary.txt
> 
> So I restarted it and I didn't reach the timeout:
> 
> - KVM Validation: debug:
>   - Unstable: 1 failed test(s): selftest_mptcp_connect 🔴:
>   - Task: https://cirrus-ci.com/task/5212186606829568
>   - Summary:
> https://api.cirrus-ci.com/v1/artifact/task/5212186606829568/summary/summary.txt
> 
> Still taking longer than usual, I think. But it's hard to be sure about
> that with these public CIs, as we don't control where they are executed and
> which other tasks are run in parallel.
> Do you observe the same on your side?

Uhmmm... I did not look closely at the running time. Nothing in the
patches should change it significantly. The successful run for v2 took
1h10':

https://cirrus-ci.com/task/5430100798734336

Do you know what the average runtime for a dbg build is? It looks like
>50' was not uncommon at all...

I'll try to have a look at the runtime here, but not very soon.

It does look like something that can be coped with via a suitable timeout.

> BTW, you can see that on this last build, it detected an issue with
> mptcp_connect.sh. It didn't detect that with the previous job. That's
> probably not related to your modifications. I just noticed it is not the
> first time we got the issue (I didn't see it due to the other issues):
> 
> - https://cirrus-ci.com/task/6348125144088576
> - https://cirrus-ci.com/task/6565756707012608
> - https://cirrus-ci.com/task/5501584690905088
> - https://cirrus-ci.com/task/6734301743022080
> - https://cirrus-ci.com/task/6064315366113280
> - etc.
> 
> I'm going to create a new issue for that.
> 
uhm... too much job security hurts ;)

Every one of such failures is on the disconnect test, in the fallback
scenario.

/P


* Re: [PATCH v3 mptcp-next 3/3] mptcp: strict local address ID selection.
  2022-02-10 18:49 ` [PATCH v3 mptcp-next 3/3] mptcp: strict local address ID selection Paolo Abeni
@ 2022-02-11 23:04   ` Mat Martineau
  0 siblings, 0 replies; 10+ messages in thread
From: Mat Martineau @ 2022-02-11 23:04 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

On Thu, 10 Feb 2022, Paolo Abeni wrote:

> The address ID selection for MPJ subflows created in response
> to an incoming ADD_ADDR option is currently unreliable: it happens
> at MPJ socket creation time, when the local address could be
> unknown.
>
> Additionally, if no local endpoint is available for the local
> address, a new dummy endpoint is created, confusing user-land.
>
> This change refactors the code to move the address ID selection inside
> the rebuild_header() helper, when the local address eventually
> selected by the route lookup is finally known. If the address used
> is not mapped by any endpoint - and thus can't be advertised/removed -
> pick the ID 0 instead of allocating a new endpoint.
>
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
> v2 -> v3:
> - keep creating dummy endpoint
>
> v1 -> v2:
> - hopefully fix build issue with ipv6 disabled
> - avoid looking up the local_id multiple times for req sockets
> - factor out a helper for local_id initialization
>
> RFC -> v1:
> - don't bail if ID lookup fails, use 0 instead
> ---
> net/mptcp/pm_netlink.c | 15 +---------
> net/mptcp/protocol.c   |  3 ++
> net/mptcp/protocol.h   |  3 +-
> net/mptcp/subflow.c    | 67 ++++++++++++++++++++++++++++++++++++------
> 4 files changed, 64 insertions(+), 24 deletions(-)
>
> diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
> index 928ebe4949e9..ca0fb2ab1204 100644
> --- a/net/mptcp/pm_netlink.c
> +++ b/net/mptcp/pm_netlink.c
> @@ -83,16 +83,6 @@ static bool addresses_equal(const struct mptcp_addr_info *a,
> 	return a->port == b->port;
> }
>
> -static bool address_zero(const struct mptcp_addr_info *addr)
> -{
> -	struct mptcp_addr_info zero;
> -
> -	memset(&zero, 0, sizeof(zero));
> -	zero.family = addr->family;
> -
> -	return addresses_equal(addr, &zero, true);
> -}
> -
> static void local_address(const struct sock_common *skc,
> 			  struct mptcp_addr_info *addr)
> {
> @@ -998,7 +988,7 @@ int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct sock_common *skc)
> 	struct mptcp_addr_info skc_local;
> 	struct mptcp_addr_info msk_local;
> 	struct pm_nl_pernet *pernet;
> -	int ret = -1;
> +	int ret = 0;

With this line changed, ret is never negative after the rcu_read_unlock() 
in this function, so the dummy record creation code at the end is all dead 
code. I'm guessing this needs to stay "ret = -1" for the dummy allocation 
to work as expected.
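
Paraphrasing the tail of that function from memory (details may differ a
bit), the flow after this patch would be:

    rcu_read_lock();
    list_for_each_entry_rcu(entry, &pernet->local_addr_list, list) {
            if (addresses_equal(&entry->addr, &skc_local, entry->addr.port)) {
                    ret = entry->addr.id;
                    break;
            }
    }
    rcu_read_unlock();

    if (ret >= 0)           /* always true once ret is initialized to 0 */
            return ret;

    /* address not found: allocate and append the dummy entry,
     * now unreachable
     */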

-Mat

>
> 	if (WARN_ON_ONCE(!msk))
> 		return -1;
> @@ -1011,9 +1001,6 @@ int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct sock_common *skc)
> 	if (addresses_equal(&msk_local, &skc_local, false))
> 		return 0;
>
> -	if (address_zero(&skc_local))
> -		return 0;
> -
> 	pernet = net_generic(sock_net((struct sock *)msk), pm_nl_pernet_id);
>
> 	rcu_read_lock();
> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
> index 3324e1c61576..57caf470e500 100644
> --- a/net/mptcp/protocol.c
> +++ b/net/mptcp/protocol.c
> @@ -117,6 +117,9 @@ static int __mptcp_socket_create(struct mptcp_sock *msk)
> 	list_add(&subflow->node, &msk->conn_list);
> 	sock_hold(ssock->sk);
> 	subflow->request_mptcp = 1;
> +
> +	/* This is the first subflow, always with id 0 */
> +	subflow->local_id_valid = 1;
> 	mptcp_sock_graft(msk->first, sk->sk_socket);
>
> 	return 0;
> diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
> index a1ce1fd005ab..663b8d83154e 100644
> --- a/net/mptcp/protocol.h
> +++ b/net/mptcp/protocol.h
> @@ -453,7 +453,8 @@ struct mptcp_subflow_context {
> 		rx_eof : 1,
> 		can_ack : 1,        /* only after processing the remote a key */
> 		disposable : 1,	    /* ctx can be free at ulp release time */
> -		stale : 1;	    /* unable to snd/rcv data, do not use for xmit */
> +		stale : 1,	    /* unable to snd/rcv data, do not use for xmit */
> +		local_id_valid : 1; /* local_id is correctly initialized */
> 	enum mptcp_data_avail data_avail;
> 	u32	remote_nonce;
> 	u64	thmac;
> diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
> index b53b392dd280..283e5d57e003 100644
> --- a/net/mptcp/subflow.c
> +++ b/net/mptcp/subflow.c
> @@ -481,6 +481,51 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
> 	mptcp_subflow_reset(sk);
> }
>
> +static void subflow_set_local_id(struct mptcp_subflow_context *subflow, int local_id)
> +{
> +	subflow->local_id = local_id;
> +	subflow->local_id_valid = 1;
> +}
> +
> +static int subflow_chk_local_id(struct sock *sk)
> +{
> +	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
> +	struct mptcp_sock *msk = mptcp_sk(subflow->conn);
> +	int err;
> +
> +	if (likely(subflow->local_id_valid))
> +		return 0;
> +
> +	err = mptcp_pm_get_local_id(msk, (struct sock_common *)sk);
> +	if (err < 0)
> +		return err;
> +
> +	subflow_set_local_id(subflow, err);
> +	return 0;
> +}
> +
> +static int subflow_rebuild_header(struct sock *sk)
> +{
> +	int err = subflow_chk_local_id(sk);
> +
> +	if (unlikely(err < 0))
> +		return err;
> +
> +	return inet_sk_rebuild_header(sk);
> +}
> +
> +#if IS_ENABLED(CONFIG_MPTCP_IPV6)
> +static int subflow_v6_rebuild_header(struct sock *sk)
> +{
> +	int err = subflow_chk_local_id(sk);
> +
> +	if (unlikely(err < 0))
> +		return err;
> +
> +	return inet6_sk_rebuild_header(sk);
> +}
> +#endif
> +
> struct request_sock_ops mptcp_subflow_request_sock_ops;
> static struct tcp_request_sock_ops subflow_request_sock_ipv4_ops __ro_after_init;
>
> @@ -1403,13 +1448,8 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
> 		get_random_bytes(&subflow->local_nonce, sizeof(u32));
> 	} while (!subflow->local_nonce);
>
> -	if (!local_id) {
> -		err = mptcp_pm_get_local_id(msk, (struct sock_common *)ssk);
> -		if (err < 0)
> -			goto failed;
> -
> -		local_id = err;
> -	}
> +	if (local_id)
> +		subflow_set_local_id(subflow, local_id);
>
> 	mptcp_pm_get_flags_and_ifindex_by_id(sock_net(sk), local_id,
> 					     &flags, &ifindex);
> @@ -1434,7 +1474,6 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
> 	pr_debug("msk=%p remote_token=%u local_id=%d remote_id=%d", msk,
> 		 remote_token, local_id, remote_id);
> 	subflow->remote_token = remote_token;
> -	subflow->local_id = local_id;
> 	subflow->remote_id = remote_id;
> 	subflow->request_join = 1;
> 	subflow->request_bkup = !!(flags & MPTCP_PM_ADDR_FLAG_BACKUP);
> @@ -1734,15 +1773,22 @@ static void subflow_ulp_clone(const struct request_sock *req,
> 		new_ctx->token = subflow_req->token;
> 		new_ctx->ssn_offset = subflow_req->ssn_offset;
> 		new_ctx->idsn = subflow_req->idsn;
> +
> +		/* this is the first subflow, id is always 0 */
> +		new_ctx->local_id_valid = 1;
> 	} else if (subflow_req->mp_join) {
> 		new_ctx->ssn_offset = subflow_req->ssn_offset;
> 		new_ctx->mp_join = 1;
> 		new_ctx->fully_established = 1;
> 		new_ctx->backup = subflow_req->backup;
> -		new_ctx->local_id = subflow_req->local_id;
> 		new_ctx->remote_id = subflow_req->remote_id;
> 		new_ctx->token = subflow_req->token;
> 		new_ctx->thmac = subflow_req->thmac;
> +
> +		/* the subflow req id is valid, fetched via subflow_check_req()
> +		 * and subflow_token_join_request()
> +		 */
> +		subflow_set_local_id(new_ctx, subflow_req->local_id);
> 	}
> }
>
> @@ -1795,6 +1841,7 @@ void __init mptcp_subflow_init(void)
> 	subflow_specific.conn_request = subflow_v4_conn_request;
> 	subflow_specific.syn_recv_sock = subflow_syn_recv_sock;
> 	subflow_specific.sk_rx_dst_set = subflow_finish_connect;
> +	subflow_specific.rebuild_header = subflow_rebuild_header;
>
> 	tcp_prot_override = tcp_prot;
> 	tcp_prot_override.release_cb = tcp_release_cb_override;
> @@ -1807,6 +1854,7 @@ void __init mptcp_subflow_init(void)
> 	subflow_v6_specific.conn_request = subflow_v6_conn_request;
> 	subflow_v6_specific.syn_recv_sock = subflow_syn_recv_sock;
> 	subflow_v6_specific.sk_rx_dst_set = subflow_finish_connect;
> +	subflow_v6_specific.rebuild_header = subflow_v6_rebuild_header;
>
> 	subflow_v6m_specific = subflow_v6_specific;
> 	subflow_v6m_specific.queue_xmit = ipv4_specific.queue_xmit;
> @@ -1814,6 +1862,7 @@ void __init mptcp_subflow_init(void)
> 	subflow_v6m_specific.net_header_len = ipv4_specific.net_header_len;
> 	subflow_v6m_specific.mtu_reduced = ipv4_specific.mtu_reduced;
> 	subflow_v6m_specific.net_frag_header_len = 0;
> +	subflow_v6m_specific.rebuild_header = subflow_rebuild_header;
>
> 	tcpv6_prot_override = tcpv6_prot;
> 	tcpv6_prot_override.release_cb = tcp_release_cb_override;
> -- 
> 2.34.1
>
>
>

--
Mat Martineau
Intel

* Re: [PATCH v3 mptcp-next 2/3] mptcp: more careful RM_ADDR generation
  2022-02-10 18:49 ` [PATCH v3 mptcp-next 2/3] mptcp: more careful RM_ADDR generation Paolo Abeni
@ 2022-02-11 23:10   ` Mat Martineau
  2022-02-13  9:06     ` Paolo Abeni
  0 siblings, 1 reply; 10+ messages in thread
From: Mat Martineau @ 2022-02-11 23:10 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

On Thu, 10 Feb 2022, Paolo Abeni wrote:

> In some edge scenarios, an MPTCP subflow can use a local address
> mapped by a "dummy" endpoint created by the in-kernel path manager.
>
> When such an endpoint is deleted, the in-kernel PM sends an RM_ADDR MPTCP
> suboption. That is somewhat unexpected, as an MPTCP listener will keep
> accepting incoming subflows targeting such an address, and the unexpected
> options can confuse some self-tests.
>
> Be more conservative about RM_ADDR generation: do it only if the
> relevant endpoint has either the SIGNAL or SUBFLOW flag set - that is,
> it's not a dummy one.
>
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
> include/uapi/linux/mptcp.h |  1 +
> net/mptcp/pm_netlink.c     | 25 +++++++++++++++++--------
> 2 files changed, 18 insertions(+), 8 deletions(-)
>
> diff --git a/include/uapi/linux/mptcp.h b/include/uapi/linux/mptcp.h
> index f106a3941cdf..34ca8c04f64e 100644
> --- a/include/uapi/linux/mptcp.h
> +++ b/include/uapi/linux/mptcp.h
> @@ -81,6 +81,7 @@ enum {
> #define MPTCP_PM_ADDR_FLAG_SUBFLOW			(1 << 1)
> #define MPTCP_PM_ADDR_FLAG_BACKUP			(1 << 2)
> #define MPTCP_PM_ADDR_FLAG_FULLMESH			(1 << 3)
> +#define MPTCP_PM_ADDR_FLAG_DUMMY			(1 << 4)

Since this is a public API, "DUMMY" might be a confusing / ambiguous name. 
MPTCP_PM_ADDR_FLAG_IMPLICIT_ENDPOINT or MPTCP_PM_ADDR_FLAG_UNADVERTISED 
maybe? (open to other ideas of course)

It looks like these dummy/implicit records stay around until a flush 
happens. What if there's a request to advertise an address that has had a 
dummy created already? mptcp_pm_nl_append_new_local_addr() would consider 
that a duplicate and reject it, but replacing the dummy record with a real 
one would be better.
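
To make that concrete, roughly something like this inside
mptcp_pm_nl_append_new_local_addr() could work - an untested sketch, with
locking and the lsk/port corner cases left out:

    list_for_each_entry(cur, &pernet->local_addr_list, list) {
            if (!addresses_equal(&cur->addr, &entry->addr, cur->addr.port))
                    continue;

            if (!(cur->flags & MPTCP_PM_ADDR_FLAG_DUMMY))
                    goto out;       /* genuine duplicate, keep rejecting */

            /* promote the dummy entry: keep the already assigned id,
             * take flags/ifindex from the user-space request
             */
            cur->flags = entry->flags;
            cur->ifindex = entry->ifindex;
            ret = cur->addr.id;
            goto out;
    }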

-Mat

>
> enum {
> 	MPTCP_PM_CMD_UNSPEC,
> diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
> index 56f5603c10f2..928ebe4949e9 100644
> --- a/net/mptcp/pm_netlink.c
> +++ b/net/mptcp/pm_netlink.c
> @@ -1036,7 +1036,7 @@ int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct sock_common *skc)
> 	entry->addr.id = 0;
> 	entry->addr.port = 0;
> 	entry->ifindex = 0;
> -	entry->flags = 0;
> +	entry->flags = MPTCP_PM_ADDR_FLAG_DUMMY;
> 	entry->lsk = NULL;
> 	ret = mptcp_pm_nl_append_new_local_addr(pernet, entry);
> 	if (ret < 0)
> @@ -1238,6 +1238,11 @@ static int mptcp_nl_cmd_add_addr(struct sk_buff *skb, struct genl_info *info)
> 		return -EINVAL;
> 	}
>
> +	if (addr.flags & MPTCP_PM_ADDR_FLAG_DUMMY) {
> +		GENL_SET_ERR_MSG(info, "can't create DUMMY endpoint");
> +		return -EINVAL;
> +	}
> +
> 	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
> 	if (!entry) {
> 		GENL_SET_ERR_MSG(info, "can't allocate addr");
> @@ -1322,11 +1327,12 @@ static bool mptcp_pm_remove_anno_addr(struct mptcp_sock *msk,
> }
>
> static int mptcp_nl_remove_subflow_and_signal_addr(struct net *net,
> -						   struct mptcp_addr_info *addr)
> +						   const struct mptcp_pm_addr_entry *entry)
> {
> -	struct mptcp_sock *msk;
> -	long s_slot = 0, s_num = 0;
> +	const struct mptcp_addr_info *addr = &entry->addr;
> 	struct mptcp_rm_list list = { .nr = 0 };
> +	long s_slot = 0, s_num = 0;
> +	struct mptcp_sock *msk;
>
> 	pr_debug("remove_id=%d", addr->id);
>
> @@ -1346,7 +1352,8 @@ static int mptcp_nl_remove_subflow_and_signal_addr(struct net *net,
>
> 		lock_sock(sk);
> 		remove_subflow = lookup_subflow_by_saddr(&msk->conn_list, addr);
> -		mptcp_pm_remove_anno_addr(msk, addr, remove_subflow);
> +		mptcp_pm_remove_anno_addr(msk, addr, remove_subflow &&
> +						     !(entry->flags & MPTCP_PM_ADDR_FLAG_DUMMY));
> 		if (remove_subflow)
> 			mptcp_pm_remove_subflow(msk, &list);
> 		release_sock(sk);
> @@ -1443,7 +1450,7 @@ static int mptcp_nl_cmd_del_addr(struct sk_buff *skb, struct genl_info *info)
> 	__clear_bit(entry->addr.id, pernet->id_bitmap);
> 	spin_unlock_bh(&pernet->lock);
>
> -	mptcp_nl_remove_subflow_and_signal_addr(sock_net(skb->sk), &entry->addr);
> +	mptcp_nl_remove_subflow_and_signal_addr(sock_net(skb->sk), entry);
> 	synchronize_rcu();
> 	__mptcp_pm_release_addr_entry(entry);
>
> @@ -1458,9 +1465,11 @@ static void mptcp_pm_remove_addrs_and_subflows(struct mptcp_sock *msk,
>
> 	list_for_each_entry(entry, rm_list, list) {
> 		if (lookup_subflow_by_saddr(&msk->conn_list, &entry->addr) &&
> -		    alist.nr < MPTCP_RM_IDS_MAX &&
> 		    slist.nr < MPTCP_RM_IDS_MAX) {
> -			alist.ids[alist.nr++] = entry->addr.id;
> +			/* skip RM_ADDR for dummy endpoints */
> +			if (!(entry->flags & MPTCP_PM_ADDR_FLAG_DUMMY) &&
> +			    alist.nr < MPTCP_RM_IDS_MAX)
> +				alist.ids[alist.nr++] = entry->addr.id;
> 			slist.ids[slist.nr++] = entry->addr.id;
> 		} else if (remove_anno_list_by_saddr(msk, &entry->addr) &&
> 			 alist.nr < MPTCP_RM_IDS_MAX) {
> -- 
> 2.34.1
>
>
>

--
Mat Martineau
Intel

* Re: [PATCH v3 mptcp-next 1/3] Squash-to: "mptcp: constify a bunch of of helpers"
  2022-02-10 18:49 ` [PATCH v3 mptcp-next 1/3] Squash-to: "mptcp: constify a bunch of of helpers" Paolo Abeni
@ 2022-02-11 23:11   ` Mat Martineau
  0 siblings, 0 replies; 10+ messages in thread
From: Mat Martineau @ 2022-02-11 23:11 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

On Thu, 10 Feb 2022, Paolo Abeni wrote:

> There are a few more helpers that can be const-ified and that I missed
> in the previous patch; just squash these changes in there.
>

Ok, looks good to squash. Thanks.

-Mat

> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
> net/mptcp/pm_netlink.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
> index 46346f009485..56f5603c10f2 100644
> --- a/net/mptcp/pm_netlink.c
> +++ b/net/mptcp/pm_netlink.c
> @@ -1289,7 +1289,7 @@ int mptcp_pm_get_flags_and_ifindex_by_id(struct net *net, unsigned int id,
> }
>
> static bool remove_anno_list_by_saddr(struct mptcp_sock *msk,
> -				      struct mptcp_addr_info *addr)
> +				      const struct mptcp_addr_info *addr)
> {
> 	struct mptcp_pm_add_entry *entry;
>
> @@ -1304,7 +1304,7 @@ static bool remove_anno_list_by_saddr(struct mptcp_sock *msk,
> }
>
> static bool mptcp_pm_remove_anno_addr(struct mptcp_sock *msk,
> -				      struct mptcp_addr_info *addr,
> +				      const struct mptcp_addr_info *addr,
> 				      bool force)
> {
> 	struct mptcp_rm_list list = { .nr = 0 };
> -- 
> 2.34.1
>
>
>

--
Mat Martineau
Intel

* Re: [PATCH v3 mptcp-next 2/3] mptcp: more careful RM_ADDR generation
  2022-02-11 23:10   ` Mat Martineau
@ 2022-02-13  9:06     ` Paolo Abeni
  0 siblings, 0 replies; 10+ messages in thread
From: Paolo Abeni @ 2022-02-13  9:06 UTC (permalink / raw)
  To: Mat Martineau; +Cc: mptcp

On Fri, 2022-02-11 at 15:10 -0800, Mat Martineau wrote:
> On Thu, 10 Feb 2022, Paolo Abeni wrote:
> 
> > In some edge scenarios, an MPTCP subflow can use a local address
> > mapped by a "dummy" endpoint created by the in-kernel path manager.
> >
> > When such an endpoint is deleted, the in-kernel PM sends an RM_ADDR MPTCP
> > suboption. That is somewhat unexpected, as an MPTCP listener will keep
> > accepting incoming subflows targeting such an address, and the unexpected
> > options can confuse some self-tests.
> >
> > Be more conservative about RM_ADDR generation: do it only if the
> > relevant endpoint has either the SIGNAL or SUBFLOW flag set - that is,
> > it's not a dummy one.
> > 
> > Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> > ---
> > include/uapi/linux/mptcp.h |  1 +
> > net/mptcp/pm_netlink.c     | 25 +++++++++++++++++--------
> > 2 files changed, 18 insertions(+), 8 deletions(-)
> > 
> > diff --git a/include/uapi/linux/mptcp.h b/include/uapi/linux/mptcp.h
> > index f106a3941cdf..34ca8c04f64e 100644
> > --- a/include/uapi/linux/mptcp.h
> > +++ b/include/uapi/linux/mptcp.h
> > @@ -81,6 +81,7 @@ enum {
> > #define MPTCP_PM_ADDR_FLAG_SUBFLOW			(1 << 1)
> > #define MPTCP_PM_ADDR_FLAG_BACKUP			(1 << 2)
> > #define MPTCP_PM_ADDR_FLAG_FULLMESH			(1 << 3)
> > +#define MPTCP_PM_ADDR_FLAG_DUMMY			(1 << 4)
> 
> Since this is a public API, "DUMMY" might be a confusing / ambiguous name. 
> MPTCP_PM_ADDR_FLAG_IMPLICIT_ENDPOINT or MPTCP_PM_ADDR_FLAG_UNADVERTISED 
> maybe? (open to other ideas of course)

I think "IMPLICIT" is the better option, as it's both unadvertised, not
used for subflow.
> 
> It looks like these dummy/implicit records stay around until a flush 
> happens. What if there's a request to advertise an address that has had a 
> dummy created already? mptcp_pm_nl_append_new_local_addr() would consider 
> that a duplicate and reject it, but replacing the dummy record with a real 
> one would be better.

Agreed. I'll do that in the next iteration.

/P

