From: Paolo Abeni <pabeni@redhat.com>
To: mptcp@lists.linux.dev
Subject: [PATCH mptcp-next] Squash-to: "mptcp: cleanup MPJ subflow list handling"
Date: Thu, 16 Dec 2021 15:33:37 +0100
Message-ID: <70e15da075bab481ac07ed1ce8c2adc9740403c6.1639665203.git.pabeni@redhat.com>

Running the self-tests in a loop triggered a use-after-free (UaF)
similar to:

https://github.com/multipath-tcp/mptcp_net-next/issues/250

The critical scenario is actually almost fixed by:

"mptcp: cleanup MPJ subflow list handling"

with a notable exception: if an MPJ handshake races with
mptcp_close(), the subflow enters the join_list and
__mptcp_finish_join() is processed at msk socket lock release time
in mptcp_close(); the subflow will then preserve a dangling
reference to the msk sk_socket.
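
The problematic ordering, AFAICS, is roughly the following
(simplified sketch, not a verbatim trace):

	CPU0 (MPJ handshake)          CPU1 (mptcp_close())
	mptcp_finish_join()
	  queue subflow on join_list
	  mptcp_sock_graft(ssk, ...)  /* current code: graft here */
	                              release the msk socket lock:
	                                __mptcp_finish_join()
	                                sk_state != TCP_ESTABLISHED
	                                -> join fails, subflow is not
	                                   accounted by the msk
	                              msk sk_socket is released
	ssk->sk_socket now points to freed memory (UaF)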

Address the issue by grafting the subflow only on successful
__mptcp_finish_join().
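
For reference, mptcp_sock_graft() is what creates the reference at
stake: it attaches the subflow sock to the parent struct socket via
a plain pointer, with no refcount. Roughly (from-memory sketch of
the helper in net/mptcp/protocol.c, possibly not verbatim):

	static void mptcp_sock_graft(struct sock *sk, struct socket *parent)
	{
		write_lock_bh(&sk->sk_callback_lock);
		rcu_assign_pointer(sk->sk_wq, &parent->wq);
		sk_set_socket(sk, parent); /* plain pointer, no refcount */
		sk->sk_uid = SOCK_INODE(parent)->i_uid;
		write_unlock_bh(&sk->sk_callback_lock);
	}

Grafting only after the TCP_ESTABLISHED check means the pointer is
installed only when we know the msk will take care of the subflow at
close time.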

Note that issues/250 triggers even before
"mptcp: cleanup MPJ subflow list handling", as before that commit the
join list was not spliced by mptcp_close(). We could consider a
net-only patch to address that.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 net/mptcp/protocol.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index e81fd46a43c4..e89d7741aa7f 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -810,9 +810,17 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
 
 static bool __mptcp_finish_join(struct mptcp_sock *msk, struct sock *ssk)
 {
-	if (((struct sock *)msk)->sk_state != TCP_ESTABLISHED)
+	struct sock *sk = (struct sock *)msk;
+
+	if (sk->sk_state != TCP_ESTABLISHED)
 		return false;
 
+	/* attach to msk socket only after we are sure we will deal with it
+	 * at close time
+	 */
+	if (sk->sk_socket && !ssk->sk_socket)
+		mptcp_sock_graft(ssk, sk->sk_socket);
+
 	mptcp_propagate_sndbuf((struct sock *)msk, ssk);
 	mptcp_sockopt_sync_locked(msk, ssk);
 	WRITE_ONCE(msk->allow_infinite_fallback, false);
@@ -3234,7 +3242,6 @@ bool mptcp_finish_join(struct sock *ssk)
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
 	struct mptcp_sock *msk = mptcp_sk(subflow->conn);
 	struct sock *parent = (void *)msk;
-	struct socket *parent_sock;
 	bool ret = true;
 
 	pr_debug("msk=%p, subflow=%p", msk, subflow);
@@ -3278,13 +3285,6 @@ bool mptcp_finish_join(struct sock *ssk)
 		return false;
 	}
 
-	/* attach to msk socket only after we are sure he will deal with us
-	 * at close time
-	 */
-	parent_sock = READ_ONCE(parent->sk_socket);
-	if (parent_sock && !ssk->sk_socket)
-		mptcp_sock_graft(ssk, parent_sock);
-
 	subflow->map_seq = READ_ONCE(msk->ack_seq);
 
 out:
-- 
2.33.1

