* [PATCH v3 mptcp-net 0/2] mptcp: a couple of fixes
@ 2023-03-30 19:22 Paolo Abeni
  2023-03-30 19:22 ` [PATCH v3 mptcp-net 1/2] mptcp: stop worker on unaccepted sockets at listener close Paolo Abeni
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Paolo Abeni @ 2023-03-30 19:22 UTC (permalink / raw)
  To: mptcp; +Cc: Christoph Paasch

Dumb cover letter, just to trigger the CI; see the individual
changelogs for the details.

Note that patch 2/2 introduces a behavioral change that affects
the packetdrill test close_before_accept.pkt: the msk socket
will be in the CLOSED state after accept() if the first subflow
is closed before the accept call.
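
A minimal user-space sketch of the new behavior; the helper below is
illustrative only, not part of the patch, and the exact error reported
by read() is an assumption:

    /* Sketch: observing the new behavior from user-space. Assumes an
     * MPTCP listener already listening on listen_fd, and that the peer
     * closed the first subflow before accept() is called.
     */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void accept_after_peer_close(int listen_fd)
    {
            int fd = accept(listen_fd, NULL, NULL);
            char buf[1];
            ssize_t ret;

            if (fd < 0) {
                    perror("accept");
                    return;
            }

            /* with patch 2/2 the accepted msk can already be CLOSED;
             * read() is expected to report it: 0 on a clean close, or
             * -1 (e.g. ECONNRESET) after a reset
             */
            ret = read(fd, buf, sizeof(buf));
            if (ret <= 0)
                    fprintf(stderr, "already closed: %zd (%s)\n",
                            ret, ret < 0 ? strerror(errno) : "EOF");
            close(fd);
    }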

v2 -> v3:
 - fix locking in patch 2/2 as spotted by Matttbe

Paolo Abeni (2):
  mptcp: stop worker on unaccepted sockets at listener close
  mptcp: fix accept vs worker race

 net/mptcp/protocol.c | 54 +++++++++++++++++-----------
 net/mptcp/protocol.h |  2 ++
 net/mptcp/subflow.c  | 83 ++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 119 insertions(+), 20 deletions(-)

-- 
2.39.2



* [PATCH v3 mptcp-net 1/2] mptcp: stop worker on unaccepted sockets at listener close
  2023-03-30 19:22 [PATCH v3 mptcp-net 0/2] mptcp: a couple of fixes Paolo Abeni
@ 2023-03-30 19:22 ` Paolo Abeni
  2023-03-30 19:22 ` [PATCH v3 mptcp-net 2/2] mptcp: fix accept vs worker race Paolo Abeni
  2023-03-31 16:32 ` [PATCH v3 mptcp-net 0/2] mptcp: a couple of fixes Matthieu Baerts
  2 siblings, 0 replies; 9+ messages in thread
From: Paolo Abeni @ 2023-03-30 19:22 UTC (permalink / raw)
  To: mptcp; +Cc: Christoph Paasch

This is a partial revert of the blamed commit, with one relevant
change: mptcp_subflow_queue_clean() now just changes the msk socket
status and stops the worker, so that the UaF issue addressed by the
blamed commit is not re-introduced.

The above prevents the mptcp worker from running concurrently with
inet_csk_listen_stop(), as such a race would trigger a warning, as
reported by Christoph:

RSP: 002b:00007f784fe09cd8 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00000000006bc050 RCX: 00007f7850afd6a9
RDX: 0000000000000000 RSI: 0000000020000340 RDI: 0000000000000004
RBP: 0000000000000002 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000006bc05c
R13: fffffffffffffea8 R14: 00000000006bc050 R15: 000000000001fe40
 </TASK>

WARNING: CPU: 0 PID: 25807 at net/ipv4/inet_connection_sock.c:1387 inet_csk_listen_stop+0x664/0x870 net/ipv4/inet_connection_sock.c:1387
Modules linked in:
CPU: 0 PID: 25807 Comm: syz-executor.7 Not tainted 6.2.0-g778e54711659 #7
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.0-2.el7 04/01/2014
RIP: 0010:inet_csk_listen_stop+0x664/0x870 net/ipv4/inet_connection_sock.c:1387
RAX: 0000000000000000 RBX: ffff888100dfbd40 RCX: 0000000000000000
RDX: ffff8881363aab80 RSI: ffffffff81c494f4 RDI: 0000000000000005
RBP: ffff888126dad080 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000000 R12: ffff888100dfe040
R13: 0000000000000001 R14: 0000000000000000 R15: ffff888100dfbdd8
FS:  00007f7850a2c800(0000) GS:ffff88813bc00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b32d26000 CR3: 000000012fdd8006 CR4: 0000000000770ef0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
 <TASK>
 __tcp_close+0x5b2/0x620 net/ipv4/tcp.c:2875
 __mptcp_close_ssk+0x145/0x3d0 net/mptcp/protocol.c:2427
 mptcp_destroy_common+0x8a/0x1c0 net/mptcp/protocol.c:3277
 mptcp_destroy+0x41/0x60 net/mptcp/protocol.c:3304
 __mptcp_destroy_sock+0x56/0x140 net/mptcp/protocol.c:2965
 __mptcp_close+0x38f/0x4a0 net/mptcp/protocol.c:3057
 mptcp_close+0x24/0xe0 net/mptcp/protocol.c:3072
 inet_release+0x53/0xa0 net/ipv4/af_inet.c:429
 __sock_release+0x4e/0xf0 net/socket.c:651
 sock_close+0x15/0x20 net/socket.c:1393
 __fput+0xff/0x420 fs/file_table.c:321
 task_work_run+0x8b/0xe0 kernel/task_work.c:179
 resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:171 [inline]
 exit_to_user_mode_prepare+0x113/0x120 kernel/entry/common.c:203
 __syscall_exit_to_user_mode_work kernel/entry/common.c:285 [inline]
 syscall_exit_to_user_mode+0x1d/0x40 kernel/entry/common.c:296
 do_syscall_64+0x46/0x90 arch/x86/entry/common.c:86
 entry_SYSCALL_64_after_hwframe+0x72/0xdc
RIP: 0033:0x7f7850af70dc
RAX: 0000000000000000 RBX: 0000000000000004 RCX: 00007f7850af70dc
RDX: 00007f7850a2c800 RSI: 0000000000000002 RDI: 0000000000000003
RBP: 00000000006bd980 R08: 0000000000000000 R09: 00000000000018a0
R10: 00000000316338a4 R11: 0000000000000293 R12: 0000000000211e31
R13: 00000000006bc05c R14: 00007f785062c000 R15: 0000000000211af0

Fixes: 0a3f4f1f9c27 ("mptcp: fix UaF in listener shutdown")
Reported-by: Christoph Paasch <cpaasch@apple.com>
Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/371
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 net/mptcp/protocol.c |  6 +++-
 net/mptcp/protocol.h |  1 +
 net/mptcp/subflow.c  | 73 ++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 79 insertions(+), 1 deletion(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 1116a64d072e..d28528df43fa 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2421,8 +2421,12 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 		mptcp_subflow_drop_ctx(ssk);
 	} else {
 		/* otherwise tcp will dispose of the ssk and subflow ctx */
-		if (ssk->sk_state == TCP_LISTEN)
+		if (ssk->sk_state == TCP_LISTEN) {
+			tcp_set_state(ssk, TCP_CLOSE);
+			mptcp_subflow_queue_clean(sk, ssk);
+			inet_csk_listen_stop(ssk);
 			mptcp_event_pm_listener(ssk, MPTCP_EVENT_LISTENER_CLOSED);
+		}
 
 		__tcp_close(ssk, 0);
 
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index d84f0d19e9d6..a4e82bcfc7b8 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -628,6 +628,7 @@ void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 		     struct mptcp_subflow_context *subflow);
 void __mptcp_subflow_send_ack(struct sock *ssk);
 void mptcp_subflow_reset(struct sock *ssk);
+void mptcp_subflow_queue_clean(struct sock *sk, struct sock *ssk);
 void mptcp_sock_graft(struct sock *sk, struct socket *parent);
 struct socket *__mptcp_nmpc_socket(struct mptcp_sock *msk);
 bool __mptcp_close(struct sock *sk, long timeout);
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index e90c6f6a676a..d90267da9114 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -1808,6 +1808,79 @@ static void subflow_state_change(struct sock *sk)
 	}
 }
 
+void mptcp_subflow_queue_clean(struct sock *listener_sk, struct sock *listener_ssk)
+{
+	struct request_sock_queue *queue = &inet_csk(listener_ssk)->icsk_accept_queue;
+	struct mptcp_sock *msk, *next, *head = NULL;
+	struct request_sock *req;
+
+	/* build a list of all unaccepted mptcp sockets */
+	spin_lock_bh(&queue->rskq_lock);
+	for (req = queue->rskq_accept_head; req; req = req->dl_next) {
+		struct mptcp_subflow_context *subflow;
+		struct sock *ssk = req->sk;
+
+		if (!sk_is_mptcp(ssk))
+			continue;
+
+		subflow = mptcp_subflow_ctx(ssk);
+		if (!subflow || !subflow->conn)
+			continue;
+
+		/* skip if already in list */
+		msk = mptcp_sk(subflow->conn);
+		if (msk->dl_next || msk == head)
+			continue;
+
+		sock_hold(subflow->conn);
+		msk->dl_next = head;
+		head = msk;
+	}
+	spin_unlock_bh(&queue->rskq_lock);
+	if (!head)
+		return;
+
+	/* can't acquire the msk socket lock under the subflow one,
+	 * or will cause ABBA deadlock
+	 */
+	release_sock(listener_ssk);
+
+	for (msk = head; msk; msk = next) {
+		struct sock *sk = (struct sock *)msk;
+
+		lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
+		next = msk->dl_next;
+		msk->dl_next = NULL;
+
+	/* prevent the stack from later re-scheduling the worker for
+		 * this socket
+		 */
+		inet_sk_state_store(sk, TCP_CLOSE);
+		release_sock(sk);
+
+		/* lockdep will report a false positive ABBA deadlock
+		 * between cancel_work_sync and the listener socket.
+		 * The involved locks belong to different sockets WRT
+		 * the existing AB chain.
+		 * Using a per socket key is problematic as key
+		 * deregistration requires process context and must be
+		 * performed at socket disposal time, in atomic
+		 * context.
+		 * Just tell lockdep to consider the listener socket
+		 * released here.
+		 */
+		mutex_release(&listener_sk->sk_lock.dep_map, _RET_IP_);
+		mptcp_cancel_work(sk);
+		mutex_acquire(&listener_sk->sk_lock.dep_map,
+			      SINGLE_DEPTH_NESTING, 0, _RET_IP_);
+
+		sock_put(sk);
+	}
+
+	/* we are still under the listener msk socket lock */
+	lock_sock_nested(listener_ssk, SINGLE_DEPTH_NESTING);
+}
+
 static int subflow_ulp_init(struct sock *sk)
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
-- 
2.39.2



* [PATCH v3 mptcp-net 2/2] mptcp: fix accept vs worker race
  2023-03-30 19:22 [PATCH v3 mptcp-net 0/2] mptcp: a couple of fixes Paolo Abeni
  2023-03-30 19:22 ` [PATCH v3 mptcp-net 1/2] mptcp: stop worker on unaccepted sockets at listener close Paolo Abeni
@ 2023-03-30 19:22 ` Paolo Abeni
  2023-03-30 20:12   ` mptcp: fix accept vs worker race: Tests Results MPTCP CI
                     ` (2 more replies)
  2023-03-31 16:32 ` [PATCH v3 mptcp-net 0/2] mptcp: a couple of fixes Matthieu Baerts
  2 siblings, 3 replies; 9+ messages in thread
From: Paolo Abeni @ 2023-03-30 19:22 UTC (permalink / raw)
  To: mptcp; +Cc: Christoph Paasch

The mptcp worker and mptcp_accept() can race, as reported by Christoph:

refcount_t: addition on 0; use-after-free.
WARNING: CPU: 1 PID: 14351 at lib/refcount.c:25 refcount_warn_saturate+0x105/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 1 PID: 14351 Comm: syz-executor.2 Not tainted 6.3.0-rc1-gde5e8fd0123c #11
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.0-2.el7 04/01/2014
RIP: 0010:refcount_warn_saturate+0x105/0x1b0 lib/refcount.c:25
Code: 02 31 ff 89 de e8 1b f0 a7 ff 84 db 0f 85 6e ff ff ff e8 3e f5 a7 ff 48 c7 c7 d8 c7 34 83 c6 05 6d 2d 0f 02 01 e8 cb 3d 90 ff <0f> 0b e9 4f ff ff ff e8 1f f5 a7 ff 0f b6 1d 54 2d 0f 02 31 ff 89
RSP: 0018:ffffc90000a47bf8 EFLAGS: 00010282
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
RDX: ffff88802eae98c0 RSI: ffffffff81097d4f RDI: 0000000000000001
RBP: ffff88802e712180 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000001 R11: ffff88802eaea148 R12: ffff88802e712100
R13: ffff88802e712a88 R14: ffff888005cb93a8 R15: ffff88802e712a88
FS:  0000000000000000(0000) GS:ffff88803ed00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f277fd89120 CR3: 0000000035486002 CR4: 0000000000370ee0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 __refcount_add include/linux/refcount.h:199 [inline]
 __refcount_inc include/linux/refcount.h:250 [inline]
 refcount_inc include/linux/refcount.h:267 [inline]
 sock_hold include/net/sock.h:775 [inline]
 __mptcp_close+0x4c6/0x4d0 net/mptcp/protocol.c:3051
 mptcp_close+0x24/0xe0 net/mptcp/protocol.c:3072
 inet_release+0x56/0xa0 net/ipv4/af_inet.c:429
 __sock_release+0x51/0xf0 net/socket.c:653
 sock_close+0x18/0x20 net/socket.c:1395
 __fput+0x113/0x430 fs/file_table.c:321
 task_work_run+0x96/0x100 kernel/task_work.c:179
 exit_task_work include/linux/task_work.h:38 [inline]
 do_exit+0x4fc/0x10c0 kernel/exit.c:869
 do_group_exit+0x51/0xf0 kernel/exit.c:1019
 get_signal+0x12b0/0x1390 kernel/signal.c:2859
 arch_do_signal_or_restart+0x25/0x260 arch/x86/kernel/signal.c:306
 exit_to_user_mode_loop kernel/entry/common.c:168 [inline]
 exit_to_user_mode_prepare+0x131/0x1a0 kernel/entry/common.c:203
 __syscall_exit_to_user_mode_work kernel/entry/common.c:285 [inline]
 syscall_exit_to_user_mode+0x19/0x40 kernel/entry/common.c:296
 do_syscall_64+0x46/0x90 arch/x86/entry/common.c:86
 entry_SYSCALL_64_after_hwframe+0x72/0xdc
RIP: 0033:0x7fec4b4926a9
Code: Unable to access opcode bytes at 0x7fec4b49267f.
RSP: 002b:00007fec49f9dd78 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
RAX: fffffffffffffe00 RBX: 00000000006bc058 RCX: 00007fec4b4926a9
RDX: 0000000000000000 RSI: 0000000000000080 RDI: 00000000006bc058
RBP: 00000000006bc050 R08: 00000000007df998 R09: 00000000007df998
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000006bc05c
R13: fffffffffffffea8 R14: 000000000000000b R15: 000000000001fe40
 </TASK>

The root cause is that the worker can force a fallback to TCP on the
first mptcp subflow, actually deleting the unaccepted msk socket.

We can explicitly prevent the race by delaying the unaccepted msk
deletion until listener shutdown time. In case the closed subflow is
later accepted, just drop the mptcp context and let user-space deal
with the paired mptcp socket.
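
The interleaving below is a rough sketch of the race, reconstructed
from the splat above; the exact call sites shown are an assumption,
not part of the original report:

    mptcp worker                       process closing the fd
    ------------                       ----------------------
    mptcp_worker()
      __mptcp_close_subflow()
        fallback of msk->first to
        TCP, unaccepted msk deleted
                                       sock_close()
                                         inet_release()
                                           mptcp_close(sk)
                                             __mptcp_close(sk)
                                               sock_hold(sk)
                                               ^ refcount is 0: UaF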

Fixes: b6985b9b8295 ("mptcp: use the workqueue to destroy unaccepted sockets")
Reported-by: Christoph Paasch <cpaasch@apple.com>
Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/375
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 net/mptcp/protocol.c | 48 ++++++++++++++++++++++++++------------------
 net/mptcp/protocol.h |  1 +
 net/mptcp/subflow.c  | 26 ++++++++++++++++--------
 3 files changed, 48 insertions(+), 27 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index d28528df43fa..ce495b5a683a 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2373,6 +2373,15 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 	struct mptcp_sock *msk = mptcp_sk(sk);
 	bool need_push, dispose_it;
 
+	/* If the first subflow moved to a close state, e.g. due to an
+	 * incoming reset, and we reach here before accept, we need to be
+	 * able to deliver the msk to user-space.
+	 * Do nothing for the moment and take action at accept and/or
+	 * listener shutdown time.
+	 */
+	if (msk->in_accept_queue && msk->first == ssk)
+		return;
+
 	dispose_it = !msk->subflow || ssk != msk->subflow->sk;
 	if (dispose_it)
 		list_del(&subflow->node);
@@ -2407,18 +2416,6 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 	if (!inet_csk(ssk)->icsk_ulp_ops) {
 		WARN_ON_ONCE(!sock_flag(ssk, SOCK_DEAD));
 		kfree_rcu(subflow, rcu);
-	} else if (msk->in_accept_queue && msk->first == ssk) {
-		/* if the first subflow moved to a close state, e.g. due to
-		 * incoming reset and we reach here before inet_child_forget()
-		 * the TCP stack could later try to close it via
-		 * inet_csk_listen_stop(), or deliver it to the user space via
-		 * accept().
-		 * We can't delete the subflow - or risk a double free - nor let
-		 * the msk survive - or will be leaked in the non accept scenario:
-		 * fallback and let TCP cope with the subflow cleanup.
-		 */
-		WARN_ON_ONCE(sock_flag(ssk, SOCK_DEAD));
-		mptcp_subflow_drop_ctx(ssk);
 	} else {
 		/* otherwise tcp will dispose of the ssk and subflow ctx */
 		if (ssk->sk_state == TCP_LISTEN) {
@@ -2487,13 +2484,6 @@ static void __mptcp_close_subflow(struct sock *sk)
 		mptcp_close_ssk(sk, ssk, subflow);
 	}
 
-	/* if the MPC subflow has been closed before the msk is accepted,
-	 * msk will never be accept-ed, close it now
-	 */
-	if (!msk->first && msk->in_accept_queue) {
-		sock_set_flag(sk, SOCK_DEAD);
-		inet_sk_state_store(sk, TCP_CLOSE);
-	}
 }
 
 static bool mptcp_check_close_timeout(const struct sock *sk)
@@ -2976,6 +2966,14 @@ static void __mptcp_destroy_sock(struct sock *sk)
 	sock_put(sk);
 }
 
+void __mptcp_unaccepted_force_close(struct sock *sk)
+{
+	sock_set_flag(sk, SOCK_DEAD);
+	inet_sk_state_store(sk, TCP_CLOSE);
+	mptcp_do_fastclose(sk);
+	__mptcp_destroy_sock(sk);
+}
+
 static __poll_t mptcp_check_readable(struct mptcp_sock *msk)
 {
 	/* Concurrent splices from sk_receive_queue into receive_queue will
@@ -3825,6 +3823,18 @@ static int mptcp_stream_accept(struct socket *sock, struct socket *newsock,
 			if (!ssk->sk_socket)
 				mptcp_sock_graft(ssk, newsock);
 		}
+
+		/* Do late cleanup for the first subflow as necessary. Also
+		 * deal with bad peers not doing a complete shutdown.
+		 */
+		if (msk->first &&
+		    unlikely(inet_sk_state_load(msk->first) == TCP_CLOSE)) {
+			__mptcp_close_ssk(newsk, msk->first,
+					  mptcp_subflow_ctx(msk->first), 0);
+			if (unlikely(list_empty(&msk->conn_list)))
+				inet_sk_state_store(newsk, TCP_CLOSE);
+		}
+
 	}
 	release_sock(newsk);
 
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index a4e82bcfc7b8..457c9f63ed82 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -631,6 +631,7 @@ void mptcp_subflow_reset(struct sock *ssk);
 void mptcp_subflow_queue_clean(struct sock *sk, struct sock *ssk);
 void mptcp_sock_graft(struct sock *sk, struct socket *parent);
 struct socket *__mptcp_nmpc_socket(struct mptcp_sock *msk);
+void __mptcp_unaccepted_force_close(struct sock *sk);
 bool __mptcp_close(struct sock *sk, long timeout);
 void mptcp_cancel_work(struct sock *sk);
 void mptcp_set_owner_r(struct sk_buff *skb, struct sock *sk);
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index d90267da9114..3d6ccf5067d7 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -1813,12 +1813,13 @@ void mptcp_subflow_queue_clean(struct sock *listener_sk, struct sock *listener_s
 	struct request_sock_queue *queue = &inet_csk(listener_ssk)->icsk_accept_queue;
 	struct mptcp_sock *msk, *next, *head = NULL;
 	struct request_sock *req;
+	struct sock *sk, *ssk;
 
 	/* build a list of all unaccepted mptcp sockets */
 	spin_lock_bh(&queue->rskq_lock);
 	for (req = queue->rskq_accept_head; req; req = req->dl_next) {
 		struct mptcp_subflow_context *subflow;
-		struct sock *ssk = req->sk;
+		ssk = req->sk;
 
 		if (!sk_is_mptcp(ssk))
 			continue;
@@ -1828,11 +1829,12 @@ void mptcp_subflow_queue_clean(struct sock *listener_sk, struct sock *listener_s
 			continue;
 
 		/* skip if already in list */
-		msk = mptcp_sk(subflow->conn);
+		sk = subflow->conn;
+		msk = mptcp_sk(sk);
 		if (msk->dl_next || msk == head)
 			continue;
 
-		sock_hold(subflow->conn);
+		sock_hold(sk);
 		msk->dl_next = head;
 		head = msk;
 	}
@@ -1846,18 +1848,26 @@ void mptcp_subflow_queue_clean(struct sock *listener_sk, struct sock *listener_s
 	release_sock(listener_ssk);
 
 	for (msk = head; msk; msk = next) {
-		struct sock *sk = (struct sock *)msk;
+		sk = (struct sock *)msk;
 
 		lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
+		ssk = msk->first;
 		next = msk->dl_next;
 		msk->dl_next = NULL;
 
-		/* prevent the stack from later re-scheduling the worker for
-		 * this socket
-		 */
-		inet_sk_state_store(sk, TCP_CLOSE);
+		__mptcp_unaccepted_force_close(sk);
 		release_sock(sk);
 
+		/* The first subflow is not touched by the above, as the msk
+		 * is still in the accept queue (see __mptcp_close_ssk()); we
+		 * need to release only the ctx related resources, as the tcp
+		 * socket will be destroyed by inet_csk_listen_stop().
+		 */
+		lock_sock_nested(ssk, SINGLE_DEPTH_NESTING);
+		mptcp_subflow_drop_ctx(ssk);
+		release_sock(ssk);
+		sock_put(ssk);
+
 		/* lockdep will report a false positive ABBA deadlock
 		 * between cancel_work_sync and the listener socket.
 		 * The involved locks belong to different sockets WRT
-- 
2.39.2



* Re: mptcp: fix accept vs worker race: Tests Results
  2023-03-30 19:22 ` [PATCH v3 mptcp-net 2/2] mptcp: fix accept vs worker race Paolo Abeni
@ 2023-03-30 20:12   ` MPTCP CI
  2023-03-31 17:09   ` MPTCP CI
  2023-04-04 16:09   ` [PATCH v3 mptcp-net 2/2] mptcp: fix accept vs worker race Christoph Paasch
  2 siblings, 0 replies; 9+ messages in thread
From: MPTCP CI @ 2023-03-30 20:12 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join):
  - Unstable: 1 failed test(s): packetdrill_syscalls 🔴:
  - Task: https://cirrus-ci.com/task/5776733312385024
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5776733312385024/summary/summary.txt

- KVM Validation: normal (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/5213783358963712
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5213783358963712/summary/summary.txt

- KVM Validation: debug (except selftest_mptcp_join):
  - Script error! ❓:
  - Task: https://cirrus-ci.com/task/6339683265806336
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6339683265806336/summary/summary.txt

- KVM Validation: debug (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/4932308382253056
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4932308382253056/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/f3b6d044228f


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


* Re: [PATCH v3 mptcp-net 0/2] mptcp: a couple of fixes
  2023-03-30 19:22 [PATCH v3 mptcp-net 0/2] mptcp: a couple of fixes Paolo Abeni
  2023-03-30 19:22 ` [PATCH v3 mptcp-net 1/2] mptcp: stop worker on unaccepted sockets at listener close Paolo Abeni
  2023-03-30 19:22 ` [PATCH v3 mptcp-net 2/2] mptcp: fix accept vs worker race Paolo Abeni
@ 2023-03-31 16:32 ` Matthieu Baerts
  2 siblings, 0 replies; 9+ messages in thread
From: Matthieu Baerts @ 2023-03-31 16:32 UTC (permalink / raw)
  To: Paolo Abeni, mptcp; +Cc: Christoph Paasch

Hi Paolo,

On 30/03/2023 21:22, Paolo Abeni wrote:
> Dumb cover letter, just to trigger the CI; see the individual
> changelogs for the details.
> 
> Note that patch 2/2 introduces a behavioral change that affects
> the packetdrill test close_before_accept.pkt: the msk socket
> will be in the CLOSED state after accept() if the first subflow
> is closed before the accept call.

Thank you for the fixes!

Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net>

Now in our tree (fixes for -net) without a few typos spotted by
checkpatch.pl --shellcheck.

Please note that I had a few conflicts when applying them to -net. I
hope that's OK, please see below:

New patches for t/upstream-net and t/upstream:
- dddea846b342: mptcp: stop worker on unaccepted sockets at listener close
- 87668b995225: mptcp: fix accept vs worker race
- c6a6d1debd0d: conflict in
t/mptcp-move-first-subflow-allocation-at-mpc-access-time
- e04e8b8fb4b8: conflict in t/mptcp-refactor-mptcp_stream_accept
- Results: 199eed4f57e8..f01e02dfadff (export-net)
- Results: e2a442363ad9..68957ea8fdf9 (export)

Tests are now in progress:

https://cirrus-ci.com/github/multipath-tcp/mptcp_net-next/export-net/20230331T162750
https://cirrus-ci.com/github/multipath-tcp/mptcp_net-next/export/20230331T162750

Cheers,
Matt
-- 
Tessares | Belgium | Hybrid Access Solutions
www.tessares.net


* Re: mptcp: fix accept vs worker race: Tests Results
  2023-03-30 19:22 ` [PATCH v3 mptcp-net 2/2] mptcp: fix accept vs worker race Paolo Abeni
  2023-03-30 20:12   ` mptcp: fix accept vs worker race: Tests Results MPTCP CI
@ 2023-03-31 17:09   ` MPTCP CI
  2023-04-04 16:09   ` [PATCH v3 mptcp-net 2/2] mptcp: fix accept vs worker race Christoph Paasch
  2 siblings, 0 replies; 9+ messages in thread
From: MPTCP CI @ 2023-03-31 17:09 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join):
  - Unstable: 1 failed test(s): packetdrill_syscalls 🔴:
  - Task: https://cirrus-ci.com/task/5776733312385024
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5776733312385024/summary/summary.txt

- KVM Validation: normal (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/5213783358963712
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5213783358963712/summary/summary.txt

- KVM Validation: debug (except selftest_mptcp_join):
  - Unstable: 1 failed test(s): packetdrill_syscalls 🔴:
  - Task: https://cirrus-ci.com/task/5666683801567232
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5666683801567232/summary/summary.txt

- KVM Validation: debug (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/4932308382253056
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4932308382253056/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/f3b6d044228f


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


* Re: [PATCH v3 mptcp-net 2/2] mptcp: fix accept vs worker race
  2023-03-30 19:22 ` [PATCH v3 mptcp-net 2/2] mptcp: fix accept vs worker race Paolo Abeni
  2023-03-30 20:12   ` mptcp: fix accept vs worker race: Tests Results MPTCP CI
  2023-03-31 17:09   ` MPTCP CI
@ 2023-04-04 16:09   ` Christoph Paasch
  2 siblings, 0 replies; 9+ messages in thread
From: Christoph Paasch @ 2023-04-04 16:09 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp



> On Mar 30, 2023, at 12:22 PM, Paolo Abeni <pabeni@redhat.com> wrote:
> 
> The mptcp worker and mptcp_accept() can race, as reported by Christoph:
> 
> [...]
> 
> The root cause is that the worker can force a fallback to TCP on the
> first mptcp subflow, actually deleting the unaccepted msk socket.
> 
> We can explicitly prevent the race by delaying the unaccepted msk
> deletion until listener shutdown time. In case the closed subflow is
> later accepted, just drop the mptcp context and let user-space deal
> with the paired mptcp socket.
> 
> Fixes: b6985b9b8295 ("mptcp: use the workqueue to destroy unaccepted sockets")
> Reported-by: Christoph Paasch <cpaasch@apple.com>
> Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/375
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
> net/mptcp/protocol.c | 48 ++++++++++++++++++++++++++------------------
> net/mptcp/protocol.h |  1 +
> net/mptcp/subflow.c  | 26 ++++++++++++++++--------
> 3 files changed, 48 insertions(+), 27 deletions(-)

Tested the reproducer and can't trigger the panic anymore:

Tested-by: Christoph Paasch <cpaasch@apple.com>


Christoph


* Re: mptcp: fix accept vs worker race: Tests Results
  2023-03-29 17:32 [PATCH mptcp-net v2 2/2] mptcp: fix accept vs worker race Paolo Abeni
  2023-03-29 18:32 ` mptcp: fix accept vs worker race: Tests Results MPTCP CI
@ 2023-03-30 18:22 ` MPTCP CI
  1 sibling, 0 replies; 9+ messages in thread
From: MPTCP CI @ 2023-03-30 18:22 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join):
  - Unstable: 1 failed test(s): packetdrill_syscalls 🔴:
  - Task: https://cirrus-ci.com/task/4921633274593280
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4921633274593280/summary/summary.txt

- KVM Validation: normal (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6047533181435904
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6047533181435904/summary/summary.txt

- KVM Validation: debug (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6610483134857216
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6610483134857216/summary/summary.txt

- KVM Validation: debug (except selftest_mptcp_join):
  - Unstable: 1 failed test(s): packetdrill_syscalls 🔴:
  - Task: https://cirrus-ci.com/task/5484583228014592
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5484583228014592/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/1ab9c8f4a437


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


* Re: mptcp: fix accept vs worker race: Tests Results
  2023-03-29 17:32 [PATCH mptcp-net v2 2/2] mptcp: fix accept vs worker race Paolo Abeni
@ 2023-03-29 18:32 ` MPTCP CI
  2023-03-30 18:22 ` MPTCP CI
  1 sibling, 0 replies; 9+ messages in thread
From: MPTCP CI @ 2023-03-29 18:32 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join):
  - Unstable: 1 failed test(s): packetdrill_syscalls 🔴:
  - Task: https://cirrus-ci.com/task/5132270852374528
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5132270852374528/summary/summary.txt

- KVM Validation: normal (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6258170759217152
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6258170759217152/summary/summary.txt

- KVM Validation: debug (except selftest_mptcp_join):
  - Unstable: 1 failed test(s): packetdrill_syscalls 🔴:
  - Task: https://cirrus-ci.com/task/4850795875663872
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4850795875663872/summary/summary.txt

- KVM Validation: debug (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/5976695782506496
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5976695782506496/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/de04f3705017


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


Thread overview: 9+ messages
2023-03-30 19:22 [PATCH v3 mptcp-net 0/2] mptcp: a couple of fixes Paolo Abeni
2023-03-30 19:22 ` [PATCH v3 mptcp-net 1/2] mptcp: stop worker on unaccepted sockets at listener close Paolo Abeni
2023-03-30 19:22 ` [PATCH v3 mptcp-net 2/2] mptcp: fix accept vs worker race Paolo Abeni
2023-03-30 20:12   ` mptcp: fix accept vs worker race: Tests Results MPTCP CI
2023-03-31 17:09   ` MPTCP CI
2023-04-04 16:09   ` [PATCH v3 mptcp-net 2/2] mptcp: fix accept vs worker race Christoph Paasch
2023-03-31 16:32 ` [PATCH v3 mptcp-net 0/2] mptcp: a couple of fixes Matthieu Baerts
  -- strict thread matches above, loose matches on Subject: below --
2023-03-29 17:32 [PATCH mptcp-net v2 2/2] mptcp: fix accept vs worker race Paolo Abeni
2023-03-29 18:32 ` mptcp: fix accept vs worker race: Tests Results MPTCP CI
2023-03-30 18:22 ` MPTCP CI
