* [PATCH mptcp-net 1/2] mptcp: use mptcp_schedule_work instead of open-coding it
@ 2023-03-24 10:34 Paolo Abeni
  2023-03-24 10:34 ` [PATCH mptcp-net 2/2] mptcp: stricter state check in mptcp_worker Paolo Abeni
  0 siblings, 1 reply; 8+ messages in thread
From: Paolo Abeni @ 2023-03-24 10:34 UTC (permalink / raw)
  To: mptcp

Beyond reducing code duplication, this also avoids scheduling
the mptcp_worker on a closed socket in some edge scenarios.

The addressed issue is actually older than the blamed commit
below, but this fix needs it as a prerequisite.
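
For reference, the adopted helper (introduced by the blamed commit) refuses
to schedule the worker once the msk is CLOSED and takes the socket reference
only when the work was actually queued. A sketch, paraphrased from
net/mptcp/protocol.c as of that commit (comments and details may differ):

    /* Schedule the mptcp worker unless the msk is already CLOSED;
     * the reference taken here is released by mptcp_worker() itself.
     */
    bool mptcp_schedule_work(struct sock *sk)
    {
            if (inet_sk_state_load(sk) != TCP_CLOSE &&
                schedule_work(&mptcp_sk(sk)->work)) {
                    sock_hold(sk);
                    return true;
            }
            return false;
    }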

Fixes: ba8f48f7a4d7 ("mptcp: introduce mptcp_schedule_work")
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 net/mptcp/options.c | 5 ++---
 net/mptcp/subflow.c | 9 ++++-----
 2 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/net/mptcp/options.c b/net/mptcp/options.c
index 99c4f9e9bb90..cd3b885c8faa 100644
--- a/net/mptcp/options.c
+++ b/net/mptcp/options.c
@@ -1192,9 +1192,8 @@ bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
 	 */
 	if (TCP_SKB_CB(skb)->seq == TCP_SKB_CB(skb)->end_seq) {
 		if (mp_opt.data_fin && mp_opt.data_len == 1 &&
-		    mptcp_update_rcv_data_fin(msk, mp_opt.data_seq, mp_opt.dsn64) &&
-		    schedule_work(&msk->work))
-			sock_hold(subflow->conn);
+		    mptcp_update_rcv_data_fin(msk, mp_opt.data_seq, mp_opt.dsn64))
+			mptcp_schedule_work((struct sock *)msk);
 
 		return true;
 	}
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index fe02d1222f43..e90c6f6a676a 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -408,9 +408,8 @@ void mptcp_subflow_reset(struct sock *ssk)
 
 	tcp_send_active_reset(ssk, GFP_ATOMIC);
 	tcp_done(ssk);
-	if (!test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &mptcp_sk(sk)->flags) &&
-	    schedule_work(&mptcp_sk(sk)->work))
-		return; /* worker will put sk for us */
+	if (!test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &mptcp_sk(sk)->flags))
+		mptcp_schedule_work(sk);
 
 	sock_put(sk);
 }
@@ -1101,8 +1100,8 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
 				skb_ext_del(skb, SKB_EXT_MPTCP);
 				return MAPPING_OK;
 			} else {
-				if (updated && schedule_work(&msk->work))
-					sock_hold((struct sock *)msk);
+				if (updated)
+					mptcp_schedule_work((struct sock *)msk);
 
 				return MAPPING_DATA_FIN;
 			}
-- 
2.39.2



* [PATCH mptcp-net 2/2] mptcp: stricter state check in mptcp_worker
  2023-03-24 10:34 [PATCH mptcp-net 1/2] mptcp: use mptcp_schedule_work instead of open-coding it Paolo Abeni
@ 2023-03-24 10:34 ` Paolo Abeni
  2023-03-24 12:34   ` mptcp: stricter state check in mptcp_worker: Build Failure MPTCP CI
                     ` (5 more replies)
  0 siblings, 6 replies; 8+ messages in thread
From: Paolo Abeni @ 2023-03-24 10:34 UTC (permalink / raw)
  To: mptcp

As reported by Christoph, the mptcp protocol can run the
worker when the relevant msk socket is in an unexpected state:

connect()
// incoming reset + fastclose
// the mptcp worker is scheduled
mptcp_disconnect()
// msk is now CLOSED
listen()
mptcp_worker()

Leading to the following splat:

divide error: 0000 [#1] PREEMPT SMP
CPU: 1 PID: 21 Comm: kworker/1:0 Not tainted 6.3.0-rc1-gde5e8fd0123c #11
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.0-2.el7 04/01/2014
Workqueue: events mptcp_worker
RIP: 0010:__tcp_select_window+0x22c/0x4b0 net/ipv4/tcp_output.c:3018
RSP: 0018:ffffc900000b3c98 EFLAGS: 00010293
RAX: 000000000000ffd7 RBX: 000000000000ffd7 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffffffff8214ce97 RDI: 0000000000000004
RBP: 000000000000ffd7 R08: 0000000000000004 R09: 0000000000010000
R10: 000000000000ffd7 R11: ffff888005afa148 R12: 000000000000ffd7
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff88803ed00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000405270 CR3: 000000003011e006 CR4: 0000000000370ee0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 tcp_select_window net/ipv4/tcp_output.c:262 [inline]
 __tcp_transmit_skb+0x356/0x1280 net/ipv4/tcp_output.c:1345
 tcp_transmit_skb net/ipv4/tcp_output.c:1417 [inline]
 tcp_send_active_reset+0x13e/0x320 net/ipv4/tcp_output.c:3459
 mptcp_check_fastclose net/mptcp/protocol.c:2530 [inline]
 mptcp_worker+0x6c7/0x800 net/mptcp/protocol.c:2705
 process_one_work+0x3bd/0x950 kernel/workqueue.c:2390
 worker_thread+0x5b/0x610 kernel/workqueue.c:2537
 kthread+0x138/0x170 kernel/kthread.c:376
 ret_from_fork+0x2c/0x50 arch/x86/entry/entry_64.S:308
 </TASK>

This change addresses the issue by explicitly checking for bad states
before running the mptcp worker.
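
For context, the TCPF_* values used here come from include/net/tcp_states.h,
where each flag is the matching TCP_* state shifted into a one-bit mask, so
a single AND tests membership in a set of states. A minimal sketch of the
idiom (the enum excerpt below mirrors that header):

    /* (1 << state) & (TCPF_CLOSE | TCPF_LISTEN) is non-zero exactly
     * when state is TCP_CLOSE (7) or TCP_LISTEN (10).
     */
    enum {
            TCPF_CLOSE  = (1 << TCP_CLOSE),
            TCPF_LISTEN = (1 << TCP_LISTEN),
    };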

Reported-by: Christoph Paasch <cpaasch@apple.com>
Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/374
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 net/mptcp/protocol.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 3a8add6aa74c..1116a64d072e 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2697,7 +2697,7 @@ static void mptcp_worker(struct work_struct *work)
 
 	lock_sock(sk);
 	state = sk->sk_state;
-	if (unlikely(state == TCP_CLOSE))
+	if (unlikely((1 << state) & (TCPF_CLOSE | TCPF_LISTEN)))
 		goto unlock;
 
 	mptcp_check_data_fin_ack(sk);
-- 
2.39.2



* Re: mptcp: stricter state check in mptcp_worker: Build Failure
  2023-03-24 10:34 ` [PATCH mptcp-net 2/2] mptcp: stricter state check in mptcp_worker Paolo Abeni
@ 2023-03-24 12:34   ` MPTCP CI
  2023-03-24 12:36   ` MPTCP CI
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: MPTCP CI @ 2023-03-24 12:34 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thank you for your modifications, that's great!

But sadly, our CI spotted some issues with it when trying to build it.

You can find more details there:

  https://patchwork.kernel.org/project/mptcp/patch/10b158069455d831d9efc547bcdb3a6379234f1c.1679654040.git.pabeni@redhat.com/
  https://github.com/multipath-tcp/mptcp_net-next/actions/runs/4511121675

Status: failure
Initiator: matttbe
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/0a8809a9f3ab

Feel free to reply to this email if you cannot access the logs, if you need
some support to fix the error, if this doesn't seem to be caused by your
modifications, or if the error looks like a false positive.

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


* Re: mptcp: stricter state check in mptcp_worker: Build Failure
  2023-03-24 10:34 ` [PATCH mptcp-net 2/2] mptcp: stricter state check in mptcp_worker Paolo Abeni
  2023-03-24 12:34   ` mptcp: stricter state check in mptcp_worker: Build Failure MPTCP CI
@ 2023-03-24 12:36   ` MPTCP CI
  2023-03-24 14:25   ` mptcp: stricter state check in mptcp_worker: Tests Results MPTCP CI
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: MPTCP CI @ 2023-03-24 12:36 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thank you for your modifications, that's great!

But sadly, our CI spotted some issues with it when trying to build it.

You can find more details there:

  https://patchwork.kernel.org/project/mptcp/patch/10b158069455d831d9efc547bcdb3a6379234f1c.1679654040.git.pabeni@redhat.com/
  https://github.com/multipath-tcp/mptcp_net-next/actions/runs/4511130601

Status: failure
Initiator: matttbe
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/d79b0ae2a966

Feel free to reply to this email if you cannot access the logs, if you need
some support to fix the error, if this doesn't seem to be caused by your
modifications, or if the error looks like a false positive.

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


* Re: mptcp: stricter state check in mptcp_worker: Tests Results
  2023-03-24 10:34 ` [PATCH mptcp-net 2/2] mptcp: stricter state check in mptcp_worker Paolo Abeni
  2023-03-24 12:34   ` mptcp: stricter state check in mptcp_worker: Build Failure MPTCP CI
  2023-03-24 12:36   ` MPTCP CI
@ 2023-03-24 14:25   ` MPTCP CI
  2023-03-24 15:12   ` MPTCP CI
                     ` (2 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: MPTCP CI @ 2023-03-24 14:25 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join):
  - Unstable: 1 failed test(s): selftest_simult_flows 🔴:
  - Task: https://cirrus-ci.com/task/4772230790381568
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4772230790381568/summary/summary.txt

- KVM Validation: debug (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6461080650645504
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6461080650645504/summary/summary.txt

- KVM Validation: normal (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/5898130697224192
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5898130697224192/summary/summary.txt

- KVM Validation: debug (except selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/5335180743802880
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5335180743802880/summary/summary.txt

Initiator: Matthieu Baerts
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/2f4d5b686cc8


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to keep the test suite
stable when executed on a public CI like this one, it is possible that some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


* Re: mptcp: stricter state check in mptcp_worker: Tests Results
  2023-03-24 10:34 ` [PATCH mptcp-net 2/2] mptcp: stricter state check in mptcp_worker Paolo Abeni
                     ` (2 preceding siblings ...)
  2023-03-24 14:25   ` mptcp: stricter state check in mptcp_worker: Tests Results MPTCP CI
@ 2023-03-24 15:12   ` MPTCP CI
  2023-03-27 15:33   ` [PATCH mptcp-net 2/2] mptcp: stricter state check in mptcp_worker Matthieu Baerts
  2023-03-27 17:17   ` Christoph Paasch
  5 siblings, 0 replies; 8+ messages in thread
From: MPTCP CI @ 2023-03-24 15:12 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/5434954880909312
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5434954880909312/summary/summary.txt

- KVM Validation: debug (only selftest_mptcp_join):
  - Unstable: 1 failed test(s): selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/5857167345975296
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5857167345975296/summary/summary.txt

- KVM Validation: normal (only selftest_mptcp_join):
  - Unstable: 1 failed test(s): selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/6560854787751936
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6560854787751936/summary/summary.txt

- KVM Validation: debug (except selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/4731267439132672
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4731267439132672/summary/summary.txt

Initiator: Matthieu Baerts
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/79b03f33ac6a


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to keep the test suite
stable when executed on a public CI like this one, it is possible that some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


* Re: [PATCH mptcp-net 2/2] mptcp: stricter state check in mptcp_worker
  2023-03-24 10:34 ` [PATCH mptcp-net 2/2] mptcp: stricter state check in mptcp_worker Paolo Abeni
                     ` (3 preceding siblings ...)
  2023-03-24 15:12   ` MPTCP CI
@ 2023-03-27 15:33   ` Matthieu Baerts
  2023-03-27 17:17   ` Christoph Paasch
  5 siblings, 0 replies; 8+ messages in thread
From: Matthieu Baerts @ 2023-03-27 15:33 UTC (permalink / raw)
  To: Paolo Abeni, mptcp

Hi Paolo,

On 24/03/2023 11:34, Paolo Abeni wrote:
> As reported by Christoph, the mptcp protocol can run the
> worker when the relevant msk socket is in an unexpected state:
> 
> connect()
> // incoming reset + fastclose
> // the mptcp worker is scheduled
> mptcp_disconnect()
> // msk is now CLOSED
> listen()
> mptcp_worker()
> 
> Leading to the following splat:

(...)

> This change addresses the issue by explicitly checking for bad states
> before running the mptcp worker.
> 
> Reported-by: Christoph Paasch <cpaasch@apple.com>
> Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/374
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>

Thank you for the two fixes! They look good to me:

Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net>

As I said on IRC, it looks like your patch 1/2 could also include the
commit with the same title we have queued for net-next, in our export
branch[1]. Please tell me if you are OK with that.

[1] https://github.com/multipath-tcp/mptcp_net-next/commit/58e381ebe218

I just applied the two patches in our tree (fix for -net):

New patches for t/upstream-net and t/upstream:
- f5fbf4c258e7: mptcp: use mptcp_schedule_work instead of open-coding it (net)
- e8fc17094d85: mptcp: stricter state check in mptcp_worker
- Results: ad9a6d9ee2fb..c50cadb14dbb (export-net)
- Results: 3ab8c05f4350..3dca512f38c6 (export)

Tests are now in progress:

https://cirrus-ci.com/github/multipath-tcp/mptcp_net-next/export-net/20230327T152231
https://cirrus-ci.com/github/multipath-tcp/mptcp_net-next/export/20230327T152231

Cheers,
Matt
-- 
Tessares | Belgium | Hybrid Access Solutions
www.tessares.net


* Re: [PATCH mptcp-net 2/2] mptcp: stricter state check in mptcp_worker
  2023-03-24 10:34 ` [PATCH mptcp-net 2/2] mptcp: stricter state check in mptcp_worker Paolo Abeni
                     ` (4 preceding siblings ...)
  2023-03-27 15:33   ` [PATCH mptcp-net 2/2] mptcp: stricter state check in mptcp_worker Matthieu Baerts
@ 2023-03-27 17:17   ` Christoph Paasch
  5 siblings, 0 replies; 8+ messages in thread
From: Christoph Paasch @ 2023-03-27 17:17 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

> On Mar 24, 2023, at 3:34 AM, Paolo Abeni <pabeni@redhat.com> wrote:
> 
> As reported by Christoph, the mptcp protocol can run the
> worker when the relevant msk socket is in an unexpected state:
> 
> connect()
> // incoming reset + fastclose
> // the mptcp worker is scheduled
> mptcp_disconnect()
> // msk is now CLOSED
> listen()
> mptcp_worker()
> 
> Leading to the following splat:

(...)

> This change addresses the issue by explicitly checking for bad states
> before running the mptcp worker.
> 
> Reported-by: Christoph Paasch <cpaasch@apple.com>
> Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/374
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
> net/mptcp/protocol.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)

Works!

Tested-by: Christoph Paasch <cpaasch@apple.com>

