* [PATCH mptcp-next] mptcp: do not wait for bare sockets' timeout
@ 2023-01-26 15:44 Paolo Abeni
  2023-01-26 16:38 ` mptcp: do not wait for bare sockets' timeout: Tests Results MPTCP CI
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Paolo Abeni @ 2023-01-26 15:44 UTC (permalink / raw)
  To: mptcp

If the peer closes all the existing subflows for a given
mptcp socket and later the application closes it, the current
implementation lets it survive until the timewait timeout expires.

While the above is allowed by the protocol specification, it
consumes resources for almost no reason and additionally
causes sporadic self-test failures.

Let's move the mptcp socket to the TCP_CLOSE state when there are
no alive subflows at close time, so that the allocated resources
will be freed immediately.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
--
this could land either on -net or net-next, as it introduces
a change of behavior that "fixes" self-tests.

The fix tag would be:

Fixes: e16163b6e2b7 ("mptcp: refactor shutdown and close")
---
 net/mptcp/protocol.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 003b44a79fce..43f53fd20364 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2954,6 +2954,7 @@ bool __mptcp_close(struct sock *sk, long timeout)
 	struct mptcp_subflow_context *subflow;
 	struct mptcp_sock *msk = mptcp_sk(sk);
 	bool do_cancel_work = false;
+	int subflows_alive = 0;
 
 	sk->sk_shutdown = SHUTDOWN_MASK;
 
@@ -2980,6 +2981,8 @@ bool __mptcp_close(struct sock *sk, long timeout)
 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
 		bool slow = lock_sock_fast_nested(ssk);
 
+		subflows_alive += ssk->sk_state != TCP_CLOSE;
+
 		/* since the close timeout takes precedence on the fail one,
 		 * cancel the latter
 		 */
@@ -2995,6 +2998,12 @@ bool __mptcp_close(struct sock *sk, long timeout)
 	}
 	sock_orphan(sk);
 
+	/* all the subflows are closed, only timeout can change the msk
+	 * state, let's not keep resources busy for no reasons
+	 */
+	if (subflows_alive == 0)
+		inet_sk_state_store(sk, TCP_CLOSE);
+
 	sock_hold(sk);
 	pr_debug("msk=%p state=%d", sk, sk->sk_state);
 	if (msk->token)
-- 
2.39.1



* Re: mptcp: do not wait for bare sockets' timeout: Tests Results
  2023-01-26 15:44 [PATCH mptcp-next] mptcp: do not wait for bare sockets' timeout Paolo Abeni
@ 2023-01-26 16:38 ` MPTCP CI
  2023-01-26 17:22 ` [PATCH mptcp-next] mptcp: do not wait for bare sockets' timeout Matthieu Baerts
  2023-01-26 17:47 ` mptcp: do not wait for bare sockets' timeout: Tests Results MPTCP CI
  2 siblings, 0 replies; 6+ messages in thread
From: MPTCP CI @ 2023-01-26 16:38 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6644295248117760
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6644295248117760/summary/summary.txt

- KVM Validation: normal (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/4603601666965504
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4603601666965504/summary/summary.txt

- {"code":404,"message":
  - "Can't find artifacts containing file conclusion.txt"}:
  - Task: https://cirrus-ci.com/task/5166551620386816
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5166551620386816/summary/summary.txt

- KVM Validation: debug (except selftest_mptcp_join):
  - Unstable: 1 failed test(s): packetdrill_sockopts 🔴:
  - Task: https://cirrus-ci.com/task/5729501573808128
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5729501573808128/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/bb52e8434e1d


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to have a
stable test suite when executed on a public CI like here, it is possible
that some reported issues are not due to your modifications. Still, do not
hesitate to help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


* Re: [PATCH mptcp-next] mptcp: do not wait for bare sockets' timeout
  2023-01-26 15:44 [PATCH mptcp-next] mptcp: do not wait for bare sockets' timeout Paolo Abeni
  2023-01-26 16:38 ` mptcp: do not wait for bare sockets' timeout: Tests Results MPTCP CI
@ 2023-01-26 17:22 ` Matthieu Baerts
  2023-01-26 17:36   ` Paolo Abeni
  2023-01-26 17:47 ` mptcp: do not wait for bare sockets' timeout: Tests Results MPTCP CI
  2 siblings, 1 reply; 6+ messages in thread
From: Matthieu Baerts @ 2023-01-26 17:22 UTC (permalink / raw)
  To: Paolo Abeni, mptcp

Hi Paolo,

On 26/01/2023 16:44, Paolo Abeni wrote:
> If the peer closes all the existing subflows for a given
> mptcp socket and later the application closes it, the current
> implementation lets it survive until the timewait timeout expires.

Thank you for having looked at that and provided a solution!

> While the above is allowed by the protocol specification, it
> consumes resources for almost no reason and additionally
> causes sporadic self-test failures.

I guess if we are in __mptcp_close(), there is no reason to keep the
MPTCP socket alive if there are no more subflows: if new subflow
requests come in, they will not be accepted, right?

When the last subflow is closed without an MPTCP DATA FIN, it might be
good to keep the MPTCP socket alive for a bit longer before closing it,
to let the client join from another network
(make-before-break?). (This should be controllable from userspace,
but that's another topic.)

> Let's move the mptcp socket to the TCP_CLOSE state when there are
> no alive subflows at close time, so that the allocated resources
> will be freed immediately.
> 
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> --
> this could land either on -net or net-next, as it introduces
> a change of behavior that "fixes" self-tests.
> 
> The fix tag would be:
> 
> Fixes: e16163b6e2b7 ("mptcp: refactor shutdown and close")

If I understand properly, this doesn't break any 'make-before-break'
feature, but it cleans up resources more quickly when there is no reason
to wait, right? In that case it makes sense to target the -net tree.

Cheers,
Matt
-- 
Tessares | Belgium | Hybrid Access Solutions
www.tessares.net


* Re: [PATCH mptcp-next] mptcp: do not wait for bare sockets' timeout
  2023-01-26 17:22 ` [PATCH mptcp-next] mptcp: do not wait for bare sockets' timeout Matthieu Baerts
@ 2023-01-26 17:36   ` Paolo Abeni
  2023-01-26 18:01     ` Matthieu Baerts
  0 siblings, 1 reply; 6+ messages in thread
From: Paolo Abeni @ 2023-01-26 17:36 UTC (permalink / raw)
  To: Matthieu Baerts, mptcp

On Thu, 2023-01-26 at 18:22 +0100, Matthieu Baerts wrote:
> Hi Paolo,
> 
> On 26/01/2023 16:44, Paolo Abeni wrote:
> > If the peer closes all the existing subflows for a given
> > mptcp socket and later the application closes it, the current
> > implementation lets it survive until the timewait timeout expires.
> 
> Thank you for having looked at that and provided a solution!
> 
> > While the above is allowed by the protocol specification, it
> > consumes resources for almost no reason and additionally
> > causes sporadic self-test failures.
> 
> I guess if we are in __mptcp_close(), there is no reason to keep the
> MPTCP socket alive if there are no more subflows: if new subflow
> requests come in, they will not be accepted, right?

Correct: both the passive and the active subflow creation paths check
mptcp_is_fully_established(sk), which in turn always returns false
after close(), since the msk is no longer in ESTABLISHED status.
> 
> When the last subflow is closed without an MPTCP DATA FIN, it might be
> good to keep the MPTCP socket alive for a bit longer before closing it,
> to let the client join from another network
> (make-before-break?).

Yes, and we do that. Even with no subflows alive, the msk is closed only
_after_ the close(). Until then it stays happily around and the msk
socket status is not affected at all.

> (This should be controllable from userspace,
> but that's another topic.)
> 
> > Let's move the mptcp socket to the TCP_CLOSE state when there are
> > no alive subflows at close time, so that the allocated resources
> > will be freed immediately.
> > 
> > Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> > --
> > this could land either on -net or net-next, as it introduces
> > a change of behavior that "fixes" self-tests.
> > 
> > The fix tag would be:
> > 
> > Fixes: e16163b6e2b7 ("mptcp: refactor shutdown and close")
> 
> If I understand properly, this doesn't break any 'make-before-break'
> feature, but it cleans up resources more quickly when there is no reason
> to wait, right? In that case it makes sense to target the -net tree.

I'm ok with the above.

Thanks for the review,

Paolo



* Re: mptcp: do not wait for bare sockets' timeout: Tests Results
  2023-01-26 15:44 [PATCH mptcp-next] mptcp: do not wait for bare sockets' timeout Paolo Abeni
  2023-01-26 16:38 ` mptcp: do not wait for bare sockets' timeout: Tests Results MPTCP CI
  2023-01-26 17:22 ` [PATCH mptcp-next] mptcp: do not wait for bare sockets' timeout Matthieu Baerts
@ 2023-01-26 17:47 ` MPTCP CI
  2 siblings, 0 replies; 6+ messages in thread
From: MPTCP CI @ 2023-01-26 17:47 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6644295248117760
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6644295248117760/summary/summary.txt

- KVM Validation: normal (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/4603601666965504
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4603601666965504/summary/summary.txt

- KVM Validation: debug (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/4871645106536448
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4871645106536448/summary/summary.txt

- KVM Validation: debug (except selftest_mptcp_join):
  - Unstable: 1 failed test(s): packetdrill_sockopts 🔴:
  - Task: https://cirrus-ci.com/task/5729501573808128
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5729501573808128/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/bb52e8434e1d


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to have a
stable test suite when executed on a public CI like here, it is possible
that some reported issues are not due to your modifications. Still, do not
hesitate to help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


* Re: [PATCH mptcp-next] mptcp: do not wait for bare sockets' timeout
  2023-01-26 17:36   ` Paolo Abeni
@ 2023-01-26 18:01     ` Matthieu Baerts
  0 siblings, 0 replies; 6+ messages in thread
From: Matthieu Baerts @ 2023-01-26 18:01 UTC (permalink / raw)
  To: Paolo Abeni, mptcp

Hi Paolo,

On 26/01/2023 18:36, Paolo Abeni wrote:
> On Thu, 2023-01-26 at 18:22 +0100, Matthieu Baerts wrote:
>> Hi Paolo,
>>
>> On 26/01/2023 16:44, Paolo Abeni wrote:
>>> If the peer closes all the existing subflows for a given
>>> mptcp socket and later the application closes it, the current
>>> implementation lets it survive until the timewait timeout expires.
>>
>> Thank you for having looked at that and provided a solution!
>>
>>> While the above is allowed by the protocol specification, it
>>> consumes resources for almost no reason and additionally
>>> causes sporadic self-test failures.
>>
>> I guess if we are in __mptcp_close(), there is no reason to keep the
>> MPTCP socket alive if there are no more subflows: if new subflow
>> requests come in, they will not be accepted, right?
> 
> Correct: both the passive and the active subflow creation paths check
> mptcp_is_fully_established(sk), which in turn always returns false
> after close(), since the msk is no longer in ESTABLISHED status.

Thank you for the confirmation!

Then it looks good to me!

Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net>

>> When the last subflow is closed without an MPTCP DATA FIN, it might be
>> good to keep the MPTCP socket alive for a bit longer before closing it,
>> to let the client join from another network
>> (make-before-break?).
> 
> Yes, and we do that. Even with no subflows alive, the msk is closed only
> _after_ the close(). Until then it stays happily around and the msk
> socket status is not affected at all.

I missed that part, thank you!
(and now I remember we already discussed that, sorry)

>> (This should be controllable from userspace,
>> but that's another topic.)
>>
>>> Let's move the mptcp socket to the TCP_CLOSE state when there are
>>> no alive subflows at close time, so that the allocated resources
>>> will be freed immediately.
>>>
>>> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
>>> --
>>> this could land either on -net or net-next, as it introduces
>>> a change of behavior that "fixes" self-tests.
>>>
>>> The fix tag would be:
>>>
>>> Fixes: e16163b6e2b7 ("mptcp: refactor shutdown and close")
>>
>> If I understand properly, this doesn't break any 'make-before-break'
>> feature, but it cleans up resources more quickly when there is no reason
>> to wait, right? In that case it makes sense to target the -net tree.
> 
> I'm ok with the above.

Good, just applied in our tree (fixes for -net) with my RvB and the
Fixes tags:

New patches for t/upstream-net:
- fca3ab68ee5a: mptcp: do not wait for bare sockets' timeout
- Results: 03f3f3e2e0ae..c32eb8d771ff (export-net)
- Results: e34b063a59ec..302a589ae88a (export)

Tests are now in progress:

https://cirrus-ci.com/github/multipath-tcp/mptcp_net-next/export-net/20230126T175912
https://cirrus-ci.com/github/multipath-tcp/mptcp_net-next/export/20230126T175912

Cheers,
Matt
-- 
Tessares | Belgium | Hybrid Access Solutions
www.tessares.net

