mptcp.lists.linux.dev archive mirror
* [PATCH mptcp-next] mptcp: use fastclose on more edge scenarios.
@ 2022-08-26 17:03 Paolo Abeni
  2022-08-26 19:03 ` mptcp: use fastclose on more edge scenarios.: Tests Results MPTCP CI
  2022-08-29 13:23 ` [PATCH mptcp-next] mptcp: use fastclose on more edge scenarios Paolo Abeni
  0 siblings, 2 replies; 3+ messages in thread
From: Paolo Abeni @ 2022-08-26 17:03 UTC (permalink / raw)
  To: mptcp

Daire reported a user-space application hang-up when the
peer is forcibly closed before the data transfer completes.

The relevant application expects the peer to either
do an application-level clean shutdown or a transport-level
connection reset.

We can accommodate such a user by extending the fastclose
usage to two more cases: at fd close time, when the msk socket
still has unread data, and at FIN_WAIT timeout.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
No Fixes nor Reported-by tags, as this is IMHO not for -net and
is more a new feature than a bug-fix, but we can adjust.
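
For context, below is a minimal user-space sketch of the scenario this
change targets. It is illustrative only: the loopback address, port
number and buffer size are assumptions, not details from Daire's
report. The application blocks in recv() waiting for more data; when
the peer endpoint is closed while it still has unread data, it now
sends the MPTCP-level equivalent of a TCP reset, so the recv() here is
expected to fail with ECONNRESET instead of hanging:

    /* Hypothetical reader side of the reported scenario. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #ifndef IPPROTO_MPTCP
    #define IPPROTO_MPTCP 262 /* from linux/in.h, for older userspace headers */
    #endif

    int main(void)
    {
            struct sockaddr_in addr = {
                    .sin_family = AF_INET,
                    .sin_port   = htons(8080), /* illustrative port */
            };
            char buf[4096];
            ssize_t ret;
            int fd;

            addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
            fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);
            if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
                    return 1;

            /* Blocks until data, EOF or a reset arrives. */
            while ((ret = recv(fd, buf, sizeof(buf), 0)) > 0)
                    ;
            if (ret < 0) /* ECONNRESET expected with this patch applied */
                    printf("recv: %s\n", strerror(errno));
            close(fd);
            return 0;
    }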
---
 net/mptcp/protocol.c | 42 +++++++++++++++++++++++++++++-------------
 1 file changed, 29 insertions(+), 13 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index f4891db86217..417ee8860366 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2515,6 +2515,16 @@ static void mptcp_mp_fail_no_response(struct mptcp_sock *msk)
 	mptcp_reset_timeout(msk, 0);
 }
 
+static void mptcp_do_fastclose(struct sock *sk)
+{
+	struct mptcp_subflow_context *subflow, *tmp;
+	struct mptcp_sock *msk = mptcp_sk(sk);
+
+	mptcp_for_each_subflow_safe(msk, subflow, tmp)
+		__mptcp_close_ssk(sk, mptcp_subflow_tcp_sock(subflow),
+				  subflow, MPTCP_CF_FASTCLOSE);
+}
+
 static void mptcp_worker(struct work_struct *work)
 {
 	struct mptcp_sock *msk = container_of(work, struct mptcp_sock, work);
@@ -2546,6 +2556,7 @@ static void mptcp_worker(struct work_struct *work)
 	if (sock_flag(sk, SOCK_DEAD) &&
 	    (mptcp_check_close_timeout(sk) || sk->sk_state == TCP_CLOSE)) {
 		inet_sk_state_store(sk, TCP_CLOSE);
+		mptcp_do_fastclose(sk);
 		__mptcp_destroy_sock(sk);
 		goto unlock;
 	}
@@ -2798,6 +2809,18 @@ static void __mptcp_destroy_sock(struct sock *sk)
 	sock_put(sk);
 }
 
+static __poll_t mptcp_check_readable(struct mptcp_sock *msk)
+{
+	/* Concurrent splices from sk_receive_queue into receive_queue will
+	 * always show at least one non-empty queue when checked in this order.
+	 */
+	if (skb_queue_empty_lockless(&((struct sock *)msk)->sk_receive_queue) &&
+	    skb_queue_empty_lockless(&msk->receive_queue))
+		return 0;
+
+	return EPOLLIN | EPOLLRDNORM;
+}
+
 static void mptcp_close(struct sock *sk, long timeout)
 {
 	struct mptcp_subflow_context *subflow;
@@ -2812,8 +2835,13 @@ static void mptcp_close(struct sock *sk, long timeout)
 		goto cleanup;
 	}
 
-	if (mptcp_close_state(sk))
+	if (mptcp_check_readable(msk)) {
+		/* the msk still has unread data: do the MPTCP equivalent of a TCP reset */
+		inet_sk_state_store(sk, TCP_CLOSE);
+		mptcp_do_fastclose(sk);
+	} else if (mptcp_close_state(sk)) {
 		__mptcp_wr_shutdown(sk);
+	}
 
 	sk_stream_wait_close(sk, timeout);
 
@@ -3621,18 +3649,6 @@ static int mptcp_stream_accept(struct socket *sock, struct socket *newsock,
 	return err;
 }
 
-static __poll_t mptcp_check_readable(struct mptcp_sock *msk)
-{
-	/* Concurrent splices from sk_receive_queue into receive_queue will
-	 * always show at least one non-empty queue when checked in this order.
-	 */
-	if (skb_queue_empty_lockless(&((struct sock *)msk)->sk_receive_queue) &&
-	    skb_queue_empty_lockless(&msk->receive_queue))
-		return 0;
-
-	return EPOLLIN | EPOLLRDNORM;
-}
-
 static __poll_t mptcp_check_writeable(struct mptcp_sock *msk)
 {
 	struct sock *sk = (struct sock *)msk;
-- 
2.37.1
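
A side note on the relocated mptcp_check_readable(): its correctness
depends on the order of the two lockless queue checks. Below is a
stand-alone, simplified model of that ordering argument, written with
plain C11 atomics rather than kernel code, with the queues reduced to
counters and all names made up for illustration. Data only ever flows
from "src" (sk_receive_queue) to "dst" (msk->receive_queue); if the
splice is modelled as enqueue-on-dst before dequeue-from-src, a reader
that checks src first can never see both queues empty while data is in
flight:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_int src_len = 1; /* one in-flight skb, initially on src */
    static atomic_int dst_len = 0;

    /* Toy splice: the data shows up on dst before it vanishes from src. */
    static void splice_one(void)
    {
            atomic_fetch_add(&dst_len, 1);
            atomic_fetch_sub(&src_len, 1);
    }

    /* The mptcp_check_readable() ordering: src checked before dst. */
    static bool check_readable(void)
    {
            return atomic_load(&src_len) > 0 || atomic_load(&dst_len) > 0;
    }

    int main(void)
    {
            printf("before splice: readable=%d\n", check_readable());
            splice_one();
            printf("after splice:  readable=%d\n", check_readable());
            return 0;
    }

In the kernel the splice runs under the msk data lock, so the above is
only a sketch of why this particular check order is safe, not of the
actual locking.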



* Re: mptcp: use fastclose on more edge scenarios.: Tests Results
  2022-08-26 17:03 [PATCH mptcp-next] mptcp: use fastclose on more edge scenarios Paolo Abeni
@ 2022-08-26 19:03 ` MPTCP CI
  2022-08-29 13:23 ` [PATCH mptcp-next] mptcp: use fastclose on more edge scenarios Paolo Abeni
  1 sibling, 0 replies; 3+ messages in thread
From: MPTCP CI @ 2022-08-26 19:03 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Unstable: 2 failed test(s): packetdrill_syscalls selftest_simult_flows 🔴:
  - Task: https://cirrus-ci.com/task/6532607029542912
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6532607029542912/summary/summary.txt

- {"code":404,"message":"Can't find artifacts containing file conclusion.txt"}:
  - Task: https://cirrus-ci.com/task/4703019680923648
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4703019680923648/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/19f1e4100f11


If there are issues, you can reproduce them using the same environment
as the one used by the CI, thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to have a stable
test suite when executed on a public CI like this one, it is possible
that some reported issues are not due to your modifications. Still, do
not hesitate to help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


* Re: [PATCH mptcp-next] mptcp: use fastclose on more edge scenarios.
  2022-08-26 17:03 [PATCH mptcp-next] mptcp: use fastclose on more edge scenarios Paolo Abeni
  2022-08-26 19:03 ` mptcp: use fastclose on more edge scenarios.: Tests Results MPTCP CI
@ 2022-08-29 13:23 ` Paolo Abeni
  1 sibling, 0 replies; 3+ messages in thread
From: Paolo Abeni @ 2022-08-29 13:23 UTC (permalink / raw)
  To: mptcp

On Fri, 2022-08-26 at 19:03 +0200, Paolo Abeni wrote:
> Daire reported a user-space application hang-up when the
> peer is forcibly closed before the data transfer completes.
> 
> The relevant application expects the peer to either
> do an application-level clean shutdown or a transport-level
> connection reset.
> 
> We can accommodate such a user by extending the fastclose
> usage to two more cases: at fd close time, when the msk socket
> still has unread data, and at FIN_WAIT timeout.
> 
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>

Self-nack: this causes a self-tests hang-up. The root cause of the
hang is likely pre-existing, and this patch just uncovers it.

Will send a v2.

/P


