* [PATCH mptcp-next 0/2] mptcp: rx path refactor follow-ups
@ 2022-08-24 11:18 Paolo Abeni
  2022-08-24 11:18 ` [PATCH mptcp-next 1/2] Squash-to: "mptcp: move msk input path under full msk socket lock" Paolo Abeni
  2022-08-24 11:18 ` [PATCH mptcp-next 2/2] mptcp: never re-inject data on a single subflow Paolo Abeni
  0 siblings, 2 replies; 5+ messages in thread
From: Paolo Abeni @ 2022-08-24 11:18 UTC (permalink / raw)
  To: mptcp

A couple of fixes trying to address the self-test failures
seen after the recent refactor.

Paolo Abeni (2):
  Squash-to: "mptcp: move msk input path under full msk socket lock"
  mptcp: never re-inject data on a single subflow

 net/mptcp/protocol.c | 3 ++-
 net/mptcp/sched.c    | 4 ++++
 2 files changed, 6 insertions(+), 1 deletion(-)

-- 
2.37.1



* [PATCH mptcp-next 1/2] Squash-to: "mptcp: move msk input path under full msk socket lock"
  2022-08-24 11:18 [PATCH mptcp-next 0/2] mptcp: rx path refactor follow-ups Paolo Abeni
@ 2022-08-24 11:18 ` Paolo Abeni
  2022-08-24 12:38   ` Paolo Abeni
  2022-08-24 11:18 ` [PATCH mptcp-next 2/2] mptcp: never re-inject data on a single subflow Paolo Abeni
  1 sibling, 1 reply; 5+ messages in thread
From: Paolo Abeni @ 2022-08-24 11:18 UTC (permalink / raw)
  To: mptcp

Whoops, I forgot to actually test for pending data at release_cb time.

The above causes several recurring failures in the self-tests.

Note that this could badly affect MPTCP performance (as we now really
move relevant CPU time from the subflow rx path/ksoftirqd to the
user-space process), although I haven't run perf tests on top of this
change yet.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
This should be placed just before "mptcp: move RCVPRUNE event later",
i.e. the last rx path refactor patch.
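
For reference, here is a minimal, standalone sketch of the deferred-work
pattern the MPTCP_DEQUEUE bit hooks into: work requested from softirq
context is recorded as flag bits and consumed when the socket lock owner
releases the lock. This is explicitly not the in-tree code - the
prefix-less flag names, the helper and the printf placeholders are made
up for illustration only; it just shows why leaving DEQUEUE out of the
"needs processing" mask means pending rx data never gets drained.

/* Standalone illustration (not kernel code) of the release_cb idea. */
#include <stdio.h>

#define PUSH_PENDING	(1u << 0)
#define RETRANSMIT	(1u << 1)
#define FLUSH_JOIN_LIST	(1u << 2)
#define DEQUEUE		(1u << 3)

/* without DEQUEUE in this mask, queued rx data is silently ignored */
#define PROCESS_CTX_NEED (PUSH_PENDING | RETRANSMIT | FLUSH_JOIN_LIST | DEQUEUE)

static void release_cb(unsigned int *cb_flags)
{
	/* loop until no deferred work is left, clearing the flags before
	 * processing them so that newly raised requests are not lost
	 */
	while (*cb_flags & PROCESS_CTX_NEED) {
		unsigned int flags = *cb_flags & PROCESS_CTX_NEED;

		*cb_flags &= ~flags;
		if (flags & DEQUEUE)
			printf("draining subflow rx queues into the msk receive queue\n");
		if (flags & PUSH_PENDING)
			printf("pushing pending data\n");
		if (flags & RETRANSMIT)
			printf("scheduling MPTCP-level retransmit\n");
	}
}

int main(void)
{
	unsigned int cb_flags = DEQUEUE;

	release_cb(&cb_flags);	/* with DEQUEUE in the mask, the data is drained */
	return 0;
}
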
---
 net/mptcp/protocol.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 74699bd47edf..d47c19494023 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -3008,7 +3008,8 @@ void __mptcp_check_push(struct sock *sk, struct sock *ssk)
 
 #define MPTCP_FLAGS_PROCESS_CTX_NEED (BIT(MPTCP_PUSH_PENDING) | \
 				      BIT(MPTCP_RETRANSMIT) | \
-				      BIT(MPTCP_FLUSH_JOIN_LIST))
+				      BIT(MPTCP_FLUSH_JOIN_LIST) | \
+				      BIT(MPTCP_DEQUEUE))
 
 /* processes deferred events and flush wmem */
 static void mptcp_release_cb(struct sock *sk)
-- 
2.37.1



* [PATCH mptcp-next 2/2] mptcp: never re-inject data on a single subflow
  2022-08-24 11:18 [PATCH mptcp-next 0/2] mptcp: rx path refactor follow-ups Paolo Abeni
  2022-08-24 11:18 ` [PATCH mptcp-next 1/2] Squash-to: "mptcp: move msk input path under full msk socket lock" Paolo Abeni
@ 2022-08-24 11:18 ` Paolo Abeni
  2022-08-24 13:00   ` mptcp: never re-inject data on a single subflow: Tests Results MPTCP CI
  1 sibling, 1 reply; 5+ messages in thread
From: Paolo Abeni @ 2022-08-24 11:18 UTC (permalink / raw)
  To: mptcp

If the MPTCP-level data ack is delayed with respect to the TCP-level
ack, there is a single subflow and user-space stops pushing data, the
MPTCP-level retransmit may kick in, incorrectly preventing the
infinite mapping fallback.

All of the above is triggered quite reliably by the self-tests once we
move the MPTCP receive path under the msk socket lock, delaying the
MPTCP-level acks.

Plain TCP is good enough to handle retransmissions as needed.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
Note that there are a couple of sad untold stories here. The first one
is as follows: with the delayed mptcp-level ack, at MPC handshake time
we have:

MPC + SYN ->
	<- MPC + SYN_ACK(1)
MPC + ACK(1) ->
	<- DSS(n bytes) + ACK(1)
DSS_ACK(1) + ACK(n+1) ->

and no more acks from the client side, even after reading the
incoming n bytes of data. So a possible alternative solution would be
to tweak mptcp_cleanup_rbuf() to properly handle this scenario.

Additionally, possibly still due to the mptcp-level delayed ack, we
may need to tweak mptcp_cleanup_rbuf() further - e.g. do we need to
force an mptcp-level ack when a large enough amount of newly "ackable"
data becomes available due to __mptcp_move_skbs()? I don't know.

I guess/hope that the condition on mptcp-level window updates already
implemented in mptcp_cleanup_rbuf() should also cover the above, but
I'm not sure.
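
To make the second point a bit more concrete, here is a rough,
standalone sketch of such a heuristic. It is explicitly not the
in-tree mptcp_cleanup_rbuf(): the struct, its fields and the
half-rcvbuf threshold are assumptions, loosely modelled on the usual
tcp_cleanup_rbuf() rationale.

/* Standalone sketch of a "should we force an mptcp-level DATA_ACK after
 * __mptcp_move_skbs()?" check (illustration only, not kernel code).
 */
#include <stdbool.h>
#include <stdio.h>

struct msk_view {
	unsigned long long rcv_wnd_sent;	/* right edge of the last advertised window */
	unsigned long long ack_seq;		/* mptcp-level sequence we could ack now */
	unsigned int rcvbuf;			/* receive buffer size */
};

static bool should_force_data_ack(const struct msk_view *msk)
{
	unsigned long long new_edge = msk->ack_seq + msk->rcvbuf;

	/* ack when the window edge would move forward by at least half
	 * the receive buffer, i.e. a "large enough" amount of data just
	 * became ackable
	 */
	return new_edge - msk->rcv_wnd_sent >= msk->rcvbuf / 2;
}

int main(void)
{
	struct msk_view msk = {
		.rcv_wnd_sent	= 66536,	/* old ack_seq (1000) + rcvbuf */
		.ack_seq	= 64000,	/* 63000 bytes were just moved */
		.rcvbuf		= 65536,
	};

	printf("force DATA_ACK: %s\n", should_force_data_ack(&msk) ? "yes" : "no");
	return 0;
}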

The other point is that I can't recall the previous discussion about
avoiding re-injection with a single subflow. I now think re-injecting
there is bad, at least with delayed mptcp-level acks, and I don't see
a need for it, but possibly/likely I'm forgetting something?!?
---
 net/mptcp/sched.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/net/mptcp/sched.c b/net/mptcp/sched.c
index 044c5ec8bbfb..409e832085c2 100644
--- a/net/mptcp/sched.c
+++ b/net/mptcp/sched.c
@@ -162,6 +162,10 @@ struct sock *mptcp_sched_get_retrans(struct mptcp_sock *msk)
 	if (__mptcp_check_fallback(msk))
 		return NULL;
 
+	/* never retransmit when there is a single subflow */
+	if (list_is_singular(&msk->conn_list) && list_empty(&msk->join_list))
+		return NULL;
+
 	if (!msk->sched)
 		return mptcp_subflow_get_retrans(msk);
 
-- 
2.37.1



* Re: [PATCH mptcp-next 1/2] Squash-to: "mptcp: move msk input path under full msk socket lock"
  2022-08-24 11:18 ` [PATCH mptcp-next 1/2] Squash-to: "mptcp: move msk input path under full msk socket lock" Paolo Abeni
@ 2022-08-24 12:38   ` Paolo Abeni
  0 siblings, 0 replies; 5+ messages in thread
From: Paolo Abeni @ 2022-08-24 12:38 UTC (permalink / raw)
  To: mptcp

On Wed, 2022-08-24 at 13:18 +0200, Paolo Abeni wrote:
> Whoops, I forgot to actually test for pending data at release_cb time.
> 
> The above causes several recurring failures in the self-tests.
> 
> Note that this could badly affect MPTCP performance (as we now really
> move relevant CPU time from the subflow rx path/ksoftirqd to the
> user-space process), although I haven't run perf tests on top of this
> change yet.

The first raw numbers are quite discouraging :/
-30% on the single subflow test.

/P



* Re: mptcp: never re-inject data on a single subflow: Tests Results
  2022-08-24 11:18 ` [PATCH mptcp-next 2/2] mptcp: never re-inject data on a single subflow Paolo Abeni
@ 2022-08-24 13:00   ` MPTCP CI
  0 siblings, 0 replies; 5+ messages in thread
From: MPTCP CI @ 2022-08-24 13:00 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Unstable: 2 failed test(s): selftest_mptcp_join selftest_simult_flows 🔴:
  - Task: https://cirrus-ci.com/task/4563343720579072
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4563343720579072/summary/summary.txt

- KVM Validation: debug:
  - Unstable: 2 failed test(s): selftest_mptcp_connect selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/5689243627421696
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5689243627421696/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/a85c4255a43d


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to have a stable
test suite when executed on a public CI like this one, it is possible
that some reported issues are not due to your modifications. Still, do
not hesitate to help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)

