* [PATCH mptcp-next v2] mptcp: call mp_fail_no_response only when needed
@ 2022-06-08 9:33 Geliang Tang
2022-06-08 10:48 ` Paolo Abeni
2022-06-08 10:57 ` mptcp: call mp_fail_no_response only when needed: Tests Results MPTCP CI
0 siblings, 2 replies; 4+ messages in thread
From: Geliang Tang @ 2022-06-08 9:33 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
mptcp_mp_fail_no_response() shouldn't be invoked on each worker run; it
should be invoked only when the MP_FAIL response timeout occurs.
This patch adds a new msk flag, MPTCP_WORK_TIMEOUT, sets it in
mptcp_timeout_timer(), and tests it in mptcp_worker() before invoking
mptcp_mp_fail_no_response().
Check the SOCK_DEAD flag on the subflow in mptcp_mp_fail_no_response()
to make sure it's an MP_FAIL timeout, not an MPTCP socket close timeout.
The mp_fail_response_expect_subflow() helper is now only used by
mptcp_mp_fail_no_response(), so move its definition right before
mptcp_mp_fail_no_response().
Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/281
Fixes: d9fb797046c5 ("mptcp: Do not traverse the subflow connection list without lock")
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
net/mptcp/protocol.c | 41 ++++++++++++++++++++++++-----------------
net/mptcp/protocol.h | 1 +
2 files changed, 25 insertions(+), 17 deletions(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index d6aef4b13b8a..123fa2b3f200 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2167,25 +2167,13 @@ static void mptcp_retransmit_timer(struct timer_list *t)
sock_put(sk);
}
-static struct mptcp_subflow_context *
-mp_fail_response_expect_subflow(struct mptcp_sock *msk)
-{
- struct mptcp_subflow_context *subflow, *ret = NULL;
-
- mptcp_for_each_subflow(msk, subflow) {
- if (READ_ONCE(subflow->mp_fail_response_expect)) {
- ret = subflow;
- break;
- }
- }
-
- return ret;
-}
-
static void mptcp_timeout_timer(struct timer_list *t)
{
struct sock *sk = from_timer(sk, t, sk_timer);
+ bh_lock_sock(sk);
+ __set_bit(MPTCP_WORK_TIMEOUT, &mptcp_sk(sk)->flags);
+ bh_unlock_sock(sk);
mptcp_schedule_work(sk);
sock_put(sk);
}
@@ -2505,6 +2493,21 @@ static void __mptcp_retrans(struct sock *sk)
mptcp_reset_timer(sk);
}
+static struct mptcp_subflow_context *
+mp_fail_response_expect_subflow(struct mptcp_sock *msk)
+{
+ struct mptcp_subflow_context *subflow, *ret = NULL;
+
+ mptcp_for_each_subflow(msk, subflow) {
+ if (READ_ONCE(subflow->mp_fail_response_expect)) {
+ ret = subflow;
+ break;
+ }
+ }
+
+ return ret;
+}
+
static void mptcp_mp_fail_no_response(struct mptcp_sock *msk)
{
struct mptcp_subflow_context *subflow;
@@ -2513,9 +2516,12 @@ static void mptcp_mp_fail_no_response(struct mptcp_sock *msk)
subflow = mp_fail_response_expect_subflow(msk);
if (subflow) {
+ ssk = mptcp_subflow_tcp_sock(subflow);
+ if (sock_flag(ssk, SOCK_DEAD))
+ return;
+
pr_debug("MP_FAIL doesn't respond, reset the subflow");
- ssk = mptcp_subflow_tcp_sock(subflow);
slow = lock_sock_fast(ssk);
mptcp_subflow_reset(ssk);
unlock_sock_fast(ssk, slow);
@@ -2562,7 +2568,8 @@ static void mptcp_worker(struct work_struct *work)
if (test_and_clear_bit(MPTCP_WORK_RTX, &msk->flags))
__mptcp_retrans(sk);
- mptcp_mp_fail_no_response(msk);
+ if (test_and_clear_bit(MPTCP_WORK_TIMEOUT, &msk->flags))
+ mptcp_mp_fail_no_response(msk);
unlock:
release_sock(sk);
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index d406b5afbee4..f78caaaaa360 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -116,6 +116,7 @@
#define MPTCP_WORK_EOF 3
#define MPTCP_FALLBACK_DONE 4
#define MPTCP_WORK_CLOSE_SUBFLOW 5
+#define MPTCP_WORK_TIMEOUT 6
/* MPTCP socket release cb flags */
#define MPTCP_PUSH_PENDING 1
--
2.35.3
^ permalink raw reply related [flat|nested] 4+ messages in thread
* Re: [PATCH mptcp-next v2] mptcp: call mp_fail_no_response only when needed
2022-06-08 9:33 [PATCH mptcp-next v2] mptcp: call mp_fail_no_response only when needed Geliang Tang
@ 2022-06-08 10:48 ` Paolo Abeni
2022-06-08 10:57 ` mptcp: call mp_fail_no_response only when needed: Tests Results MPTCP CI
1 sibling, 0 replies; 4+ messages in thread
From: Paolo Abeni @ 2022-06-08 10:48 UTC (permalink / raw)
To: Geliang Tang, mptcp
On Wed, 2022-06-08 at 17:33 +0800, Geliang Tang wrote:
> mptcp_mp_fail_no_response() shouldn't be invoked on each worker run; it
> should be invoked only when the MP_FAIL response timeout occurs.
>
> This patch adds a new msk flag, MPTCP_WORK_TIMEOUT, sets it in
> mptcp_timeout_timer(), and tests it in mptcp_worker() before invoking
> mptcp_mp_fail_no_response().
>
> Check the SOCK_DEAD flag on the subflow in mptcp_mp_fail_no_response()
> to make sure it's an MP_FAIL timeout, not an MPTCP socket close timeout.
>
> The mp_fail_response_expect_subflow() helper is now only used by
> mptcp_mp_fail_no_response(), so move its definition right before
> mptcp_mp_fail_no_response().
>
> Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/281
> Fixes: d9fb797046c5 ("mptcp: Do not traverse the subflow connection list without lock")
> Signed-off-by: Geliang Tang <geliang.tang@suse.com>
> ---
> net/mptcp/protocol.c | 41 ++++++++++++++++++++++++-----------------
> net/mptcp/protocol.h | 1 +
> 2 files changed, 25 insertions(+), 17 deletions(-)
>
> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
> index d6aef4b13b8a..123fa2b3f200 100644
> --- a/net/mptcp/protocol.c
> +++ b/net/mptcp/protocol.c
> @@ -2167,25 +2167,13 @@ static void mptcp_retransmit_timer(struct timer_list *t)
> sock_put(sk);
> }
>
> -static struct mptcp_subflow_context *
> -mp_fail_response_expect_subflow(struct mptcp_sock *msk)
> -{
> - struct mptcp_subflow_context *subflow, *ret = NULL;
> -
> - mptcp_for_each_subflow(msk, subflow) {
> - if (READ_ONCE(subflow->mp_fail_response_expect)) {
> - ret = subflow;
> - break;
> - }
> - }
> -
> - return ret;
> -}
I think you don't need to move mp_fail_response_expect_subflow() around.
> -
> static void mptcp_timeout_timer(struct timer_list *t)
> {
> struct sock *sk = from_timer(sk, t, sk_timer);
>
> + bh_lock_sock(sk);
> + __set_bit(MPTCP_WORK_TIMEOUT, &mptcp_sk(sk)->flags);
> + bh_unlock_sock(sk);
This is not correct: the caller must always manipulate msk->flags with
atomic operations; there is no need to acquire the msk spin lock.
> mptcp_schedule_work(sk);
> sock_put(sk);
> }
> @@ -2505,6 +2493,21 @@ static void __mptcp_retrans(struct sock *sk)
> mptcp_reset_timer(sk);
> }
>
> +static struct mptcp_subflow_context *
> +mp_fail_response_expect_subflow(struct mptcp_sock *msk)
> +{
> + struct mptcp_subflow_context *subflow, *ret = NULL;
> +
> + mptcp_for_each_subflow(msk, subflow) {
> + if (READ_ONCE(subflow->mp_fail_response_expect)) {
> + ret = subflow;
> + break;
> + }
> + }
> +
> + return ret;
> +}
> +
> static void mptcp_mp_fail_no_response(struct mptcp_sock *msk)
> {
> struct mptcp_subflow_context *subflow;
> @@ -2513,9 +2516,12 @@ static void mptcp_mp_fail_no_response(struct mptcp_sock *msk)
>
> subflow = mp_fail_response_expect_subflow(msk);
> if (subflow) {
> + ssk = mptcp_subflow_tcp_sock(subflow);
> + if (sock_flag(ssk, SOCK_DEAD))
> + return;
I *think*/guess it's better to check the msk SOCK_DEAD flag?!?
Additionally, if you do that, you can move the check into mptcp_worker().
Note that if the mp_fail timeout overlaps with the close timeout, things
will be quite fuzzy, and very complex to follow/understand at best.
What about the following?
- subflow_check_data_avail() does:

	msk->mp_fail_subflow = subflow;
	msk->mp_fail_stamp = jiffies;
	sk_reset_timer((struct sock *)msk,
		       &((struct sock *)msk)->sk_timer,
		       jiffies + TCP_RTO_MAX);

- no changes to mptcp_timeout_timer()

- mptcp_worker() does:

	if (msk->mp_fail_subflow &&
	    time_after(jiffies, msk->mp_fail_stamp)) {
		subflow = msk->mp_fail_subflow;
		pr_debug("MP_FAIL doesn't respond, reset the subflow");
		ssk = mptcp_subflow_tcp_sock(subflow);
		slow = lock_sock_fast(ssk);
		mptcp_subflow_reset(ssk);
		unlock_sock_fast(ssk, slow);
	}
No need to traverse the subflow list, and mp_fail will co-exist with the
close timeout.
Note: mp_fail_stamp and mp_fail_subflow are new fields.
WDYT?
Thanks!
Paolo
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: mptcp: call mp_fail_no_response only when needed: Tests Results
2022-06-08 9:33 [PATCH mptcp-next v2] mptcp: call mp_fail_no_response only when needed Geliang Tang
2022-06-08 10:48 ` Paolo Abeni
@ 2022-06-08 10:57 ` MPTCP CI
1 sibling, 0 replies; 4+ messages in thread
From: MPTCP CI @ 2022-06-08 10:57 UTC (permalink / raw)
To: Geliang Tang; +Cc: mptcp
Hi Geliang,
Thank you for your modifications, that's great!
Our CI did some validations and here is its report:
- KVM Validation: normal:
- Success! ✅:
- Task: https://cirrus-ci.com/task/5546032812523520
- Summary: https://api.cirrus-ci.com/v1/artifact/task/5546032812523520/summary/summary.txt
- KVM Validation: debug:
- Unstable: 3 failed test(s): packetdrill_add_addr selftest_diag selftest_mptcp_join 🔴:
- Task: https://cirrus-ci.com/task/6671932719366144
- Summary: https://api.cirrus-ci.com/v1/artifact/task/6671932719366144/summary/summary.txt
Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/c4865fd329ad
If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:
$ cd [kernel source code]
$ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
--pull always mptcp/mptcp-upstream-virtme-docker:latest \
auto-debug
For more details:
https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: mptcp: call mp_fail_no_response only when needed: Tests Results
2022-06-08 6:36 [PATCH mptcp-next] mptcp: call mp_fail_no_response only when needed Geliang Tang
@ 2022-06-08 8:15 ` MPTCP CI
0 siblings, 0 replies; 4+ messages in thread
From: MPTCP CI @ 2022-06-08 8:15 UTC (permalink / raw)
To: Geliang Tang; +Cc: mptcp
Hi Geliang,
Thank you for your modifications, that's great!
Our CI did some validations and here is its report:
- KVM Validation: normal:
- Success! ✅:
- Task: https://cirrus-ci.com/task/5856398490730496
- Summary: https://api.cirrus-ci.com/v1/artifact/task/5856398490730496/summary/summary.txt
- KVM Validation: debug:
- Unstable: 3 failed test(s): packetdrill_sockopts selftest_diag selftest_mptcp_join 🔴:
- Task: https://cirrus-ci.com/task/5293448537309184
- Summary: https://api.cirrus-ci.com/v1/artifact/task/5293448537309184/summary/summary.txt
Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/527585a8ae40
If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:
$ cd [kernel source code]
$ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
--pull always mptcp/mptcp-upstream-virtme-docker:latest \
auto-debug
For more details:
https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)
^ permalink raw reply [flat|nested] 4+ messages in thread
end of thread, other threads:[~2022-06-08 10:57 UTC | newest]
Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-06-08 9:33 [PATCH mptcp-next v2] mptcp: call mp_fail_no_response only when needed Geliang Tang
2022-06-08 10:48 ` Paolo Abeni
2022-06-08 10:57 ` mptcp: call mp_fail_no_response only when needed: Tests Results MPTCP CI
-- strict thread matches above, loose matches on Subject: below --
2022-06-08 6:36 [PATCH mptcp-next] mptcp: call mp_fail_no_response only when needed Geliang Tang
2022-06-08 8:15 ` mptcp: call mp_fail_no_response only when needed: Tests Results MPTCP CI