* [PATCH mptcp-next] Squash-to: "mptcp: faster active backup recovery"
@ 2021-07-14 8:55 Paolo Abeni
0 siblings, 0 replies; only message in thread
From: Paolo Abeni @ 2021-07-14 8:55 UTC (permalink / raw)
To: mptcp
kbuildbot rightfully complained about an uninitialized variable.
We can drop that variable altogether and always try to push
the pending data: after the stale event we can possibly use a
backup subflow to flush out the tx queue.
Additionally, fix the doc a bit.
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
Documentation/networking/mptcp-sysctl.rst | 12 ++++++------
net/mptcp/pm_netlink.c | 12 +++++++-----
2 files changed, 13 insertions(+), 11 deletions(-)
diff --git a/Documentation/networking/mptcp-sysctl.rst b/Documentation/networking/mptcp-sysctl.rst
index 45fa8b2aefa8..b0d4da71e68e 100644
--- a/Documentation/networking/mptcp-sysctl.rst
+++ b/Documentation/networking/mptcp-sysctl.rst
@@ -47,12 +47,12 @@ allow_join_initial_addr_port - BOOLEAN
Default: 1
stale_loss_cnt - INTEGER
- The number of MPTCP-level retransmission intervals with no traffic and
- pending outstanding data on a given subflow required to declare it stale.
- The packet scheduler ignores stale subflows.
- A low stale_loss_cnt value allows for fast active-backup switch-over,
- an high value maximixe links utilization on edge scenarios e.g. lossy
- link with high BER or peer pausing the data processing.
+ The number of MPTCP-level retransmission intervals with no traffic and
+ pending outstanding data on a given subflow required to declare it stale.
+ The packet scheduler ignores stale subflows.
+ A low stale_loss_cnt value allows for fast active-backup switch-over,
+ a high value maximizes link utilization in edge scenarios, e.g. a lossy
+ link with high BER or a peer pausing the data processing.
This is a per-namespace sysctl.
diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
index 3d4fa2dd2cea..05a2da20ab31 100644
--- a/net/mptcp/pm_netlink.c
+++ b/net/mptcp/pm_netlink.c
@@ -908,7 +908,7 @@ void mptcp_pm_nl_subflow_chk_stale(const struct mptcp_sock *msk, struct sock *ss
unsigned int active_max_loss_cnt;
struct net *net = sock_net(sk);
unsigned int stale_loss_cnt;
- bool slow, push;
+ bool slow;
stale_loss_cnt = mptcp_stale_loss_cnt(net);
if (subflow->stale || !stale_loss_cnt || subflow->stale_count <= stale_loss_cnt)
@@ -923,14 +923,16 @@ void mptcp_pm_nl_subflow_chk_stale(const struct mptcp_sock *msk, struct sock *ss
slow = lock_sock_fast(ssk);
if (!tcp_rtx_and_write_queues_empty(ssk)) {
subflow->stale = 1;
- push = __mptcp_retransmit_pending_data(sk);
+ __mptcp_retransmit_pending_data(sk);
MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_SUBFLOWSTALE);
}
unlock_sock_fast(ssk, slow);
- /* pending data on the idle subflow: retransmit */
- if (push)
- __mptcp_push_pending(sk, 0);
+ /* always try to push the pending data regardless of re-injections:
+ * we can possibly use backup subflows now, and subflow selection
+ * is cheap under the msk socket lock
+ */
+ __mptcp_push_pending(sk, 0);
return;
}
}
--
2.26.3
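For reference, the `stale_loss_cnt` sysctl touched by the documentation hunk above can be tuned per network namespace. A minimal sketch (the value `2` is just an illustration, not a recommended setting):

```shell
# Lower stale_loss_cnt so a subflow with pending data and no traffic is
# declared stale after fewer MPTCP-level retransmission intervals,
# allowing a faster active-backup switch-over.
sysctl -w net.mptcp.stale_loss_cnt=2

# Read the current value back; this applies only to the current netns.
sysctl -n net.mptcp.stale_loss_cnt
```

A higher value keeps lossy-but-alive links in use longer; a lower value trades that for quicker failover to a backup subflow.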