From: "Mohamed Abuelfotoh, Hazem" <abuehaze@amazon.com>
To: "netdev@vger.kernel.org" <netdev@vger.kernel.org>
Cc: "stable@vger.kernel.org" <stable@vger.kernel.org>,
	"edumazet@google.com" <edumazet@google.com>,
	"ycheng@google.com" <ycheng@google.com>,
	"ncardwell@google.com" <ncardwell@google.com>,
	"weiwan@google.com" <weiwan@google.com>,
	"Strohman, Andy" <astroh@amazon.com>,
	"Herrenschmidt, Benjamin" <benh@amazon.com>
Subject: Re: [PATCH net-next] tcp: optimise  receiver buffer autotuning initialisation for high latency connections
Date: Fri, 4 Dec 2020 18:19:37 +0000	[thread overview]
Message-ID: <44E3AA29-F033-4B8E-A1BC-E38824B5B1E3@amazon.com> (raw)
In-Reply-To: <20201204180622.14285-1-abuehaze@amazon.com>


Hey Team,

I am sending you this e-mail as a follow-up to provide more context about the patch that I proposed in my previous e-mail.


1) We have received a customer complaint [1] about degraded download speeds from Google endpoints after upgrading their Ubuntu kernel from 4.14 to 5.4. These customers were getting around 80MB/s on kernel 4.14, which dropped to 3MB/s after the upgrade to kernel 5.4.
2) We tried to reproduce the issue locally between EC2 instances within the same region but could not; however, we were able to reproduce it when fetching data from a Google endpoint.
3) The issue could only be reproduced in regions where we have a high RTT (around 12 msec or more) to the Google endpoints.
4) We have found some workarounds that can be applied on the receiver side which have proven effective; they are listed below:
            A) Decrease the TCP socket default rmem from 131072 to 87380.
            B) Decrease the MTU from 9001 to 1500.
            C) Change sysctl_tcp_adv_win_scale from the default 1 to 0 or 2.
            D) We have also found that disabling net.ipv4.tcp_moderate_rcvbuf on kernel 4.14 gives exactly the same bad performance.
5) We did a kernel bisection to understand when this behaviour was introduced and found that commit a337531b942b ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB") [2], which was merged into kernel 4.19.86, is the culprit behind this download performance degradation. The commit made two main changes:
A) Raising the initial TCP receive buffer size and receive window.
B) Changing the way TCP Dynamic Right Sizing (DRS) is kicked off.

6) The above patch introduced a regression: receive window scaling took too long after the initial receive buffer and receive window were raised. There was an additional fix for that in commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner") [3].

7) Commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner") tried to decrease the initial rcvq_space.space, which is used in TCP's internal auto-tuning to grow socket buffers based on how much data the kernel estimates the sender can send, and which should change over the life of the connection based on how much data the sender is actually sending. That patch relies on advmss (the MSS configured on the receiver) to set the initial receive space. While this works very well for receivers with small MTUs like 1500, it doesn't help when the receiver is configured to use jumbo frames (MTU 9001), which is the default MTU on AWS EC2 instances; this is why we think the problem hasn't been reported before, besides the fact that a high RTT (>=12 msec) is also required to see the issue.
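To make the effect of advmss concrete, here is a minimal userspace sketch (not the kernel code, just the same min(rcv_wnd, TCP_INIT_CWND * advmss) arithmetic); the advmss and rcv_wnd values are illustrative assumptions, roughly MTU minus headers and the ~64KB initial window from commit a337531b942b:

    #include <stdio.h>

    #define TCP_INIT_CWND 10

    /* Same arithmetic tcp_init_buffer_space() uses pre-patch:
     * the autotuning threshold is min(rcv_wnd, TCP_INIT_CWND * advmss).
     */
    static unsigned int initial_rcvq_space(unsigned int rcv_wnd, unsigned int advmss)
    {
            unsigned int bytes = TCP_INIT_CWND * advmss;

            return bytes < rcv_wnd ? bytes : rcv_wnd;
    }

    int main(void)
    {
            unsigned int rcv_wnd = 65535;   /* ~64KB initial receive window (illustrative) */

            /* MTU 1500 -> advmss ~1460: threshold 14600, far below the window */
            printf("MTU 1500: %u\n", initial_rcvq_space(rcv_wnd, 1460));
            /* MTU 9001 -> advmss ~8949: threshold is capped at the window itself */
            printf("MTU 9001: %u\n", initial_rcvq_space(rcv_wnd, 8949));
            return 0;
    }

With jumbo frames the threshold ends up as large as the initial window itself, so the receiver essentially has to fill the whole window within one RTT before autotuning can start.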

8) After further debugging and testing we found that the issue can only be reproduced under one of the conditions below:
A) Sender (MTU 1500) using bbr/bbrv2 as congestion control algorithm ——> Receiver (MTU 9001) with default ipv4.sysctl_tcp_rmem[1] = 131072, running kernel 4.19.86 or later, with RTT >= 12 msec ——> consistently reproducible.
B) Sender (MTU 1500) using cubic as congestion control algorithm with fq as qdisc ——> Receiver (MTU 9001) with default ipv4.sysctl_tcp_rmem[1] = 131072, running kernel 4.19.86 or later, with RTT >= 30 msec ——> consistently reproducible.
C) Sender (MTU 1500) using cubic as congestion control algorithm with pfifo_fast as qdisc ——> Receiver (MTU 9001) with default ipv4.sysctl_tcp_rmem[1] = 131072, running kernel 4.19.86 or later, with RTT >= 30 msec ——> intermittently reproducible.
D) The sender needs an MTU of 1500. If the sender is using an MTU of 9001 with no MSS clamping, we couldn't reproduce the issue.
E) AWS EC2 instances use 9001 as the default MTU, hence they are likely more impacted by this.


9) With some kernel hacking and packet capture analysis we found that the main issue is that, under the above conditions, the receive window never scales up because TCP receiver autotuning never kicks off. I have attached screenshots to this e-mail showing window scaling with and without the proposed patch.
We also found that all the workarounds either decrease the initial rcvq_space (this includes decreasing the receiver advertised MSS from 9001 to 1500, or the default receive buffer size from 131072 to 87380) or increase the maximum advertised receive window before TCP autotuning starts scaling (this includes changing net.ipv4.tcp_adv_win_scale from 1 to 0 or 2); the sketch below shows how tcp_adv_win_scale affects the advertised window.
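For reference, a simplified sketch of how the advertised window is derived from receive buffer space; to the best of my reading this mirrors the logic of the kernel's tcp_win_from_space(), and the 131072 figure below is just the default ipv4.sysctl_tcp_rmem[1] mentioned above:

    /* Simplified model of tcp_win_from_space(): how much of the receive
     * buffer may be advertised as window, depending on tcp_adv_win_scale.
     */
    static int win_from_space(int space, int tcp_adv_win_scale)
    {
            return tcp_adv_win_scale <= 0 ?
                   space >> (-tcp_adv_win_scale) :
                   space - (space >> tcp_adv_win_scale);
    }

    /* With space = 131072:
     *   tcp_adv_win_scale = 1 (default) -> 131072 - 65536 = 65536
     *   tcp_adv_win_scale = 2           -> 131072 - 32768 = 98304
     *   tcp_adv_win_scale = 0           -> 131072
     * which is why moving the sysctl away from 1 raises the advertised
     * window above the autotuning threshold and works around the issue.
     */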

10) It looks like when the issue happens we have a kind of deadlock: the advertised receive window has to exceed rcvq_space for TCP autotuning to kick off, but with the initial default configuration the receive window is never going to exceed rcvq_space, because it can only reach half of the initial receive socket buffer size. The worked example below puts illustrative numbers on this.
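Putting the two previous sketches together with assumed default values (these numbers are illustrative, not measured on a live connection):

    #include <stdio.h>

    int main(void)
    {
            /* Assumed defaults (illustrative):                          */
            int rmem_default  = 131072; /* ipv4.sysctl_tcp_rmem[1]       */
            int adv_win_scale = 1;      /* net.ipv4.tcp_adv_win_scale    */
            int rcv_wnd       = 65535;  /* ~64KB initial receive window  */
            int advmss        = 8949;   /* jumbo-frame receiver MSS      */

            int threshold  = rcv_wnd < 10 * advmss ? rcv_wnd : 10 * advmss;
            int max_window = rmem_default - (rmem_default >> adv_win_scale);

            printf("autotuning threshold : %d\n", threshold);   /* 65535 */
            printf("max advertised window: %d\n", max_window);  /* 65536 */

            /* The receiver can copy at most ~max_window bytes per RTT, which
             * never comfortably exceeds the threshold, so autotuning never
             * starts and the window stays pinned near its initial size.
             */
            return 0;
    }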

11) The current code (based on that patch) has a main drawback which should be handled:
A) It relies on the receiver's configured MTU to define the initial receive space (the threshold where TCP autotuning starts). As mentioned above, this works well with a 1500 MTU because it ensures the initial receive space is lower than the receive window, so TCP autotuning works just fine; it won't work with jumbo frames in use on the receiver, because in that case the receiver won't start TCP autotuning, especially with high RTT, and we will be hitting the regression that commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner") was trying to handle.
12) I am proposing the patch below, which relies on RCV_MSS (our guess about the MSS used by the peer, equal to TCP_MSS_DEFAULT of 536 bytes by default); this should work regardless of the receiver's configured MSS. I am also sharing my iperf test results with and without the patch, and I have verified that the connection won't get stuck in the middle in case of packet loss or a latency spike, which I emulated using tc netem on the sender side. A trimmed sketch of where the threshold is checked follows.
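For context, here is a heavily trimmed sketch of the part of tcp_rcv_space_adjust() that matters here, from my reading of the code (only the threshold check is shown; the RTT gating and the actual buffer growth are omitted):

    static void tcp_rcv_space_adjust_sketch(struct sock *sk)
    {
            struct tcp_sock *tp = tcp_sk(sk);
            int copied;

            /* bytes copied to user space since the last measurement */
            copied = tp->copied_seq - tp->rcvq_space.seq;

            /* Autotuning only proceeds once more than rcvq_space.space bytes
             * were copied within roughly one RTT; otherwise a new measurement
             * is started and the receive buffer is not grown.
             */
            if (copied <= tp->rcvq_space.space)
                    return;

            /* ... receive buffer and window growth happens here ... */
    }

The proposed patch lowers the initial rcvq_space.space from TCP_INIT_CWND * advmss to TCP_INIT_CWND * icsk_ack.rcv_mss (10 * 536 = 5360 bytes before any data has been received), so this check can be satisfied early regardless of the receiver's MTU.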


Test Results using the same sender & receiver:

-Without the proposed patch:

#iperf3 -c xx.xx.xx.xx -t15 -i1 -R
Connecting to host xx.xx.xx.xx, port 5201
Reverse mode, remote host xx.xx.xx.xx is sending
[  4] local 172.31.37.167 port 52838 connected to xx.xx.xx.xx port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   269 KBytes  2.20 Mbits/sec
[  4]   1.00-2.00   sec   332 KBytes  2.72 Mbits/sec
[  4]   2.00-3.00   sec   334 KBytes  2.73 Mbits/sec
[  4]   3.00-4.00   sec   335 KBytes  2.75 Mbits/sec
[  4]   4.00-5.00   sec   332 KBytes  2.72 Mbits/sec
[  4]   5.00-6.00   sec   283 KBytes  2.32 Mbits/sec
[  4]   6.00-7.00   sec   332 KBytes  2.72 Mbits/sec
[  4]   7.00-8.00   sec   335 KBytes  2.75 Mbits/sec
[  4]   8.00-9.00   sec   335 KBytes  2.75 Mbits/sec
[  4]   9.00-10.00  sec   334 KBytes  2.73 Mbits/sec
[  4]  10.00-11.00  sec   332 KBytes  2.72 Mbits/sec
[  4]  11.00-12.00  sec   332 KBytes  2.72 Mbits/sec
[  4]  12.00-13.00  sec   338 KBytes  2.77 Mbits/sec
[  4]  13.00-14.00  sec   334 KBytes  2.73 Mbits/sec
[  4]  14.00-15.00  sec   332 KBytes  2.72 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-15.00  sec  6.07 MBytes  3.39 Mbits/sec    0             sender
[  4]   0.00-15.00  sec  4.90 MBytes  2.74 Mbits/sec                  receiver

iperf Done.


Test downloading from google endpoint:

# wget https://storage.googleapis.com/kubernetes-release/release/v1.18.9/bin/linux/amd64/kubelet
--2020-12-04 16:53:00--  https://storage.googleapis.com/kubernetes-release/release/v1.18.9/bin/linux/amd64/kubelet
Resolving storage.googleapis.com (storage.googleapis.com)... 172.217.1.48, 172.217.8.176, 172.217.4.48, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|172.217.1.48|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 113320760 (108M) [application/octet-stream]
Saving to: ‘kubelet.45’

100%[===================================================================================================================================================>] 113,320,760 3.04MB/s   in 36s

2020-12-04 16:53:36 (3.02 MB/s) - ‘kubelet’ saved [113320760/113320760]


########################################################################################################################

-With the proposed patch:

#iperf3 -c xx.xx.xx.xx -t15 -i1 -R
Connecting to host xx.xx.xx.xx, port 5201
Reverse mode, remote host xx.xx.xx.xx is sending
[  4] local 172.31.37.167 port 44514 connected to xx.xx.xx.xx port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   911 KBytes  7.46 Mbits/sec
[  4]   1.00-2.00   sec  8.95 MBytes  75.1 Mbits/sec
[  4]   2.00-3.00   sec  9.57 MBytes  80.3 Mbits/sec
[  4]   3.00-4.00   sec  9.56 MBytes  80.2 Mbits/sec
[  4]   4.00-5.00   sec  9.58 MBytes  80.3 Mbits/sec
[  4]   5.00-6.00   sec  9.58 MBytes  80.4 Mbits/sec
[  4]   6.00-7.00   sec  9.59 MBytes  80.4 Mbits/sec
[  4]   7.00-8.00   sec  9.59 MBytes  80.5 Mbits/sec
[  4]   8.00-9.00   sec  9.58 MBytes  80.4 Mbits/sec
[  4]   9.00-10.00  sec  9.58 MBytes  80.4 Mbits/sec
[  4]  10.00-11.00  sec  9.59 MBytes  80.4 Mbits/sec
[  4]  11.00-12.00  sec  9.59 MBytes  80.5 Mbits/sec
[  4]  12.00-13.00  sec  8.05 MBytes  67.5 Mbits/sec
[  4]  13.00-14.00  sec  9.57 MBytes  80.3 Mbits/sec
[  4]  14.00-15.00  sec  9.57 MBytes  80.3 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-15.00  sec   136 MBytes  76.3 Mbits/sec    0             sender
[  4]   0.00-15.00  sec   134 MBytes  75.2 Mbits/sec                  receiver

iperf Done.

Test downloading from google endpoint:


# wget https://storage.googleapis.com/kubernetes-release/release/v1.18.9/bin/linux/amd64/kubelet
--2020-12-04 16:54:34--  https://storage.googleapis.com/kubernetes-release/release/v1.18.9/bin/linux/amd64/kubelet
Resolving storage.googleapis.com (storage.googleapis.com)... 172.217.0.16, 216.58.192.144, 172.217.6.16, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|172.217.0.16|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 113320760 (108M) [application/octet-stream]
Saving to: ‘kubelet’

100%[===================================================================================================================================================>] 113,320,760 80.0MB/s   in 1.4s

2020-12-04 16:54:36 (80.0 MB/s) - ‘kubelet.1’ saved [113320760/113320760]

Links:

[1] https://github.com/kubernetes/kops/issues/10206
[2] https://lore.kernel.org/patchwork/patch/1157936/
[3] https://lore.kernel.org/patchwork/patch/1157883/



Thank you.

Hazem

On 04/12/2020, 18:08, "Hazem Mohamed Abuelfotoh" <abuehaze@amazon.com> wrote:

        Previously, receiver buffer auto-tuning started only after receiving
        one advertised window amount of data. After the initial
        receiver buffer was raised by
        commit a337531b942b ("tcp: up initial rmem to 128KB
        and SYN rwin to around 64KB"), it may take
        too long for TCP autotuning to start raising
        the receiver buffer size.
        commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
        tried to decrease the threshold at which TCP auto-tuning starts,
        but it doesn't work well in some environments
        where the receiver has a large MTU (9001) configured,
        especially when the RTT is high.
        To address this issue, this patch relies on RCV_MSS
        so auto-tuning can start early regardless of
        the receiver's configured MTU.

        Fixes: a337531b942b ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB")
        Fixes: 041a14d26715 ("tcp: start receiver buffer autotuning sooner")

    Signed-off-by: Hazem Mohamed Abuelfotoh <abuehaze@amazon.com>
    ---
     net/ipv4/tcp_input.c | 3 ++-
     1 file changed, 2 insertions(+), 1 deletion(-)

    diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
    index 389d1b340248..f0ffac9e937b 100644
    --- a/net/ipv4/tcp_input.c
    +++ b/net/ipv4/tcp_input.c
    @@ -504,13 +504,14 @@ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb)
     static void tcp_init_buffer_space(struct sock *sk)
     {
     	int tcp_app_win = sock_net(sk)->ipv4.sysctl_tcp_app_win;
    +	struct inet_connection_sock *icsk = inet_csk(sk);
     	struct tcp_sock *tp = tcp_sk(sk);
     	int maxwin;

     	if (!(sk->sk_userlocks & SOCK_SNDBUF_LOCK))
     		tcp_sndbuf_expand(sk);

    -	tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * tp->advmss);
    +	tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * icsk->icsk_ack.rcv_mss);
     	tcp_mstamp_refresh(tp);
     	tp->rcvq_space.time = tp->tcp_mstamp;
     	tp->rcvq_space.seq = tp->copied_seq;
    -- 
    2.16.6





Amazon Web Services EMEA SARL, 38 avenue John F. Kennedy, L-1855 Luxembourg, R.C.S. Luxembourg B186284

Amazon Web Services EMEA SARL, Irish Branch, One Burlington Plaza, Burlington Road, Dublin 4, Ireland, branch registration number 908705



[-- Attachment #2: sender-bbr-bad-unpatched.png --]
[-- Type: image/png, Size: 125913 bytes --]

[-- Attachment #3: sender-bbr-good-patched.png --]
[-- Type: image/png, Size: 119698 bytes --]
