* [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections
@ 2020-12-04 18:06 Hazem Mohamed Abuelfotoh
  2020-12-04 18:19 ` Mohamed Abuelfotoh, Hazem
                   ` (2 more replies)
  0 siblings, 3 replies; 26+ messages in thread
From: Hazem Mohamed Abuelfotoh @ 2020-12-04 18:06 UTC (permalink / raw)
  To: netdev
  Cc: stable, edumazet, ycheng, ncardwell, weiwan, astroh, benh,
	Hazem Mohamed Abuelfotoh

    Previously, receiver buffer auto-tuning started after receiving
    one advertised window's worth of data. After the initial
    receiver buffer was raised by
    commit a337531b942b ("tcp: up initial rmem to 128KB
    and SYN rwin to around 64KB"), it may take too long
    for TCP autotuning to start raising
    the receiver buffer size.
    commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
    tried to decrease the threshold at which TCP auto-tuning starts,
    but it doesn't work well in some environments
    where the receiver has a large MTU (9001) configured,
    especially within environments where RTT is high.
    To address this issue, this patch relies on RCV_MSS
    so auto-tuning can start early regardless of
    the receiver's configured MTU.

    Fixes: a337531b942b ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB")
    Fixes: 041a14d26715 ("tcp: start receiver buffer autotuning sooner")

Signed-off-by: Hazem Mohamed Abuelfotoh <abuehaze@amazon.com>
---
 net/ipv4/tcp_input.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 389d1b340248..f0ffac9e937b 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -504,13 +504,14 @@ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb)
 static void tcp_init_buffer_space(struct sock *sk)
 {
 	int tcp_app_win = sock_net(sk)->ipv4.sysctl_tcp_app_win;
+	struct inet_connection_sock *icsk = inet_csk(sk);
 	struct tcp_sock *tp = tcp_sk(sk);
 	int maxwin;
 
 	if (!(sk->sk_userlocks & SOCK_SNDBUF_LOCK))
 		tcp_sndbuf_expand(sk);
 
-	tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * tp->advmss);
+	tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * icsk->icsk_ack.rcv_mss);
 	tcp_mstamp_refresh(tp);
 	tp->rcvq_space.time = tp->tcp_mstamp;
 	tp->rcvq_space.seq = tp->copied_seq;
-- 
2.16.6








^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next] tcp: optimise  receiver buffer autotuning initialisation for high latency connections
  2020-12-04 18:06 [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections Hazem Mohamed Abuelfotoh
@ 2020-12-04 18:19 ` Mohamed Abuelfotoh, Hazem
  2020-12-04 18:41   ` Eric Dumazet
  2020-12-04 19:10 ` Eric Dumazet
  2020-12-04 21:28 ` Neal Cardwell
  2 siblings, 1 reply; 26+ messages in thread
From: Mohamed Abuelfotoh, Hazem @ 2020-12-04 18:19 UTC (permalink / raw)
  To: netdev
  Cc: stable, edumazet, ycheng, ncardwell, weiwan, Strohman, Andy,
	Herrenschmidt, Benjamin

[-- Attachment #1: Type: text/plain, Size: 13343 bytes --]

Hey Team,

I am sending you this e-mail as a follow-up to provide more context about the patch that I proposed in my previous e-mail.


1)We have received a customer complaint[1] about degraded download speed from Google endpoints after they upgraded their Ubuntu kernel from 4.14 to 5.4. These customers were getting around 80MB/s on kernel 4.14, which dropped to 3MB/s after the upgrade to kernel 5.4.
2)We tried to reproduce the issue locally between EC2 instances within the same region but couldn't; however, we were able to reproduce it when fetching data from a Google endpoint.
3)The issue could only be reproduced in regions where we have high RTT (around 12 msec or more) to Google endpoints.
4)We have found some workarounds that can be applied on the receiver side which have proven to be effective, and I am listing them below:
            A)Decrease the TCP socket default rmem from 131072 to 87380.
            B)Decrease the MTU from 9001 to 1500.
            C)Change sysctl_tcp_adv_win_scale from the default 1 to 0 or 2.
            D)We have also found that disabling net.ipv4.tcp_moderate_rcvbuf on kernel 4.14 gives exactly the same bad performance.
5)We have done a kernel bisect to understand when this behaviour was introduced and found that commit a337531b942b ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB")[2], which was merged into mainline kernel 4.19.86, is the culprit behind this download performance degradation. The commit made two main changes:
A)Raising the initial TCP receive buffer size and receive window.
B)Changing the way TCP Dynamic Right Sizing (DRS) is kicked off.

6)The above patch introduced a regression causing receive window scaling to take a long time after raising the initial receiver buffer & receive window; there was an additional fix for that in commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner")[3].

7)Commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner") tried to decrease the initial rcvq_space.space, which is used in TCP's internal auto-tuning to grow socket buffers based on how much data the kernel estimates the sender can send; it should change over the life of any connection based on the amount of data the sender is sending. That patch relies on advmss (the MSS configured on the receiver) to identify the initial receive space. Although this works very well for receivers with small MTUs like 1500, it doesn't help if the receiver is configured to use jumbo frames (9001 MTU), which is the default MTU on AWS EC2 instances, and this is why we think this hasn't been reported before, besides the high RTT (>=12 msec) required to see the issue.

8)After further debugging and testing we have found that the issue can only be reproduced under any of the conditions below:
A)Sender (MTU 1500) using bbr/bbrv2 as the congestion control algorithm --> Receiver (MTU 9001) with default ipv4.sysctl_tcp_rmem[1] = 131072 running kernel 4.19.86 or later with RTT >=12 msec --> consistently reproducible.
B)Sender (MTU 1500) using cubic as the congestion control algorithm with fq as qdisc --> Receiver (MTU 9001) with default ipv4.sysctl_tcp_rmem[1] = 131072 running kernel 4.19.86 or later with RTT >=30 msec --> consistently reproducible.
C)Sender (MTU 1500) using cubic as the congestion control algorithm with pfifo_fast as qdisc --> Receiver (MTU 9001) with default ipv4.sysctl_tcp_rmem[1] = 131072 running kernel 4.19.86 or later with RTT >=30 msec --> intermittently reproducible.
D)The sender needs an MTU of 1500. If the sender is using an MTU of 9001 with no MSS clamping, then we couldn't reproduce the issue.
E)AWS EC2 instances use 9001 as the MTU by default, hence they are likely more impacted by this.


9)With some kernel hacking & packet capture analysis we found that the main issue is that, under the above-mentioned conditions, the receive window never scales up; it looks like TCP receiver autotuning never kicks off. I have attached to this e-mail screenshots showing window scaling with and without the proposed patch.
We also found that all workarounds either decrease the initial rcvq_space (this includes decreasing the receiver advertised MSS from 9001 to 1500 or the default receive buffer size from 131072 to 87380) or increase the maximum advertised receive window (before TCP autotuning starts scaling); the latter includes changing net.ipv4.tcp_adv_win_scale from 1 to 0 or 2.

10)It looks like when the issue happens we have a kind of deadlock here: the advertised receive window has to exceed rcvq_space for TCP auto-tuning to kick off, but at the same time, with the initial default configuration, the receive window is never going to exceed rcvq_space, because it can only get half of the initial receive socket buffer size.

11)The current code based on that patch has one main drawback which should be handled:
A)It relies on the receiver's configured MTU to define the initial receive space (the threshold where TCP autotuning starts). As mentioned above, this works well with a 1500 MTU because it ensures that the initial receive space is lower than the receive window, so TCP autotuning will work just fine; it won't work with jumbo frames in use on the receiver, because in that case the receiver won't start TCP autotuning, especially with high RTT, and we will be hitting the regression that commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner") was trying to handle.
12)I am proposing the patch below, which relies on RCV_MSS (our guess about the MSS used by the peer, which is equal to TCP_MSS_DEFAULT, 536 bytes, by default); this should work regardless of the receiver's configured MTU, and the sketch right after this list walks through the resulting numbers. I am also sharing my iperf test results with and without the patch, and I have verified that the connection won't get stuck in the middle in case of packet loss or a latency spike, which I emulated using tc netem on the sender side.
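
To make the numbers above concrete, here is a minimal standalone sketch (ordinary userspace C, not kernel code; the MSS values are illustrative assumptions) of how the initial rcvq_space.space compares with the initial receive window for a 1500-MTU versus a 9001-MTU receiver, and what the proposed RCV_MSS-based formula yields instead:

#include <stdio.h>

#define TCP_INIT_CWND   10
#define TCP_MSS_DEFAULT 536   /* initial RCV_MSS guess before any data arrives */

static unsigned int min_u32(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

int main(void)
{
	unsigned int rcv_wnd = 65535;     /* ~64KB initial receive window   */
	unsigned int advmss_1500 = 1460;  /* 1500 MTU - 40 bytes of headers */
	unsigned int advmss_9001 = 8961;  /* 9001 MTU - 40 bytes of headers */

	/* Current code: threshold derived from the receiver's advmss. */
	printf("advmss 1500: space = %u\n",
	       min_u32(rcv_wnd, TCP_INIT_CWND * advmss_1500));     /* 14600 */
	printf("advmss 9001: space = %u\n",
	       min_u32(rcv_wnd, TCP_INIT_CWND * advmss_9001));     /* 65535 == rcv_wnd */

	/* Proposed code: threshold derived from RCV_MSS instead. */
	printf("rcv_mss:     space = %u\n",
	       min_u32(rcv_wnd, TCP_INIT_CWND * TCP_MSS_DEFAULT)); /* 5360 */
	return 0;
}

In the 9001-MTU case the threshold collapses to the receive window itself, which is exactly the deadlock described in point 10; the RCV_MSS-based value of 5360 bytes is small enough to be exceeded in a single round trip.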


Test Results using the same sender & receiver:

-Without our proposed patch

#iperf3 -c xx.xx.xx.xx -t15 -i1 -R
Connecting to host xx.xx.xx.xx, port 5201
Reverse mode, remote host xx.xx.xx.xx is sending
[  4] local 172.31.37.167 port 52838 connected to xx.xx.xx.xx port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   269 KBytes  2.20 Mbits/sec
[  4]   1.00-2.00   sec   332 KBytes  2.72 Mbits/sec
[  4]   2.00-3.00   sec   334 KBytes  2.73 Mbits/sec
[  4]   3.00-4.00   sec   335 KBytes  2.75 Mbits/sec
[  4]   4.00-5.00   sec   332 KBytes  2.72 Mbits/sec
[  4]   5.00-6.00   sec   283 KBytes  2.32 Mbits/sec
[  4]   6.00-7.00   sec   332 KBytes  2.72 Mbits/sec
[  4]   7.00-8.00   sec   335 KBytes  2.75 Mbits/sec
[  4]   8.00-9.00   sec   335 KBytes  2.75 Mbits/sec
[  4]   9.00-10.00  sec   334 KBytes  2.73 Mbits/sec
[  4]  10.00-11.00  sec   332 KBytes  2.72 Mbits/sec
[  4]  11.00-12.00  sec   332 KBytes  2.72 Mbits/sec
[  4]  12.00-13.00  sec   338 KBytes  2.77 Mbits/sec
[  4]  13.00-14.00  sec   334 KBytes  2.73 Mbits/sec
[  4]  14.00-15.00  sec   332 KBytes  2.72 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-15.00  sec  6.07 MBytes  3.39 Mbits/sec    0             sender
[  4]   0.00-15.00  sec  4.90 MBytes  2.74 Mbits/sec                  receiver

iperf Done.


Test downloading from google endpoint:

# wget https://storage.googleapis.com/kubernetes-release/release/v1.18.9/bin/linux/amd64/kubelet
--2020-12-04 16:53:00--  https://storage.googleapis.com/kubernetes-release/release/v1.18.9/bin/linux/amd64/kubelet
Resolving storage.googleapis.com (storage.googleapis.com)... 172.217.1.48, 172.217.8.176, 172.217.4.48, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|172.217.1.48|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 113320760 (108M) [application/octet-stream]
Saving to: ‘kubelet.45’

100%[===================================================================================================================================================>] 113,320,760 3.04MB/s   in 36s

2020-12-04 16:53:36 (3.02 MB/s) - ‘kubelet’ saved [113320760/113320760]


########################################################################################################################

-With the proposed  patch:

#iperf3 -c xx.xx.xx.xx -t15 -i1 -R
Connecting to host xx.xx.xx.xx, port 5201
Reverse mode, remote host xx.xx.xx.xx is sending
[  4] local 172.31.37.167 port 44514 connected to xx.xx.xx.xx port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   911 KBytes  7.46 Mbits/sec
[  4]   1.00-2.00   sec  8.95 MBytes  75.1 Mbits/sec
[  4]   2.00-3.00   sec  9.57 MBytes  80.3 Mbits/sec
[  4]   3.00-4.00   sec  9.56 MBytes  80.2 Mbits/sec
[  4]   4.00-5.00   sec  9.58 MBytes  80.3 Mbits/sec
[  4]   5.00-6.00   sec  9.58 MBytes  80.4 Mbits/sec
[  4]   6.00-7.00   sec  9.59 MBytes  80.4 Mbits/sec
[  4]   7.00-8.00   sec  9.59 MBytes  80.5 Mbits/sec
[  4]   8.00-9.00   sec  9.58 MBytes  80.4 Mbits/sec
[  4]   9.00-10.00  sec  9.58 MBytes  80.4 Mbits/sec
[  4]  10.00-11.00  sec  9.59 MBytes  80.4 Mbits/sec
[  4]  11.00-12.00  sec  9.59 MBytes  80.5 Mbits/sec
[  4]  12.00-13.00  sec  8.05 MBytes  67.5 Mbits/sec
[  4]  13.00-14.00  sec  9.57 MBytes  80.3 Mbits/sec
[  4]  14.00-15.00  sec  9.57 MBytes  80.3 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-15.00  sec   136 MBytes  76.3 Mbits/sec    0             sender
[  4]   0.00-15.00  sec   134 MBytes  75.2 Mbits/sec                  receiver

iperf Done.

Test downloading from google endpoint:


# wget https://storage.googleapis.com/kubernetes-release/release/v1.18.9/bin/linux/amd64/kubelet
--2020-12-04 16:54:34--  https://storage.googleapis.com/kubernetes-release/release/v1.18.9/bin/linux/amd64/kubelet
Resolving storage.googleapis.com (storage.googleapis.com)... 172.217.0.16, 216.58.192.144, 172.217.6.16, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|172.217.0.16|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 113320760 (108M) [application/octet-stream]
Saving to: ‘kubelet’

100%[===================================================================================================================================================>] 113,320,760 80.0MB/s   in 1.4s

2020-12-04 16:54:36 (80.0 MB/s) - ‘kubelet.1’ saved [113320760/113320760]

Links:

[1] https://github.com/kubernetes/kops/issues/10206
[2] https://lore.kernel.org/patchwork/patch/1157936/
[3] https://lore.kernel.org/patchwork/patch/1157883/



Thank you.

Hazem

On 04/12/2020, 18:08, "Hazem Mohamed Abuelfotoh" <abuehaze@amazon.com> wrote:

        Previously, receiver buffer auto-tuning started after receiving
        one advertised window's worth of data. After the initial
        receiver buffer was raised by
        commit a337531b942b ("tcp: up initial rmem to 128KB
        and SYN rwin to around 64KB"), it may take too long
        for TCP autotuning to start raising
        the receiver buffer size.
        commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
        tried to decrease the threshold at which TCP auto-tuning starts,
        but it doesn't work well in some environments
        where the receiver has a large MTU (9001) configured,
        especially within environments where RTT is high.
        To address this issue, this patch relies on RCV_MSS
        so auto-tuning can start early regardless of
        the receiver's configured MTU.

        Fixes: a337531b942b ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB")
        Fixes: 041a14d26715 ("tcp: start receiver buffer autotuning sooner")

    Signed-off-by: Hazem Mohamed Abuelfotoh <abuehaze@amazon.com>
    ---
     net/ipv4/tcp_input.c | 3 ++-
     1 file changed, 2 insertions(+), 1 deletion(-)

    diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
    index 389d1b340248..f0ffac9e937b 100644
    --- a/net/ipv4/tcp_input.c
    +++ b/net/ipv4/tcp_input.c
    @@ -504,13 +504,14 @@ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb)
     static void tcp_init_buffer_space(struct sock *sk)
     {
     	int tcp_app_win = sock_net(sk)->ipv4.sysctl_tcp_app_win;
    +	struct inet_connection_sock *icsk = inet_csk(sk);
     	struct tcp_sock *tp = tcp_sk(sk);
     	int maxwin;

     	if (!(sk->sk_userlocks & SOCK_SNDBUF_LOCK))
     		tcp_sndbuf_expand(sk);

    -	tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * tp->advmss);
    +	tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * icsk->icsk_ack.rcv_mss);
     	tcp_mstamp_refresh(tp);
     	tp->rcvq_space.time = tp->tcp_mstamp;
     	tp->rcvq_space.seq = tp->copied_seq;
    -- 
    2.16.6








[-- Attachment #2: sender-bbr-bad-unpatched.png --]
[-- Type: image/png, Size: 125913 bytes --]

[-- Attachment #3: sender-bbr-good-patched.png --]
[-- Type: image/png, Size: 119698 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections
  2020-12-04 18:19 ` Mohamed Abuelfotoh, Hazem
@ 2020-12-04 18:41   ` Eric Dumazet
       [not found]     ` <3F02FF08-EDA6-4DFD-8D93-479A5B05E25A@amazon.com>
  0 siblings, 1 reply; 26+ messages in thread
From: Eric Dumazet @ 2020-12-04 18:41 UTC (permalink / raw)
  To: Mohamed Abuelfotoh, Hazem
  Cc: netdev, stable, ycheng, ncardwell, weiwan, Strohman, Andy,
	Herrenschmidt, Benjamin

On Fri, Dec 4, 2020 at 7:19 PM Mohamed Abuelfotoh, Hazem
<abuehaze@amazon.com> wrote:
>
> Hey Team,
>
> I am sending you this e-mail as a follow-up to provide more context about the patch that I proposed in my previous e-mail.
>
>
> 1)We have received a customer complaint[1] about degraded download speed from Google endpoints after they upgraded their Ubuntu kernel from 4.14 to 5.4. These customers were getting around 80MB/s on kernel 4.14, which dropped to 3MB/s after the upgrade to kernel 5.4.
> 2)We tried to reproduce the issue locally between EC2 instances within the same region but couldn't; however, we were able to reproduce it when fetching data from a Google endpoint.
> 3)The issue could only be reproduced in regions where we have high RTT (around 12 msec or more) to Google endpoints.
> 4)We have found some workarounds that can be applied on the receiver side which have proven to be effective, and I am listing them below:
>             A)Decrease the TCP socket default rmem from 131072 to 87380.
>             B)Decrease the MTU from 9001 to 1500.
>             C)Change sysctl_tcp_adv_win_scale from the default 1 to 0 or 2.
>             D)We have also found that disabling net.ipv4.tcp_moderate_rcvbuf on kernel 4.14 gives exactly the same bad performance.
> 5)We have done a kernel bisect to understand when this behaviour was introduced and found that commit a337531b942b ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB")[2], which was merged into mainline kernel 4.19.86, is the culprit behind this download performance degradation. The commit made two main changes:
> A)Raising the initial TCP receive buffer size and receive window.
> B)Changing the way TCP Dynamic Right Sizing (DRS) is kicked off.
>
> 6)The above patch introduced a regression causing receive window scaling to take a long time after raising the initial receiver buffer & receive window; there was an additional fix for that in commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner")[3].
>
> 7)Commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner") tried to decrease the initial rcvq_space.space, which is used in TCP's internal auto-tuning to grow socket buffers based on how much data the kernel estimates the sender can send; it should change over the life of any connection based on the amount of data the sender is sending. That patch relies on advmss (the MSS configured on the receiver) to identify the initial receive space. Although this works very well for receivers with small MTUs like 1500, it doesn't help if the receiver is configured to use jumbo frames (9001 MTU), which is the default MTU on AWS EC2 instances, and this is why we think this hasn't been reported before, besides the high RTT (>=12 msec) required to see the issue.
>
> 8)After further debugging and testing we have found that the issue can only be reproduced under any of the conditions below:
> A)Sender (MTU 1500) using bbr/bbrv2 as the congestion control algorithm --> Receiver (MTU 9001) with default ipv4.sysctl_tcp_rmem[1] = 131072 running kernel 4.19.86 or later with RTT >=12 msec --> consistently reproducible.
> B)Sender (MTU 1500) using cubic as the congestion control algorithm with fq as qdisc --> Receiver (MTU 9001) with default ipv4.sysctl_tcp_rmem[1] = 131072 running kernel 4.19.86 or later with RTT >=30 msec --> consistently reproducible.
> C)Sender (MTU 1500) using cubic as the congestion control algorithm with pfifo_fast as qdisc --> Receiver (MTU 9001) with default ipv4.sysctl_tcp_rmem[1] = 131072 running kernel 4.19.86 or later with RTT >=30 msec --> intermittently reproducible.
> D)The sender needs an MTU of 1500. If the sender is using an MTU of 9001 with no MSS clamping, then we couldn't reproduce the issue.
> E)AWS EC2 instances use 9001 as the MTU by default, hence they are likely more impacted by this.
>
>
> 9)With some kernel hacking & packet capture analysis we found that the main issue is that, under the above-mentioned conditions, the receive window never scales up; it looks like TCP receiver autotuning never kicks off. I have attached to this e-mail screenshots showing window scaling with and without the proposed patch.
> We also found that all workarounds either decrease the initial rcvq_space (this includes decreasing the receiver advertised MSS from 9001 to 1500 or the default receive buffer size from 131072 to 87380) or increase the maximum advertised receive window (before TCP autotuning starts scaling); the latter includes changing net.ipv4.tcp_adv_win_scale from 1 to 0 or 2.
>
> 10)It looks like when the issue happens we have a kind of deadlock here: the advertised receive window has to exceed rcvq_space for TCP auto-tuning to kick off, but at the same time, with the initial default configuration, the receive window is never going to exceed rcvq_space, because it can only get half of the initial receive socket buffer size.
>
> 11)The current code based on that patch has one main drawback which should be handled:
> A)It relies on the receiver's configured MTU to define the initial receive space (the threshold where TCP autotuning starts). As mentioned above, this works well with a 1500 MTU because it ensures that the initial receive space is lower than the receive window, so TCP autotuning will work just fine; it won't work with jumbo frames in use on the receiver, because in that case the receiver won't start TCP autotuning, especially with high RTT, and we will be hitting the regression that commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner") was trying to handle.
> 12)I am proposing the patch below, which relies on RCV_MSS (our guess about the MSS used by the peer, which is equal to TCP_MSS_DEFAULT, 536 bytes, by default); this should work regardless of the receiver's configured MTU, and the sketch right after this list walks through the resulting numbers. I am also sharing my iperf test results with and without the patch, and I have verified that the connection won't get stuck in the middle in case of packet loss or a latency spike, which I emulated using tc netem on the sender side.
>
>
> Test Results using the same sender & receiver:
>
> -Without our proposed patch
>
> #iperf3 -c xx.xx.xx.xx -t15 -i1 -R
> Connecting to host xx.xx.xx.xx, port 5201
> Reverse mode, remote host xx.xx.xx.xx is sending
> [  4] local 172.31.37.167 port 52838 connected to xx.xx.xx.xx port 5201
> [ ID] Interval           Transfer     Bandwidth
> [  4]   0.00-1.00   sec   269 KBytes  2.20 Mbits/sec
> [  4]   1.00-2.00   sec   332 KBytes  2.72 Mbits/sec
> [  4]   2.00-3.00   sec   334 KBytes  2.73 Mbits/sec
> [  4]   3.00-4.00   sec   335 KBytes  2.75 Mbits/sec
> [  4]   4.00-5.00   sec   332 KBytes  2.72 Mbits/sec
> [  4]   5.00-6.00   sec   283 KBytes  2.32 Mbits/sec
> [  4]   6.00-7.00   sec   332 KBytes  2.72 Mbits/sec
> [  4]   7.00-8.00   sec   335 KBytes  2.75 Mbits/sec
> [  4]   8.00-9.00   sec   335 KBytes  2.75 Mbits/sec
> [  4]   9.00-10.00  sec   334 KBytes  2.73 Mbits/sec
> [  4]  10.00-11.00  sec   332 KBytes  2.72 Mbits/sec
> [  4]  11.00-12.00  sec   332 KBytes  2.72 Mbits/sec
> [  4]  12.00-13.00  sec   338 KBytes  2.77 Mbits/sec
> [  4]  13.00-14.00  sec   334 KBytes  2.73 Mbits/sec
> [  4]  14.00-15.00  sec   332 KBytes  2.72 Mbits/sec
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval           Transfer     Bandwidth       Retr
> [  4]   0.00-15.00  sec  6.07 MBytes  3.39 Mbits/sec    0             sender
> [  4]   0.00-15.00  sec  4.90 MBytes  2.74 Mbits/sec                  receiver
>
> iperf Done.
>
>
> Test downloading from google endpoint:
>
> # wget https://storage.googleapis.com/kubernetes-release/release/v1.18.9/bin/linux/amd64/kubelet
> --2020-12-04 16:53:00--  https://storage.googleapis.com/kubernetes-release/release/v1.18.9/bin/linux/amd64/kubelet
> Resolving storage.googleapis.com (storage.googleapis.com)... 172.217.1.48, 172.217.8.176, 172.217.4.48, ...
> Connecting to storage.googleapis.com (storage.googleapis.com)|172.217.1.48|:443... connected.
> HTTP request sent, awaiting response... 200 OK
> Length: 113320760 (108M) [application/octet-stream]
> Saving to: ‘kubelet.45’
>
> 100%[===================================================================================================================================================>] 113,320,760 3.04MB/s   in 36s
>
> 2020-12-04 16:53:36 (3.02 MB/s) - ‘kubelet’ saved [113320760/113320760]
>
>
> ########################################################################################################################
>
> -With the proposed  patch:
>
> #iperf3 -c xx.xx.xx.xx -t15 -i1 -R
> Connecting to host xx.xx.xx.xx, port 5201
> Reverse mode, remote host xx.xx.xx.xx is sending
> [  4] local 172.31.37.167 port 44514 connected to xx.xx.xx.xx port 5201
> [ ID] Interval           Transfer     Bandwidth
> [  4]   0.00-1.00   sec   911 KBytes  7.46 Mbits/sec
> [  4]   1.00-2.00   sec  8.95 MBytes  75.1 Mbits/sec
> [  4]   2.00-3.00   sec  9.57 MBytes  80.3 Mbits/sec
> [  4]   3.00-4.00   sec  9.56 MBytes  80.2 Mbits/sec
> [  4]   4.00-5.00   sec  9.58 MBytes  80.3 Mbits/sec
> [  4]   5.00-6.00   sec  9.58 MBytes  80.4 Mbits/sec
> [  4]   6.00-7.00   sec  9.59 MBytes  80.4 Mbits/sec
> [  4]   7.00-8.00   sec  9.59 MBytes  80.5 Mbits/sec
> [  4]   8.00-9.00   sec  9.58 MBytes  80.4 Mbits/sec
> [  4]   9.00-10.00  sec  9.58 MBytes  80.4 Mbits/sec
> [  4]  10.00-11.00  sec  9.59 MBytes  80.4 Mbits/sec
> [  4]  11.00-12.00  sec  9.59 MBytes  80.5 Mbits/sec
> [  4]  12.00-13.00  sec  8.05 MBytes  67.5 Mbits/sec
> [  4]  13.00-14.00  sec  9.57 MBytes  80.3 Mbits/sec
> [  4]  14.00-15.00  sec  9.57 MBytes  80.3 Mbits/sec
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval           Transfer     Bandwidth       Retr
> [  4]   0.00-15.00  sec   136 MBytes  76.3 Mbits/sec    0             sender
> [  4]   0.00-15.00  sec   134 MBytes  75.2 Mbits/sec                  receiver
>
> iperf Done.
>
> Test downloading from google endpoint:
>
>
> # wget https://storage.googleapis.com/kubernetes-release/release/v1.18.9/bin/linux/amd64/kubelet
> --2020-12-04 16:54:34--  https://storage.googleapis.com/kubernetes-release/release/v1.18.9/bin/linux/amd64/kubelet
> Resolving storage.googleapis.com (storage.googleapis.com)... 172.217.0.16, 216.58.192.144, 172.217.6.16, ...
> Connecting to storage.googleapis.com (storage.googleapis.com)|172.217.0.16|:443... connected.
> HTTP request sent, awaiting response... 200 OK
> Length: 113320760 (108M) [application/octet-stream]
> Saving to: ‘kubelet’
>
> 100%[===================================================================================================================================================>] 113,320,760 80.0MB/s   in 1.4s
>
> 2020-12-04 16:54:36 (80.0 MB/s) - ‘kubelet.1’ saved [113320760/113320760]
>
> Links:
>
> [1] https://github.com/kubernetes/kops/issues/10206
> [2] https://lore.kernel.org/patchwork/patch/1157936/
> [3] https://lore.kernel.org/patchwork/patch/1157883/
>

Unfortunately, a few things are missing in this report.

What is the RTT between hosts in your test ?

What driver is used at the receiving side ?

Usually, this kind of problem comes when skb->len / skb->truesize is
pathologically small.
This could be caused by a driver lacking scatter gather support at RX
(a 1500 bytes incoming packet would use 12KB of memory or so, because
driver MTU was set to 9000)
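
As a rough standalone illustration of that effect (userspace C; the numbers are made-up but plausible, and TCP charges receive memory by skb->truesize rather than by payload length):

#include <stdio.h>

int main(void)
{
	unsigned int len = 1500;        /* payload bytes actually received */
	unsigned int truesize = 12288;  /* memory charged for one such skb */
	unsigned int rmem = 131072;     /* default net.ipv4.tcp_rmem[1]    */

	printf("len/truesize = %.2f\n", (double)len / truesize);  /* 0.12 */

	/* Payload that fits before the 128KB receive budget is exhausted:
	 * only ~15KB of actual data.
	 */
	printf("payload per rmem budget: ~%u bytes\n", (rmem / truesize) * len);
	return 0;
}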

Also worth noting that if you set MTU to 9000 (instead of standard
1500), you probably need to tweak a few sysctls.

autotuning is tricky, changing initial values can be good in some
cases, bad in others.

It would be nice if you could send "ss -temoi" output taken at the receiver
while the transfer is in progress.

>
>
> Thank you.
>
> Hazem
>
> On 04/12/2020, 18:08, "Hazem Mohamed Abuelfotoh" <abuehaze@amazon.com> wrote:
>
>         Previously, receiver buffer auto-tuning started after receiving
>         one advertised window's worth of data. After the initial
>         receiver buffer was raised by
>         commit a337531b942b ("tcp: up initial rmem to 128KB
>         and SYN rwin to around 64KB"), it may take too long
>         for TCP autotuning to start raising
>         the receiver buffer size.
>         commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
>         tried to decrease the threshold at which TCP auto-tuning starts,
>         but it doesn't work well in some environments
>         where the receiver has a large MTU (9001) configured,
>         especially within environments where RTT is high.
>         To address this issue, this patch relies on RCV_MSS
>         so auto-tuning can start early regardless of
>         the receiver's configured MTU.
>
>         Fixes: a337531b942b ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB")
>         Fixes: 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
>
>     Signed-off-by: Hazem Mohamed Abuelfotoh <abuehaze@amazon.com>
>     ---
>      net/ipv4/tcp_input.c | 3 ++-
>      1 file changed, 2 insertions(+), 1 deletion(-)
>
>     diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
>     index 389d1b340248..f0ffac9e937b 100644
>     --- a/net/ipv4/tcp_input.c
>     +++ b/net/ipv4/tcp_input.c
>     @@ -504,13 +504,14 @@ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb)
>      static void tcp_init_buffer_space(struct sock *sk)
>      {
>         int tcp_app_win = sock_net(sk)->ipv4.sysctl_tcp_app_win;
>     +   struct inet_connection_sock *icsk = inet_csk(sk);
>         struct tcp_sock *tp = tcp_sk(sk);
>         int maxwin;
>
>         if (!(sk->sk_userlocks & SOCK_SNDBUF_LOCK))
>                 tcp_sndbuf_expand(sk);
>
>     -   tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * tp->advmss);
>     +   tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * icsk->icsk_ack.rcv_mss);
>         tcp_mstamp_refresh(tp);
>         tp->rcvq_space.time = tp->tcp_mstamp;
>         tp->rcvq_space.seq = tp->copied_seq;
>     --
>     2.16.6
>
>
>
>
>
>
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections
  2020-12-04 18:06 [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections Hazem Mohamed Abuelfotoh
  2020-12-04 18:19 ` Mohamed Abuelfotoh, Hazem
@ 2020-12-04 19:10 ` Eric Dumazet
  2020-12-04 21:28 ` Neal Cardwell
  2 siblings, 0 replies; 26+ messages in thread
From: Eric Dumazet @ 2020-12-04 19:10 UTC (permalink / raw)
  To: Hazem Mohamed Abuelfotoh
  Cc: netdev, stable, Yuchung Cheng, Neal Cardwell, Wei Wang, Strohman,
	Andy, Benjamin Herrenschmidt

On Fri, Dec 4, 2020 at 7:08 PM Hazem Mohamed Abuelfotoh
<abuehaze@amazon.com> wrote:
>
>     Previously, receiver buffer auto-tuning started after receiving
>     one advertised window's worth of data. After the initial
>     receiver buffer was raised by
>     commit a337531b942b ("tcp: up initial rmem to 128KB
>     and SYN rwin to around 64KB"), it may take too long
>     for TCP autotuning to start raising
>     the receiver buffer size.
>     commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
>     tried to decrease the threshold at which TCP auto-tuning starts,
>     but it doesn't work well in some environments
>     where the receiver has a large MTU (9001) configured,
>     especially within environments where RTT is high.
>     To address this issue, this patch relies on RCV_MSS
>     so auto-tuning can start early regardless of
>     the receiver's configured MTU.
>
>     Fixes: a337531b942b ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB")
>     Fixes: 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
>
> Signed-off-by: Hazem Mohamed Abuelfotoh <abuehaze@amazon.com>
> ---
>  net/ipv4/tcp_input.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
> index 389d1b340248..f0ffac9e937b 100644
> --- a/net/ipv4/tcp_input.c
> +++ b/net/ipv4/tcp_input.c
> @@ -504,13 +504,14 @@ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb)
>  static void tcp_init_buffer_space(struct sock *sk)
>  {
>         int tcp_app_win = sock_net(sk)->ipv4.sysctl_tcp_app_win;
> +       struct inet_connection_sock *icsk = inet_csk(sk);
>         struct tcp_sock *tp = tcp_sk(sk);
>         int maxwin;
>
>         if (!(sk->sk_userlocks & SOCK_SNDBUF_LOCK))
>                 tcp_sndbuf_expand(sk);
>
> -       tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * tp->advmss);
> +       tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * icsk->icsk_ack.rcv_mss);

So are you claiming icsk->icsk_ack.rcv_mss is related to MTU 9000 ?

RCV_MSS is not known until we receive actual packets... The initial
value is something like 536 if I am not mistaken.

I think your patch does not match the changelog.
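
For reference, the initial value is seeded by tcp_initialize_rcv_mss() in net/ipv4/tcp_input.c; a standalone model of that clamping (a sketch, not a verbatim copy of the kernel function) behaves like this:

#include <stdio.h>

#define TCP_MSS_DEFAULT 536   /* RFC 1122 default MSS */
#define TCP_MIN_MSS      88

/* Models how icsk_ack.rcv_mss is seeded before any data is received. */
static unsigned int initial_rcv_mss(unsigned int advmss,
				    unsigned int mss_cache,
				    unsigned int rcv_wnd)
{
	unsigned int hint = advmss < mss_cache ? advmss : mss_cache;

	if (hint > rcv_wnd / 2)
		hint = rcv_wnd / 2;
	if (hint > TCP_MSS_DEFAULT)
		hint = TCP_MSS_DEFAULT;
	if (hint < TCP_MIN_MSS)
		hint = TCP_MIN_MSS;
	return hint;
}

int main(void)
{
	/* Even a 9001-MTU receiver starts with rcv_mss capped at 536,
	 * so the proposed formula initially yields 10 * 536 = 5360 bytes
	 * regardless of the receiver's MTU.
	 */
	printf("rcv_mss = %u\n", initial_rcv_mss(8961, 8961, 65535));
	return 0;
}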

>         tcp_mstamp_refresh(tp);
>         tp->rcvq_space.time = tp->tcp_mstamp;
>         tp->rcvq_space.seq = tp->copied_seq;
> --
> 2.16.6
>
>
>
>
>
>
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections
  2020-12-04 18:06 [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections Hazem Mohamed Abuelfotoh
  2020-12-04 18:19 ` Mohamed Abuelfotoh, Hazem
  2020-12-04 19:10 ` Eric Dumazet
@ 2020-12-04 21:28 ` Neal Cardwell
  2020-12-07 11:46   ` [PATCH net] tcp: fix receive buffer autotuning to trigger for any valid advertised MSS Hazem Mohamed Abuelfotoh
  2 siblings, 1 reply; 26+ messages in thread
From: Neal Cardwell @ 2020-12-04 21:28 UTC (permalink / raw)
  To: Hazem Mohamed Abuelfotoh
  Cc: Netdev, stable, Eric Dumazet, Yuchung Cheng, Wei Wang, astroh, benh

On Fri, Dec 4, 2020 at 1:08 PM Hazem Mohamed Abuelfotoh
<abuehaze@amazon.com> wrote:
>
>     Previously, receiver buffer auto-tuning started after receiving
>     one advertised window's worth of data. After the initial
>     receiver buffer was raised by
>     commit a337531b942b ("tcp: up initial rmem to 128KB
>     and SYN rwin to around 64KB"), it may take too long
>     for TCP autotuning to start raising
>     the receiver buffer size.
>     commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
>     tried to decrease the threshold at which TCP auto-tuning starts,
>     but it doesn't work well in some environments
>     where the receiver has a large MTU (9001) configured,
>     especially within environments where RTT is high.
>     To address this issue, this patch relies on RCV_MSS
>     so auto-tuning can start early regardless of
>     the receiver's configured MTU.
>
>     Fixes: a337531b942b ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB")
>     Fixes: 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
>
> Signed-off-by: Hazem Mohamed Abuelfotoh <abuehaze@amazon.com>
> ---
>  net/ipv4/tcp_input.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
> index 389d1b340248..f0ffac9e937b 100644
> --- a/net/ipv4/tcp_input.c
> +++ b/net/ipv4/tcp_input.c
> @@ -504,13 +504,14 @@ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb)
>  static void tcp_init_buffer_space(struct sock *sk)
>  {
>         int tcp_app_win = sock_net(sk)->ipv4.sysctl_tcp_app_win;
> +       struct inet_connection_sock *icsk = inet_csk(sk);
>         struct tcp_sock *tp = tcp_sk(sk);
>         int maxwin;
>
>         if (!(sk->sk_userlocks & SOCK_SNDBUF_LOCK))
>                 tcp_sndbuf_expand(sk);
>
> -       tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * tp->advmss);
> +       tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * icsk->icsk_ack.rcv_mss);

Thanks for the detailed report and the proposed fix.

AFAICT the core of the bug is related to this part of your follow-up email:

> 10)It looks like when the issue happen we have a  kind of deadlock here
> so advertised receive window has to exceed rcvq_space for the tcp auto
> tuning to kickoff at the same time with the initial default  configuration the
> receive window is not going to exceed rcvq_space

The existing code is:

> -       tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * tp->advmss);

(1) With a typical case where both sides have an advmss based on an
MTU of 1500 bytes, the governing limit here is around 10*1460 or so,
or around 15Kbytes. With a tp->rcvq_space.space of that size, it is
common for the data sender to send that amount of data in one round
trip, which will trigger the receive buffer autotuning code in
tcp_rcv_space_adjust(), so autotuning works well.

(2) With a case where the sender has a 1500B NIC and the receiver has
an advmss based on an MTU of 9KB, the expression becomes
governed by the receive window of 64KBytes:

  tp->rcvq_space.space
    ~=  min(rcv_wnd, 10*9KBytes)
    ~= min(64KByte,90KByte) = 65536 bytes

But because tp->rcvq_space.space is set to the rcv_wnd, and
the sender is not allowed to send more than the receive
window, this check in tcp_rcv_space_adjust() will always fail,
and the receiver will take the new_measure path:

        /* Number of bytes copied to user in last RTT */
        copied = tp->copied_seq - tp->rcvq_space.seq;
        if (copied <= tp->rcvq_space.space)
                goto new_measure;

This seems to be why receive buffer autotuning is never triggered.

Furthermore, if we try to fix it with:

-        if (copied <= tp->rcvq_space.space)
+        if (copied < tp->rcvq_space.space)

The buggy behavior will still remain, because 65536 is not a multiple
of the MSS, so the number of bytes copied to the user in the last RTT
will never, in practice, exactly match the tp->rcvq_space.space.

AFAICT the proposed patch fixes this bug by setting the
tp->rcvq_space.space to 10*536 = 5360. This is a number of bytes that
*can* actually be delivered in a round trip in a mixed 1500B/9KB
scenario, so this allows receive window auto-tuning to actually be
triggered.
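
A toy model of that trigger condition (standalone C with illustrative numbers) makes the deadlock, and why the smaller threshold escapes it, easy to see:

#include <stdio.h>

int main(void)
{
	unsigned int rcv_wnd = 65536;

	unsigned int space_old = 65536; /* min(rcv_wnd, 10 * 8961) == rcv_wnd */
	unsigned int space_new = 5360;  /* min(rcv_wnd, 10 * 536)             */

	/* Best case: the sender delivers one full receive window per RTT;
	 * it can never deliver more than that.
	 */
	unsigned int copied = rcv_wnd;

	printf("old: copied > space? %s\n",
	       copied > space_old ? "yes -> autotuning starts" : "no -> stuck");
	printf("new: copied > space? %s\n",
	       copied > space_new ? "yes -> autotuning starts" : "no -> stuck");
	return 0;
}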

It seems like a reasonable approach to fix this issue, but I agree
with Eric that it would be good to improve the commit message a bit.

Also, since this is a bug fix, it seems it should be directed to the
"net" branch rather than the "net-next" branch.

Also, FWIW, I think the "for high latency connections" can be dropped
from the commit summary/first-line, since the bug can be hit at any
RTT, and is simply easier to notice at high RTTs. You might consider
something like:

  tcp: fix receive buffer autotuning to trigger for any valid advertised MSS

best,
neal

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH net] tcp: fix receive buffer autotuning to trigger for any valid advertised MSS
  2020-12-04 21:28 ` Neal Cardwell
@ 2020-12-07 11:46   ` Hazem Mohamed Abuelfotoh
  2020-12-07 18:53     ` Jakub Kicinski
  0 siblings, 1 reply; 26+ messages in thread
From: Hazem Mohamed Abuelfotoh @ 2020-12-07 11:46 UTC (permalink / raw)
  To: netdev
  Cc: stable, edumazet, ycheng, ncardwell, weiwan, astroh, benh,
	Hazem Mohamed Abuelfotoh

    Previously, receiver buffer auto-tuning started after receiving
    one advertised window's worth of data. After the initial
    receiver buffer was raised by
    commit a337531b942b ("tcp: up initial rmem to 128KB
    and SYN rwin to around 64KB"), it may take too long
    for TCP autotuning to start raising
    the receiver buffer size.
    commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
    tried to decrease the threshold at which TCP auto-tuning starts,
    but it doesn't work well in some environments
    where the receiver has a large MTU (9001), especially on
    high-RTT connections: in these environments rcvq_space.space will be
    the same as rcv_wnd, so TCP autotuning will never start, because
    the sender can't send more than rcv_wnd bytes in one round trip.
    To address this issue, this patch decreases the initial
    rcvq_space.space so TCP autotuning kicks in whenever the sender is
    able to send more than 5360 bytes in one round trip, regardless of
    the receiver's configured MTU.

    Fixes: a337531b942b ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB")
    Fixes: 041a14d26715 ("tcp: start receiver buffer autotuning sooner")

Signed-off-by: Hazem Mohamed Abuelfotoh <abuehaze@amazon.com>
---
 net/ipv4/tcp_input.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 389d1b340248..f0ffac9e937b 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -504,13 +504,14 @@ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb)
 static void tcp_init_buffer_space(struct sock *sk)
 {
 	int tcp_app_win = sock_net(sk)->ipv4.sysctl_tcp_app_win;
+	struct inet_connection_sock *icsk = inet_csk(sk);
 	struct tcp_sock *tp = tcp_sk(sk);
 	int maxwin;
 
 	if (!(sk->sk_userlocks & SOCK_SNDBUF_LOCK))
 		tcp_sndbuf_expand(sk);
 
-	tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * tp->advmss);
+	tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * icsk->icsk_ack.rcv_mss);
 	tcp_mstamp_refresh(tp);
 	tp->rcvq_space.time = tp->tcp_mstamp;
 	tp->rcvq_space.seq = tp->copied_seq;
-- 
2.16.6








^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections
       [not found]     ` <3F02FF08-EDA6-4DFD-8D93-479A5B05E25A@amazon.com>
@ 2020-12-07 15:25       ` Eric Dumazet
  2020-12-07 16:09         ` Mohamed Abuelfotoh, Hazem
  0 siblings, 1 reply; 26+ messages in thread
From: Eric Dumazet @ 2020-12-07 15:25 UTC (permalink / raw)
  To: Mohamed Abuelfotoh, Hazem
  Cc: netdev, stable, ycheng, ncardwell, weiwan, Strohman, Andy,
	Herrenschmidt, Benjamin

On Sat, Dec 5, 2020 at 1:03 PM Mohamed Abuelfotoh, Hazem
<abuehaze@amazon.com> wrote:
>
> Unfortunately, a few things are missing in this report.
>
>     What is the RTT between hosts in your test ?
>      >>>>>RTT in my test is 162 msec, but I am able to reproduce it with lower RTTs; for example, I could see the issue downloading from a Google endpoint with an RTT of 16.7 msec. As mentioned in my previous e-mail, the issue is reproducible whenever the RTT exceeds 12 msec, given that the sender is using bbr.
>
>         RTT between hosts where I run the iperf test.
>         # ping 54.199.163.187
>         PING 54.199.163.187 (54.199.163.187) 56(84) bytes of data.
>         64 bytes from 54.199.163.187: icmp_seq=1 ttl=33 time=162 ms
>         64 bytes from 54.199.163.187: icmp_seq=2 ttl=33 time=162 ms
>         64 bytes from 54.199.163.187: icmp_seq=3 ttl=33 time=162 ms
>         64 bytes from 54.199.163.187: icmp_seq=4 ttl=33 time=162 ms
>
>         RTT between my EC2 instances and google endpoint.
>         # ping 172.217.4.240
>         PING 172.217.4.240 (172.217.4.240) 56(84) bytes of data.
>         64 bytes from 172.217.4.240: icmp_seq=1 ttl=101 time=16.7 ms
>         64 bytes from 172.217.4.240: icmp_seq=2 ttl=101 time=16.7 ms
>         64 bytes from 172.217.4.240: icmp_seq=3 ttl=101 time=16.7 ms
>         64 bytes from 172.217.4.240: icmp_seq=4 ttl=101 time=16.7 ms
>
>     What driver is used at the receiving side ?
>       >>>>>>I am using ENA driver version 2.2.10g on the receiver with scatter-gather enabled.
>
>         # ethtool -k eth0 | grep scatter-gather
>         scatter-gather: on
>                 tx-scatter-gather: on
>                 tx-scatter-gather-fraglist: off [fixed]

This ethtool output refers to TX scatter gather, which is not relevant
for this bug.

I see ENA driver might use 16 KB per incoming packet (if ENA_PAGE_SIZE is 16 KB)

Since I can not reproduce this problem with another NIC on x86, I
really wonder if this is not an issue with ENA driver on PowerPC
perhaps ?

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections
  2020-12-07 15:25       ` Eric Dumazet
@ 2020-12-07 16:09         ` Mohamed Abuelfotoh, Hazem
  2020-12-07 16:22           ` Eric Dumazet
  0 siblings, 1 reply; 26+ messages in thread
From: Mohamed Abuelfotoh, Hazem @ 2020-12-07 16:09 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: netdev, stable, ycheng, ncardwell, weiwan, Strohman, Andy,
	Herrenschmidt, Benjamin

    >Since I can not reproduce this problem with another NIC on x86, I
    >really wonder if this is not an issue with ENA driver on PowerPC
    >perhaps ?


I am able to reproduce it on x86-based EC2 instances using the ENA, Xen netfront, or Intel ixgbevf driver on the receiver, so it's not specific to ENA; we were also able to easily reproduce it between 2 VMs running in VirtualBox on the same physical host, given the environment requirements I mentioned in my first e-mail.

What's the RTT between the sender & receiver in your reproduction? Are you using bbr on the sender side?

Thank you.

Hazem

On 07/12/2020, 15:26, "Eric Dumazet" <edumazet@google.com> wrote:




    On Sat, Dec 5, 2020 at 1:03 PM Mohamed Abuelfotoh, Hazem
    <abuehaze@amazon.com> wrote:
    >
    > Unfortunately, a few things are missing in this report.
    >
    >     What is the RTT between hosts in your test ?
    >     >      >>>>>RTT in my test is 162 msec, but I am able to reproduce it with lower RTTs; for example, I could see the issue downloading from a Google endpoint with an RTT of 16.7 msec. As mentioned in my previous e-mail, the issue is reproducible whenever the RTT exceeds 12 msec, given that the sender is using bbr.
    >
    >         RTT between hosts where I run the iperf test.
    >         # ping 54.199.163.187
    >         PING 54.199.163.187 (54.199.163.187) 56(84) bytes of data.
    >         64 bytes from 54.199.163.187: icmp_seq=1 ttl=33 time=162 ms
    >         64 bytes from 54.199.163.187: icmp_seq=2 ttl=33 time=162 ms
    >         64 bytes from 54.199.163.187: icmp_seq=3 ttl=33 time=162 ms
    >         64 bytes from 54.199.163.187: icmp_seq=4 ttl=33 time=162 ms
    >
    >         RTT between my EC2 instances and google endpoint.
    >         # ping 172.217.4.240
    >         PING 172.217.4.240 (172.217.4.240) 56(84) bytes of data.
    >         64 bytes from 172.217.4.240: icmp_seq=1 ttl=101 time=16.7 ms
    >         64 bytes from 172.217.4.240: icmp_seq=2 ttl=101 time=16.7 ms
    >         64 bytes from 172.217.4.240: icmp_seq=3 ttl=101 time=16.7 ms
    >         64 bytes from 172.217.4.240: icmp_seq=4 ttl=101 time=16.7 ms
    >
    >     What driver is used at the receiving side ?
    >     >       >>>>>>I am using ENA driver version 2.2.10g on the receiver with scatter-gather enabled.
    >
    >         # ethtool -k eth0 | grep scatter-gather
    >         scatter-gather: on
    >                 tx-scatter-gather: on
    >                 tx-scatter-gather-fraglist: off [fixed]

    This ethtool output refers to TX scatter gather, which is not relevant
    for this bug.

    I see ENA driver might use 16 KB per incoming packet (if ENA_PAGE_SIZE is 16 KB)

    Since I can not reproduce this problem with another NIC on x86, I
    really wonder if this is not an issue with ENA driver on PowerPC
    perhaps ?







^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections
  2020-12-07 16:09         ` Mohamed Abuelfotoh, Hazem
@ 2020-12-07 16:22           ` Eric Dumazet
  2020-12-07 16:33             ` Neal Cardwell
  2020-12-07 16:34             ` Mohamed Abuelfotoh, Hazem
  0 siblings, 2 replies; 26+ messages in thread
From: Eric Dumazet @ 2020-12-07 16:22 UTC (permalink / raw)
  To: Mohamed Abuelfotoh, Hazem
  Cc: netdev, stable, ycheng, ncardwell, weiwan, Strohman, Andy,
	Herrenschmidt, Benjamin

On Mon, Dec 7, 2020 at 5:09 PM Mohamed Abuelfotoh, Hazem
<abuehaze@amazon.com> wrote:
>
>     >Since I can not reproduce this problem with another NIC on x86, I
>     >really wonder if this is not an issue with ENA driver on PowerPC
>     >perhaps ?
>
>
> I am able to reproduce it on x86-based EC2 instances using the ENA, Xen netfront, or Intel ixgbevf driver on the receiver, so it's not specific to ENA; we were also able to easily reproduce it between 2 VMs running in VirtualBox on the same physical host, given the environment requirements I mentioned in my first e-mail.
>
> What's the RTT between the sender & receiver in your reproduction? Are you using bbr on the sender side?


100ms RTT

Which exact version of linux kernel are you using ?



>
> Thank you.
>
> Hazem
>
> On 07/12/2020, 15:26, "Eric Dumazet" <edumazet@google.com> wrote:
>
>
>
>
>     On Sat, Dec 5, 2020 at 1:03 PM Mohamed Abuelfotoh, Hazem
>     <abuehaze@amazon.com> wrote:
>     >
>     > Unfortunately, a few things are missing in this report.
>     >
>     >     What is the RTT between hosts in your test ?
>     >      >>>>>RTT in my test is 162 msec, but I am able to reproduce it with lower RTTs; for example, I could see the issue downloading from a Google endpoint with an RTT of 16.7 msec. As mentioned in my previous e-mail, the issue is reproducible whenever the RTT exceeds 12 msec, given that the sender is using bbr.
>     >
>     >         RTT between hosts where I run the iperf test.
>     >         # ping 54.199.163.187
>     >         PING 54.199.163.187 (54.199.163.187) 56(84) bytes of data.
>     >         64 bytes from 54.199.163.187: icmp_seq=1 ttl=33 time=162 ms
>     >         64 bytes from 54.199.163.187: icmp_seq=2 ttl=33 time=162 ms
>     >         64 bytes from 54.199.163.187: icmp_seq=3 ttl=33 time=162 ms
>     >         64 bytes from 54.199.163.187: icmp_seq=4 ttl=33 time=162 ms
>     >
>     >         RTT between my EC2 instances and google endpoint.
>     >         # ping 172.217.4.240
>     >         PING 172.217.4.240 (172.217.4.240) 56(84) bytes of data.
>     >         64 bytes from 172.217.4.240: icmp_seq=1 ttl=101 time=16.7 ms
>     >         64 bytes from 172.217.4.240: icmp_seq=2 ttl=101 time=16.7 ms
>     >         64 bytes from 172.217.4.240: icmp_seq=3 ttl=101 time=16.7 ms
>     >         64 bytes from 172.217.4.240: icmp_seq=4 ttl=101 time=16.7 ms
>     >
>     >     What driver is used at the receiving side ?
>     >       >>>>>>I am using ENA driver version 2.2.10g on the receiver with scatter-gather enabled.
>     >
>     >         # ethtool -k eth0 | grep scatter-gather
>     >         scatter-gather: on
>     >                 tx-scatter-gather: on
>     >                 tx-scatter-gather-fraglist: off [fixed]
>
>     This ethtool output refers to TX scatter gather, which is not relevant
>     for this bug.
>
>     I see ENA driver might use 16 KB per incoming packet (if ENA_PAGE_SIZE is 16 KB)
>
>     Since I can not reproduce this problem with another NIC on x86, I
>     really wonder if this is not an issue with ENA driver on PowerPC
>     perhaps ?
>
>
>
>
>
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections
  2020-12-07 16:22           ` Eric Dumazet
@ 2020-12-07 16:33             ` Neal Cardwell
  2020-12-07 17:08               ` Eric Dumazet
  2020-12-07 17:16               ` Mohamed Abuelfotoh, Hazem
  2020-12-07 16:34             ` Mohamed Abuelfotoh, Hazem
  1 sibling, 2 replies; 26+ messages in thread
From: Neal Cardwell @ 2020-12-07 16:33 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Mohamed Abuelfotoh, Hazem, netdev, stable, ycheng, weiwan,
	Strohman, Andy, Herrenschmidt, Benjamin

On Mon, Dec 7, 2020 at 11:23 AM Eric Dumazet <edumazet@google.com> wrote:
>
> On Mon, Dec 7, 2020 at 5:09 PM Mohamed Abuelfotoh, Hazem
> <abuehaze@amazon.com> wrote:
> >
> >     >Since I can not reproduce this problem with another NIC on x86, I
> >     >really wonder if this is not an issue with ENA driver on PowerPC
> >     >perhaps ?
> >
> >
> > I am able to reproduce it on x86-based EC2 instances using the ENA, Xen netfront, or Intel ixgbevf driver on the receiver, so it's not specific to ENA; we were also able to easily reproduce it between 2 VMs running in VirtualBox on the same physical host, given the environment requirements I mentioned in my first e-mail.
> >
> > What's the RTT between the sender & receiver in your reproduction? Are you using bbr on the sender side?
>
>
> 100ms RTT
>
> Which exact version of linux kernel are you using ?

Thanks for testing this, Eric. Would you be able to share the MTU
config commands you used, and the tcpdump traces you get? I'm
surprised that receive buffer autotuning would work for advmss of
around 6500 or higher.

thanks,
neal

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections
  2020-12-07 16:22           ` Eric Dumazet
  2020-12-07 16:33             ` Neal Cardwell
@ 2020-12-07 16:34             ` Mohamed Abuelfotoh, Hazem
  2020-12-07 17:46               ` Greg KH
  1 sibling, 1 reply; 26+ messages in thread
From: Mohamed Abuelfotoh, Hazem @ 2020-12-07 16:34 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: netdev, stable, ycheng, ncardwell, weiwan, Strohman, Andy,
	Herrenschmidt, Benjamin

> 100ms RTT
>
> Which exact version of linux kernel are you using ?
On the receiver side I could see the issue with any mainline kernel version >=4.19.86, which is the first kernel version that has patches [1] & [2] included.
On the sender I am using kernel 5.4.0-rc6.

Links:

[1] https://lore.kernel.org/patchwork/patch/1157936/
[2] https://lore.kernel.org/patchwork/patch/1157883/

Thank you.

Hazem



On 07/12/2020, 16:24, "Eric Dumazet" <edumazet@google.com> wrote:




    On Mon, Dec 7, 2020 at 5:09 PM Mohamed Abuelfotoh, Hazem
    <abuehaze@amazon.com> wrote:
    >
    >     >Since I can not reproduce this problem with another NIC on x86, I
    >     >really wonder if this is not an issue with ENA driver on PowerPC
    >     >perhaps ?
    >
    >
    > I am able to reproduce it on x86-based EC2 instances using the ENA, Xen netfront, or Intel ixgbevf driver on the receiver, so it's not specific to ENA. We were also able to easily reproduce it between two VMs running in VirtualBox on the same physical host, given the environment requirements I mentioned in my first e-mail.
    >
    > What's the RTT between the sender & receiver in your reproduction? Are you using bbr on the sender side?


    100ms RTT

    Which exact version of linux kernel are you using ?



    >
    > Thank you.
    >
    > Hazem
    >
    > On 07/12/2020, 15:26, "Eric Dumazet" <edumazet@google.com> wrote:
    >
    >
    >     On Sat, Dec 5, 2020 at 1:03 PM Mohamed Abuelfotoh, Hazem
    >     <abuehaze@amazon.com> wrote:
    >     >
    >     > Unfortunately few things are missing in this report.
    >     >
    >     >     What is the RTT between hosts in your test ?
    >     >      >>>>>RTT in my test is 162 msec, but I am able to reproduce it with lower RTTs; for example, I could see the issue downloading from a Google endpoint with an RTT of 16.7 msec. As mentioned in my previous e-mail, the issue is reproducible whenever the RTT exceeds 12 msec, given that the sender is using bbr.
    >     >
    >     >         RTT between hosts where I run the iperf test.
    >     >         # ping 54.199.163.187
    >     >         PING 54.199.163.187 (54.199.163.187) 56(84) bytes of data.
    >     >         64 bytes from 54.199.163.187: icmp_seq=1 ttl=33 time=162 ms
    >     >         64 bytes from 54.199.163.187: icmp_seq=2 ttl=33 time=162 ms
    >     >         64 bytes from 54.199.163.187: icmp_seq=3 ttl=33 time=162 ms
    >     >         64 bytes from 54.199.163.187: icmp_seq=4 ttl=33 time=162 ms
    >     >
    >     >         RTT between my EC2 instances and google endpoint.
    >     >         # ping 172.217.4.240
    >     >         PING 172.217.4.240 (172.217.4.240) 56(84) bytes of data.
    >     >         64 bytes from 172.217.4.240: icmp_seq=1 ttl=101 time=16.7 ms
    >     >         64 bytes from 172.217.4.240: icmp_seq=2 ttl=101 time=16.7 ms
    >     >         64 bytes from 172.217.4.240: icmp_seq=3 ttl=101 time=16.7 ms
    >     >         64 bytes from 172.217.4.240: icmp_seq=4 ttl=101 time=16.7 ms
    >     >
    >     >     What driver is used at the receiving side ?
    >     >       >>>>>>I am using ENA driver version 2.2.10g on the receiver with scatter-gather enabled.
    >     >
    >     >         # ethtool -k eth0 | grep scatter-gather
    >     >         scatter-gather: on
    >     >                 tx-scatter-gather: on
    >     >                 tx-scatter-gather-fraglist: off [fixed]
    >
    >     This ethtool output refers to TX scatter gather, which is not relevant
    >     for this bug.
    >
    >     I see ENA driver might use 16 KB per incoming packet (if ENA_PAGE_SIZE is 16 KB)
    >
    >     Since I can not reproduce this problem with another NIC on x86, I
    >     really wonder if this is not an issue with ENA driver on PowerPC
    >     perhaps ?
    >
    >







^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections
  2020-12-07 16:33             ` Neal Cardwell
@ 2020-12-07 17:08               ` Eric Dumazet
  2020-12-07 20:09                 ` Mohamed Abuelfotoh, Hazem
  2020-12-07 17:16               ` Mohamed Abuelfotoh, Hazem
  1 sibling, 1 reply; 26+ messages in thread
From: Eric Dumazet @ 2020-12-07 17:08 UTC (permalink / raw)
  To: Neal Cardwell
  Cc: Mohamed Abuelfotoh, Hazem, netdev, stable, ycheng, weiwan,
	Strohman, Andy, Herrenschmidt, Benjamin

On Mon, Dec 7, 2020 at 5:34 PM Neal Cardwell <ncardwell@google.com> wrote:
>
> On Mon, Dec 7, 2020 at 11:23 AM Eric Dumazet <edumazet@google.com> wrote:
> >
> > On Mon, Dec 7, 2020 at 5:09 PM Mohamed Abuelfotoh, Hazem
> > <abuehaze@amazon.com> wrote:
> > >
> > >     >Since I can not reproduce this problem with another NIC on x86, I
> > >     >really wonder if this is not an issue with ENA driver on PowerPC
> > >     >perhaps ?
> > >
> > >
> > > I am able to reproduce it on x86-based EC2 instances using the ENA, Xen netfront, or Intel ixgbevf driver on the receiver, so it's not specific to ENA. We were also able to easily reproduce it between two VMs running in VirtualBox on the same physical host, given the environment requirements I mentioned in my first e-mail.
> > >
> > > What's the RTT between the sender & receiver in your reproduction? Are you using bbr on the sender side?
> >
> >
> > 100ms RTT
> >
> > Which exact version of linux kernel are you using ?
>
> Thanks for testing this, Eric. Would you be able to share the MTU
> config commands you used, and the tcpdump traces you get? I'm
> surprised that receive buffer autotuning would work for advmss of
> around 6500 or higher.

Autotuning might be delayed by one RTT; this does not match the numbers
given by Mohamed (flows stuck at low speed).

Autotuning is a heuristic, and because it has one RTT of latency, it is
crucial to get proper initial rcvmem values.
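
For context, the heuristic referred to here is tcp_rcv_space_adjust()
in net/ipv4/tcp_input.c. A simplified sketch of its growth condition
(not the exact kernel code; the real function also rearms the
measurement and clamps sk_rcvbuf against tcp_rmem[2]) looks roughly
like this:

    /* Simplified sketch; kernel types (struct tcp_sock, u32) assumed. */
    static void rcv_space_adjust_sketch(struct tcp_sock *tp)
    {
            /* Bytes the application consumed since the last measurement. */
            u32 copied = tp->copied_seq - tp->rcvq_space.seq;

            /* The estimate (and hence sk_rcvbuf) grows only once more than
             * rcvq_space.space bytes were consumed within one RTT, which is
             * why an oversized initial rcvq_space.space costs whole round
             * trips before the first adjustment on a high-latency path.
             */
            if (copied <= tp->rcvq_space.space)
                    return;

            tp->rcvq_space.space = copied;
    }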

People using MTU=9000 should know they have to tune tcp_rmem[1]
accordingly, especially when using drivers consuming one page per
incoming MSS.


(The mlx4 driver only uses one 2048-byte fragment per 1500-MTU packet,
even with the MTU set to 9000.)

I want to state again that using 536 bytes (TCP_MSS_DEFAULT, the cap
applied to the initial rcv_mss) as a magic value makes no sense to me.


For the record, Google has increased tcp_rmem[1] when switching to a bigger MTU.

The reason is simple: if we intend to receive 10 MSS of 9,000 bytes
each, we should allow for 90,000 bytes of payload, which means
tcp_rmem[1] set to 180,000 once skb overhead is accounted for. Because
of the one-RTT autotuning latency, doubling the value is advised: 360,000.
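
To make that arithmetic concrete, a minimal userspace sketch (the
9,000-byte MSS and the factor-of-two skb-overhead allowance are
assumptions carried over from the paragraph above, not kernel
constants):

    #include <stdio.h>

    int main(void)
    {
            unsigned int mss = 9000;           /* payload per segment at MTU ~9001 */
            unsigned int segs = 10;            /* segments to absorb, cf. TCP_INIT_CWND */
            unsigned int payload = segs * mss; /*  90,000 bytes of payload */
            unsigned int rmem1 = 2 * payload;  /* 180,000: allowance for skb overhead */
            unsigned int advised = 2 * rmem1;  /* 360,000: cover one-RTT autotuning lag */

            printf("tcp_rmem[1] >= %u, advised %u\n", rmem1, advised);
            return 0;
    }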

Another problem with kicking in autotuning too fast is that it might
allow bigger sk->sk_rcvbuf values even for small flows, exposing more
surface to malicious attacks.

I _think_ that if we want to allow admins to set a high MTU without
having to tune tcp_rmem[], we need something different from the current
proposal.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections
  2020-12-07 16:33             ` Neal Cardwell
  2020-12-07 17:08               ` Eric Dumazet
@ 2020-12-07 17:16               ` Mohamed Abuelfotoh, Hazem
  2020-12-07 17:27                 ` Eric Dumazet
  1 sibling, 1 reply; 26+ messages in thread
From: Mohamed Abuelfotoh, Hazem @ 2020-12-07 17:16 UTC (permalink / raw)
  To: Neal Cardwell, Eric Dumazet
  Cc: netdev, stable, ycheng, weiwan, Strohman, Andy, Herrenschmidt, Benjamin

[-- Attachment #1: Type: text/plain, Size: 3060 bytes --]

    >Thanks for testing this, Eric. Would you be able to share the MTU
    >config commands you used, and the tcpdump traces you get? I'm
    >surprised that receive buffer autotuning would work for advmss of
    >around 6500 or higher.

Packet capture before applying the proposed patch

https://tcpautotuningpcaps.s3.eu-west-1.amazonaws.com/sender-bbr-bad-unpatched.pcap?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJNMP5ZZ3I4FAQGAQ%2F20201207%2Feu-west-1%2Fs3%2Faws4_request&X-Amz-Date=20201207T170123Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=a599a0e0e6632a957e5619007ba5ce4f63c8e8535ea24470b7093fef440a8300

Packet capture after applying the proposed patch

https://tcpautotuningpcaps.s3.eu-west-1.amazonaws.com/sender-bbr-good-patched.pcap?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJNMP5ZZ3I4FAQGAQ%2F20201207%2Feu-west-1%2Fs3%2Faws4_request&X-Amz-Date=20201207T165831Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=f18ec7246107590e8ac35c24322af699e4c2a73d174067c51cf6b0a06bbbca77

The kernel version, MTU, and configuration from my receiver & sender are attached to this e-mail. Please be aware that EC2 is doing MSS clamping, so you need to configure the MTU as 1500 on the sender side if you don't have any MSS clamping between sender & receiver.

Thank you.

Hazem


On 07/12/2020, 16:34, "Neal Cardwell" <ncardwell@google.com> wrote:


    On Mon, Dec 7, 2020 at 11:23 AM Eric Dumazet <edumazet@google.com> wrote:
    >
    > On Mon, Dec 7, 2020 at 5:09 PM Mohamed Abuelfotoh, Hazem
    > <abuehaze@amazon.com> wrote:
    > >
    > >     >Since I can not reproduce this problem with another NIC on x86, I
    > >     >really wonder if this is not an issue with ENA driver on PowerPC
    > >     >perhaps ?
    > >
    > >
    > > I am able to reproduce it on x86-based EC2 instances using the ENA, Xen netfront, or Intel ixgbevf driver on the receiver, so it's not specific to ENA. We were also able to easily reproduce it between two VMs running in VirtualBox on the same physical host, given the environment requirements I mentioned in my first e-mail.
    > >
    > > What's the RTT between the sender & receiver in your reproduction? Are you using bbr on the sender side?
    >
    >
    > 100ms RTT
    >
    > Which exact version of linux kernel are you using ?

    Thanks for testing this, Eric. Would you be able to share the MTU
    config commands you used, and the tcpdump traces you get? I'm
    surprised that receive buffer autotuning would work for advmss of
    around 6500 or higher.

    thanks,
    neal







[-- Attachment #2: receiverconf --]
[-- Type: application/octet-stream, Size: 73007 bytes --]

5.4.74-36.135.amzn2.x86_64
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001
        inet 172.31.37.167  netmask 255.255.240.0  broadcast 172.31.47.255
        inet6 fe80::8f5:b8ff:fefa:827e  prefixlen 64  scopeid 0x20<link>
        ether 0a:f5:b8:fa:82:7e  txqueuelen 1000  (Ethernet)
        RX packets 322997  bytes 293467446 (279.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 213839  bytes 23587696 (22.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

abi.vsyscall32 = 1
crypto.fips_enabled = 0
debug.exception-trace = 1
debug.kprobes-optimization = 1
dev.hpet.max-user-freq = 64
dev.raid.speed_limit_max = 200000
dev.raid.speed_limit_min = 1000
dev.tty.ldisc_autoload = 1
fs.aio-max-nr = 65536
fs.aio-nr = 0
fs.binfmt_misc.status = enabled
fs.dentry-state = 57902	42848	45	0	13000	0
fs.dir-notify-enable = 1
fs.epoll.max_user_watches = 19795722
fs.file-max = 9663372
fs.file-nr = 1984	0	9663372
fs.inode-nr = 45204	360
fs.inode-state = 45203	360	0	0	0	0	0
fs.inotify.max_queued_events = 16384
fs.inotify.max_user_instances = 128
fs.inotify.max_user_watches = 8192
fs.lease-break-time = 45
fs.leases-enable = 1
fs.mount-max = 100000
fs.mqueue.msg_default = 10
fs.mqueue.msg_max = 10
fs.mqueue.msgsize_default = 8192
fs.mqueue.msgsize_max = 8192
fs.mqueue.queues_max = 256
fs.nr_open = 1048576
fs.overflowgid = 65534
fs.overflowuid = 65534
fs.pipe-max-size = 1048576
fs.pipe-user-pages-hard = 0
fs.pipe-user-pages-soft = 16384
fs.protected_fifos = 0
fs.protected_hardlinks = 1
fs.protected_regular = 0
fs.protected_symlinks = 1
fs.quota.allocated_dquots = 0
fs.quota.cache_hits = 0
fs.quota.drops = 0
fs.quota.free_dquots = 0
fs.quota.lookups = 0
fs.quota.reads = 0
fs.quota.syncs = 0
fs.quota.writes = 0
fs.suid_dumpable = 0
fs.xfs.error_level = 3
fs.xfs.filestream_centisecs = 3000
fs.xfs.inherit_noatime = 1
fs.xfs.inherit_nodefrag = 1
fs.xfs.inherit_nodump = 1
fs.xfs.inherit_nosymlinks = 0
fs.xfs.inherit_sync = 1
fs.xfs.irix_sgid_inherit = 0
fs.xfs.irix_symlink_mode = 0
fs.xfs.panic_mask = 0
fs.xfs.rotorstep = 1
fs.xfs.speculative_cow_prealloc_lifetime = 1800
fs.xfs.speculative_prealloc_lifetime = 300
fs.xfs.stats_clear = 0
fs.xfs.xfssyncd_centisecs = 3000
kernel.acct = 4	2	30
kernel.acpi_video_flags = 0
kernel.auto_msgmni = 0
kernel.bootloader_type = 114
kernel.bootloader_version = 2
kernel.bpf_stats_enabled = 0
kernel.cad_pid = 1
kernel.cap_last_cap = 37
kernel.core_pattern = core
kernel.core_pipe_limit = 0
kernel.core_uses_pid = 1
kernel.ctrl-alt-del = 0
kernel.dmesg_restrict = 0
kernel.domainname = (none)
kernel.firmware_config.force_sysfs_fallback = 0
kernel.firmware_config.ignore_sysfs_fallback = 0
kernel.ftrace_dump_on_oops = 0
kernel.ftrace_enabled = 1
kernel.hardlockup_all_cpu_backtrace = 0
kernel.hardlockup_panic = 0
kernel.hostname = ip-172-31-37-167.us-east-2.compute.internal
kernel.hotplug = /sbin/hotplug
kernel.hung_task_check_count = 4194304
kernel.hung_task_check_interval_secs = 0
kernel.hung_task_panic = 0
kernel.hung_task_timeout_secs = 120
kernel.hung_task_warnings = 10
kernel.io_delay_type = 0
kernel.kexec_load_disabled = 0
kernel.keys.gc_delay = 300
kernel.keys.maxbytes = 20000
kernel.keys.maxkeys = 200
kernel.keys.persistent_keyring_expiry = 259200
kernel.keys.root_maxbytes = 25000000
kernel.keys.root_maxkeys = 1000000
kernel.kptr_restrict = 0
kernel.latencytop = 0
kernel.max_lock_depth = 1024
kernel.modprobe = /sbin/modprobe
kernel.modules_disabled = 0
kernel.msg_next_id = -1
kernel.msgmax = 8192
kernel.msgmnb = 16384
kernel.msgmni = 32000
kernel.ngroups_max = 65536
kernel.nmi_watchdog = 0
kernel.ns_last_pid = 28406
kernel.numa_balancing = 0
kernel.numa_balancing_scan_delay_ms = 1000
kernel.numa_balancing_scan_period_max_ms = 60000
kernel.numa_balancing_scan_period_min_ms = 1000
kernel.numa_balancing_scan_size_mb = 256
kernel.osrelease = 5.4.74-36.135.amzn2.x86_64
kernel.ostype = Linux
kernel.overflowgid = 65534
kernel.overflowuid = 65534
kernel.panic = 30
kernel.panic_on_io_nmi = 0
kernel.panic_on_oops = 0
kernel.panic_on_rcu_stall = 0
kernel.panic_on_unrecovered_nmi = 0
kernel.panic_on_warn = 0
kernel.panic_print = 0
kernel.perf_cpu_time_max_percent = 25
kernel.perf_event_max_contexts_per_stack = 8
kernel.perf_event_max_sample_rate = 100000
kernel.perf_event_max_stack = 127
kernel.perf_event_mlock_kb = 516
kernel.perf_event_paranoid = 2
kernel.pid_max = 49152
kernel.poweroff_cmd = /sbin/poweroff
kernel.print-fatal-signals = 0
kernel.printk = 8	4	1	7
kernel.printk_delay = 0
kernel.printk_devkmsg = ratelimit
kernel.printk_ratelimit = 5
kernel.printk_ratelimit_burst = 10
kernel.pty.max = 4096
kernel.pty.nr = 5
kernel.pty.reserve = 1024
kernel.random.boot_id = 0d65fac5-e5fd-4154-93aa-2518eb2db2b4
kernel.random.entropy_avail = 3158
kernel.random.poolsize = 4096
kernel.random.read_wakeup_threshold = 64
kernel.random.urandom_min_reseed_secs = 60
kernel.random.uuid = ae1c1815-2be2-4404-a6ab-36cd6b3165b9
kernel.random.write_wakeup_threshold = 3072
kernel.randomize_va_space = 2
kernel.real-root-dev = 0
kernel.sched_autogroup_enabled = 0
kernel.sched_cfs_bandwidth_slice_us = 5000
kernel.sched_child_runs_first = 0
kernel.sched_domain.cpu0.domain0.busy_factor = 32
kernel.sched_domain.cpu0.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu0.domain0.flags = 4783
kernel.sched_domain.cpu0.domain0.imbalance_pct = 110
kernel.sched_domain.cpu0.domain0.max_interval = 4
kernel.sched_domain.cpu0.domain0.max_newidle_lb_cost = 552
kernel.sched_domain.cpu0.domain0.min_interval = 2
kernel.sched_domain.cpu0.domain0.name = SMT
kernel.sched_domain.cpu0.domain1.busy_factor = 32
kernel.sched_domain.cpu0.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu0.domain1.flags = 4655
kernel.sched_domain.cpu0.domain1.imbalance_pct = 117
kernel.sched_domain.cpu0.domain1.max_interval = 96
kernel.sched_domain.cpu0.domain1.max_newidle_lb_cost = 4293
kernel.sched_domain.cpu0.domain1.min_interval = 48
kernel.sched_domain.cpu0.domain1.name = MC
kernel.sched_domain.cpu1.domain0.busy_factor = 32
kernel.sched_domain.cpu1.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu1.domain0.flags = 4783
kernel.sched_domain.cpu1.domain0.imbalance_pct = 110
kernel.sched_domain.cpu1.domain0.max_interval = 4
kernel.sched_domain.cpu1.domain0.max_newidle_lb_cost = 79
kernel.sched_domain.cpu1.domain0.min_interval = 2
kernel.sched_domain.cpu1.domain0.name = SMT
kernel.sched_domain.cpu1.domain1.busy_factor = 32
kernel.sched_domain.cpu1.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu1.domain1.flags = 4655
kernel.sched_domain.cpu1.domain1.imbalance_pct = 117
kernel.sched_domain.cpu1.domain1.max_interval = 96
kernel.sched_domain.cpu1.domain1.max_newidle_lb_cost = 1061
kernel.sched_domain.cpu1.domain1.min_interval = 48
kernel.sched_domain.cpu1.domain1.name = MC
kernel.sched_domain.cpu10.domain0.busy_factor = 32
kernel.sched_domain.cpu10.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu10.domain0.flags = 4783
kernel.sched_domain.cpu10.domain0.imbalance_pct = 110
kernel.sched_domain.cpu10.domain0.max_interval = 4
kernel.sched_domain.cpu10.domain0.max_newidle_lb_cost = 608
kernel.sched_domain.cpu10.domain0.min_interval = 2
kernel.sched_domain.cpu10.domain0.name = SMT
kernel.sched_domain.cpu10.domain1.busy_factor = 32
kernel.sched_domain.cpu10.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu10.domain1.flags = 4655
kernel.sched_domain.cpu10.domain1.imbalance_pct = 117
kernel.sched_domain.cpu10.domain1.max_interval = 96
kernel.sched_domain.cpu10.domain1.max_newidle_lb_cost = 2687
kernel.sched_domain.cpu10.domain1.min_interval = 48
kernel.sched_domain.cpu10.domain1.name = MC
kernel.sched_domain.cpu11.domain0.busy_factor = 32
kernel.sched_domain.cpu11.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu11.domain0.flags = 4783
kernel.sched_domain.cpu11.domain0.imbalance_pct = 110
kernel.sched_domain.cpu11.domain0.max_interval = 4
kernel.sched_domain.cpu11.domain0.max_newidle_lb_cost = 2445
kernel.sched_domain.cpu11.domain0.min_interval = 2
kernel.sched_domain.cpu11.domain0.name = SMT
kernel.sched_domain.cpu11.domain1.busy_factor = 32
kernel.sched_domain.cpu11.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu11.domain1.flags = 4655
kernel.sched_domain.cpu11.domain1.imbalance_pct = 117
kernel.sched_domain.cpu11.domain1.max_interval = 96
kernel.sched_domain.cpu11.domain1.max_newidle_lb_cost = 8333
kernel.sched_domain.cpu11.domain1.min_interval = 48
kernel.sched_domain.cpu11.domain1.name = MC
kernel.sched_domain.cpu12.domain0.busy_factor = 32
kernel.sched_domain.cpu12.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu12.domain0.flags = 4783
kernel.sched_domain.cpu12.domain0.imbalance_pct = 110
kernel.sched_domain.cpu12.domain0.max_interval = 4
kernel.sched_domain.cpu12.domain0.max_newidle_lb_cost = 877
kernel.sched_domain.cpu12.domain0.min_interval = 2
kernel.sched_domain.cpu12.domain0.name = SMT
kernel.sched_domain.cpu12.domain1.busy_factor = 32
kernel.sched_domain.cpu12.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu12.domain1.flags = 4655
kernel.sched_domain.cpu12.domain1.imbalance_pct = 117
kernel.sched_domain.cpu12.domain1.max_interval = 96
kernel.sched_domain.cpu12.domain1.max_newidle_lb_cost = 11776
kernel.sched_domain.cpu12.domain1.min_interval = 48
kernel.sched_domain.cpu12.domain1.name = MC
kernel.sched_domain.cpu13.domain0.busy_factor = 32
kernel.sched_domain.cpu13.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu13.domain0.flags = 4783
kernel.sched_domain.cpu13.domain0.imbalance_pct = 110
kernel.sched_domain.cpu13.domain0.max_interval = 4
kernel.sched_domain.cpu13.domain0.max_newidle_lb_cost = 0
kernel.sched_domain.cpu13.domain0.min_interval = 2
kernel.sched_domain.cpu13.domain0.name = SMT
kernel.sched_domain.cpu13.domain1.busy_factor = 32
kernel.sched_domain.cpu13.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu13.domain1.flags = 4655
kernel.sched_domain.cpu13.domain1.imbalance_pct = 117
kernel.sched_domain.cpu13.domain1.max_interval = 96
kernel.sched_domain.cpu13.domain1.max_newidle_lb_cost = 0
kernel.sched_domain.cpu13.domain1.min_interval = 48
kernel.sched_domain.cpu13.domain1.name = MC
kernel.sched_domain.cpu14.domain0.busy_factor = 32
kernel.sched_domain.cpu14.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu14.domain0.flags = 4783
kernel.sched_domain.cpu14.domain0.imbalance_pct = 110
kernel.sched_domain.cpu14.domain0.max_interval = 4
kernel.sched_domain.cpu14.domain0.max_newidle_lb_cost = 105
kernel.sched_domain.cpu14.domain0.min_interval = 2
kernel.sched_domain.cpu14.domain0.name = SMT
kernel.sched_domain.cpu14.domain1.busy_factor = 32
kernel.sched_domain.cpu14.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu14.domain1.flags = 4655
kernel.sched_domain.cpu14.domain1.imbalance_pct = 117
kernel.sched_domain.cpu14.domain1.max_interval = 96
kernel.sched_domain.cpu14.domain1.max_newidle_lb_cost = 1284
kernel.sched_domain.cpu14.domain1.min_interval = 48
kernel.sched_domain.cpu14.domain1.name = MC
kernel.sched_domain.cpu15.domain0.busy_factor = 32
kernel.sched_domain.cpu15.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu15.domain0.flags = 4783
kernel.sched_domain.cpu15.domain0.imbalance_pct = 110
kernel.sched_domain.cpu15.domain0.max_interval = 4
kernel.sched_domain.cpu15.domain0.max_newidle_lb_cost = 965
kernel.sched_domain.cpu15.domain0.min_interval = 2
kernel.sched_domain.cpu15.domain0.name = SMT
kernel.sched_domain.cpu15.domain1.busy_factor = 32
kernel.sched_domain.cpu15.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu15.domain1.flags = 4655
kernel.sched_domain.cpu15.domain1.imbalance_pct = 117
kernel.sched_domain.cpu15.domain1.max_interval = 96
kernel.sched_domain.cpu15.domain1.max_newidle_lb_cost = 15395
kernel.sched_domain.cpu15.domain1.min_interval = 48
kernel.sched_domain.cpu15.domain1.name = MC
kernel.sched_domain.cpu16.domain0.busy_factor = 32
kernel.sched_domain.cpu16.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu16.domain0.flags = 4783
kernel.sched_domain.cpu16.domain0.imbalance_pct = 110
kernel.sched_domain.cpu16.domain0.max_interval = 4
kernel.sched_domain.cpu16.domain0.max_newidle_lb_cost = 367
kernel.sched_domain.cpu16.domain0.min_interval = 2
kernel.sched_domain.cpu16.domain0.name = SMT
kernel.sched_domain.cpu16.domain1.busy_factor = 32
kernel.sched_domain.cpu16.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu16.domain1.flags = 4655
kernel.sched_domain.cpu16.domain1.imbalance_pct = 117
kernel.sched_domain.cpu16.domain1.max_interval = 96
kernel.sched_domain.cpu16.domain1.max_newidle_lb_cost = 1605
kernel.sched_domain.cpu16.domain1.min_interval = 48
kernel.sched_domain.cpu16.domain1.name = MC
kernel.sched_domain.cpu17.domain0.busy_factor = 32
kernel.sched_domain.cpu17.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu17.domain0.flags = 4783
kernel.sched_domain.cpu17.domain0.imbalance_pct = 110
kernel.sched_domain.cpu17.domain0.max_interval = 4
kernel.sched_domain.cpu17.domain0.max_newidle_lb_cost = 33
kernel.sched_domain.cpu17.domain0.min_interval = 2
kernel.sched_domain.cpu17.domain0.name = SMT
kernel.sched_domain.cpu17.domain1.busy_factor = 32
kernel.sched_domain.cpu17.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu17.domain1.flags = 4655
kernel.sched_domain.cpu17.domain1.imbalance_pct = 117
kernel.sched_domain.cpu17.domain1.max_interval = 96
kernel.sched_domain.cpu17.domain1.max_newidle_lb_cost = 506
kernel.sched_domain.cpu17.domain1.min_interval = 48
kernel.sched_domain.cpu17.domain1.name = MC
kernel.sched_domain.cpu18.domain0.busy_factor = 32
kernel.sched_domain.cpu18.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu18.domain0.flags = 4783
kernel.sched_domain.cpu18.domain0.imbalance_pct = 110
kernel.sched_domain.cpu18.domain0.max_interval = 4
kernel.sched_domain.cpu18.domain0.max_newidle_lb_cost = 0
kernel.sched_domain.cpu18.domain0.min_interval = 2
kernel.sched_domain.cpu18.domain0.name = SMT
kernel.sched_domain.cpu18.domain1.busy_factor = 32
kernel.sched_domain.cpu18.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu18.domain1.flags = 4655
kernel.sched_domain.cpu18.domain1.imbalance_pct = 117
kernel.sched_domain.cpu18.domain1.max_interval = 96
kernel.sched_domain.cpu18.domain1.max_newidle_lb_cost = 582
kernel.sched_domain.cpu18.domain1.min_interval = 48
kernel.sched_domain.cpu18.domain1.name = MC
kernel.sched_domain.cpu19.domain0.busy_factor = 32
kernel.sched_domain.cpu19.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu19.domain0.flags = 4783
kernel.sched_domain.cpu19.domain0.imbalance_pct = 110
kernel.sched_domain.cpu19.domain0.max_interval = 4
kernel.sched_domain.cpu19.domain0.max_newidle_lb_cost = 1605
kernel.sched_domain.cpu19.domain0.min_interval = 2
kernel.sched_domain.cpu19.domain0.name = SMT
kernel.sched_domain.cpu19.domain1.busy_factor = 32
kernel.sched_domain.cpu19.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu19.domain1.flags = 4655
kernel.sched_domain.cpu19.domain1.imbalance_pct = 117
kernel.sched_domain.cpu19.domain1.max_interval = 96
kernel.sched_domain.cpu19.domain1.max_newidle_lb_cost = 1385
kernel.sched_domain.cpu19.domain1.min_interval = 48
kernel.sched_domain.cpu19.domain1.name = MC
kernel.sched_domain.cpu2.domain0.busy_factor = 32
kernel.sched_domain.cpu2.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu2.domain0.flags = 4783
kernel.sched_domain.cpu2.domain0.imbalance_pct = 110
kernel.sched_domain.cpu2.domain0.max_interval = 4
kernel.sched_domain.cpu2.domain0.max_newidle_lb_cost = 269
kernel.sched_domain.cpu2.domain0.min_interval = 2
kernel.sched_domain.cpu2.domain0.name = SMT
kernel.sched_domain.cpu2.domain1.busy_factor = 32
kernel.sched_domain.cpu2.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu2.domain1.flags = 4655
kernel.sched_domain.cpu2.domain1.imbalance_pct = 117
kernel.sched_domain.cpu2.domain1.max_interval = 96
kernel.sched_domain.cpu2.domain1.max_newidle_lb_cost = 3490
kernel.sched_domain.cpu2.domain1.min_interval = 48
kernel.sched_domain.cpu2.domain1.name = MC
kernel.sched_domain.cpu20.domain0.busy_factor = 32
kernel.sched_domain.cpu20.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu20.domain0.flags = 4783
kernel.sched_domain.cpu20.domain0.imbalance_pct = 110
kernel.sched_domain.cpu20.domain0.max_interval = 4
kernel.sched_domain.cpu20.domain0.max_newidle_lb_cost = 1665
kernel.sched_domain.cpu20.domain0.min_interval = 2
kernel.sched_domain.cpu20.domain0.name = SMT
kernel.sched_domain.cpu20.domain1.busy_factor = 32
kernel.sched_domain.cpu20.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu20.domain1.flags = 4655
kernel.sched_domain.cpu20.domain1.imbalance_pct = 117
kernel.sched_domain.cpu20.domain1.max_interval = 96
kernel.sched_domain.cpu20.domain1.max_newidle_lb_cost = 6083
kernel.sched_domain.cpu20.domain1.min_interval = 48
kernel.sched_domain.cpu20.domain1.name = MC
kernel.sched_domain.cpu21.domain0.busy_factor = 32
kernel.sched_domain.cpu21.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu21.domain0.flags = 4783
kernel.sched_domain.cpu21.domain0.imbalance_pct = 110
kernel.sched_domain.cpu21.domain0.max_interval = 4
kernel.sched_domain.cpu21.domain0.max_newidle_lb_cost = 1676
kernel.sched_domain.cpu21.domain0.min_interval = 2
kernel.sched_domain.cpu21.domain0.name = SMT
kernel.sched_domain.cpu21.domain1.busy_factor = 32
kernel.sched_domain.cpu21.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu21.domain1.flags = 4655
kernel.sched_domain.cpu21.domain1.imbalance_pct = 117
kernel.sched_domain.cpu21.domain1.max_interval = 96
kernel.sched_domain.cpu21.domain1.max_newidle_lb_cost = 22147
kernel.sched_domain.cpu21.domain1.min_interval = 48
kernel.sched_domain.cpu21.domain1.name = MC
kernel.sched_domain.cpu22.domain0.busy_factor = 32
kernel.sched_domain.cpu22.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu22.domain0.flags = 4783
kernel.sched_domain.cpu22.domain0.imbalance_pct = 110
kernel.sched_domain.cpu22.domain0.max_interval = 4
kernel.sched_domain.cpu22.domain0.max_newidle_lb_cost = 1275
kernel.sched_domain.cpu22.domain0.min_interval = 2
kernel.sched_domain.cpu22.domain0.name = SMT
kernel.sched_domain.cpu22.domain1.busy_factor = 32
kernel.sched_domain.cpu22.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu22.domain1.flags = 4655
kernel.sched_domain.cpu22.domain1.imbalance_pct = 117
kernel.sched_domain.cpu22.domain1.max_interval = 96
kernel.sched_domain.cpu22.domain1.max_newidle_lb_cost = 12732
kernel.sched_domain.cpu22.domain1.min_interval = 48
kernel.sched_domain.cpu22.domain1.name = MC
kernel.sched_domain.cpu23.domain0.busy_factor = 32
kernel.sched_domain.cpu23.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu23.domain0.flags = 4783
kernel.sched_domain.cpu23.domain0.imbalance_pct = 110
kernel.sched_domain.cpu23.domain0.max_interval = 4
kernel.sched_domain.cpu23.domain0.max_newidle_lb_cost = 708
kernel.sched_domain.cpu23.domain0.min_interval = 2
kernel.sched_domain.cpu23.domain0.name = SMT
kernel.sched_domain.cpu23.domain1.busy_factor = 32
kernel.sched_domain.cpu23.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu23.domain1.flags = 4655
kernel.sched_domain.cpu23.domain1.imbalance_pct = 117
kernel.sched_domain.cpu23.domain1.max_interval = 96
kernel.sched_domain.cpu23.domain1.max_newidle_lb_cost = 4719
kernel.sched_domain.cpu23.domain1.min_interval = 48
kernel.sched_domain.cpu23.domain1.name = MC
kernel.sched_domain.cpu24.domain0.busy_factor = 32
kernel.sched_domain.cpu24.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu24.domain0.flags = 4783
kernel.sched_domain.cpu24.domain0.imbalance_pct = 110
kernel.sched_domain.cpu24.domain0.max_interval = 4
kernel.sched_domain.cpu24.domain0.max_newidle_lb_cost = 79
kernel.sched_domain.cpu24.domain0.min_interval = 2
kernel.sched_domain.cpu24.domain0.name = SMT
kernel.sched_domain.cpu24.domain1.busy_factor = 32
kernel.sched_domain.cpu24.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu24.domain1.flags = 4655
kernel.sched_domain.cpu24.domain1.imbalance_pct = 117
kernel.sched_domain.cpu24.domain1.max_interval = 96
kernel.sched_domain.cpu24.domain1.max_newidle_lb_cost = 1064
kernel.sched_domain.cpu24.domain1.min_interval = 48
kernel.sched_domain.cpu24.domain1.name = MC
kernel.sched_domain.cpu25.domain0.busy_factor = 32
kernel.sched_domain.cpu25.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu25.domain0.flags = 4783
kernel.sched_domain.cpu25.domain0.imbalance_pct = 110
kernel.sched_domain.cpu25.domain0.max_interval = 4
kernel.sched_domain.cpu25.domain0.max_newidle_lb_cost = 0
kernel.sched_domain.cpu25.domain0.min_interval = 2
kernel.sched_domain.cpu25.domain0.name = SMT
kernel.sched_domain.cpu25.domain1.busy_factor = 32
kernel.sched_domain.cpu25.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu25.domain1.flags = 4655
kernel.sched_domain.cpu25.domain1.imbalance_pct = 117
kernel.sched_domain.cpu25.domain1.max_interval = 96
kernel.sched_domain.cpu25.domain1.max_newidle_lb_cost = 0
kernel.sched_domain.cpu25.domain1.min_interval = 48
kernel.sched_domain.cpu25.domain1.name = MC
kernel.sched_domain.cpu26.domain0.busy_factor = 32
kernel.sched_domain.cpu26.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu26.domain0.flags = 4783
kernel.sched_domain.cpu26.domain0.imbalance_pct = 110
kernel.sched_domain.cpu26.domain0.max_interval = 4
kernel.sched_domain.cpu26.domain0.max_newidle_lb_cost = 1794
kernel.sched_domain.cpu26.domain0.min_interval = 2
kernel.sched_domain.cpu26.domain0.name = SMT
kernel.sched_domain.cpu26.domain1.busy_factor = 32
kernel.sched_domain.cpu26.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu26.domain1.flags = 4655
kernel.sched_domain.cpu26.domain1.imbalance_pct = 117
kernel.sched_domain.cpu26.domain1.max_interval = 96
kernel.sched_domain.cpu26.domain1.max_newidle_lb_cost = 7461
kernel.sched_domain.cpu26.domain1.min_interval = 48
kernel.sched_domain.cpu26.domain1.name = MC
kernel.sched_domain.cpu27.domain0.busy_factor = 32
kernel.sched_domain.cpu27.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu27.domain0.flags = 4783
kernel.sched_domain.cpu27.domain0.imbalance_pct = 110
kernel.sched_domain.cpu27.domain0.max_interval = 4
kernel.sched_domain.cpu27.domain0.max_newidle_lb_cost = 271
kernel.sched_domain.cpu27.domain0.min_interval = 2
kernel.sched_domain.cpu27.domain0.name = SMT
kernel.sched_domain.cpu27.domain1.busy_factor = 32
kernel.sched_domain.cpu27.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu27.domain1.flags = 4655
kernel.sched_domain.cpu27.domain1.imbalance_pct = 117
kernel.sched_domain.cpu27.domain1.max_interval = 96
kernel.sched_domain.cpu27.domain1.max_newidle_lb_cost = 2378
kernel.sched_domain.cpu27.domain1.min_interval = 48
kernel.sched_domain.cpu27.domain1.name = MC
kernel.sched_domain.cpu28.domain0.busy_factor = 32
kernel.sched_domain.cpu28.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu28.domain0.flags = 4783
kernel.sched_domain.cpu28.domain0.imbalance_pct = 110
kernel.sched_domain.cpu28.domain0.max_interval = 4
kernel.sched_domain.cpu28.domain0.max_newidle_lb_cost = 1204
kernel.sched_domain.cpu28.domain0.min_interval = 2
kernel.sched_domain.cpu28.domain0.name = SMT
kernel.sched_domain.cpu28.domain1.busy_factor = 32
kernel.sched_domain.cpu28.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu28.domain1.flags = 4655
kernel.sched_domain.cpu28.domain1.imbalance_pct = 117
kernel.sched_domain.cpu28.domain1.max_interval = 96
kernel.sched_domain.cpu28.domain1.max_newidle_lb_cost = 7083
kernel.sched_domain.cpu28.domain1.min_interval = 48
kernel.sched_domain.cpu28.domain1.name = MC
kernel.sched_domain.cpu29.domain0.busy_factor = 32
kernel.sched_domain.cpu29.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu29.domain0.flags = 4783
kernel.sched_domain.cpu29.domain0.imbalance_pct = 110
kernel.sched_domain.cpu29.domain0.max_interval = 4
kernel.sched_domain.cpu29.domain0.max_newidle_lb_cost = 1301
kernel.sched_domain.cpu29.domain0.min_interval = 2
kernel.sched_domain.cpu29.domain0.name = SMT
kernel.sched_domain.cpu29.domain1.busy_factor = 32
kernel.sched_domain.cpu29.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu29.domain1.flags = 4655
kernel.sched_domain.cpu29.domain1.imbalance_pct = 117
kernel.sched_domain.cpu29.domain1.max_interval = 96
kernel.sched_domain.cpu29.domain1.max_newidle_lb_cost = 8313
kernel.sched_domain.cpu29.domain1.min_interval = 48
kernel.sched_domain.cpu29.domain1.name = MC
kernel.sched_domain.cpu3.domain0.busy_factor = 32
kernel.sched_domain.cpu3.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu3.domain0.flags = 4783
kernel.sched_domain.cpu3.domain0.imbalance_pct = 110
kernel.sched_domain.cpu3.domain0.max_interval = 4
kernel.sched_domain.cpu3.domain0.max_newidle_lb_cost = 47
kernel.sched_domain.cpu3.domain0.min_interval = 2
kernel.sched_domain.cpu3.domain0.name = SMT
kernel.sched_domain.cpu3.domain1.busy_factor = 32
kernel.sched_domain.cpu3.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu3.domain1.flags = 4655
kernel.sched_domain.cpu3.domain1.imbalance_pct = 117
kernel.sched_domain.cpu3.domain1.max_interval = 96
kernel.sched_domain.cpu3.domain1.max_newidle_lb_cost = 367
kernel.sched_domain.cpu3.domain1.min_interval = 48
kernel.sched_domain.cpu3.domain1.name = MC
kernel.sched_domain.cpu30.domain0.busy_factor = 32
kernel.sched_domain.cpu30.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu30.domain0.flags = 4783
kernel.sched_domain.cpu30.domain0.imbalance_pct = 110
kernel.sched_domain.cpu30.domain0.max_interval = 4
kernel.sched_domain.cpu30.domain0.max_newidle_lb_cost = 600
kernel.sched_domain.cpu30.domain0.min_interval = 2
kernel.sched_domain.cpu30.domain0.name = SMT
kernel.sched_domain.cpu30.domain1.busy_factor = 32
kernel.sched_domain.cpu30.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu30.domain1.flags = 4655
kernel.sched_domain.cpu30.domain1.imbalance_pct = 117
kernel.sched_domain.cpu30.domain1.max_interval = 96
kernel.sched_domain.cpu30.domain1.max_newidle_lb_cost = 2303
kernel.sched_domain.cpu30.domain1.min_interval = 48
kernel.sched_domain.cpu30.domain1.name = MC
kernel.sched_domain.cpu31.domain0.busy_factor = 32
kernel.sched_domain.cpu31.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu31.domain0.flags = 4783
kernel.sched_domain.cpu31.domain0.imbalance_pct = 110
kernel.sched_domain.cpu31.domain0.max_interval = 4
kernel.sched_domain.cpu31.domain0.max_newidle_lb_cost = 791
kernel.sched_domain.cpu31.domain0.min_interval = 2
kernel.sched_domain.cpu31.domain0.name = SMT
kernel.sched_domain.cpu31.domain1.busy_factor = 32
kernel.sched_domain.cpu31.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu31.domain1.flags = 4655
kernel.sched_domain.cpu31.domain1.imbalance_pct = 117
kernel.sched_domain.cpu31.domain1.max_interval = 96
kernel.sched_domain.cpu31.domain1.max_newidle_lb_cost = 5335
kernel.sched_domain.cpu31.domain1.min_interval = 48
kernel.sched_domain.cpu31.domain1.name = MC
kernel.sched_domain.cpu32.domain0.busy_factor = 32
kernel.sched_domain.cpu32.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu32.domain0.flags = 4783
kernel.sched_domain.cpu32.domain0.imbalance_pct = 110
kernel.sched_domain.cpu32.domain0.max_interval = 4
kernel.sched_domain.cpu32.domain0.max_newidle_lb_cost = 1926
kernel.sched_domain.cpu32.domain0.min_interval = 2
kernel.sched_domain.cpu32.domain0.name = SMT
kernel.sched_domain.cpu32.domain1.busy_factor = 32
kernel.sched_domain.cpu32.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu32.domain1.flags = 4655
kernel.sched_domain.cpu32.domain1.imbalance_pct = 117
kernel.sched_domain.cpu32.domain1.max_interval = 96
kernel.sched_domain.cpu32.domain1.max_newidle_lb_cost = 11608
kernel.sched_domain.cpu32.domain1.min_interval = 48
kernel.sched_domain.cpu32.domain1.name = MC
kernel.sched_domain.cpu33.domain0.busy_factor = 32
kernel.sched_domain.cpu33.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu33.domain0.flags = 4783
kernel.sched_domain.cpu33.domain0.imbalance_pct = 110
kernel.sched_domain.cpu33.domain0.max_interval = 4
kernel.sched_domain.cpu33.domain0.max_newidle_lb_cost = 1505
kernel.sched_domain.cpu33.domain0.min_interval = 2
kernel.sched_domain.cpu33.domain0.name = SMT
kernel.sched_domain.cpu33.domain1.busy_factor = 32
kernel.sched_domain.cpu33.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu33.domain1.flags = 4655
kernel.sched_domain.cpu33.domain1.imbalance_pct = 117
kernel.sched_domain.cpu33.domain1.max_interval = 96
kernel.sched_domain.cpu33.domain1.max_newidle_lb_cost = 15512
kernel.sched_domain.cpu33.domain1.min_interval = 48
kernel.sched_domain.cpu33.domain1.name = MC
kernel.sched_domain.cpu34.domain0.busy_factor = 32
kernel.sched_domain.cpu34.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu34.domain0.flags = 4783
kernel.sched_domain.cpu34.domain0.imbalance_pct = 110
kernel.sched_domain.cpu34.domain0.max_interval = 4
kernel.sched_domain.cpu34.domain0.max_newidle_lb_cost = 1150
kernel.sched_domain.cpu34.domain0.min_interval = 2
kernel.sched_domain.cpu34.domain0.name = SMT
kernel.sched_domain.cpu34.domain1.busy_factor = 32
kernel.sched_domain.cpu34.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu34.domain1.flags = 4655
kernel.sched_domain.cpu34.domain1.imbalance_pct = 117
kernel.sched_domain.cpu34.domain1.max_interval = 96
kernel.sched_domain.cpu34.domain1.max_newidle_lb_cost = 7719
kernel.sched_domain.cpu34.domain1.min_interval = 48
kernel.sched_domain.cpu34.domain1.name = MC
kernel.sched_domain.cpu35.domain0.busy_factor = 32
kernel.sched_domain.cpu35.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu35.domain0.flags = 4783
kernel.sched_domain.cpu35.domain0.imbalance_pct = 110
kernel.sched_domain.cpu35.domain0.max_interval = 4
kernel.sched_domain.cpu35.domain0.max_newidle_lb_cost = 1930
kernel.sched_domain.cpu35.domain0.min_interval = 2
kernel.sched_domain.cpu35.domain0.name = SMT
kernel.sched_domain.cpu35.domain1.busy_factor = 32
kernel.sched_domain.cpu35.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu35.domain1.flags = 4655
kernel.sched_domain.cpu35.domain1.imbalance_pct = 117
kernel.sched_domain.cpu35.domain1.max_interval = 96
kernel.sched_domain.cpu35.domain1.max_newidle_lb_cost = 6693
kernel.sched_domain.cpu35.domain1.min_interval = 48
kernel.sched_domain.cpu35.domain1.name = MC
kernel.sched_domain.cpu36.domain0.busy_factor = 32
kernel.sched_domain.cpu36.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu36.domain0.flags = 4783
kernel.sched_domain.cpu36.domain0.imbalance_pct = 110
kernel.sched_domain.cpu36.domain0.max_interval = 4
kernel.sched_domain.cpu36.domain0.max_newidle_lb_cost = 545
kernel.sched_domain.cpu36.domain0.min_interval = 2
kernel.sched_domain.cpu36.domain0.name = SMT
kernel.sched_domain.cpu36.domain1.busy_factor = 32
kernel.sched_domain.cpu36.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu36.domain1.flags = 4655
kernel.sched_domain.cpu36.domain1.imbalance_pct = 117
kernel.sched_domain.cpu36.domain1.max_interval = 96
kernel.sched_domain.cpu36.domain1.max_newidle_lb_cost = 4101
kernel.sched_domain.cpu36.domain1.min_interval = 48
kernel.sched_domain.cpu36.domain1.name = MC
kernel.sched_domain.cpu37.domain0.busy_factor = 32
kernel.sched_domain.cpu37.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu37.domain0.flags = 4783
kernel.sched_domain.cpu37.domain0.imbalance_pct = 110
kernel.sched_domain.cpu37.domain0.max_interval = 4
kernel.sched_domain.cpu37.domain0.max_newidle_lb_cost = 1054
kernel.sched_domain.cpu37.domain0.min_interval = 2
kernel.sched_domain.cpu37.domain0.name = SMT
kernel.sched_domain.cpu37.domain1.busy_factor = 32
kernel.sched_domain.cpu37.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu37.domain1.flags = 4655
kernel.sched_domain.cpu37.domain1.imbalance_pct = 117
kernel.sched_domain.cpu37.domain1.max_interval = 96
kernel.sched_domain.cpu37.domain1.max_newidle_lb_cost = 5982
kernel.sched_domain.cpu37.domain1.min_interval = 48
kernel.sched_domain.cpu37.domain1.name = MC
kernel.sched_domain.cpu38.domain0.busy_factor = 32
kernel.sched_domain.cpu38.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu38.domain0.flags = 4783
kernel.sched_domain.cpu38.domain0.imbalance_pct = 110
kernel.sched_domain.cpu38.domain0.max_interval = 4
kernel.sched_domain.cpu38.domain0.max_newidle_lb_cost = 229
kernel.sched_domain.cpu38.domain0.min_interval = 2
kernel.sched_domain.cpu38.domain0.name = SMT
kernel.sched_domain.cpu38.domain1.busy_factor = 32
kernel.sched_domain.cpu38.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu38.domain1.flags = 4655
kernel.sched_domain.cpu38.domain1.imbalance_pct = 117
kernel.sched_domain.cpu38.domain1.max_interval = 96
kernel.sched_domain.cpu38.domain1.max_newidle_lb_cost = 2758
kernel.sched_domain.cpu38.domain1.min_interval = 48
kernel.sched_domain.cpu38.domain1.name = MC
kernel.sched_domain.cpu39.domain0.busy_factor = 32
kernel.sched_domain.cpu39.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu39.domain0.flags = 4783
kernel.sched_domain.cpu39.domain0.imbalance_pct = 110
kernel.sched_domain.cpu39.domain0.max_interval = 4
kernel.sched_domain.cpu39.domain0.max_newidle_lb_cost = 1698
kernel.sched_domain.cpu39.domain0.min_interval = 2
kernel.sched_domain.cpu39.domain0.name = SMT
kernel.sched_domain.cpu39.domain1.busy_factor = 32
kernel.sched_domain.cpu39.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu39.domain1.flags = 4655
kernel.sched_domain.cpu39.domain1.imbalance_pct = 117
kernel.sched_domain.cpu39.domain1.max_interval = 96
kernel.sched_domain.cpu39.domain1.max_newidle_lb_cost = 9714
kernel.sched_domain.cpu39.domain1.min_interval = 48
kernel.sched_domain.cpu39.domain1.name = MC
kernel.sched_domain.cpu4.domain0.busy_factor = 32
kernel.sched_domain.cpu4.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu4.domain0.flags = 4783
kernel.sched_domain.cpu4.domain0.imbalance_pct = 110
kernel.sched_domain.cpu4.domain0.max_interval = 4
kernel.sched_domain.cpu4.domain0.max_newidle_lb_cost = 143
kernel.sched_domain.cpu4.domain0.min_interval = 2
kernel.sched_domain.cpu4.domain0.name = SMT
kernel.sched_domain.cpu4.domain1.busy_factor = 32
kernel.sched_domain.cpu4.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu4.domain1.flags = 4655
kernel.sched_domain.cpu4.domain1.imbalance_pct = 117
kernel.sched_domain.cpu4.domain1.max_interval = 96
kernel.sched_domain.cpu4.domain1.max_newidle_lb_cost = 1224
kernel.sched_domain.cpu4.domain1.min_interval = 48
kernel.sched_domain.cpu4.domain1.name = MC
kernel.sched_domain.cpu40.domain0.busy_factor = 32
kernel.sched_domain.cpu40.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu40.domain0.flags = 4783
kernel.sched_domain.cpu40.domain0.imbalance_pct = 110
kernel.sched_domain.cpu40.domain0.max_interval = 4
kernel.sched_domain.cpu40.domain0.max_newidle_lb_cost = 1559
kernel.sched_domain.cpu40.domain0.min_interval = 2
kernel.sched_domain.cpu40.domain0.name = SMT
kernel.sched_domain.cpu40.domain1.busy_factor = 32
kernel.sched_domain.cpu40.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu40.domain1.flags = 4655
kernel.sched_domain.cpu40.domain1.imbalance_pct = 117
kernel.sched_domain.cpu40.domain1.max_interval = 96
kernel.sched_domain.cpu40.domain1.max_newidle_lb_cost = 5159
kernel.sched_domain.cpu40.domain1.min_interval = 48
kernel.sched_domain.cpu40.domain1.name = MC
kernel.sched_domain.cpu41.domain0.busy_factor = 32
kernel.sched_domain.cpu41.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu41.domain0.flags = 4783
kernel.sched_domain.cpu41.domain0.imbalance_pct = 110
kernel.sched_domain.cpu41.domain0.max_interval = 4
kernel.sched_domain.cpu41.domain0.max_newidle_lb_cost = 721
kernel.sched_domain.cpu41.domain0.min_interval = 2
kernel.sched_domain.cpu41.domain0.name = SMT
kernel.sched_domain.cpu41.domain1.busy_factor = 32
kernel.sched_domain.cpu41.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu41.domain1.flags = 4655
kernel.sched_domain.cpu41.domain1.imbalance_pct = 117
kernel.sched_domain.cpu41.domain1.max_interval = 96
kernel.sched_domain.cpu41.domain1.max_newidle_lb_cost = 5866
kernel.sched_domain.cpu41.domain1.min_interval = 48
kernel.sched_domain.cpu41.domain1.name = MC
kernel.sched_domain.cpu42.domain0.busy_factor = 32
kernel.sched_domain.cpu42.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu42.domain0.flags = 4783
kernel.sched_domain.cpu42.domain0.imbalance_pct = 110
kernel.sched_domain.cpu42.domain0.max_interval = 4
kernel.sched_domain.cpu42.domain0.max_newidle_lb_cost = 150
kernel.sched_domain.cpu42.domain0.min_interval = 2
kernel.sched_domain.cpu42.domain0.name = SMT
kernel.sched_domain.cpu42.domain1.busy_factor = 32
kernel.sched_domain.cpu42.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu42.domain1.flags = 4655
kernel.sched_domain.cpu42.domain1.imbalance_pct = 117
kernel.sched_domain.cpu42.domain1.max_interval = 96
kernel.sched_domain.cpu42.domain1.max_newidle_lb_cost = 672
kernel.sched_domain.cpu42.domain1.min_interval = 48
kernel.sched_domain.cpu42.domain1.name = MC
kernel.sched_domain.cpu43.domain0.busy_factor = 32
kernel.sched_domain.cpu43.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu43.domain0.flags = 4783
kernel.sched_domain.cpu43.domain0.imbalance_pct = 110
kernel.sched_domain.cpu43.domain0.max_interval = 4
kernel.sched_domain.cpu43.domain0.max_newidle_lb_cost = 1015
kernel.sched_domain.cpu43.domain0.min_interval = 2
kernel.sched_domain.cpu43.domain0.name = SMT
kernel.sched_domain.cpu43.domain1.busy_factor = 32
kernel.sched_domain.cpu43.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu43.domain1.flags = 4655
kernel.sched_domain.cpu43.domain1.imbalance_pct = 117
kernel.sched_domain.cpu43.domain1.max_interval = 96
kernel.sched_domain.cpu43.domain1.max_newidle_lb_cost = 10008
kernel.sched_domain.cpu43.domain1.min_interval = 48
kernel.sched_domain.cpu43.domain1.name = MC
kernel.sched_domain.cpu44.domain0.busy_factor = 32
kernel.sched_domain.cpu44.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu44.domain0.flags = 4783
kernel.sched_domain.cpu44.domain0.imbalance_pct = 110
kernel.sched_domain.cpu44.domain0.max_interval = 4
kernel.sched_domain.cpu44.domain0.max_newidle_lb_cost = 0
kernel.sched_domain.cpu44.domain0.min_interval = 2
kernel.sched_domain.cpu44.domain0.name = SMT
kernel.sched_domain.cpu44.domain1.busy_factor = 32
kernel.sched_domain.cpu44.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu44.domain1.flags = 4655
kernel.sched_domain.cpu44.domain1.imbalance_pct = 117
kernel.sched_domain.cpu44.domain1.max_interval = 96
kernel.sched_domain.cpu44.domain1.max_newidle_lb_cost = 440
kernel.sched_domain.cpu44.domain1.min_interval = 48
kernel.sched_domain.cpu44.domain1.name = MC
kernel.sched_domain.cpu45.domain0.busy_factor = 32
kernel.sched_domain.cpu45.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu45.domain0.flags = 4783
kernel.sched_domain.cpu45.domain0.imbalance_pct = 110
kernel.sched_domain.cpu45.domain0.max_interval = 4
kernel.sched_domain.cpu45.domain0.max_newidle_lb_cost = 31
kernel.sched_domain.cpu45.domain0.min_interval = 2
kernel.sched_domain.cpu45.domain0.name = SMT
kernel.sched_domain.cpu45.domain1.busy_factor = 32
kernel.sched_domain.cpu45.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu45.domain1.flags = 4655
kernel.sched_domain.cpu45.domain1.imbalance_pct = 117
kernel.sched_domain.cpu45.domain1.max_interval = 96
kernel.sched_domain.cpu45.domain1.max_newidle_lb_cost = 495
kernel.sched_domain.cpu45.domain1.min_interval = 48
kernel.sched_domain.cpu45.domain1.name = MC
kernel.sched_domain.cpu46.domain0.busy_factor = 32
kernel.sched_domain.cpu46.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu46.domain0.flags = 4783
kernel.sched_domain.cpu46.domain0.imbalance_pct = 110
kernel.sched_domain.cpu46.domain0.max_interval = 4
kernel.sched_domain.cpu46.domain0.max_newidle_lb_cost = 402
kernel.sched_domain.cpu46.domain0.min_interval = 2
kernel.sched_domain.cpu46.domain0.name = SMT
kernel.sched_domain.cpu46.domain1.busy_factor = 32
kernel.sched_domain.cpu46.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu46.domain1.flags = 4655
kernel.sched_domain.cpu46.domain1.imbalance_pct = 117
kernel.sched_domain.cpu46.domain1.max_interval = 96
kernel.sched_domain.cpu46.domain1.max_newidle_lb_cost = 1802
kernel.sched_domain.cpu46.domain1.min_interval = 48
kernel.sched_domain.cpu46.domain1.name = MC
kernel.sched_domain.cpu47.domain0.busy_factor = 32
kernel.sched_domain.cpu47.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu47.domain0.flags = 4783
kernel.sched_domain.cpu47.domain0.imbalance_pct = 110
kernel.sched_domain.cpu47.domain0.max_interval = 4
kernel.sched_domain.cpu47.domain0.max_newidle_lb_cost = 842
kernel.sched_domain.cpu47.domain0.min_interval = 2
kernel.sched_domain.cpu47.domain0.name = SMT
kernel.sched_domain.cpu47.domain1.busy_factor = 32
kernel.sched_domain.cpu47.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu47.domain1.flags = 4655
kernel.sched_domain.cpu47.domain1.imbalance_pct = 117
kernel.sched_domain.cpu47.domain1.max_interval = 96
kernel.sched_domain.cpu47.domain1.max_newidle_lb_cost = 4531
kernel.sched_domain.cpu47.domain1.min_interval = 48
kernel.sched_domain.cpu47.domain1.name = MC
kernel.sched_domain.cpu5.domain0.busy_factor = 32
kernel.sched_domain.cpu5.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu5.domain0.flags = 4783
kernel.sched_domain.cpu5.domain0.imbalance_pct = 110
kernel.sched_domain.cpu5.domain0.max_interval = 4
kernel.sched_domain.cpu5.domain0.max_newidle_lb_cost = 1780
kernel.sched_domain.cpu5.domain0.min_interval = 2
kernel.sched_domain.cpu5.domain0.name = SMT
kernel.sched_domain.cpu5.domain1.busy_factor = 32
kernel.sched_domain.cpu5.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu5.domain1.flags = 4655
kernel.sched_domain.cpu5.domain1.imbalance_pct = 117
kernel.sched_domain.cpu5.domain1.max_interval = 96
kernel.sched_domain.cpu5.domain1.max_newidle_lb_cost = 7787
kernel.sched_domain.cpu5.domain1.min_interval = 48
kernel.sched_domain.cpu5.domain1.name = MC
kernel.sched_domain.cpu6.domain0.busy_factor = 32
kernel.sched_domain.cpu6.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu6.domain0.flags = 4783
kernel.sched_domain.cpu6.domain0.imbalance_pct = 110
kernel.sched_domain.cpu6.domain0.max_interval = 4
kernel.sched_domain.cpu6.domain0.max_newidle_lb_cost = 524
kernel.sched_domain.cpu6.domain0.min_interval = 2
kernel.sched_domain.cpu6.domain0.name = SMT
kernel.sched_domain.cpu6.domain1.busy_factor = 32
kernel.sched_domain.cpu6.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu6.domain1.flags = 4655
kernel.sched_domain.cpu6.domain1.imbalance_pct = 117
kernel.sched_domain.cpu6.domain1.max_interval = 96
kernel.sched_domain.cpu6.domain1.max_newidle_lb_cost = 2517
kernel.sched_domain.cpu6.domain1.min_interval = 48
kernel.sched_domain.cpu6.domain1.name = MC
kernel.sched_domain.cpu7.domain0.busy_factor = 32
kernel.sched_domain.cpu7.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu7.domain0.flags = 4783
kernel.sched_domain.cpu7.domain0.imbalance_pct = 110
kernel.sched_domain.cpu7.domain0.max_interval = 4
kernel.sched_domain.cpu7.domain0.max_newidle_lb_cost = 593
kernel.sched_domain.cpu7.domain0.min_interval = 2
kernel.sched_domain.cpu7.domain0.name = SMT
kernel.sched_domain.cpu7.domain1.busy_factor = 32
kernel.sched_domain.cpu7.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu7.domain1.flags = 4655
kernel.sched_domain.cpu7.domain1.imbalance_pct = 117
kernel.sched_domain.cpu7.domain1.max_interval = 96
kernel.sched_domain.cpu7.domain1.max_newidle_lb_cost = 8795
kernel.sched_domain.cpu7.domain1.min_interval = 48
kernel.sched_domain.cpu7.domain1.name = MC
kernel.sched_domain.cpu8.domain0.busy_factor = 32
kernel.sched_domain.cpu8.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu8.domain0.flags = 4783
kernel.sched_domain.cpu8.domain0.imbalance_pct = 110
kernel.sched_domain.cpu8.domain0.max_interval = 4
kernel.sched_domain.cpu8.domain0.max_newidle_lb_cost = 1600
kernel.sched_domain.cpu8.domain0.min_interval = 2
kernel.sched_domain.cpu8.domain0.name = SMT
kernel.sched_domain.cpu8.domain1.busy_factor = 32
kernel.sched_domain.cpu8.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu8.domain1.flags = 4655
kernel.sched_domain.cpu8.domain1.imbalance_pct = 117
kernel.sched_domain.cpu8.domain1.max_interval = 96
kernel.sched_domain.cpu8.domain1.max_newidle_lb_cost = 6126
kernel.sched_domain.cpu8.domain1.min_interval = 48
kernel.sched_domain.cpu8.domain1.name = MC
kernel.sched_domain.cpu9.domain0.busy_factor = 32
kernel.sched_domain.cpu9.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu9.domain0.flags = 4783
kernel.sched_domain.cpu9.domain0.imbalance_pct = 110
kernel.sched_domain.cpu9.domain0.max_interval = 4
kernel.sched_domain.cpu9.domain0.max_newidle_lb_cost = 84
kernel.sched_domain.cpu9.domain0.min_interval = 2
kernel.sched_domain.cpu9.domain0.name = SMT
kernel.sched_domain.cpu9.domain1.busy_factor = 32
kernel.sched_domain.cpu9.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu9.domain1.flags = 4655
kernel.sched_domain.cpu9.domain1.imbalance_pct = 117
kernel.sched_domain.cpu9.domain1.max_interval = 96
kernel.sched_domain.cpu9.domain1.max_newidle_lb_cost = 914
kernel.sched_domain.cpu9.domain1.min_interval = 48
kernel.sched_domain.cpu9.domain1.name = MC
kernel.sched_latency_ns = 24000000
kernel.sched_migration_cost_ns = 500000
kernel.sched_min_granularity_ns = 3000000
kernel.sched_nr_migrate = 32
kernel.sched_rr_timeslice_ms = 100
kernel.sched_rt_period_us = 1000000
kernel.sched_rt_runtime_us = 950000
kernel.sched_schedstats = 0
kernel.sched_tunable_scaling = 1
kernel.sched_wakeup_granularity_ns = 4000000
kernel.seccomp.actions_avail = kill_process kill_thread trap errno user_notif trace log allow
kernel.seccomp.actions_logged = kill_process kill_thread trap errno user_notif trace log
kernel.sem = 32000	1024000000	500	32000
kernel.sem_next_id = -1
kernel.shm_next_id = -1
kernel.shm_rmid_forced = 0
kernel.shmall = 18446744073692774399
kernel.shmmax = 18446744073692774399
kernel.shmmni = 4096
kernel.soft_watchdog = 1
kernel.softlockup_all_cpu_backtrace = 0
kernel.softlockup_panic = 0
kernel.stack_tracer_enabled = 0
kernel.sysctl_writes_strict = 1
kernel.sysrq = 16
kernel.tainted = 0
kernel.threads-max = 755146
kernel.timer_migration = 1
kernel.traceoff_on_warning = 0
kernel.tracepoint_printk = 0
kernel.unknown_nmi_panic = 0
kernel.unprivileged_bpf_disabled = 0
kernel.usermodehelper.bset = 4294967295	63
kernel.usermodehelper.inheritable = 4294967295	63
kernel.version = #1 SMP Wed Nov 4 17:56:35 UTC 2020
kernel.watchdog = 1
kernel.watchdog_cpumask = 0-47
kernel.watchdog_thresh = 10
net.core.bpf_jit_enable = 1
net.core.bpf_jit_harden = 0
net.core.bpf_jit_kallsyms = 0
net.core.bpf_jit_limit = 398458880
net.core.busy_poll = 0
net.core.busy_read = 0
net.core.default_qdisc = pfifo_fast
net.core.dev_weight = 64
net.core.dev_weight_rx_bias = 1
net.core.dev_weight_tx_bias = 1
net.core.devconf_inherit_init_net = 0
net.core.fb_tunnels_only_for_init_net = 0
net.core.flow_limit_cpu_bitmap = 0000,00000000
net.core.flow_limit_table_len = 4096
net.core.gro_normal_batch = 8
net.core.high_order_alloc_disable = 0
net.core.max_skb_frags = 17
net.core.message_burst = 10
net.core.message_cost = 5
net.core.netdev_budget = 300
net.core.netdev_budget_usecs = 8000
net.core.netdev_max_backlog = 1000
net.core.netdev_rss_key = 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
net.core.netdev_tstamp_prequeue = 1
net.core.optmem_max = 20480
net.core.rmem_default = 212992
net.core.rmem_max = 212992
net.core.rps_sock_flow_entries = 0
net.core.somaxconn = 4096
net.core.tstamp_allow_data = 1
net.core.warnings = 0
net.core.wmem_default = 212992
net.core.wmem_max = 212992
net.core.xfrm_acq_expires = 30
net.core.xfrm_aevent_etime = 10
net.core.xfrm_aevent_rseqth = 2
net.core.xfrm_larval_drop = 1
net.ipv4.cipso_cache_bucket_size = 10
net.ipv4.cipso_cache_enable = 1
net.ipv4.cipso_rbm_optfmt = 0
net.ipv4.cipso_rbm_strictvalid = 1
net.ipv4.conf.all.accept_local = 0
net.ipv4.conf.all.accept_redirects = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.arp_accept = 0
net.ipv4.conf.all.arp_announce = 0
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.arp_ignore = 0
net.ipv4.conf.all.arp_notify = 0
net.ipv4.conf.all.bc_forwarding = 0
net.ipv4.conf.all.bootp_relay = 0
net.ipv4.conf.all.disable_policy = 0
net.ipv4.conf.all.disable_xfrm = 0
net.ipv4.conf.all.drop_gratuitous_arp = 0
net.ipv4.conf.all.drop_unicast_in_l2_multicast = 0
net.ipv4.conf.all.force_igmp_version = 0
net.ipv4.conf.all.forwarding = 0
net.ipv4.conf.all.igmpv2_unsolicited_report_interval = 10000
net.ipv4.conf.all.igmpv3_unsolicited_report_interval = 1000
net.ipv4.conf.all.ignore_routes_with_linkdown = 0
net.ipv4.conf.all.log_martians = 0
net.ipv4.conf.all.mc_forwarding = 0
net.ipv4.conf.all.medium_id = 0
net.ipv4.conf.all.promote_secondaries = 1
net.ipv4.conf.all.proxy_arp = 0
net.ipv4.conf.all.proxy_arp_pvlan = 0
net.ipv4.conf.all.route_localnet = 0
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.secure_redirects = 1
net.ipv4.conf.all.send_redirects = 1
net.ipv4.conf.all.shared_media = 1
net.ipv4.conf.all.src_valid_mark = 0
net.ipv4.conf.all.tag = 0
net.ipv4.conf.default.accept_local = 0
net.ipv4.conf.default.accept_redirects = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.default.arp_accept = 0
net.ipv4.conf.default.arp_announce = 0
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.arp_ignore = 0
net.ipv4.conf.default.arp_notify = 0
net.ipv4.conf.default.bc_forwarding = 0
net.ipv4.conf.default.bootp_relay = 0
net.ipv4.conf.default.disable_policy = 0
net.ipv4.conf.default.disable_xfrm = 0
net.ipv4.conf.default.drop_gratuitous_arp = 0
net.ipv4.conf.default.drop_unicast_in_l2_multicast = 0
net.ipv4.conf.default.force_igmp_version = 0
net.ipv4.conf.default.forwarding = 0
net.ipv4.conf.default.igmpv2_unsolicited_report_interval = 10000
net.ipv4.conf.default.igmpv3_unsolicited_report_interval = 1000
net.ipv4.conf.default.ignore_routes_with_linkdown = 0
net.ipv4.conf.default.log_martians = 0
net.ipv4.conf.default.mc_forwarding = 0
net.ipv4.conf.default.medium_id = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.conf.default.proxy_arp_pvlan = 0
net.ipv4.conf.default.route_localnet = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.secure_redirects = 1
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.default.shared_media = 1
net.ipv4.conf.default.src_valid_mark = 0
net.ipv4.conf.default.tag = 0
net.ipv4.conf.eth0.accept_local = 0
net.ipv4.conf.eth0.accept_redirects = 1
net.ipv4.conf.eth0.accept_source_route = 0
net.ipv4.conf.eth0.arp_accept = 0
net.ipv4.conf.eth0.arp_announce = 0
net.ipv4.conf.eth0.arp_filter = 0
net.ipv4.conf.eth0.arp_ignore = 0
net.ipv4.conf.eth0.arp_notify = 0
net.ipv4.conf.eth0.bc_forwarding = 0
net.ipv4.conf.eth0.bootp_relay = 0
net.ipv4.conf.eth0.disable_policy = 0
net.ipv4.conf.eth0.disable_xfrm = 0
net.ipv4.conf.eth0.drop_gratuitous_arp = 0
net.ipv4.conf.eth0.drop_unicast_in_l2_multicast = 0
net.ipv4.conf.eth0.force_igmp_version = 0
net.ipv4.conf.eth0.forwarding = 0
net.ipv4.conf.eth0.igmpv2_unsolicited_report_interval = 10000
net.ipv4.conf.eth0.igmpv3_unsolicited_report_interval = 1000
net.ipv4.conf.eth0.ignore_routes_with_linkdown = 0
net.ipv4.conf.eth0.log_martians = 0
net.ipv4.conf.eth0.mc_forwarding = 0
net.ipv4.conf.eth0.medium_id = 0
net.ipv4.conf.eth0.promote_secondaries = 1
net.ipv4.conf.eth0.proxy_arp = 0
net.ipv4.conf.eth0.proxy_arp_pvlan = 0
net.ipv4.conf.eth0.route_localnet = 0
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth0.secure_redirects = 1
net.ipv4.conf.eth0.send_redirects = 1
net.ipv4.conf.eth0.shared_media = 1
net.ipv4.conf.eth0.src_valid_mark = 0
net.ipv4.conf.eth0.tag = 0
net.ipv4.conf.lo.accept_local = 0
net.ipv4.conf.lo.accept_redirects = 1
net.ipv4.conf.lo.accept_source_route = 1
net.ipv4.conf.lo.arp_accept = 0
net.ipv4.conf.lo.arp_announce = 0
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.arp_ignore = 0
net.ipv4.conf.lo.arp_notify = 0
net.ipv4.conf.lo.bc_forwarding = 0
net.ipv4.conf.lo.bootp_relay = 0
net.ipv4.conf.lo.disable_policy = 1
net.ipv4.conf.lo.disable_xfrm = 1
net.ipv4.conf.lo.drop_gratuitous_arp = 0
net.ipv4.conf.lo.drop_unicast_in_l2_multicast = 0
net.ipv4.conf.lo.force_igmp_version = 0
net.ipv4.conf.lo.forwarding = 0
net.ipv4.conf.lo.igmpv2_unsolicited_report_interval = 10000
net.ipv4.conf.lo.igmpv3_unsolicited_report_interval = 1000
net.ipv4.conf.lo.ignore_routes_with_linkdown = 0
net.ipv4.conf.lo.log_martians = 0
net.ipv4.conf.lo.mc_forwarding = 0
net.ipv4.conf.lo.medium_id = 0
net.ipv4.conf.lo.promote_secondaries = 0
net.ipv4.conf.lo.proxy_arp = 0
net.ipv4.conf.lo.proxy_arp_pvlan = 0
net.ipv4.conf.lo.route_localnet = 0
net.ipv4.conf.lo.rp_filter = 0
net.ipv4.conf.lo.secure_redirects = 1
net.ipv4.conf.lo.send_redirects = 1
net.ipv4.conf.lo.shared_media = 1
net.ipv4.conf.lo.src_valid_mark = 0
net.ipv4.conf.lo.tag = 0
net.ipv4.fib_multipath_hash_policy = 0
net.ipv4.fib_multipath_use_neigh = 0
net.ipv4.fib_sync_mem = 524288
net.ipv4.fwmark_reflect = 0
net.ipv4.icmp_echo_ignore_all = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_errors_use_inbound_ifaddr = 0
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.icmp_msgs_burst = 50
net.ipv4.icmp_msgs_per_sec = 1000
net.ipv4.icmp_ratelimit = 1000
net.ipv4.icmp_ratemask = 6168
net.ipv4.igmp_link_local_mcast_reports = 1
net.ipv4.igmp_max_memberships = 20
net.ipv4.igmp_max_msf = 10
net.ipv4.igmp_qrv = 2
net.ipv4.inet_peer_maxttl = 600
net.ipv4.inet_peer_minttl = 120
net.ipv4.inet_peer_threshold = 65664
net.ipv4.ip_default_ttl = 255
net.ipv4.ip_dynaddr = 0
net.ipv4.ip_early_demux = 1
net.ipv4.ip_forward = 0
net.ipv4.ip_forward_update_priority = 1
net.ipv4.ip_forward_use_pmtu = 0
net.ipv4.ip_local_port_range = 32768	60999
net.ipv4.ip_local_reserved_ports = 
net.ipv4.ip_no_pmtu_disc = 0
net.ipv4.ip_nonlocal_bind = 0
net.ipv4.ip_unprivileged_port_start = 1024
net.ipv4.ipfrag_high_thresh = 4194304
net.ipv4.ipfrag_low_thresh = 3145728
net.ipv4.ipfrag_max_dist = 64
net.ipv4.ipfrag_secret_interval = 0
net.ipv4.ipfrag_time = 30
net.ipv4.neigh.default.anycast_delay = 100
net.ipv4.neigh.default.app_solicit = 0
net.ipv4.neigh.default.base_reachable_time_ms = 30000
net.ipv4.neigh.default.delay_first_probe_time = 5
net.ipv4.neigh.default.gc_interval = 30
net.ipv4.neigh.default.gc_stale_time = 60
net.ipv4.neigh.default.gc_thresh1 = 0
net.ipv4.neigh.default.gc_thresh2 = 15360
net.ipv4.neigh.default.gc_thresh3 = 16384
net.ipv4.neigh.default.locktime = 100
net.ipv4.neigh.default.mcast_resolicit = 0
net.ipv4.neigh.default.mcast_solicit = 3
net.ipv4.neigh.default.proxy_delay = 80
net.ipv4.neigh.default.proxy_qlen = 64
net.ipv4.neigh.default.retrans_time_ms = 1000
net.ipv4.neigh.default.ucast_solicit = 3
net.ipv4.neigh.default.unres_qlen = 101
net.ipv4.neigh.default.unres_qlen_bytes = 212992
net.ipv4.neigh.eth0.anycast_delay = 100
net.ipv4.neigh.eth0.app_solicit = 0
net.ipv4.neigh.eth0.base_reachable_time_ms = 30000
net.ipv4.neigh.eth0.delay_first_probe_time = 5
net.ipv4.neigh.eth0.gc_stale_time = 60
net.ipv4.neigh.eth0.locktime = 100
net.ipv4.neigh.eth0.mcast_resolicit = 0
net.ipv4.neigh.eth0.mcast_solicit = 3
net.ipv4.neigh.eth0.proxy_delay = 80
net.ipv4.neigh.eth0.proxy_qlen = 64
net.ipv4.neigh.eth0.retrans_time_ms = 1000
net.ipv4.neigh.eth0.ucast_solicit = 3
net.ipv4.neigh.eth0.unres_qlen = 101
net.ipv4.neigh.eth0.unres_qlen_bytes = 212992
net.ipv4.neigh.lo.anycast_delay = 100
net.ipv4.neigh.lo.app_solicit = 0
net.ipv4.neigh.lo.base_reachable_time_ms = 30000
net.ipv4.neigh.lo.delay_first_probe_time = 5
net.ipv4.neigh.lo.gc_stale_time = 60
net.ipv4.neigh.lo.locktime = 100
net.ipv4.neigh.lo.mcast_resolicit = 0
net.ipv4.neigh.lo.mcast_solicit = 3
net.ipv4.neigh.lo.proxy_delay = 80
net.ipv4.neigh.lo.proxy_qlen = 64
net.ipv4.neigh.lo.retrans_time_ms = 1000
net.ipv4.neigh.lo.ucast_solicit = 3
net.ipv4.neigh.lo.unres_qlen = 101
net.ipv4.neigh.lo.unres_qlen_bytes = 212992
net.ipv4.ping_group_range = 1	0
net.ipv4.raw_l3mdev_accept = 1
net.ipv4.route.error_burst = 1250
net.ipv4.route.error_cost = 250
net.ipv4.route.gc_elasticity = 8
net.ipv4.route.gc_interval = 60
net.ipv4.route.gc_min_interval = 0
net.ipv4.route.gc_min_interval_ms = 500
net.ipv4.route.gc_thresh = -1
net.ipv4.route.gc_timeout = 300
net.ipv4.route.max_size = 2147483647
net.ipv4.route.min_adv_mss = 256
net.ipv4.route.min_pmtu = 552
net.ipv4.route.mtu_expires = 600
net.ipv4.route.redirect_load = 5
net.ipv4.route.redirect_number = 9
net.ipv4.route.redirect_silence = 5120
net.ipv4.tcp_abort_on_overflow = 0
net.ipv4.tcp_adv_win_scale = 1
net.ipv4.tcp_allowed_congestion_control = reno cubic
net.ipv4.tcp_app_win = 31
net.ipv4.tcp_autocorking = 1
net.ipv4.tcp_available_congestion_control = reno cubic
net.ipv4.tcp_available_ulp = 
net.ipv4.tcp_base_mss = 1024
net.ipv4.tcp_challenge_ack_limit = 1000
net.ipv4.tcp_comp_sack_delay_ns = 1000000
net.ipv4.tcp_comp_sack_nr = 44
net.ipv4.tcp_congestion_control = cubic
net.ipv4.tcp_dsack = 1
net.ipv4.tcp_early_demux = 1
net.ipv4.tcp_early_retrans = 3
net.ipv4.tcp_ecn = 2
net.ipv4.tcp_ecn_fallback = 1
net.ipv4.tcp_fack = 0
net.ipv4.tcp_fastopen = 1
net.ipv4.tcp_fastopen_blackhole_timeout_sec = 3600
net.ipv4.tcp_fastopen_key = 00000000-00000000-00000000-00000000
net.ipv4.tcp_fin_timeout = 60
net.ipv4.tcp_frto = 2
net.ipv4.tcp_fwmark_accept = 0
net.ipv4.tcp_invalid_ratelimit = 500
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_l3mdev_accept = 0
net.ipv4.tcp_limit_output_bytes = 1048576
net.ipv4.tcp_low_latency = 0
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_reordering = 300
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_max_tw_buckets = 262144
net.ipv4.tcp_mem = 1129662	1506217	2259324
net.ipv4.tcp_min_rtt_wlen = 300
net.ipv4.tcp_min_snd_mss = 48
net.ipv4.tcp_min_tso_segs = 2
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_mtu_probe_floor = 48
net.ipv4.tcp_mtu_probing = 0
net.ipv4.tcp_no_metrics_save = 0
net.ipv4.tcp_notsent_lowat = 4294967295
net.ipv4.tcp_orphan_retries = 0
net.ipv4.tcp_pacing_ca_ratio = 120
net.ipv4.tcp_pacing_ss_ratio = 200
net.ipv4.tcp_probe_interval = 600
net.ipv4.tcp_probe_threshold = 8
net.ipv4.tcp_recovery = 1
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_retrans_collapse = 1
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_rfc1337 = 0
net.ipv4.tcp_rmem = 4096	131072	6291456
net.ipv4.tcp_rx_skb_cache = 0
net.ipv4.tcp_sack = 1
net.ipv4.tcp_slow_start_after_idle = 1
net.ipv4.tcp_stdurg = 0
net.ipv4.tcp_syn_retries = 6
net.ipv4.tcp_synack_retries = 5
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_thin_linear_timeouts = 0
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tso_win_divisor = 3
net.ipv4.tcp_tw_reuse = 2
net.ipv4.tcp_tx_skb_cache = 0
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_wmem = 4096	20480	4194304
net.ipv4.tcp_workaround_signed_windows = 0
net.ipv4.udp_early_demux = 1
net.ipv4.udp_l3mdev_accept = 0
net.ipv4.udp_mem = 2259324	3012435	4518648
net.ipv4.udp_rmem_min = 4096
net.ipv4.udp_wmem_min = 4096
net.ipv4.xfrm4_gc_thresh = 32768
net.ipv6.anycast_src_echo_reply = 0
net.ipv6.auto_flowlabels = 1
net.ipv6.bindv6only = 0
net.ipv6.calipso_cache_bucket_size = 10
net.ipv6.calipso_cache_enable = 1
net.ipv6.conf.all.accept_dad = 0
net.ipv6.conf.all.accept_ra = 1
net.ipv6.conf.all.accept_ra_defrtr = 1
net.ipv6.conf.all.accept_ra_from_local = 0
net.ipv6.conf.all.accept_ra_min_hop_limit = 1
net.ipv6.conf.all.accept_ra_mtu = 1
net.ipv6.conf.all.accept_ra_pinfo = 1
net.ipv6.conf.all.accept_ra_rt_info_max_plen = 0
net.ipv6.conf.all.accept_ra_rt_info_min_plen = 0
net.ipv6.conf.all.accept_ra_rtr_pref = 1
net.ipv6.conf.all.accept_redirects = 1
net.ipv6.conf.all.accept_source_route = 0
net.ipv6.conf.all.addr_gen_mode = 0
net.ipv6.conf.all.autoconf = 1
net.ipv6.conf.all.dad_transmits = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.all.disable_policy = 0
net.ipv6.conf.all.drop_unicast_in_l2_multicast = 0
net.ipv6.conf.all.drop_unsolicited_na = 0
net.ipv6.conf.all.enhanced_dad = 1
net.ipv6.conf.all.force_mld_version = 0
net.ipv6.conf.all.force_tllao = 0
net.ipv6.conf.all.forwarding = 0
net.ipv6.conf.all.hop_limit = 64
net.ipv6.conf.all.ignore_routes_with_linkdown = 0
net.ipv6.conf.all.keep_addr_on_down = 0
net.ipv6.conf.all.max_addresses = 16
net.ipv6.conf.all.max_desync_factor = 600
net.ipv6.conf.all.mc_forwarding = 0
net.ipv6.conf.all.mldv1_unsolicited_report_interval = 10000
net.ipv6.conf.all.mldv2_unsolicited_report_interval = 1000
net.ipv6.conf.all.mtu = 1280
net.ipv6.conf.all.ndisc_notify = 0
net.ipv6.conf.all.ndisc_tclass = 0
net.ipv6.conf.all.optimistic_dad = 0
net.ipv6.conf.all.proxy_ndp = 0
net.ipv6.conf.all.regen_max_retry = 3
net.ipv6.conf.all.router_probe_interval = 60
net.ipv6.conf.all.router_solicitation_delay = 1
net.ipv6.conf.all.router_solicitation_interval = 4
net.ipv6.conf.all.router_solicitation_max_interval = 3600
net.ipv6.conf.all.router_solicitations = -1
net.ipv6.conf.all.seg6_enabled = 0
net.ipv6.conf.all.seg6_require_hmac = 0
net.ipv6.conf.all.suppress_frag_ndisc = 1
net.ipv6.conf.all.temp_prefered_lft = 86400
net.ipv6.conf.all.temp_valid_lft = 604800
net.ipv6.conf.all.use_oif_addrs_only = 0
net.ipv6.conf.all.use_optimistic = 0
net.ipv6.conf.all.use_tempaddr = 0
net.ipv6.conf.default.accept_dad = 1
net.ipv6.conf.default.accept_ra = 1
net.ipv6.conf.default.accept_ra_defrtr = 1
net.ipv6.conf.default.accept_ra_from_local = 0
net.ipv6.conf.default.accept_ra_min_hop_limit = 1
net.ipv6.conf.default.accept_ra_mtu = 1
net.ipv6.conf.default.accept_ra_pinfo = 1
net.ipv6.conf.default.accept_ra_rt_info_max_plen = 0
net.ipv6.conf.default.accept_ra_rt_info_min_plen = 0
net.ipv6.conf.default.accept_ra_rtr_pref = 1
net.ipv6.conf.default.accept_redirects = 1
net.ipv6.conf.default.accept_source_route = 0
net.ipv6.conf.default.addr_gen_mode = 0
net.ipv6.conf.default.autoconf = 1
net.ipv6.conf.default.dad_transmits = 1
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.default.disable_policy = 0
net.ipv6.conf.default.drop_unicast_in_l2_multicast = 0
net.ipv6.conf.default.drop_unsolicited_na = 0
net.ipv6.conf.default.enhanced_dad = 1
net.ipv6.conf.default.force_mld_version = 0
net.ipv6.conf.default.force_tllao = 0
net.ipv6.conf.default.forwarding = 0
net.ipv6.conf.default.hop_limit = 64
net.ipv6.conf.default.ignore_routes_with_linkdown = 0
net.ipv6.conf.default.keep_addr_on_down = 0
net.ipv6.conf.default.max_addresses = 16
net.ipv6.conf.default.max_desync_factor = 600
net.ipv6.conf.default.mc_forwarding = 0
net.ipv6.conf.default.mldv1_unsolicited_report_interval = 10000
net.ipv6.conf.default.mldv2_unsolicited_report_interval = 1000
net.ipv6.conf.default.mtu = 1280
net.ipv6.conf.default.ndisc_notify = 0
net.ipv6.conf.default.ndisc_tclass = 0
net.ipv6.conf.default.optimistic_dad = 0
net.ipv6.conf.default.proxy_ndp = 0
net.ipv6.conf.default.regen_max_retry = 3
net.ipv6.conf.default.router_probe_interval = 60
net.ipv6.conf.default.router_solicitation_delay = 1
net.ipv6.conf.default.router_solicitation_interval = 4
net.ipv6.conf.default.router_solicitation_max_interval = 3600
net.ipv6.conf.default.router_solicitations = -1
net.ipv6.conf.default.seg6_enabled = 0
net.ipv6.conf.default.seg6_require_hmac = 0
net.ipv6.conf.default.suppress_frag_ndisc = 1
net.ipv6.conf.default.temp_prefered_lft = 86400
net.ipv6.conf.default.temp_valid_lft = 604800
net.ipv6.conf.default.use_oif_addrs_only = 0
net.ipv6.conf.default.use_optimistic = 0
net.ipv6.conf.default.use_tempaddr = 0
net.ipv6.conf.eth0.accept_dad = 1
net.ipv6.conf.eth0.accept_ra = 1
net.ipv6.conf.eth0.accept_ra_defrtr = 1
net.ipv6.conf.eth0.accept_ra_from_local = 0
net.ipv6.conf.eth0.accept_ra_min_hop_limit = 1
net.ipv6.conf.eth0.accept_ra_mtu = 1
net.ipv6.conf.eth0.accept_ra_pinfo = 1
net.ipv6.conf.eth0.accept_ra_rt_info_max_plen = 0
net.ipv6.conf.eth0.accept_ra_rt_info_min_plen = 0
net.ipv6.conf.eth0.accept_ra_rtr_pref = 1
net.ipv6.conf.eth0.accept_redirects = 1
net.ipv6.conf.eth0.accept_source_route = 0
net.ipv6.conf.eth0.addr_gen_mode = 0
net.ipv6.conf.eth0.autoconf = 1
net.ipv6.conf.eth0.dad_transmits = 1
net.ipv6.conf.eth0.disable_ipv6 = 0
net.ipv6.conf.eth0.disable_policy = 0
net.ipv6.conf.eth0.drop_unicast_in_l2_multicast = 0
net.ipv6.conf.eth0.drop_unsolicited_na = 0
net.ipv6.conf.eth0.enhanced_dad = 1
net.ipv6.conf.eth0.force_mld_version = 0
net.ipv6.conf.eth0.force_tllao = 0
net.ipv6.conf.eth0.forwarding = 0
net.ipv6.conf.eth0.hop_limit = 64
net.ipv6.conf.eth0.ignore_routes_with_linkdown = 0
net.ipv6.conf.eth0.keep_addr_on_down = 0
net.ipv6.conf.eth0.max_addresses = 16
net.ipv6.conf.eth0.max_desync_factor = 600
net.ipv6.conf.eth0.mc_forwarding = 0
net.ipv6.conf.eth0.mldv1_unsolicited_report_interval = 10000
net.ipv6.conf.eth0.mldv2_unsolicited_report_interval = 1000
net.ipv6.conf.eth0.mtu = 9001
net.ipv6.conf.eth0.ndisc_notify = 0
net.ipv6.conf.eth0.ndisc_tclass = 0
net.ipv6.conf.eth0.optimistic_dad = 0
net.ipv6.conf.eth0.proxy_ndp = 0
net.ipv6.conf.eth0.regen_max_retry = 3
net.ipv6.conf.eth0.router_probe_interval = 60
net.ipv6.conf.eth0.router_solicitation_delay = 1
net.ipv6.conf.eth0.router_solicitation_interval = 4
net.ipv6.conf.eth0.router_solicitation_max_interval = 3600
net.ipv6.conf.eth0.router_solicitations = -1
net.ipv6.conf.eth0.seg6_enabled = 0
net.ipv6.conf.eth0.seg6_require_hmac = 0
net.ipv6.conf.eth0.suppress_frag_ndisc = 1
net.ipv6.conf.eth0.temp_prefered_lft = 86400
net.ipv6.conf.eth0.temp_valid_lft = 604800
net.ipv6.conf.eth0.use_oif_addrs_only = 0
net.ipv6.conf.eth0.use_optimistic = 0
net.ipv6.conf.eth0.use_tempaddr = 0
net.ipv6.conf.lo.accept_dad = -1
net.ipv6.conf.lo.accept_ra = 1
net.ipv6.conf.lo.accept_ra_defrtr = 1
net.ipv6.conf.lo.accept_ra_from_local = 0
net.ipv6.conf.lo.accept_ra_min_hop_limit = 1
net.ipv6.conf.lo.accept_ra_mtu = 1
net.ipv6.conf.lo.accept_ra_pinfo = 1
net.ipv6.conf.lo.accept_ra_rt_info_max_plen = 0
net.ipv6.conf.lo.accept_ra_rt_info_min_plen = 0
net.ipv6.conf.lo.accept_ra_rtr_pref = 1
net.ipv6.conf.lo.accept_redirects = 1
net.ipv6.conf.lo.accept_source_route = 0
net.ipv6.conf.lo.addr_gen_mode = 0
net.ipv6.conf.lo.autoconf = 1
net.ipv6.conf.lo.dad_transmits = 1
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.lo.disable_policy = 0
net.ipv6.conf.lo.drop_unicast_in_l2_multicast = 0
net.ipv6.conf.lo.drop_unsolicited_na = 0
net.ipv6.conf.lo.enhanced_dad = 1
net.ipv6.conf.lo.force_mld_version = 0
net.ipv6.conf.lo.force_tllao = 0
net.ipv6.conf.lo.forwarding = 0
net.ipv6.conf.lo.hop_limit = 64
net.ipv6.conf.lo.ignore_routes_with_linkdown = 0
net.ipv6.conf.lo.keep_addr_on_down = 0
net.ipv6.conf.lo.max_addresses = 16
net.ipv6.conf.lo.max_desync_factor = 600
net.ipv6.conf.lo.mc_forwarding = 0
net.ipv6.conf.lo.mldv1_unsolicited_report_interval = 10000
net.ipv6.conf.lo.mldv2_unsolicited_report_interval = 1000
net.ipv6.conf.lo.mtu = 65536
net.ipv6.conf.lo.ndisc_notify = 0
net.ipv6.conf.lo.ndisc_tclass = 0
net.ipv6.conf.lo.optimistic_dad = 0
net.ipv6.conf.lo.proxy_ndp = 0
net.ipv6.conf.lo.regen_max_retry = 3
net.ipv6.conf.lo.router_probe_interval = 60
net.ipv6.conf.lo.router_solicitation_delay = 1
net.ipv6.conf.lo.router_solicitation_interval = 4
net.ipv6.conf.lo.router_solicitation_max_interval = 3600
net.ipv6.conf.lo.router_solicitations = -1
net.ipv6.conf.lo.seg6_enabled = 0
net.ipv6.conf.lo.seg6_require_hmac = 0
net.ipv6.conf.lo.suppress_frag_ndisc = 1
net.ipv6.conf.lo.temp_prefered_lft = 86400
net.ipv6.conf.lo.temp_valid_lft = 604800
net.ipv6.conf.lo.use_oif_addrs_only = 0
net.ipv6.conf.lo.use_optimistic = 0
net.ipv6.conf.lo.use_tempaddr = -1
net.ipv6.fib_multipath_hash_policy = 0
net.ipv6.flowlabel_consistency = 1
net.ipv6.flowlabel_reflect = 0
net.ipv6.flowlabel_state_ranges = 0
net.ipv6.fwmark_reflect = 0
net.ipv6.icmp.echo_ignore_all = 0
net.ipv6.icmp.echo_ignore_anycast = 0
net.ipv6.icmp.echo_ignore_multicast = 0
net.ipv6.icmp.ratelimit = 1000
net.ipv6.icmp.ratemask = 0-1,3-127
net.ipv6.idgen_delay = 1
net.ipv6.idgen_retries = 3
net.ipv6.ip6frag_high_thresh = 4194304
net.ipv6.ip6frag_low_thresh = 3145728
net.ipv6.ip6frag_secret_interval = 0
net.ipv6.ip6frag_time = 60
net.ipv6.ip_nonlocal_bind = 0
net.ipv6.max_dst_opts_length = 2147483647
net.ipv6.max_dst_opts_number = 8
net.ipv6.max_hbh_length = 2147483647
net.ipv6.max_hbh_opts_number = 8
net.ipv6.mld_max_msf = 64
net.ipv6.mld_qrv = 2
net.ipv6.neigh.default.anycast_delay = 100
net.ipv6.neigh.default.app_solicit = 0
net.ipv6.neigh.default.base_reachable_time_ms = 30000
net.ipv6.neigh.default.delay_first_probe_time = 5
net.ipv6.neigh.default.gc_interval = 30
net.ipv6.neigh.default.gc_stale_time = 60
net.ipv6.neigh.default.gc_thresh1 = 0
net.ipv6.neigh.default.gc_thresh2 = 15360
net.ipv6.neigh.default.gc_thresh3 = 16384
net.ipv6.neigh.default.locktime = 0
net.ipv6.neigh.default.mcast_resolicit = 0
net.ipv6.neigh.default.mcast_solicit = 3
net.ipv6.neigh.default.proxy_delay = 80
net.ipv6.neigh.default.proxy_qlen = 64
net.ipv6.neigh.default.retrans_time_ms = 1000
net.ipv6.neigh.default.ucast_solicit = 3
net.ipv6.neigh.default.unres_qlen = 101
net.ipv6.neigh.default.unres_qlen_bytes = 212992
net.ipv6.neigh.eth0.anycast_delay = 100
net.ipv6.neigh.eth0.app_solicit = 0
net.ipv6.neigh.eth0.base_reachable_time_ms = 30000
net.ipv6.neigh.eth0.delay_first_probe_time = 5
net.ipv6.neigh.eth0.gc_stale_time = 60
net.ipv6.neigh.eth0.locktime = 0
net.ipv6.neigh.eth0.mcast_resolicit = 0
net.ipv6.neigh.eth0.mcast_solicit = 3
net.ipv6.neigh.eth0.proxy_delay = 80
net.ipv6.neigh.eth0.proxy_qlen = 64
net.ipv6.neigh.eth0.retrans_time_ms = 1000
net.ipv6.neigh.eth0.ucast_solicit = 3
net.ipv6.neigh.eth0.unres_qlen = 101
net.ipv6.neigh.eth0.unres_qlen_bytes = 212992
net.ipv6.neigh.lo.anycast_delay = 100
net.ipv6.neigh.lo.app_solicit = 0
net.ipv6.neigh.lo.base_reachable_time_ms = 30000
net.ipv6.neigh.lo.delay_first_probe_time = 5
net.ipv6.neigh.lo.gc_stale_time = 60
net.ipv6.neigh.lo.locktime = 0
net.ipv6.neigh.lo.mcast_resolicit = 0
net.ipv6.neigh.lo.mcast_solicit = 3
net.ipv6.neigh.lo.proxy_delay = 80
net.ipv6.neigh.lo.proxy_qlen = 64
net.ipv6.neigh.lo.retrans_time_ms = 1000
net.ipv6.neigh.lo.ucast_solicit = 3
net.ipv6.neigh.lo.unres_qlen = 101
net.ipv6.neigh.lo.unres_qlen_bytes = 212992
net.ipv6.route.gc_elasticity = 9
net.ipv6.route.gc_interval = 30
net.ipv6.route.gc_min_interval = 0
net.ipv6.route.gc_min_interval_ms = 500
net.ipv6.route.gc_thresh = 1024
net.ipv6.route.gc_timeout = 60
net.ipv6.route.max_size = 4096
net.ipv6.route.min_adv_mss = 1220
net.ipv6.route.mtu_expires = 600
net.ipv6.route.skip_notify_on_dev_down = 0
net.ipv6.seg6_flowlabel = 0
net.ipv6.xfrm6_gc_thresh = 32768
net.netfilter.nf_log.0 = NONE
net.netfilter.nf_log.1 = NONE
net.netfilter.nf_log.10 = NONE
net.netfilter.nf_log.11 = NONE
net.netfilter.nf_log.12 = NONE
net.netfilter.nf_log.2 = NONE
net.netfilter.nf_log.3 = NONE
net.netfilter.nf_log.4 = NONE
net.netfilter.nf_log.5 = NONE
net.netfilter.nf_log.6 = NONE
net.netfilter.nf_log.7 = NONE
net.netfilter.nf_log.8 = NONE
net.netfilter.nf_log.9 = NONE
net.netfilter.nf_log_all_netns = 0
net.unix.max_dgram_qlen = 512
sunrpc.max_resvport = 1023
sunrpc.min_resvport = 665
sunrpc.nfs_debug = 0x0000
sunrpc.nfsd_debug = 0x0000
sunrpc.nlm_debug = 0x0000
sunrpc.rpc_debug = 0x0000
sunrpc.tcp_fin_timeout = 15
sunrpc.tcp_max_slot_table_entries = 65536
sunrpc.tcp_slot_table_entries = 2
sunrpc.transports = tcp 1048576
sunrpc.transports = udp 32768
sunrpc.udp_slot_table_entries = 16
user.max_cgroup_namespaces = 377573
user.max_inotify_instances = 128
user.max_inotify_watches = 8192
user.max_ipc_namespaces = 377573
user.max_mnt_namespaces = 377573
user.max_net_namespaces = 377573
user.max_pid_namespaces = 377573
user.max_user_namespaces = 377573
user.max_uts_namespaces = 377573
vm.admin_reserve_kbytes = 8192
vm.block_dump = 0
vm.compact_unevictable_allowed = 1
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
vm.dirtytime_expire_seconds = 43200
vm.extfrag_threshold = 500
vm.hugetlb_shm_group = 0
vm.laptop_mode = 0
vm.legacy_va_layout = 0
vm.lowmem_reserve_ratio = 256	256	32	0
vm.max_map_count = 65530
vm.memory_failure_early_kill = 0
vm.memory_failure_recovery = 1
vm.min_free_kbytes = 67584
vm.min_slab_ratio = 5
vm.min_unmapped_ratio = 1
vm.mmap_min_addr = 4096
vm.mmap_rnd_bits = 28
vm.mmap_rnd_compat_bits = 8
vm.nr_hugepages = 0
vm.nr_hugepages_mempolicy = 0
vm.nr_overcommit_hugepages = 0
vm.numa_stat = 1
vm.numa_zonelist_order = Node
vm.oom_dump_tasks = 1
vm.oom_kill_allocating_task = 0
vm.overcommit_kbytes = 0
vm.overcommit_memory = 0
vm.overcommit_ratio = 50
vm.page-cluster = 3
vm.panic_on_oom = 0
vm.percpu_pagelist_fraction = 0
vm.stat_interval = 1
vm.swappiness = 60
vm.unprivileged_userfaultfd = 1
vm.user_reserve_kbytes = 131072
vm.vfs_cache_pressure = 100
vm.watermark_boost_factor = 15000
vm.watermark_scale_factor = 10
vm.zone_reclaim_mode = 0

[-- Attachment #3: senderconf --]
[-- Type: application/octet-stream, Size: 73639 bytes --]

ens5      Link encap:Ethernet  HWaddr 06:a7:ff:c5:b7:36  
          inet addr:172.31.24.182  Bcast:172.31.31.255  Mask:255.255.240.0
          inet6 addr: fe80::4a7:ffff:fec5:b736/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:3224349 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6165225 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:2845092012 (2.8 GB)  TX bytes:8458720408 (8.4 GB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:200 errors:0 dropped:0 overruns:0 frame:0
          TX packets:200 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:15040 (15.0 KB)  TX bytes:15040 (15.0 KB)

abi.vsyscall32 = 1
debug.exception-trace = 1
debug.kprobes-optimization = 1
dev.cdrom.autoclose = 1
dev.cdrom.autoeject = 0
dev.cdrom.check_media = 0
dev.cdrom.debug = 0
dev.cdrom.info = CD-ROM information, Id: cdrom.c 3.20 2003/12/17
dev.cdrom.info = 
dev.cdrom.info = drive name:	
dev.cdrom.info = drive speed:	
dev.cdrom.info = drive # of slots:
dev.cdrom.info = Can close tray:	
dev.cdrom.info = Can open tray:	
dev.cdrom.info = Can lock tray:	
dev.cdrom.info = Can change speed:
dev.cdrom.info = Can select disk:
dev.cdrom.info = Can read multisession:
dev.cdrom.info = Can read MCN:	
dev.cdrom.info = Reports media changed:
dev.cdrom.info = Can play audio:	
dev.cdrom.info = Can write CD-R:	
dev.cdrom.info = Can write CD-RW:
dev.cdrom.info = Can read DVD:	
dev.cdrom.info = Can write DVD-R:
dev.cdrom.info = Can write DVD-RAM:
dev.cdrom.info = Can read MRW:	
dev.cdrom.info = Can write MRW:	
dev.cdrom.info = Can write RAM:	
dev.cdrom.info = 
dev.cdrom.info = 
dev.cdrom.lock = 1
dev.hpet.max-user-freq = 64
dev.mac_hid.mouse_button2_keycode = 97
dev.mac_hid.mouse_button3_keycode = 100
dev.mac_hid.mouse_button_emulation = 0
dev.parport.default.spintime = 500
dev.parport.default.timeslice = 200
dev.raid.speed_limit_max = 200000
dev.raid.speed_limit_min = 1000
dev.scsi.logging_level = 0
dev.tty.ldisc_autoload = 1
fs.aio-max-nr = 65536
fs.aio-nr = 0
fs.binfmt_misc.status = enabled
fs.dentry-state = 130931	109108	45	0	29329	0
fs.dir-notify-enable = 1
fs.epoll.max_user_watches = 19795148
fs.file-max = 9665226
fs.file-nr = 1824	0	9665226
fs.inode-nr = 101719	437
fs.inode-state = 101719	437	0	0	0	0	0
fs.inotify.max_queued_events = 16384
fs.inotify.max_user_instances = 1024
fs.inotify.max_user_watches = 8192
fs.lease-break-time = 45
fs.leases-enable = 1
fs.mount-max = 100000
fs.mqueue.msg_default = 10
fs.mqueue.msg_max = 10
fs.mqueue.msgsize_default = 8192
fs.mqueue.msgsize_max = 8192
fs.mqueue.queues_max = 256
fs.nr_open = 1048576
fs.overflowgid = 65534
fs.overflowuid = 65534
fs.pipe-max-size = 1048576
fs.pipe-user-pages-hard = 0
fs.pipe-user-pages-soft = 16384
fs.protected_fifos = 0
fs.protected_hardlinks = 1
fs.protected_regular = 0
fs.protected_symlinks = 1
fs.quota.allocated_dquots = 0
fs.quota.cache_hits = 0
fs.quota.drops = 0
fs.quota.free_dquots = 0
fs.quota.lookups = 0
fs.quota.reads = 0
fs.quota.syncs = 4
fs.quota.writes = 0
fs.suid_dumpable = 2
fs.verity.require_signatures = 0
fs.xfs.error_level = 3
fs.xfs.filestream_centisecs = 3000
fs.xfs.inherit_noatime = 1
fs.xfs.inherit_nodefrag = 1
fs.xfs.inherit_nodump = 1
fs.xfs.inherit_nosymlinks = 0
fs.xfs.inherit_sync = 1
fs.xfs.irix_sgid_inherit = 0
fs.xfs.irix_symlink_mode = 0
fs.xfs.panic_mask = 0
fs.xfs.rotorstep = 1
fs.xfs.speculative_cow_prealloc_lifetime = 1800
fs.xfs.speculative_prealloc_lifetime = 300
fs.xfs.stats_clear = 0
fs.xfs.xfssyncd_centisecs = 3000
kernel.acct = 4	2	30
kernel.acpi_video_flags = 0
kernel.auto_msgmni = 0
kernel.bootloader_type = 114
kernel.bootloader_version = 2
kernel.bpf_stats_enabled = 0
kernel.cad_pid = 1
kernel.cap_last_cap = 37
kernel.core_pattern = |/usr/share/apport/apport %p %s %c %d %P %E
kernel.core_pipe_limit = 0
kernel.core_uses_pid = 0
kernel.ctrl-alt-del = 0
kernel.dmesg_restrict = 0
kernel.domainname = (none)
kernel.firmware_config.force_sysfs_fallback = 0
kernel.firmware_config.ignore_sysfs_fallback = 0
kernel.ftrace_dump_on_oops = 0
kernel.ftrace_enabled = 1
kernel.hardlockup_all_cpu_backtrace = 0
kernel.hardlockup_panic = 0
kernel.hostname = ip-172-31-24-182
kernel.hotplug = 
kernel.hung_task_check_count = 4194304
kernel.hung_task_check_interval_secs = 0
kernel.hung_task_panic = 0
kernel.hung_task_timeout_secs = 120
kernel.hung_task_warnings = 10
kernel.io_delay_type = 1
kernel.kexec_load_disabled = 0
kernel.keys.gc_delay = 300
kernel.keys.maxbytes = 20000
kernel.keys.maxkeys = 200
kernel.keys.persistent_keyring_expiry = 259200
kernel.keys.root_maxbytes = 25000000
kernel.keys.root_maxkeys = 1000000
kernel.kptr_restrict = 1
kernel.max_lock_depth = 1024
kernel.modprobe = /sbin/modprobe
kernel.modules_disabled = 0
kernel.msg_next_id = -1
kernel.msgmax = 8192
kernel.msgmnb = 16384
kernel.msgmni = 32000
kernel.ngroups_max = 65536
kernel.nmi_watchdog = 0
kernel.ns_last_pid = 33416
kernel.numa_balancing = 0
kernel.numa_balancing_scan_delay_ms = 1000
kernel.numa_balancing_scan_period_max_ms = 60000
kernel.numa_balancing_scan_period_min_ms = 1000
kernel.numa_balancing_scan_size_mb = 256
kernel.osrelease = 5.4.0-rc6
kernel.ostype = Linux
kernel.overflowgid = 65534
kernel.overflowuid = 65534
kernel.panic = 0
kernel.panic_on_io_nmi = 0
kernel.panic_on_oops = 0
kernel.panic_on_rcu_stall = 0
kernel.panic_on_unrecovered_nmi = 0
kernel.panic_on_warn = 0
kernel.panic_print = 0
kernel.perf_cpu_time_max_percent = 25
kernel.perf_event_max_contexts_per_stack = 8
kernel.perf_event_max_sample_rate = 100000
kernel.perf_event_max_stack = 127
kernel.perf_event_mlock_kb = 516
kernel.perf_event_paranoid = 2
kernel.pid_max = 49152
kernel.poweroff_cmd = /sbin/poweroff
kernel.print-fatal-signals = 0
kernel.printk = 4	4	1	7
kernel.printk_delay = 0
kernel.printk_devkmsg = ratelimit
kernel.printk_ratelimit = 5
kernel.printk_ratelimit_burst = 10
kernel.pty.max = 4096
kernel.pty.nr = 8
kernel.pty.reserve = 1024
kernel.random.boot_id = 30408b35-2574-413d-8841-fe755aba6248
kernel.random.entropy_avail = 1178
kernel.random.poolsize = 4096
kernel.random.read_wakeup_threshold = 64
kernel.random.urandom_min_reseed_secs = 60
kernel.random.uuid = a7b78e29-9112-4465-86a0-fdf9abb8ebd1
kernel.random.write_wakeup_threshold = 896
kernel.randomize_va_space = 2
kernel.real-root-dev = 0
kernel.sched_autogroup_enabled = 1
kernel.sched_cfs_bandwidth_slice_us = 5000
kernel.sched_child_runs_first = 0
kernel.sched_domain.cpu0.domain0.busy_factor = 32
kernel.sched_domain.cpu0.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu0.domain0.flags = 4783
kernel.sched_domain.cpu0.domain0.imbalance_pct = 110
kernel.sched_domain.cpu0.domain0.max_interval = 4
kernel.sched_domain.cpu0.domain0.max_newidle_lb_cost = 891
kernel.sched_domain.cpu0.domain0.min_interval = 2
kernel.sched_domain.cpu0.domain0.name = SMT
kernel.sched_domain.cpu0.domain1.busy_factor = 32
kernel.sched_domain.cpu0.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu0.domain1.flags = 4655
kernel.sched_domain.cpu0.domain1.imbalance_pct = 117
kernel.sched_domain.cpu0.domain1.max_interval = 96
kernel.sched_domain.cpu0.domain1.max_newidle_lb_cost = 5252
kernel.sched_domain.cpu0.domain1.min_interval = 48
kernel.sched_domain.cpu0.domain1.name = MC
kernel.sched_domain.cpu1.domain0.busy_factor = 32
kernel.sched_domain.cpu1.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu1.domain0.flags = 4783
kernel.sched_domain.cpu1.domain0.imbalance_pct = 110
kernel.sched_domain.cpu1.domain0.max_interval = 4
kernel.sched_domain.cpu1.domain0.max_newidle_lb_cost = 291
kernel.sched_domain.cpu1.domain0.min_interval = 2
kernel.sched_domain.cpu1.domain0.name = SMT
kernel.sched_domain.cpu1.domain1.busy_factor = 32
kernel.sched_domain.cpu1.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu1.domain1.flags = 4655
kernel.sched_domain.cpu1.domain1.imbalance_pct = 117
kernel.sched_domain.cpu1.domain1.max_interval = 96
kernel.sched_domain.cpu1.domain1.max_newidle_lb_cost = 3068
kernel.sched_domain.cpu1.domain1.min_interval = 48
kernel.sched_domain.cpu1.domain1.name = MC
kernel.sched_domain.cpu10.domain0.busy_factor = 32
kernel.sched_domain.cpu10.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu10.domain0.flags = 4783
kernel.sched_domain.cpu10.domain0.imbalance_pct = 110
kernel.sched_domain.cpu10.domain0.max_interval = 4
kernel.sched_domain.cpu10.domain0.max_newidle_lb_cost = 0
kernel.sched_domain.cpu10.domain0.min_interval = 2
kernel.sched_domain.cpu10.domain0.name = SMT
kernel.sched_domain.cpu10.domain1.busy_factor = 32
kernel.sched_domain.cpu10.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu10.domain1.flags = 4655
kernel.sched_domain.cpu10.domain1.imbalance_pct = 117
kernel.sched_domain.cpu10.domain1.max_interval = 96
kernel.sched_domain.cpu10.domain1.max_newidle_lb_cost = 0
kernel.sched_domain.cpu10.domain1.min_interval = 48
kernel.sched_domain.cpu10.domain1.name = MC
kernel.sched_domain.cpu11.domain0.busy_factor = 32
kernel.sched_domain.cpu11.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu11.domain0.flags = 4783
kernel.sched_domain.cpu11.domain0.imbalance_pct = 110
kernel.sched_domain.cpu11.domain0.max_interval = 4
kernel.sched_domain.cpu11.domain0.max_newidle_lb_cost = 72
kernel.sched_domain.cpu11.domain0.min_interval = 2
kernel.sched_domain.cpu11.domain0.name = SMT
kernel.sched_domain.cpu11.domain1.busy_factor = 32
kernel.sched_domain.cpu11.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu11.domain1.flags = 4655
kernel.sched_domain.cpu11.domain1.imbalance_pct = 117
kernel.sched_domain.cpu11.domain1.max_interval = 96
kernel.sched_domain.cpu11.domain1.max_newidle_lb_cost = 838
kernel.sched_domain.cpu11.domain1.min_interval = 48
kernel.sched_domain.cpu11.domain1.name = MC
kernel.sched_domain.cpu12.domain0.busy_factor = 32
kernel.sched_domain.cpu12.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu12.domain0.flags = 4783
kernel.sched_domain.cpu12.domain0.imbalance_pct = 110
kernel.sched_domain.cpu12.domain0.max_interval = 4
kernel.sched_domain.cpu12.domain0.max_newidle_lb_cost = 512
kernel.sched_domain.cpu12.domain0.min_interval = 2
kernel.sched_domain.cpu12.domain0.name = SMT
kernel.sched_domain.cpu12.domain1.busy_factor = 32
kernel.sched_domain.cpu12.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu12.domain1.flags = 4655
kernel.sched_domain.cpu12.domain1.imbalance_pct = 117
kernel.sched_domain.cpu12.domain1.max_interval = 96
kernel.sched_domain.cpu12.domain1.max_newidle_lb_cost = 5713
kernel.sched_domain.cpu12.domain1.min_interval = 48
kernel.sched_domain.cpu12.domain1.name = MC
kernel.sched_domain.cpu13.domain0.busy_factor = 32
kernel.sched_domain.cpu13.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu13.domain0.flags = 4783
kernel.sched_domain.cpu13.domain0.imbalance_pct = 110
kernel.sched_domain.cpu13.domain0.max_interval = 4
kernel.sched_domain.cpu13.domain0.max_newidle_lb_cost = 658
kernel.sched_domain.cpu13.domain0.min_interval = 2
kernel.sched_domain.cpu13.domain0.name = SMT
kernel.sched_domain.cpu13.domain1.busy_factor = 32
kernel.sched_domain.cpu13.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu13.domain1.flags = 4655
kernel.sched_domain.cpu13.domain1.imbalance_pct = 117
kernel.sched_domain.cpu13.domain1.max_interval = 96
kernel.sched_domain.cpu13.domain1.max_newidle_lb_cost = 2805
kernel.sched_domain.cpu13.domain1.min_interval = 48
kernel.sched_domain.cpu13.domain1.name = MC
kernel.sched_domain.cpu14.domain0.busy_factor = 32
kernel.sched_domain.cpu14.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu14.domain0.flags = 4783
kernel.sched_domain.cpu14.domain0.imbalance_pct = 110
kernel.sched_domain.cpu14.domain0.max_interval = 4
kernel.sched_domain.cpu14.domain0.max_newidle_lb_cost = 0
kernel.sched_domain.cpu14.domain0.min_interval = 2
kernel.sched_domain.cpu14.domain0.name = SMT
kernel.sched_domain.cpu14.domain1.busy_factor = 32
kernel.sched_domain.cpu14.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu14.domain1.flags = 4655
kernel.sched_domain.cpu14.domain1.imbalance_pct = 117
kernel.sched_domain.cpu14.domain1.max_interval = 96
kernel.sched_domain.cpu14.domain1.max_newidle_lb_cost = 57
kernel.sched_domain.cpu14.domain1.min_interval = 48
kernel.sched_domain.cpu14.domain1.name = MC
kernel.sched_domain.cpu15.domain0.busy_factor = 32
kernel.sched_domain.cpu15.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu15.domain0.flags = 4783
kernel.sched_domain.cpu15.domain0.imbalance_pct = 110
kernel.sched_domain.cpu15.domain0.max_interval = 4
kernel.sched_domain.cpu15.domain0.max_newidle_lb_cost = 143
kernel.sched_domain.cpu15.domain0.min_interval = 2
kernel.sched_domain.cpu15.domain0.name = SMT
kernel.sched_domain.cpu15.domain1.busy_factor = 32
kernel.sched_domain.cpu15.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu15.domain1.flags = 4655
kernel.sched_domain.cpu15.domain1.imbalance_pct = 117
kernel.sched_domain.cpu15.domain1.max_interval = 96
kernel.sched_domain.cpu15.domain1.max_newidle_lb_cost = 900
kernel.sched_domain.cpu15.domain1.min_interval = 48
kernel.sched_domain.cpu15.domain1.name = MC
kernel.sched_domain.cpu16.domain0.busy_factor = 32
kernel.sched_domain.cpu16.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu16.domain0.flags = 4783
kernel.sched_domain.cpu16.domain0.imbalance_pct = 110
kernel.sched_domain.cpu16.domain0.max_interval = 4
kernel.sched_domain.cpu16.domain0.max_newidle_lb_cost = 392
kernel.sched_domain.cpu16.domain0.min_interval = 2
kernel.sched_domain.cpu16.domain0.name = SMT
kernel.sched_domain.cpu16.domain1.busy_factor = 32
kernel.sched_domain.cpu16.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu16.domain1.flags = 4655
kernel.sched_domain.cpu16.domain1.imbalance_pct = 117
kernel.sched_domain.cpu16.domain1.max_interval = 96
kernel.sched_domain.cpu16.domain1.max_newidle_lb_cost = 2618
kernel.sched_domain.cpu16.domain1.min_interval = 48
kernel.sched_domain.cpu16.domain1.name = MC
kernel.sched_domain.cpu17.domain0.busy_factor = 32
kernel.sched_domain.cpu17.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu17.domain0.flags = 4783
kernel.sched_domain.cpu17.domain0.imbalance_pct = 110
kernel.sched_domain.cpu17.domain0.max_interval = 4
kernel.sched_domain.cpu17.domain0.max_newidle_lb_cost = 246
kernel.sched_domain.cpu17.domain0.min_interval = 2
kernel.sched_domain.cpu17.domain0.name = SMT
kernel.sched_domain.cpu17.domain1.busy_factor = 32
kernel.sched_domain.cpu17.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu17.domain1.flags = 4655
kernel.sched_domain.cpu17.domain1.imbalance_pct = 117
kernel.sched_domain.cpu17.domain1.max_interval = 96
kernel.sched_domain.cpu17.domain1.max_newidle_lb_cost = 13723
kernel.sched_domain.cpu17.domain1.min_interval = 48
kernel.sched_domain.cpu17.domain1.name = MC
kernel.sched_domain.cpu18.domain0.busy_factor = 32
kernel.sched_domain.cpu18.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu18.domain0.flags = 4783
kernel.sched_domain.cpu18.domain0.imbalance_pct = 110
kernel.sched_domain.cpu18.domain0.max_interval = 4
kernel.sched_domain.cpu18.domain0.max_newidle_lb_cost = 564
kernel.sched_domain.cpu18.domain0.min_interval = 2
kernel.sched_domain.cpu18.domain0.name = SMT
kernel.sched_domain.cpu18.domain1.busy_factor = 32
kernel.sched_domain.cpu18.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu18.domain1.flags = 4655
kernel.sched_domain.cpu18.domain1.imbalance_pct = 117
kernel.sched_domain.cpu18.domain1.max_interval = 96
kernel.sched_domain.cpu18.domain1.max_newidle_lb_cost = 4134
kernel.sched_domain.cpu18.domain1.min_interval = 48
kernel.sched_domain.cpu18.domain1.name = MC
kernel.sched_domain.cpu19.domain0.busy_factor = 32
kernel.sched_domain.cpu19.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu19.domain0.flags = 4783
kernel.sched_domain.cpu19.domain0.imbalance_pct = 110
kernel.sched_domain.cpu19.domain0.max_interval = 4
kernel.sched_domain.cpu19.domain0.max_newidle_lb_cost = 82
kernel.sched_domain.cpu19.domain0.min_interval = 2
kernel.sched_domain.cpu19.domain0.name = SMT
kernel.sched_domain.cpu19.domain1.busy_factor = 32
kernel.sched_domain.cpu19.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu19.domain1.flags = 4655
kernel.sched_domain.cpu19.domain1.imbalance_pct = 117
kernel.sched_domain.cpu19.domain1.max_interval = 96
kernel.sched_domain.cpu19.domain1.max_newidle_lb_cost = 2186
kernel.sched_domain.cpu19.domain1.min_interval = 48
kernel.sched_domain.cpu19.domain1.name = MC
kernel.sched_domain.cpu2.domain0.busy_factor = 32
kernel.sched_domain.cpu2.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu2.domain0.flags = 4783
kernel.sched_domain.cpu2.domain0.imbalance_pct = 110
kernel.sched_domain.cpu2.domain0.max_interval = 4
kernel.sched_domain.cpu2.domain0.max_newidle_lb_cost = 926
kernel.sched_domain.cpu2.domain0.min_interval = 2
kernel.sched_domain.cpu2.domain0.name = SMT
kernel.sched_domain.cpu2.domain1.busy_factor = 32
kernel.sched_domain.cpu2.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu2.domain1.flags = 4655
kernel.sched_domain.cpu2.domain1.imbalance_pct = 117
kernel.sched_domain.cpu2.domain1.max_interval = 96
kernel.sched_domain.cpu2.domain1.max_newidle_lb_cost = 8898
kernel.sched_domain.cpu2.domain1.min_interval = 48
kernel.sched_domain.cpu2.domain1.name = MC
kernel.sched_domain.cpu20.domain0.busy_factor = 32
kernel.sched_domain.cpu20.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu20.domain0.flags = 4783
kernel.sched_domain.cpu20.domain0.imbalance_pct = 110
kernel.sched_domain.cpu20.domain0.max_interval = 4
kernel.sched_domain.cpu20.domain0.max_newidle_lb_cost = 259
kernel.sched_domain.cpu20.domain0.min_interval = 2
kernel.sched_domain.cpu20.domain0.name = SMT
kernel.sched_domain.cpu20.domain1.busy_factor = 32
kernel.sched_domain.cpu20.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu20.domain1.flags = 4655
kernel.sched_domain.cpu20.domain1.imbalance_pct = 117
kernel.sched_domain.cpu20.domain1.max_interval = 96
kernel.sched_domain.cpu20.domain1.max_newidle_lb_cost = 3818
kernel.sched_domain.cpu20.domain1.min_interval = 48
kernel.sched_domain.cpu20.domain1.name = MC
kernel.sched_domain.cpu21.domain0.busy_factor = 32
kernel.sched_domain.cpu21.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu21.domain0.flags = 4783
kernel.sched_domain.cpu21.domain0.imbalance_pct = 110
kernel.sched_domain.cpu21.domain0.max_interval = 4
kernel.sched_domain.cpu21.domain0.max_newidle_lb_cost = 346
kernel.sched_domain.cpu21.domain0.min_interval = 2
kernel.sched_domain.cpu21.domain0.name = SMT
kernel.sched_domain.cpu21.domain1.busy_factor = 32
kernel.sched_domain.cpu21.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu21.domain1.flags = 4655
kernel.sched_domain.cpu21.domain1.imbalance_pct = 117
kernel.sched_domain.cpu21.domain1.max_interval = 96
kernel.sched_domain.cpu21.domain1.max_newidle_lb_cost = 3853
kernel.sched_domain.cpu21.domain1.min_interval = 48
kernel.sched_domain.cpu21.domain1.name = MC
kernel.sched_domain.cpu22.domain0.busy_factor = 32
kernel.sched_domain.cpu22.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu22.domain0.flags = 4783
kernel.sched_domain.cpu22.domain0.imbalance_pct = 110
kernel.sched_domain.cpu22.domain0.max_interval = 4
kernel.sched_domain.cpu22.domain0.max_newidle_lb_cost = 317
kernel.sched_domain.cpu22.domain0.min_interval = 2
kernel.sched_domain.cpu22.domain0.name = SMT
kernel.sched_domain.cpu22.domain1.busy_factor = 32
kernel.sched_domain.cpu22.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu22.domain1.flags = 4655
kernel.sched_domain.cpu22.domain1.imbalance_pct = 117
kernel.sched_domain.cpu22.domain1.max_interval = 96
kernel.sched_domain.cpu22.domain1.max_newidle_lb_cost = 1984
kernel.sched_domain.cpu22.domain1.min_interval = 48
kernel.sched_domain.cpu22.domain1.name = MC
kernel.sched_domain.cpu23.domain0.busy_factor = 32
kernel.sched_domain.cpu23.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu23.domain0.flags = 4783
kernel.sched_domain.cpu23.domain0.imbalance_pct = 110
kernel.sched_domain.cpu23.domain0.max_interval = 4
kernel.sched_domain.cpu23.domain0.max_newidle_lb_cost = 641
kernel.sched_domain.cpu23.domain0.min_interval = 2
kernel.sched_domain.cpu23.domain0.name = SMT
kernel.sched_domain.cpu23.domain1.busy_factor = 32
kernel.sched_domain.cpu23.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu23.domain1.flags = 4655
kernel.sched_domain.cpu23.domain1.imbalance_pct = 117
kernel.sched_domain.cpu23.domain1.max_interval = 96
kernel.sched_domain.cpu23.domain1.max_newidle_lb_cost = 13867
kernel.sched_domain.cpu23.domain1.min_interval = 48
kernel.sched_domain.cpu23.domain1.name = MC
kernel.sched_domain.cpu24.domain0.busy_factor = 32
kernel.sched_domain.cpu24.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu24.domain0.flags = 4783
kernel.sched_domain.cpu24.domain0.imbalance_pct = 110
kernel.sched_domain.cpu24.domain0.max_interval = 4
kernel.sched_domain.cpu24.domain0.max_newidle_lb_cost = 901
kernel.sched_domain.cpu24.domain0.min_interval = 2
kernel.sched_domain.cpu24.domain0.name = SMT
kernel.sched_domain.cpu24.domain1.busy_factor = 32
kernel.sched_domain.cpu24.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu24.domain1.flags = 4655
kernel.sched_domain.cpu24.domain1.imbalance_pct = 117
kernel.sched_domain.cpu24.domain1.max_interval = 96
kernel.sched_domain.cpu24.domain1.max_newidle_lb_cost = 5323
kernel.sched_domain.cpu24.domain1.min_interval = 48
kernel.sched_domain.cpu24.domain1.name = MC
kernel.sched_domain.cpu25.domain0.busy_factor = 32
kernel.sched_domain.cpu25.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu25.domain0.flags = 4783
kernel.sched_domain.cpu25.domain0.imbalance_pct = 110
kernel.sched_domain.cpu25.domain0.max_interval = 4
kernel.sched_domain.cpu25.domain0.max_newidle_lb_cost = 370
kernel.sched_domain.cpu25.domain0.min_interval = 2
kernel.sched_domain.cpu25.domain0.name = SMT
kernel.sched_domain.cpu25.domain1.busy_factor = 32
kernel.sched_domain.cpu25.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu25.domain1.flags = 4655
kernel.sched_domain.cpu25.domain1.imbalance_pct = 117
kernel.sched_domain.cpu25.domain1.max_interval = 96
kernel.sched_domain.cpu25.domain1.max_newidle_lb_cost = 3380
kernel.sched_domain.cpu25.domain1.min_interval = 48
kernel.sched_domain.cpu25.domain1.name = MC
kernel.sched_domain.cpu26.domain0.busy_factor = 32
kernel.sched_domain.cpu26.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu26.domain0.flags = 4783
kernel.sched_domain.cpu26.domain0.imbalance_pct = 110
kernel.sched_domain.cpu26.domain0.max_interval = 4
kernel.sched_domain.cpu26.domain0.max_newidle_lb_cost = 67
kernel.sched_domain.cpu26.domain0.min_interval = 2
kernel.sched_domain.cpu26.domain0.name = SMT
kernel.sched_domain.cpu26.domain1.busy_factor = 32
kernel.sched_domain.cpu26.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu26.domain1.flags = 4655
kernel.sched_domain.cpu26.domain1.imbalance_pct = 117
kernel.sched_domain.cpu26.domain1.max_interval = 96
kernel.sched_domain.cpu26.domain1.max_newidle_lb_cost = 840
kernel.sched_domain.cpu26.domain1.min_interval = 48
kernel.sched_domain.cpu26.domain1.name = MC
kernel.sched_domain.cpu27.domain0.busy_factor = 32
kernel.sched_domain.cpu27.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu27.domain0.flags = 4783
kernel.sched_domain.cpu27.domain0.imbalance_pct = 110
kernel.sched_domain.cpu27.domain0.max_interval = 4
kernel.sched_domain.cpu27.domain0.max_newidle_lb_cost = 1146
kernel.sched_domain.cpu27.domain0.min_interval = 2
kernel.sched_domain.cpu27.domain0.name = SMT
kernel.sched_domain.cpu27.domain1.busy_factor = 32
kernel.sched_domain.cpu27.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu27.domain1.flags = 4655
kernel.sched_domain.cpu27.domain1.imbalance_pct = 117
kernel.sched_domain.cpu27.domain1.max_interval = 96
kernel.sched_domain.cpu27.domain1.max_newidle_lb_cost = 4451
kernel.sched_domain.cpu27.domain1.min_interval = 48
kernel.sched_domain.cpu27.domain1.name = MC
kernel.sched_domain.cpu28.domain0.busy_factor = 32
kernel.sched_domain.cpu28.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu28.domain0.flags = 4783
kernel.sched_domain.cpu28.domain0.imbalance_pct = 110
kernel.sched_domain.cpu28.domain0.max_interval = 4
kernel.sched_domain.cpu28.domain0.max_newidle_lb_cost = 483
kernel.sched_domain.cpu28.domain0.min_interval = 2
kernel.sched_domain.cpu28.domain0.name = SMT
kernel.sched_domain.cpu28.domain1.busy_factor = 32
kernel.sched_domain.cpu28.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu28.domain1.flags = 4655
kernel.sched_domain.cpu28.domain1.imbalance_pct = 117
kernel.sched_domain.cpu28.domain1.max_interval = 96
kernel.sched_domain.cpu28.domain1.max_newidle_lb_cost = 3296
kernel.sched_domain.cpu28.domain1.min_interval = 48
kernel.sched_domain.cpu28.domain1.name = MC
kernel.sched_domain.cpu29.domain0.busy_factor = 32
kernel.sched_domain.cpu29.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu29.domain0.flags = 4783
kernel.sched_domain.cpu29.domain0.imbalance_pct = 110
kernel.sched_domain.cpu29.domain0.max_interval = 4
kernel.sched_domain.cpu29.domain0.max_newidle_lb_cost = 163
kernel.sched_domain.cpu29.domain0.min_interval = 2
kernel.sched_domain.cpu29.domain0.name = SMT
kernel.sched_domain.cpu29.domain1.busy_factor = 32
kernel.sched_domain.cpu29.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu29.domain1.flags = 4655
kernel.sched_domain.cpu29.domain1.imbalance_pct = 117
kernel.sched_domain.cpu29.domain1.max_interval = 96
kernel.sched_domain.cpu29.domain1.max_newidle_lb_cost = 1191
kernel.sched_domain.cpu29.domain1.min_interval = 48
kernel.sched_domain.cpu29.domain1.name = MC
kernel.sched_domain.cpu3.domain0.busy_factor = 32
kernel.sched_domain.cpu3.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu3.domain0.flags = 4783
kernel.sched_domain.cpu3.domain0.imbalance_pct = 110
kernel.sched_domain.cpu3.domain0.max_interval = 4
kernel.sched_domain.cpu3.domain0.max_newidle_lb_cost = 528
kernel.sched_domain.cpu3.domain0.min_interval = 2
kernel.sched_domain.cpu3.domain0.name = SMT
kernel.sched_domain.cpu3.domain1.busy_factor = 32
kernel.sched_domain.cpu3.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu3.domain1.flags = 4655
kernel.sched_domain.cpu3.domain1.imbalance_pct = 117
kernel.sched_domain.cpu3.domain1.max_interval = 96
kernel.sched_domain.cpu3.domain1.max_newidle_lb_cost = 4269
kernel.sched_domain.cpu3.domain1.min_interval = 48
kernel.sched_domain.cpu3.domain1.name = MC
kernel.sched_domain.cpu30.domain0.busy_factor = 32
kernel.sched_domain.cpu30.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu30.domain0.flags = 4783
kernel.sched_domain.cpu30.domain0.imbalance_pct = 110
kernel.sched_domain.cpu30.domain0.max_interval = 4
kernel.sched_domain.cpu30.domain0.max_newidle_lb_cost = 0
kernel.sched_domain.cpu30.domain0.min_interval = 2
kernel.sched_domain.cpu30.domain0.name = SMT
kernel.sched_domain.cpu30.domain1.busy_factor = 32
kernel.sched_domain.cpu30.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu30.domain1.flags = 4655
kernel.sched_domain.cpu30.domain1.imbalance_pct = 117
kernel.sched_domain.cpu30.domain1.max_interval = 96
kernel.sched_domain.cpu30.domain1.max_newidle_lb_cost = 0
kernel.sched_domain.cpu30.domain1.min_interval = 48
kernel.sched_domain.cpu30.domain1.name = MC
kernel.sched_domain.cpu31.domain0.busy_factor = 32
kernel.sched_domain.cpu31.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu31.domain0.flags = 4783
kernel.sched_domain.cpu31.domain0.imbalance_pct = 110
kernel.sched_domain.cpu31.domain0.max_interval = 4
kernel.sched_domain.cpu31.domain0.max_newidle_lb_cost = 629
kernel.sched_domain.cpu31.domain0.min_interval = 2
kernel.sched_domain.cpu31.domain0.name = SMT
kernel.sched_domain.cpu31.domain1.busy_factor = 32
kernel.sched_domain.cpu31.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu31.domain1.flags = 4655
kernel.sched_domain.cpu31.domain1.imbalance_pct = 117
kernel.sched_domain.cpu31.domain1.max_interval = 96
kernel.sched_domain.cpu31.domain1.max_newidle_lb_cost = 3757
kernel.sched_domain.cpu31.domain1.min_interval = 48
kernel.sched_domain.cpu31.domain1.name = MC
kernel.sched_domain.cpu32.domain0.busy_factor = 32
kernel.sched_domain.cpu32.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu32.domain0.flags = 4783
kernel.sched_domain.cpu32.domain0.imbalance_pct = 110
kernel.sched_domain.cpu32.domain0.max_interval = 4
kernel.sched_domain.cpu32.domain0.max_newidle_lb_cost = 617
kernel.sched_domain.cpu32.domain0.min_interval = 2
kernel.sched_domain.cpu32.domain0.name = SMT
kernel.sched_domain.cpu32.domain1.busy_factor = 32
kernel.sched_domain.cpu32.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu32.domain1.flags = 4655
kernel.sched_domain.cpu32.domain1.imbalance_pct = 117
kernel.sched_domain.cpu32.domain1.max_interval = 96
kernel.sched_domain.cpu32.domain1.max_newidle_lb_cost = 3582
kernel.sched_domain.cpu32.domain1.min_interval = 48
kernel.sched_domain.cpu32.domain1.name = MC
kernel.sched_domain.cpu33.domain0.busy_factor = 32
kernel.sched_domain.cpu33.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu33.domain0.flags = 4783
kernel.sched_domain.cpu33.domain0.imbalance_pct = 110
kernel.sched_domain.cpu33.domain0.max_interval = 4
kernel.sched_domain.cpu33.domain0.max_newidle_lb_cost = 338
kernel.sched_domain.cpu33.domain0.min_interval = 2
kernel.sched_domain.cpu33.domain0.name = SMT
kernel.sched_domain.cpu33.domain1.busy_factor = 32
kernel.sched_domain.cpu33.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu33.domain1.flags = 4655
kernel.sched_domain.cpu33.domain1.imbalance_pct = 117
kernel.sched_domain.cpu33.domain1.max_interval = 96
kernel.sched_domain.cpu33.domain1.max_newidle_lb_cost = 4608
kernel.sched_domain.cpu33.domain1.min_interval = 48
kernel.sched_domain.cpu33.domain1.name = MC
kernel.sched_domain.cpu34.domain0.busy_factor = 32
kernel.sched_domain.cpu34.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu34.domain0.flags = 4783
kernel.sched_domain.cpu34.domain0.imbalance_pct = 110
kernel.sched_domain.cpu34.domain0.max_interval = 4
kernel.sched_domain.cpu34.domain0.max_newidle_lb_cost = 662
kernel.sched_domain.cpu34.domain0.min_interval = 2
kernel.sched_domain.cpu34.domain0.name = SMT
kernel.sched_domain.cpu34.domain1.busy_factor = 32
kernel.sched_domain.cpu34.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu34.domain1.flags = 4655
kernel.sched_domain.cpu34.domain1.imbalance_pct = 117
kernel.sched_domain.cpu34.domain1.max_interval = 96
kernel.sched_domain.cpu34.domain1.max_newidle_lb_cost = 5312
kernel.sched_domain.cpu34.domain1.min_interval = 48
kernel.sched_domain.cpu34.domain1.name = MC
kernel.sched_domain.cpu35.domain0.busy_factor = 32
kernel.sched_domain.cpu35.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu35.domain0.flags = 4783
kernel.sched_domain.cpu35.domain0.imbalance_pct = 110
kernel.sched_domain.cpu35.domain0.max_interval = 4
kernel.sched_domain.cpu35.domain0.max_newidle_lb_cost = 769
kernel.sched_domain.cpu35.domain0.min_interval = 2
kernel.sched_domain.cpu35.domain0.name = SMT
kernel.sched_domain.cpu35.domain1.busy_factor = 32
kernel.sched_domain.cpu35.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu35.domain1.flags = 4655
kernel.sched_domain.cpu35.domain1.imbalance_pct = 117
kernel.sched_domain.cpu35.domain1.max_interval = 96
kernel.sched_domain.cpu35.domain1.max_newidle_lb_cost = 9419
kernel.sched_domain.cpu35.domain1.min_interval = 48
kernel.sched_domain.cpu35.domain1.name = MC
kernel.sched_domain.cpu36.domain0.busy_factor = 32
kernel.sched_domain.cpu36.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu36.domain0.flags = 4783
kernel.sched_domain.cpu36.domain0.imbalance_pct = 110
kernel.sched_domain.cpu36.domain0.max_interval = 4
kernel.sched_domain.cpu36.domain0.max_newidle_lb_cost = 813
kernel.sched_domain.cpu36.domain0.min_interval = 2
kernel.sched_domain.cpu36.domain0.name = SMT
kernel.sched_domain.cpu36.domain1.busy_factor = 32
kernel.sched_domain.cpu36.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu36.domain1.flags = 4655
kernel.sched_domain.cpu36.domain1.imbalance_pct = 117
kernel.sched_domain.cpu36.domain1.max_interval = 96
kernel.sched_domain.cpu36.domain1.max_newidle_lb_cost = 3822
kernel.sched_domain.cpu36.domain1.min_interval = 48
kernel.sched_domain.cpu36.domain1.name = MC
kernel.sched_domain.cpu37.domain0.busy_factor = 32
kernel.sched_domain.cpu37.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu37.domain0.flags = 4783
kernel.sched_domain.cpu37.domain0.imbalance_pct = 110
kernel.sched_domain.cpu37.domain0.max_interval = 4
kernel.sched_domain.cpu37.domain0.max_newidle_lb_cost = 106
kernel.sched_domain.cpu37.domain0.min_interval = 2
kernel.sched_domain.cpu37.domain0.name = SMT
kernel.sched_domain.cpu37.domain1.busy_factor = 32
kernel.sched_domain.cpu37.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu37.domain1.flags = 4655
kernel.sched_domain.cpu37.domain1.imbalance_pct = 117
kernel.sched_domain.cpu37.domain1.max_interval = 96
kernel.sched_domain.cpu37.domain1.max_newidle_lb_cost = 783
kernel.sched_domain.cpu37.domain1.min_interval = 48
kernel.sched_domain.cpu37.domain1.name = MC
kernel.sched_domain.cpu38.domain0.busy_factor = 32
kernel.sched_domain.cpu38.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu38.domain0.flags = 4783
kernel.sched_domain.cpu38.domain0.imbalance_pct = 110
kernel.sched_domain.cpu38.domain0.max_interval = 4
kernel.sched_domain.cpu38.domain0.max_newidle_lb_cost = 0
kernel.sched_domain.cpu38.domain0.min_interval = 2
kernel.sched_domain.cpu38.domain0.name = SMT
kernel.sched_domain.cpu38.domain1.busy_factor = 32
kernel.sched_domain.cpu38.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu38.domain1.flags = 4655
kernel.sched_domain.cpu38.domain1.imbalance_pct = 117
kernel.sched_domain.cpu38.domain1.max_interval = 96
kernel.sched_domain.cpu38.domain1.max_newidle_lb_cost = 0
kernel.sched_domain.cpu38.domain1.min_interval = 48
kernel.sched_domain.cpu38.domain1.name = MC
kernel.sched_domain.cpu39.domain0.busy_factor = 32
kernel.sched_domain.cpu39.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu39.domain0.flags = 4783
kernel.sched_domain.cpu39.domain0.imbalance_pct = 110
kernel.sched_domain.cpu39.domain0.max_interval = 4
kernel.sched_domain.cpu39.domain0.max_newidle_lb_cost = 165
kernel.sched_domain.cpu39.domain0.min_interval = 2
kernel.sched_domain.cpu39.domain0.name = SMT
kernel.sched_domain.cpu39.domain1.busy_factor = 32
kernel.sched_domain.cpu39.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu39.domain1.flags = 4655
kernel.sched_domain.cpu39.domain1.imbalance_pct = 117
kernel.sched_domain.cpu39.domain1.max_interval = 96
kernel.sched_domain.cpu39.domain1.max_newidle_lb_cost = 935
kernel.sched_domain.cpu39.domain1.min_interval = 48
kernel.sched_domain.cpu39.domain1.name = MC
kernel.sched_domain.cpu4.domain0.busy_factor = 32
kernel.sched_domain.cpu4.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu4.domain0.flags = 4783
kernel.sched_domain.cpu4.domain0.imbalance_pct = 110
kernel.sched_domain.cpu4.domain0.max_interval = 4
kernel.sched_domain.cpu4.domain0.max_newidle_lb_cost = 655
kernel.sched_domain.cpu4.domain0.min_interval = 2
kernel.sched_domain.cpu4.domain0.name = SMT
kernel.sched_domain.cpu4.domain1.busy_factor = 32
kernel.sched_domain.cpu4.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu4.domain1.flags = 4655
kernel.sched_domain.cpu4.domain1.imbalance_pct = 117
kernel.sched_domain.cpu4.domain1.max_interval = 96
kernel.sched_domain.cpu4.domain1.max_newidle_lb_cost = 13492
kernel.sched_domain.cpu4.domain1.min_interval = 48
kernel.sched_domain.cpu4.domain1.name = MC
kernel.sched_domain.cpu40.domain0.busy_factor = 32
kernel.sched_domain.cpu40.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu40.domain0.flags = 4783
kernel.sched_domain.cpu40.domain0.imbalance_pct = 110
kernel.sched_domain.cpu40.domain0.max_interval = 4
kernel.sched_domain.cpu40.domain0.max_newidle_lb_cost = 94
kernel.sched_domain.cpu40.domain0.min_interval = 2
kernel.sched_domain.cpu40.domain0.name = SMT
kernel.sched_domain.cpu40.domain1.busy_factor = 32
kernel.sched_domain.cpu40.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu40.domain1.flags = 4655
kernel.sched_domain.cpu40.domain1.imbalance_pct = 117
kernel.sched_domain.cpu40.domain1.max_interval = 96
kernel.sched_domain.cpu40.domain1.max_newidle_lb_cost = 630
kernel.sched_domain.cpu40.domain1.min_interval = 48
kernel.sched_domain.cpu40.domain1.name = MC
kernel.sched_domain.cpu41.domain0.busy_factor = 32
kernel.sched_domain.cpu41.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu41.domain0.flags = 4783
kernel.sched_domain.cpu41.domain0.imbalance_pct = 110
kernel.sched_domain.cpu41.domain0.max_interval = 4
kernel.sched_domain.cpu41.domain0.max_newidle_lb_cost = 0
kernel.sched_domain.cpu41.domain0.min_interval = 2
kernel.sched_domain.cpu41.domain0.name = SMT
kernel.sched_domain.cpu41.domain1.busy_factor = 32
kernel.sched_domain.cpu41.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu41.domain1.flags = 4655
kernel.sched_domain.cpu41.domain1.imbalance_pct = 117
kernel.sched_domain.cpu41.domain1.max_interval = 96
kernel.sched_domain.cpu41.domain1.max_newidle_lb_cost = 0
kernel.sched_domain.cpu41.domain1.min_interval = 48
kernel.sched_domain.cpu41.domain1.name = MC
kernel.sched_domain.cpu42.domain0.busy_factor = 32
kernel.sched_domain.cpu42.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu42.domain0.flags = 4783
kernel.sched_domain.cpu42.domain0.imbalance_pct = 110
kernel.sched_domain.cpu42.domain0.max_interval = 4
kernel.sched_domain.cpu42.domain0.max_newidle_lb_cost = 532
kernel.sched_domain.cpu42.domain0.min_interval = 2
kernel.sched_domain.cpu42.domain0.name = SMT
kernel.sched_domain.cpu42.domain1.busy_factor = 32
kernel.sched_domain.cpu42.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu42.domain1.flags = 4655
kernel.sched_domain.cpu42.domain1.imbalance_pct = 117
kernel.sched_domain.cpu42.domain1.max_interval = 96
kernel.sched_domain.cpu42.domain1.max_newidle_lb_cost = 3944
kernel.sched_domain.cpu42.domain1.min_interval = 48
kernel.sched_domain.cpu42.domain1.name = MC
kernel.sched_domain.cpu43.domain0.busy_factor = 32
kernel.sched_domain.cpu43.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu43.domain0.flags = 4783
kernel.sched_domain.cpu43.domain0.imbalance_pct = 110
kernel.sched_domain.cpu43.domain0.max_interval = 4
kernel.sched_domain.cpu43.domain0.max_newidle_lb_cost = 696
kernel.sched_domain.cpu43.domain0.min_interval = 2
kernel.sched_domain.cpu43.domain0.name = SMT
kernel.sched_domain.cpu43.domain1.busy_factor = 32
kernel.sched_domain.cpu43.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu43.domain1.flags = 4655
kernel.sched_domain.cpu43.domain1.imbalance_pct = 117
kernel.sched_domain.cpu43.domain1.max_interval = 96
kernel.sched_domain.cpu43.domain1.max_newidle_lb_cost = 5533
kernel.sched_domain.cpu43.domain1.min_interval = 48
kernel.sched_domain.cpu43.domain1.name = MC
kernel.sched_domain.cpu44.domain0.busy_factor = 32
kernel.sched_domain.cpu44.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu44.domain0.flags = 4783
kernel.sched_domain.cpu44.domain0.imbalance_pct = 110
kernel.sched_domain.cpu44.domain0.max_interval = 4
kernel.sched_domain.cpu44.domain0.max_newidle_lb_cost = 281
kernel.sched_domain.cpu44.domain0.min_interval = 2
kernel.sched_domain.cpu44.domain0.name = SMT
kernel.sched_domain.cpu44.domain1.busy_factor = 32
kernel.sched_domain.cpu44.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu44.domain1.flags = 4655
kernel.sched_domain.cpu44.domain1.imbalance_pct = 117
kernel.sched_domain.cpu44.domain1.max_interval = 96
kernel.sched_domain.cpu44.domain1.max_newidle_lb_cost = 3730
kernel.sched_domain.cpu44.domain1.min_interval = 48
kernel.sched_domain.cpu44.domain1.name = MC
kernel.sched_domain.cpu45.domain0.busy_factor = 32
kernel.sched_domain.cpu45.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu45.domain0.flags = 4783
kernel.sched_domain.cpu45.domain0.imbalance_pct = 110
kernel.sched_domain.cpu45.domain0.max_interval = 4
kernel.sched_domain.cpu45.domain0.max_newidle_lb_cost = 214
kernel.sched_domain.cpu45.domain0.min_interval = 2
kernel.sched_domain.cpu45.domain0.name = SMT
kernel.sched_domain.cpu45.domain1.busy_factor = 32
kernel.sched_domain.cpu45.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu45.domain1.flags = 4655
kernel.sched_domain.cpu45.domain1.imbalance_pct = 117
kernel.sched_domain.cpu45.domain1.max_interval = 96
kernel.sched_domain.cpu45.domain1.max_newidle_lb_cost = 2498
kernel.sched_domain.cpu45.domain1.min_interval = 48
kernel.sched_domain.cpu45.domain1.name = MC
kernel.sched_domain.cpu46.domain0.busy_factor = 32
kernel.sched_domain.cpu46.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu46.domain0.flags = 4783
kernel.sched_domain.cpu46.domain0.imbalance_pct = 110
kernel.sched_domain.cpu46.domain0.max_interval = 4
kernel.sched_domain.cpu46.domain0.max_newidle_lb_cost = 0
kernel.sched_domain.cpu46.domain0.min_interval = 2
kernel.sched_domain.cpu46.domain0.name = SMT
kernel.sched_domain.cpu46.domain1.busy_factor = 32
kernel.sched_domain.cpu46.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu46.domain1.flags = 4655
kernel.sched_domain.cpu46.domain1.imbalance_pct = 117
kernel.sched_domain.cpu46.domain1.max_interval = 96
kernel.sched_domain.cpu46.domain1.max_newidle_lb_cost = 0
kernel.sched_domain.cpu46.domain1.min_interval = 48
kernel.sched_domain.cpu46.domain1.name = MC
kernel.sched_domain.cpu47.domain0.busy_factor = 32
kernel.sched_domain.cpu47.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu47.domain0.flags = 4783
kernel.sched_domain.cpu47.domain0.imbalance_pct = 110
kernel.sched_domain.cpu47.domain0.max_interval = 4
kernel.sched_domain.cpu47.domain0.max_newidle_lb_cost = 148
kernel.sched_domain.cpu47.domain0.min_interval = 2
kernel.sched_domain.cpu47.domain0.name = SMT
kernel.sched_domain.cpu47.domain1.busy_factor = 32
kernel.sched_domain.cpu47.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu47.domain1.flags = 4655
kernel.sched_domain.cpu47.domain1.imbalance_pct = 117
kernel.sched_domain.cpu47.domain1.max_interval = 96
kernel.sched_domain.cpu47.domain1.max_newidle_lb_cost = 1138
kernel.sched_domain.cpu47.domain1.min_interval = 48
kernel.sched_domain.cpu47.domain1.name = MC
kernel.sched_domain.cpu5.domain0.busy_factor = 32
kernel.sched_domain.cpu5.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu5.domain0.flags = 4783
kernel.sched_domain.cpu5.domain0.imbalance_pct = 110
kernel.sched_domain.cpu5.domain0.max_interval = 4
kernel.sched_domain.cpu5.domain0.max_newidle_lb_cost = 292
kernel.sched_domain.cpu5.domain0.min_interval = 2
kernel.sched_domain.cpu5.domain0.name = SMT
kernel.sched_domain.cpu5.domain1.busy_factor = 32
kernel.sched_domain.cpu5.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu5.domain1.flags = 4655
kernel.sched_domain.cpu5.domain1.imbalance_pct = 117
kernel.sched_domain.cpu5.domain1.max_interval = 96
kernel.sched_domain.cpu5.domain1.max_newidle_lb_cost = 2502
kernel.sched_domain.cpu5.domain1.min_interval = 48
kernel.sched_domain.cpu5.domain1.name = MC
kernel.sched_domain.cpu6.domain0.busy_factor = 32
kernel.sched_domain.cpu6.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu6.domain0.flags = 4783
kernel.sched_domain.cpu6.domain0.imbalance_pct = 110
kernel.sched_domain.cpu6.domain0.max_interval = 4
kernel.sched_domain.cpu6.domain0.max_newidle_lb_cost = 698
kernel.sched_domain.cpu6.domain0.min_interval = 2
kernel.sched_domain.cpu6.domain0.name = SMT
kernel.sched_domain.cpu6.domain1.busy_factor = 32
kernel.sched_domain.cpu6.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu6.domain1.flags = 4655
kernel.sched_domain.cpu6.domain1.imbalance_pct = 117
kernel.sched_domain.cpu6.domain1.max_interval = 96
kernel.sched_domain.cpu6.domain1.max_newidle_lb_cost = 14389
kernel.sched_domain.cpu6.domain1.min_interval = 48
kernel.sched_domain.cpu6.domain1.name = MC
kernel.sched_domain.cpu7.domain0.busy_factor = 32
kernel.sched_domain.cpu7.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu7.domain0.flags = 4783
kernel.sched_domain.cpu7.domain0.imbalance_pct = 110
kernel.sched_domain.cpu7.domain0.max_interval = 4
kernel.sched_domain.cpu7.domain0.max_newidle_lb_cost = 517
kernel.sched_domain.cpu7.domain0.min_interval = 2
kernel.sched_domain.cpu7.domain0.name = SMT
kernel.sched_domain.cpu7.domain1.busy_factor = 32
kernel.sched_domain.cpu7.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu7.domain1.flags = 4655
kernel.sched_domain.cpu7.domain1.imbalance_pct = 117
kernel.sched_domain.cpu7.domain1.max_interval = 96
kernel.sched_domain.cpu7.domain1.max_newidle_lb_cost = 3315
kernel.sched_domain.cpu7.domain1.min_interval = 48
kernel.sched_domain.cpu7.domain1.name = MC
kernel.sched_domain.cpu8.domain0.busy_factor = 32
kernel.sched_domain.cpu8.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu8.domain0.flags = 4783
kernel.sched_domain.cpu8.domain0.imbalance_pct = 110
kernel.sched_domain.cpu8.domain0.max_interval = 4
kernel.sched_domain.cpu8.domain0.max_newidle_lb_cost = 1873
kernel.sched_domain.cpu8.domain0.min_interval = 2
kernel.sched_domain.cpu8.domain0.name = SMT
kernel.sched_domain.cpu8.domain1.busy_factor = 32
kernel.sched_domain.cpu8.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu8.domain1.flags = 4655
kernel.sched_domain.cpu8.domain1.imbalance_pct = 117
kernel.sched_domain.cpu8.domain1.max_interval = 96
kernel.sched_domain.cpu8.domain1.max_newidle_lb_cost = 12517
kernel.sched_domain.cpu8.domain1.min_interval = 48
kernel.sched_domain.cpu8.domain1.name = MC
kernel.sched_domain.cpu9.domain0.busy_factor = 32
kernel.sched_domain.cpu9.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu9.domain0.flags = 4783
kernel.sched_domain.cpu9.domain0.imbalance_pct = 110
kernel.sched_domain.cpu9.domain0.max_interval = 4
kernel.sched_domain.cpu9.domain0.max_newidle_lb_cost = 44
kernel.sched_domain.cpu9.domain0.min_interval = 2
kernel.sched_domain.cpu9.domain0.name = SMT
kernel.sched_domain.cpu9.domain1.busy_factor = 32
kernel.sched_domain.cpu9.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu9.domain1.flags = 4655
kernel.sched_domain.cpu9.domain1.imbalance_pct = 117
kernel.sched_domain.cpu9.domain1.max_interval = 96
kernel.sched_domain.cpu9.domain1.max_newidle_lb_cost = 931
kernel.sched_domain.cpu9.domain1.min_interval = 48
kernel.sched_domain.cpu9.domain1.name = MC
kernel.sched_latency_ns = 24000000
kernel.sched_migration_cost_ns = 500000
kernel.sched_min_granularity_ns = 3000000
kernel.sched_nr_migrate = 32
kernel.sched_rr_timeslice_ms = 100
kernel.sched_rt_period_us = 1000000
kernel.sched_rt_runtime_us = 950000
kernel.sched_schedstats = 0
kernel.sched_tunable_scaling = 1
kernel.sched_util_clamp_max = 1024
kernel.sched_util_clamp_min = 1024
kernel.sched_wakeup_granularity_ns = 4000000
kernel.seccomp.actions_avail = kill_process kill_thread trap errno user_notif trace log allow
kernel.seccomp.actions_logged = kill_process kill_thread trap errno user_notif trace log
kernel.sem = 32000	1024000000	500	32000
kernel.sem_next_id = -1
kernel.sg-big-buff = 32768
kernel.shm_next_id = -1
kernel.shm_rmid_forced = 0
kernel.shmall = 18446744073692774399
kernel.shmmax = 18446744073692774399
kernel.shmmni = 4096
kernel.soft_watchdog = 1
kernel.softlockup_all_cpu_backtrace = 0
kernel.softlockup_panic = 0
kernel.stack_tracer_enabled = 0
kernel.sysctl_writes_strict = 1
kernel.sysrq = 176
kernel.tainted = 0
kernel.threads-max = 755124
kernel.timer_migration = 1
kernel.traceoff_on_warning = 0
kernel.tracepoint_printk = 0
kernel.unknown_nmi_panic = 0
kernel.unprivileged_bpf_disabled = 0
kernel.unprivileged_userns_apparmor_policy = 1
kernel.usermodehelper.bset = 4294967295	63
kernel.usermodehelper.inheritable = 4294967295	63
kernel.version = #2 SMP Thu Nov 21 16:52:26 HKT 2019
kernel.watchdog = 1
kernel.watchdog_cpumask = 0-47
kernel.watchdog_thresh = 10
kernel.yama.ptrace_scope = 1
net.core.bpf_jit_enable = 1
net.core.bpf_jit_harden = 0
net.core.bpf_jit_kallsyms = 0
net.core.bpf_jit_limit = 264241152
net.core.busy_poll = 0
net.core.busy_read = 0
net.core.default_qdisc = fq
net.core.dev_weight = 64
net.core.dev_weight_rx_bias = 1
net.core.dev_weight_tx_bias = 1
net.core.devconf_inherit_init_net = 0
net.core.fb_tunnels_only_for_init_net = 0
net.core.flow_limit_cpu_bitmap = 0000,00000000
net.core.flow_limit_table_len = 4096
net.core.gro_normal_batch = 8
net.core.high_order_alloc_disable = 0
net.core.max_skb_frags = 17
net.core.message_burst = 10
net.core.message_cost = 5
net.core.netdev_budget = 300
net.core.netdev_budget_usecs = 2000
net.core.netdev_max_backlog = 1000
net.core.netdev_rss_key = 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
net.core.netdev_tstamp_prequeue = 1
net.core.optmem_max = 20480
net.core.rmem_default = 212992
net.core.rmem_max = 212992
net.core.rps_sock_flow_entries = 0
net.core.somaxconn = 4096
net.core.tstamp_allow_data = 1
net.core.warnings = 0
net.core.wmem_default = 212992
net.core.wmem_max = 212992
net.core.xfrm_acq_expires = 30
net.core.xfrm_aevent_etime = 10
net.core.xfrm_aevent_rseqth = 2
net.core.xfrm_larval_drop = 1
net.ipv4.cipso_cache_bucket_size = 10
net.ipv4.cipso_cache_enable = 1
net.ipv4.cipso_rbm_optfmt = 0
net.ipv4.cipso_rbm_strictvalid = 1
net.ipv4.conf.all.accept_local = 0
net.ipv4.conf.all.accept_redirects = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.arp_accept = 0
net.ipv4.conf.all.arp_announce = 0
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.arp_ignore = 0
net.ipv4.conf.all.arp_notify = 0
net.ipv4.conf.all.bc_forwarding = 0
net.ipv4.conf.all.bootp_relay = 0
net.ipv4.conf.all.disable_policy = 0
net.ipv4.conf.all.disable_xfrm = 0
net.ipv4.conf.all.drop_gratuitous_arp = 0
net.ipv4.conf.all.drop_unicast_in_l2_multicast = 0
net.ipv4.conf.all.force_igmp_version = 0
net.ipv4.conf.all.forwarding = 0
net.ipv4.conf.all.igmpv2_unsolicited_report_interval = 10000
net.ipv4.conf.all.igmpv3_unsolicited_report_interval = 1000
net.ipv4.conf.all.ignore_routes_with_linkdown = 0
net.ipv4.conf.all.log_martians = 0
net.ipv4.conf.all.mc_forwarding = 0
net.ipv4.conf.all.medium_id = 0
net.ipv4.conf.all.promote_secondaries = 0
net.ipv4.conf.all.proxy_arp = 0
net.ipv4.conf.all.proxy_arp_pvlan = 0
net.ipv4.conf.all.route_localnet = 0
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.secure_redirects = 1
net.ipv4.conf.all.send_redirects = 1
net.ipv4.conf.all.shared_media = 1
net.ipv4.conf.all.src_valid_mark = 0
net.ipv4.conf.all.tag = 0
net.ipv4.conf.default.accept_local = 0
net.ipv4.conf.default.accept_redirects = 1
net.ipv4.conf.default.accept_source_route = 1
net.ipv4.conf.default.arp_accept = 0
net.ipv4.conf.default.arp_announce = 0
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.arp_ignore = 0
net.ipv4.conf.default.arp_notify = 0
net.ipv4.conf.default.bc_forwarding = 0
net.ipv4.conf.default.bootp_relay = 0
net.ipv4.conf.default.disable_policy = 0
net.ipv4.conf.default.disable_xfrm = 0
net.ipv4.conf.default.drop_gratuitous_arp = 0
net.ipv4.conf.default.drop_unicast_in_l2_multicast = 0
net.ipv4.conf.default.force_igmp_version = 0
net.ipv4.conf.default.forwarding = 0
net.ipv4.conf.default.igmpv2_unsolicited_report_interval = 10000
net.ipv4.conf.default.igmpv3_unsolicited_report_interval = 1000
net.ipv4.conf.default.ignore_routes_with_linkdown = 0
net.ipv4.conf.default.log_martians = 0
net.ipv4.conf.default.mc_forwarding = 0
net.ipv4.conf.default.medium_id = 0
net.ipv4.conf.default.promote_secondaries = 0
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.conf.default.proxy_arp_pvlan = 0
net.ipv4.conf.default.route_localnet = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.secure_redirects = 1
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.default.shared_media = 1
net.ipv4.conf.default.src_valid_mark = 0
net.ipv4.conf.default.tag = 0
net.ipv4.conf.ens5.accept_local = 0
net.ipv4.conf.ens5.accept_redirects = 1
net.ipv4.conf.ens5.accept_source_route = 1
net.ipv4.conf.ens5.arp_accept = 0
net.ipv4.conf.ens5.arp_announce = 0
net.ipv4.conf.ens5.arp_filter = 0
net.ipv4.conf.ens5.arp_ignore = 0
net.ipv4.conf.ens5.arp_notify = 0
net.ipv4.conf.ens5.bc_forwarding = 0
net.ipv4.conf.ens5.bootp_relay = 0
net.ipv4.conf.ens5.disable_policy = 0
net.ipv4.conf.ens5.disable_xfrm = 0
net.ipv4.conf.ens5.drop_gratuitous_arp = 0
net.ipv4.conf.ens5.drop_unicast_in_l2_multicast = 0
net.ipv4.conf.ens5.force_igmp_version = 0
net.ipv4.conf.ens5.forwarding = 0
net.ipv4.conf.ens5.igmpv2_unsolicited_report_interval = 10000
net.ipv4.conf.ens5.igmpv3_unsolicited_report_interval = 1000
net.ipv4.conf.ens5.ignore_routes_with_linkdown = 0
net.ipv4.conf.ens5.log_martians = 0
net.ipv4.conf.ens5.mc_forwarding = 0
net.ipv4.conf.ens5.medium_id = 0
net.ipv4.conf.ens5.promote_secondaries = 0
net.ipv4.conf.ens5.proxy_arp = 0
net.ipv4.conf.ens5.proxy_arp_pvlan = 0
net.ipv4.conf.ens5.route_localnet = 0
net.ipv4.conf.ens5.rp_filter = 1
net.ipv4.conf.ens5.secure_redirects = 1
net.ipv4.conf.ens5.send_redirects = 1
net.ipv4.conf.ens5.shared_media = 1
net.ipv4.conf.ens5.src_valid_mark = 0
net.ipv4.conf.ens5.tag = 0
net.ipv4.conf.lo.accept_local = 0
net.ipv4.conf.lo.accept_redirects = 1
net.ipv4.conf.lo.accept_source_route = 1
net.ipv4.conf.lo.arp_accept = 0
net.ipv4.conf.lo.arp_announce = 0
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.arp_ignore = 0
net.ipv4.conf.lo.arp_notify = 0
net.ipv4.conf.lo.bc_forwarding = 0
net.ipv4.conf.lo.bootp_relay = 0
net.ipv4.conf.lo.disable_policy = 1
net.ipv4.conf.lo.disable_xfrm = 1
net.ipv4.conf.lo.drop_gratuitous_arp = 0
net.ipv4.conf.lo.drop_unicast_in_l2_multicast = 0
net.ipv4.conf.lo.force_igmp_version = 0
net.ipv4.conf.lo.forwarding = 0
net.ipv4.conf.lo.igmpv2_unsolicited_report_interval = 10000
net.ipv4.conf.lo.igmpv3_unsolicited_report_interval = 1000
net.ipv4.conf.lo.ignore_routes_with_linkdown = 0
net.ipv4.conf.lo.log_martians = 0
net.ipv4.conf.lo.mc_forwarding = 0
net.ipv4.conf.lo.medium_id = 0
net.ipv4.conf.lo.promote_secondaries = 0
net.ipv4.conf.lo.proxy_arp = 0
net.ipv4.conf.lo.proxy_arp_pvlan = 0
net.ipv4.conf.lo.route_localnet = 0
net.ipv4.conf.lo.rp_filter = 0
net.ipv4.conf.lo.secure_redirects = 1
net.ipv4.conf.lo.send_redirects = 1
net.ipv4.conf.lo.shared_media = 1
net.ipv4.conf.lo.src_valid_mark = 0
net.ipv4.conf.lo.tag = 0
net.ipv4.fib_multipath_hash_policy = 0
net.ipv4.fib_multipath_use_neigh = 0
net.ipv4.fib_sync_mem = 524288
net.ipv4.fwmark_reflect = 0
net.ipv4.icmp_echo_ignore_all = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_errors_use_inbound_ifaddr = 0
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.icmp_msgs_burst = 50
net.ipv4.icmp_msgs_per_sec = 1000
net.ipv4.icmp_ratelimit = 1000
net.ipv4.icmp_ratemask = 6168
net.ipv4.igmp_link_local_mcast_reports = 1
net.ipv4.igmp_max_memberships = 20
net.ipv4.igmp_max_msf = 10
net.ipv4.igmp_qrv = 2
net.ipv4.inet_peer_maxttl = 600
net.ipv4.inet_peer_minttl = 120
net.ipv4.inet_peer_threshold = 65664
net.ipv4.ip_default_ttl = 64
net.ipv4.ip_dynaddr = 0
net.ipv4.ip_early_demux = 1
net.ipv4.ip_forward = 0
net.ipv4.ip_forward_update_priority = 1
net.ipv4.ip_forward_use_pmtu = 0
net.ipv4.ip_local_port_range = 32768	60999
net.ipv4.ip_local_reserved_ports = 
net.ipv4.ip_no_pmtu_disc = 0
net.ipv4.ip_nonlocal_bind = 0
net.ipv4.ip_unprivileged_port_start = 1024
net.ipv4.ipfrag_high_thresh = 4194304
net.ipv4.ipfrag_low_thresh = 3145728
net.ipv4.ipfrag_max_dist = 64
net.ipv4.ipfrag_secret_interval = 0
net.ipv4.ipfrag_time = 30
net.ipv4.neigh.default.anycast_delay = 100
net.ipv4.neigh.default.app_solicit = 0
net.ipv4.neigh.default.base_reachable_time_ms = 30000
net.ipv4.neigh.default.delay_first_probe_time = 5
net.ipv4.neigh.default.gc_interval = 30
net.ipv4.neigh.default.gc_stale_time = 60
net.ipv4.neigh.default.gc_thresh1 = 128
net.ipv4.neigh.default.gc_thresh2 = 512
net.ipv4.neigh.default.gc_thresh3 = 1024
net.ipv4.neigh.default.locktime = 100
net.ipv4.neigh.default.mcast_resolicit = 0
net.ipv4.neigh.default.mcast_solicit = 3
net.ipv4.neigh.default.proxy_delay = 80
net.ipv4.neigh.default.proxy_qlen = 64
net.ipv4.neigh.default.retrans_time_ms = 1000
net.ipv4.neigh.default.ucast_solicit = 3
net.ipv4.neigh.default.unres_qlen = 101
net.ipv4.neigh.default.unres_qlen_bytes = 212992
net.ipv4.neigh.ens5.anycast_delay = 100
net.ipv4.neigh.ens5.app_solicit = 0
net.ipv4.neigh.ens5.base_reachable_time_ms = 30000
net.ipv4.neigh.ens5.delay_first_probe_time = 5
net.ipv4.neigh.ens5.gc_stale_time = 60
net.ipv4.neigh.ens5.locktime = 100
net.ipv4.neigh.ens5.mcast_resolicit = 0
net.ipv4.neigh.ens5.mcast_solicit = 3
net.ipv4.neigh.ens5.proxy_delay = 80
net.ipv4.neigh.ens5.proxy_qlen = 64
net.ipv4.neigh.ens5.retrans_time_ms = 1000
net.ipv4.neigh.ens5.ucast_solicit = 3
net.ipv4.neigh.ens5.unres_qlen = 101
net.ipv4.neigh.ens5.unres_qlen_bytes = 212992
net.ipv4.neigh.lo.anycast_delay = 100
net.ipv4.neigh.lo.app_solicit = 0
net.ipv4.neigh.lo.base_reachable_time_ms = 30000
net.ipv4.neigh.lo.delay_first_probe_time = 5
net.ipv4.neigh.lo.gc_stale_time = 60
net.ipv4.neigh.lo.locktime = 100
net.ipv4.neigh.lo.mcast_resolicit = 0
net.ipv4.neigh.lo.mcast_solicit = 3
net.ipv4.neigh.lo.proxy_delay = 80
net.ipv4.neigh.lo.proxy_qlen = 64
net.ipv4.neigh.lo.retrans_time_ms = 1000
net.ipv4.neigh.lo.ucast_solicit = 3
net.ipv4.neigh.lo.unres_qlen = 101
net.ipv4.neigh.lo.unres_qlen_bytes = 212992
net.ipv4.ping_group_range = 1	0
net.ipv4.raw_l3mdev_accept = 1
net.ipv4.route.error_burst = 1250
net.ipv4.route.error_cost = 250
net.ipv4.route.gc_elasticity = 8
net.ipv4.route.gc_interval = 60
net.ipv4.route.gc_min_interval = 0
net.ipv4.route.gc_min_interval_ms = 500
net.ipv4.route.gc_thresh = -1
net.ipv4.route.gc_timeout = 300
net.ipv4.route.max_size = 2147483647
net.ipv4.route.min_adv_mss = 256
net.ipv4.route.min_pmtu = 552
net.ipv4.route.mtu_expires = 600
net.ipv4.route.redirect_load = 5
net.ipv4.route.redirect_number = 9
net.ipv4.route.redirect_silence = 5120
net.ipv4.tcp_abort_on_overflow = 0
net.ipv4.tcp_adv_win_scale = 1
net.ipv4.tcp_allowed_congestion_control = reno cubic bbr2
net.ipv4.tcp_app_win = 31
net.ipv4.tcp_autocorking = 1
net.ipv4.tcp_available_congestion_control = reno cubic bbr2
net.ipv4.tcp_available_ulp = 
net.ipv4.tcp_base_mss = 1024
net.ipv4.tcp_challenge_ack_limit = 1000
net.ipv4.tcp_comp_sack_delay_ns = 1000000
net.ipv4.tcp_comp_sack_nr = 44
net.ipv4.tcp_congestion_control = bbr2
net.ipv4.tcp_dsack = 1
net.ipv4.tcp_early_demux = 1
net.ipv4.tcp_early_retrans = 3
net.ipv4.tcp_ecn = 1
net.ipv4.tcp_ecn_fallback = 1
net.ipv4.tcp_fack = 0
net.ipv4.tcp_fastopen = 1
net.ipv4.tcp_fastopen_blackhole_timeout_sec = 3600
net.ipv4.tcp_fastopen_key = 00000000-00000000-00000000-00000000
net.ipv4.tcp_fin_timeout = 60
net.ipv4.tcp_frto = 2
net.ipv4.tcp_fwmark_accept = 0
net.ipv4.tcp_invalid_ratelimit = 500
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_l3mdev_accept = 0
net.ipv4.tcp_limit_output_bytes = 1048576
net.ipv4.tcp_low_latency = 0
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_reordering = 300
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_max_tw_buckets = 262144
net.ipv4.tcp_mem = 1129629	1506173	2259258
net.ipv4.tcp_min_rtt_wlen = 300
net.ipv4.tcp_min_snd_mss = 48
net.ipv4.tcp_min_tso_segs = 2
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_mtu_probe_floor = 48
net.ipv4.tcp_mtu_probing = 0
net.ipv4.tcp_no_metrics_save = 0
net.ipv4.tcp_notsent_lowat = 4294967295
net.ipv4.tcp_orphan_retries = 0
net.ipv4.tcp_pacing_ca_ratio = 120
net.ipv4.tcp_pacing_ss_ratio = 200
net.ipv4.tcp_probe_interval = 600
net.ipv4.tcp_probe_threshold = 8
net.ipv4.tcp_recovery = 1
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_retrans_collapse = 1
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_rfc1337 = 0
net.ipv4.tcp_rmem = 4096	131072	6291456
net.ipv4.tcp_rx_skb_cache = 0
net.ipv4.tcp_sack = 1
net.ipv4.tcp_slow_start_after_idle = 1
net.ipv4.tcp_stdurg = 0
net.ipv4.tcp_syn_retries = 6
net.ipv4.tcp_synack_retries = 5
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_thin_linear_timeouts = 0
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tso_win_divisor = 3
net.ipv4.tcp_tw_reuse = 2
net.ipv4.tcp_tx_skb_cache = 0
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_wmem = 4096	16384	4194304
net.ipv4.tcp_workaround_signed_windows = 0
net.ipv4.udp_early_demux = 1
net.ipv4.udp_l3mdev_accept = 0
net.ipv4.udp_mem = 2259258	3012347	4518516
net.ipv4.udp_rmem_min = 4096
net.ipv4.udp_wmem_min = 4096
net.ipv4.xfrm4_gc_thresh = 32768
net.ipv6.anycast_src_echo_reply = 0
net.ipv6.auto_flowlabels = 1
net.ipv6.bindv6only = 0
net.ipv6.calipso_cache_bucket_size = 10
net.ipv6.calipso_cache_enable = 1
net.ipv6.conf.all.accept_dad = 0
net.ipv6.conf.all.accept_ra = 1
net.ipv6.conf.all.accept_ra_defrtr = 1
net.ipv6.conf.all.accept_ra_from_local = 0
net.ipv6.conf.all.accept_ra_min_hop_limit = 1
net.ipv6.conf.all.accept_ra_mtu = 1
net.ipv6.conf.all.accept_ra_pinfo = 1
net.ipv6.conf.all.accept_ra_rt_info_max_plen = 0
net.ipv6.conf.all.accept_ra_rt_info_min_plen = 0
net.ipv6.conf.all.accept_ra_rtr_pref = 1
net.ipv6.conf.all.accept_redirects = 1
net.ipv6.conf.all.accept_source_route = 0
net.ipv6.conf.all.addr_gen_mode = 0
net.ipv6.conf.all.autoconf = 1
net.ipv6.conf.all.dad_transmits = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.all.disable_policy = 0
net.ipv6.conf.all.drop_unicast_in_l2_multicast = 0
net.ipv6.conf.all.drop_unsolicited_na = 0
net.ipv6.conf.all.enhanced_dad = 1
net.ipv6.conf.all.force_mld_version = 0
net.ipv6.conf.all.force_tllao = 0
net.ipv6.conf.all.forwarding = 0
net.ipv6.conf.all.hop_limit = 64
net.ipv6.conf.all.ignore_routes_with_linkdown = 0
net.ipv6.conf.all.keep_addr_on_down = 0
net.ipv6.conf.all.max_addresses = 16
net.ipv6.conf.all.max_desync_factor = 600
net.ipv6.conf.all.mc_forwarding = 0
net.ipv6.conf.all.mldv1_unsolicited_report_interval = 10000
net.ipv6.conf.all.mldv2_unsolicited_report_interval = 1000
net.ipv6.conf.all.mtu = 1280
net.ipv6.conf.all.ndisc_notify = 0
net.ipv6.conf.all.ndisc_tclass = 0
net.ipv6.conf.all.proxy_ndp = 0
net.ipv6.conf.all.regen_max_retry = 3
net.ipv6.conf.all.router_probe_interval = 60
net.ipv6.conf.all.router_solicitation_delay = 1
net.ipv6.conf.all.router_solicitation_interval = 4
net.ipv6.conf.all.router_solicitation_max_interval = 3600
net.ipv6.conf.all.router_solicitations = -1
net.ipv6.conf.all.seg6_enabled = 0
net.ipv6.conf.all.seg6_require_hmac = 0
net.ipv6.conf.all.suppress_frag_ndisc = 1
net.ipv6.conf.all.temp_prefered_lft = 86400
net.ipv6.conf.all.temp_valid_lft = 604800
net.ipv6.conf.all.use_oif_addrs_only = 0
net.ipv6.conf.all.use_tempaddr = 0
net.ipv6.conf.default.accept_dad = 1
net.ipv6.conf.default.accept_ra = 1
net.ipv6.conf.default.accept_ra_defrtr = 1
net.ipv6.conf.default.accept_ra_from_local = 0
net.ipv6.conf.default.accept_ra_min_hop_limit = 1
net.ipv6.conf.default.accept_ra_mtu = 1
net.ipv6.conf.default.accept_ra_pinfo = 1
net.ipv6.conf.default.accept_ra_rt_info_max_plen = 0
net.ipv6.conf.default.accept_ra_rt_info_min_plen = 0
net.ipv6.conf.default.accept_ra_rtr_pref = 1
net.ipv6.conf.default.accept_redirects = 1
net.ipv6.conf.default.accept_source_route = 0
net.ipv6.conf.default.addr_gen_mode = 0
net.ipv6.conf.default.autoconf = 1
net.ipv6.conf.default.dad_transmits = 1
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.default.disable_policy = 0
net.ipv6.conf.default.drop_unicast_in_l2_multicast = 0
net.ipv6.conf.default.drop_unsolicited_na = 0
net.ipv6.conf.default.enhanced_dad = 1
net.ipv6.conf.default.force_mld_version = 0
net.ipv6.conf.default.force_tllao = 0
net.ipv6.conf.default.forwarding = 0
net.ipv6.conf.default.hop_limit = 64
net.ipv6.conf.default.ignore_routes_with_linkdown = 0
net.ipv6.conf.default.keep_addr_on_down = 0
net.ipv6.conf.default.max_addresses = 16
net.ipv6.conf.default.max_desync_factor = 600
net.ipv6.conf.default.mc_forwarding = 0
net.ipv6.conf.default.mldv1_unsolicited_report_interval = 10000
net.ipv6.conf.default.mldv2_unsolicited_report_interval = 1000
net.ipv6.conf.default.mtu = 1280
net.ipv6.conf.default.ndisc_notify = 0
net.ipv6.conf.default.ndisc_tclass = 0
net.ipv6.conf.default.proxy_ndp = 0
net.ipv6.conf.default.regen_max_retry = 3
net.ipv6.conf.default.router_probe_interval = 60
net.ipv6.conf.default.router_solicitation_delay = 1
net.ipv6.conf.default.router_solicitation_interval = 4
net.ipv6.conf.default.router_solicitation_max_interval = 3600
net.ipv6.conf.default.router_solicitations = -1
net.ipv6.conf.default.seg6_enabled = 0
net.ipv6.conf.default.seg6_require_hmac = 0
net.ipv6.conf.default.suppress_frag_ndisc = 1
net.ipv6.conf.default.temp_prefered_lft = 86400
net.ipv6.conf.default.temp_valid_lft = 604800
net.ipv6.conf.default.use_oif_addrs_only = 0
net.ipv6.conf.default.use_tempaddr = 0
net.ipv6.conf.ens5.accept_dad = 1
net.ipv6.conf.ens5.accept_ra = 1
net.ipv6.conf.ens5.accept_ra_defrtr = 1
net.ipv6.conf.ens5.accept_ra_from_local = 0
net.ipv6.conf.ens5.accept_ra_min_hop_limit = 1
net.ipv6.conf.ens5.accept_ra_mtu = 1
net.ipv6.conf.ens5.accept_ra_pinfo = 1
net.ipv6.conf.ens5.accept_ra_rt_info_max_plen = 0
net.ipv6.conf.ens5.accept_ra_rt_info_min_plen = 0
net.ipv6.conf.ens5.accept_ra_rtr_pref = 1
net.ipv6.conf.ens5.accept_redirects = 1
net.ipv6.conf.ens5.accept_source_route = 0
net.ipv6.conf.ens5.addr_gen_mode = 0
net.ipv6.conf.ens5.autoconf = 1
net.ipv6.conf.ens5.dad_transmits = 1
net.ipv6.conf.ens5.disable_ipv6 = 0
net.ipv6.conf.ens5.disable_policy = 0
net.ipv6.conf.ens5.drop_unicast_in_l2_multicast = 0
net.ipv6.conf.ens5.drop_unsolicited_na = 0
net.ipv6.conf.ens5.enhanced_dad = 1
net.ipv6.conf.ens5.force_mld_version = 0
net.ipv6.conf.ens5.force_tllao = 0
net.ipv6.conf.ens5.forwarding = 0
net.ipv6.conf.ens5.hop_limit = 64
net.ipv6.conf.ens5.ignore_routes_with_linkdown = 0
net.ipv6.conf.ens5.keep_addr_on_down = 0
net.ipv6.conf.ens5.max_addresses = 16
net.ipv6.conf.ens5.max_desync_factor = 600
net.ipv6.conf.ens5.mc_forwarding = 0
net.ipv6.conf.ens5.mldv1_unsolicited_report_interval = 10000
net.ipv6.conf.ens5.mldv2_unsolicited_report_interval = 1000
net.ipv6.conf.ens5.mtu = 9001
net.ipv6.conf.ens5.ndisc_notify = 0
net.ipv6.conf.ens5.ndisc_tclass = 0
net.ipv6.conf.ens5.proxy_ndp = 0
net.ipv6.conf.ens5.regen_max_retry = 3
net.ipv6.conf.ens5.router_probe_interval = 60
net.ipv6.conf.ens5.router_solicitation_delay = 1
net.ipv6.conf.ens5.router_solicitation_interval = 4
net.ipv6.conf.ens5.router_solicitation_max_interval = 3600
net.ipv6.conf.ens5.router_solicitations = -1
net.ipv6.conf.ens5.seg6_enabled = 0
net.ipv6.conf.ens5.seg6_require_hmac = 0
net.ipv6.conf.ens5.suppress_frag_ndisc = 1
net.ipv6.conf.ens5.temp_prefered_lft = 86400
net.ipv6.conf.ens5.temp_valid_lft = 604800
net.ipv6.conf.ens5.use_oif_addrs_only = 0
net.ipv6.conf.ens5.use_tempaddr = 0
net.ipv6.conf.lo.accept_dad = -1
net.ipv6.conf.lo.accept_ra = 1
net.ipv6.conf.lo.accept_ra_defrtr = 1
net.ipv6.conf.lo.accept_ra_from_local = 0
net.ipv6.conf.lo.accept_ra_min_hop_limit = 1
net.ipv6.conf.lo.accept_ra_mtu = 1
net.ipv6.conf.lo.accept_ra_pinfo = 1
net.ipv6.conf.lo.accept_ra_rt_info_max_plen = 0
net.ipv6.conf.lo.accept_ra_rt_info_min_plen = 0
net.ipv6.conf.lo.accept_ra_rtr_pref = 1
net.ipv6.conf.lo.accept_redirects = 1
net.ipv6.conf.lo.accept_source_route = 0
net.ipv6.conf.lo.addr_gen_mode = 0
net.ipv6.conf.lo.autoconf = 1
net.ipv6.conf.lo.dad_transmits = 1
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.lo.disable_policy = 0
net.ipv6.conf.lo.drop_unicast_in_l2_multicast = 0
net.ipv6.conf.lo.drop_unsolicited_na = 0
net.ipv6.conf.lo.enhanced_dad = 1
net.ipv6.conf.lo.force_mld_version = 0
net.ipv6.conf.lo.force_tllao = 0
net.ipv6.conf.lo.forwarding = 0
net.ipv6.conf.lo.hop_limit = 64
net.ipv6.conf.lo.ignore_routes_with_linkdown = 0
net.ipv6.conf.lo.keep_addr_on_down = 0
net.ipv6.conf.lo.max_addresses = 16
net.ipv6.conf.lo.max_desync_factor = 600
net.ipv6.conf.lo.mc_forwarding = 0
net.ipv6.conf.lo.mldv1_unsolicited_report_interval = 10000
net.ipv6.conf.lo.mldv2_unsolicited_report_interval = 1000
net.ipv6.conf.lo.mtu = 65536
net.ipv6.conf.lo.ndisc_notify = 0
net.ipv6.conf.lo.ndisc_tclass = 0
net.ipv6.conf.lo.proxy_ndp = 0
net.ipv6.conf.lo.regen_max_retry = 3
net.ipv6.conf.lo.router_probe_interval = 60
net.ipv6.conf.lo.router_solicitation_delay = 1
net.ipv6.conf.lo.router_solicitation_interval = 4
net.ipv6.conf.lo.router_solicitation_max_interval = 3600
net.ipv6.conf.lo.router_solicitations = -1
net.ipv6.conf.lo.seg6_enabled = 0
net.ipv6.conf.lo.seg6_require_hmac = 0
net.ipv6.conf.lo.suppress_frag_ndisc = 1
net.ipv6.conf.lo.temp_prefered_lft = 86400
net.ipv6.conf.lo.temp_valid_lft = 604800
net.ipv6.conf.lo.use_oif_addrs_only = 0
net.ipv6.conf.lo.use_tempaddr = -1
net.ipv6.fib_multipath_hash_policy = 0
net.ipv6.flowlabel_consistency = 1
net.ipv6.flowlabel_reflect = 0
net.ipv6.flowlabel_state_ranges = 0
net.ipv6.fwmark_reflect = 0
net.ipv6.icmp.echo_ignore_all = 0
net.ipv6.icmp.echo_ignore_anycast = 0
net.ipv6.icmp.echo_ignore_multicast = 0
net.ipv6.icmp.ratelimit = 1000
net.ipv6.icmp.ratemask = 0-1,3-127
net.ipv6.idgen_delay = 1
net.ipv6.idgen_retries = 3
net.ipv6.ip6frag_high_thresh = 4194304
net.ipv6.ip6frag_low_thresh = 3145728
net.ipv6.ip6frag_secret_interval = 0
net.ipv6.ip6frag_time = 60
net.ipv6.ip_nonlocal_bind = 0
net.ipv6.max_dst_opts_length = 2147483647
net.ipv6.max_dst_opts_number = 8
net.ipv6.max_hbh_length = 2147483647
net.ipv6.max_hbh_opts_number = 8
net.ipv6.mld_max_msf = 64
net.ipv6.mld_qrv = 2
net.ipv6.neigh.default.anycast_delay = 100
net.ipv6.neigh.default.app_solicit = 0
net.ipv6.neigh.default.base_reachable_time_ms = 30000
net.ipv6.neigh.default.delay_first_probe_time = 5
net.ipv6.neigh.default.gc_interval = 30
net.ipv6.neigh.default.gc_stale_time = 60
net.ipv6.neigh.default.gc_thresh1 = 128
net.ipv6.neigh.default.gc_thresh2 = 512
net.ipv6.neigh.default.gc_thresh3 = 1024
net.ipv6.neigh.default.locktime = 0
net.ipv6.neigh.default.mcast_resolicit = 0
net.ipv6.neigh.default.mcast_solicit = 3
net.ipv6.neigh.default.proxy_delay = 80
net.ipv6.neigh.default.proxy_qlen = 64
net.ipv6.neigh.default.retrans_time_ms = 1000
net.ipv6.neigh.default.ucast_solicit = 3
net.ipv6.neigh.default.unres_qlen = 101
net.ipv6.neigh.default.unres_qlen_bytes = 212992
net.ipv6.neigh.ens5.anycast_delay = 100
net.ipv6.neigh.ens5.app_solicit = 0
net.ipv6.neigh.ens5.base_reachable_time_ms = 30000
net.ipv6.neigh.ens5.delay_first_probe_time = 5
net.ipv6.neigh.ens5.gc_stale_time = 60
net.ipv6.neigh.ens5.locktime = 0
net.ipv6.neigh.ens5.mcast_resolicit = 0
net.ipv6.neigh.ens5.mcast_solicit = 3
net.ipv6.neigh.ens5.proxy_delay = 80
net.ipv6.neigh.ens5.proxy_qlen = 64
net.ipv6.neigh.ens5.retrans_time_ms = 1000
net.ipv6.neigh.ens5.ucast_solicit = 3
net.ipv6.neigh.ens5.unres_qlen = 101
net.ipv6.neigh.ens5.unres_qlen_bytes = 212992
net.ipv6.neigh.lo.anycast_delay = 100
net.ipv6.neigh.lo.app_solicit = 0
net.ipv6.neigh.lo.base_reachable_time_ms = 30000
net.ipv6.neigh.lo.delay_first_probe_time = 5
net.ipv6.neigh.lo.gc_stale_time = 60
net.ipv6.neigh.lo.locktime = 0
net.ipv6.neigh.lo.mcast_resolicit = 0
net.ipv6.neigh.lo.mcast_solicit = 3
net.ipv6.neigh.lo.proxy_delay = 80
net.ipv6.neigh.lo.proxy_qlen = 64
net.ipv6.neigh.lo.retrans_time_ms = 1000
net.ipv6.neigh.lo.ucast_solicit = 3
net.ipv6.neigh.lo.unres_qlen = 101
net.ipv6.neigh.lo.unres_qlen_bytes = 212992
net.ipv6.route.gc_elasticity = 9
net.ipv6.route.gc_interval = 30
net.ipv6.route.gc_min_interval = 0
net.ipv6.route.gc_min_interval_ms = 500
net.ipv6.route.gc_thresh = 1024
net.ipv6.route.gc_timeout = 60
net.ipv6.route.max_size = 4096
net.ipv6.route.min_adv_mss = 1220
net.ipv6.route.mtu_expires = 600
net.ipv6.route.skip_notify_on_dev_down = 0
net.ipv6.seg6_flowlabel = 0
net.ipv6.xfrm6_gc_thresh = 32768
net.iw_cm.default_backlog = 256
net.netfilter.nf_log.0 = NONE
net.netfilter.nf_log.1 = NONE
net.netfilter.nf_log.10 = NONE
net.netfilter.nf_log.11 = NONE
net.netfilter.nf_log.12 = NONE
net.netfilter.nf_log.2 = NONE
net.netfilter.nf_log.3 = NONE
net.netfilter.nf_log.4 = NONE
net.netfilter.nf_log.5 = NONE
net.netfilter.nf_log.6 = NONE
net.netfilter.nf_log.7 = NONE
net.netfilter.nf_log.8 = NONE
net.netfilter.nf_log.9 = NONE
net.netfilter.nf_log_all_netns = 0
net.unix.max_dgram_qlen = 512
user.max_cgroup_namespaces = 377562
user.max_inotify_instances = 1024
user.max_inotify_watches = 8192
user.max_ipc_namespaces = 377562
user.max_mnt_namespaces = 377562
user.max_net_namespaces = 377562
user.max_pid_namespaces = 377562
user.max_user_namespaces = 377562
user.max_uts_namespaces = 377562
vm.admin_reserve_kbytes = 8192
vm.block_dump = 0
vm.compact_unevictable_allowed = 1
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
vm.dirtytime_expire_seconds = 43200
vm.drop_caches = 0
vm.extfrag_threshold = 500
vm.hugetlb_shm_group = 0
vm.laptop_mode = 0
vm.legacy_va_layout = 0
vm.lowmem_reserve_ratio = 256	256	32	0	0
vm.max_map_count = 65530
vm.memory_failure_early_kill = 0
vm.memory_failure_recovery = 1
vm.min_free_kbytes = 67584
vm.min_slab_ratio = 5
vm.min_unmapped_ratio = 1
vm.mmap_min_addr = 65536
vm.mmap_rnd_bits = 28
vm.mmap_rnd_compat_bits = 8
vm.nr_hugepages = 0
vm.nr_hugepages_mempolicy = 0
vm.nr_overcommit_hugepages = 0
vm.numa_stat = 1
vm.numa_zonelist_order = Node
vm.oom_dump_tasks = 1
vm.oom_kill_allocating_task = 0
vm.overcommit_kbytes = 0
vm.overcommit_memory = 0
vm.overcommit_ratio = 50
vm.page-cluster = 3
vm.panic_on_oom = 0
vm.percpu_pagelist_fraction = 0
vm.stat_interval = 1
vm.swappiness = 60
vm.unprivileged_userfaultfd = 1
vm.user_reserve_kbytes = 131072
vm.vfs_cache_pressure = 100
vm.watermark_boost_factor = 15000
vm.watermark_scale_factor = 10
vm.zone_reclaim_mode = 0

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections
  2020-12-07 17:16               ` Mohamed Abuelfotoh, Hazem
@ 2020-12-07 17:27                 ` Eric Dumazet
  2020-12-08 16:28                   ` Mohamed Abuelfotoh, Hazem
  0 siblings, 1 reply; 26+ messages in thread
From: Eric Dumazet @ 2020-12-07 17:27 UTC (permalink / raw)
  To: Mohamed Abuelfotoh, Hazem
  Cc: Neal Cardwell, netdev, stable, ycheng, weiwan, Strohman, Andy,
	Herrenschmidt, Benjamin

On Mon, Dec 7, 2020 at 6:17 PM Mohamed Abuelfotoh, Hazem
<abuehaze@amazon.com> wrote:
>
>     >Thanks for testing this, Eric. Would you be able to share the MTU
>     >config commands you used, and the tcpdump traces you get? I'm
>     >surprised that receive buffer autotuning would work for advmss of
>     >around 6500 or higher.
>
> Packet capture before applying the proposed patch
>
> https://tcpautotuningpcaps.s3.eu-west-1.amazonaws.com/sender-bbr-bad-unpatched.pcap?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJNMP5ZZ3I4FAQGAQ%2F20201207%2Feu-west-1%2Fs3%2Faws4_request&X-Amz-Date=20201207T170123Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=a599a0e0e6632a957e5619007ba5ce4f63c8e8535ea24470b7093fef440a8300
>
> Packet capture after applying the proposed patch
>
> https://tcpautotuningpcaps.s3.eu-west-1.amazonaws.com/sender-bbr-good-patched.pcap?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJNMP5ZZ3I4FAQGAQ%2F20201207%2Feu-west-1%2Fs3%2Faws4_request&X-Amz-Date=20201207T165831Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=f18ec7246107590e8ac35c24322af699e4c2a73d174067c51cf6b0a06bbbca77
>
> Kernel version, MTU, and configuration from my receiver & sender are attached to this e-mail. Please be aware that EC2 is doing MSS clamping, so you need to configure MTU as 1500 on the sender side if you don't have any MSS clamping between sender & receiver.
>
> Thank you.
>
> Hazem

Please try again with a fixed tcp_rmem[1] on the receiver, taking into
account the bigger memory requirement for MTU 9000.

Rationale: TCP should be ready to receive 10 full frames before
autotuning takes place (these 10 MSS are typically in a single GRO
packet)

At 9000 MTU, one frame typically consumes 12KB (or 16KB on some arches/drivers)

TCP uses a 50% factor rule, accounting 18000 bytes of kernel memory per MSS.

->

echo "4096 180000 15728640" >/proc/sys/net/ipv4/tcp_rmem



>
>
> On 07/12/2020, 16:34, "Neal Cardwell" <ncardwell@google.com> wrote:
>
>     On Mon, Dec 7, 2020 at 11:23 AM Eric Dumazet <edumazet@google.com> wrote:
>     >
>     > On Mon, Dec 7, 2020 at 5:09 PM Mohamed Abuelfotoh, Hazem
>     > <abuehaze@amazon.com> wrote:
>     > >
>     > >     >Since I can not reproduce this problem with another NIC on x86, I
>     > >     >really wonder if this is not an issue with ENA driver on PowerPC
>     > >     >perhaps ?
>     > >
>     > >
>     > > I am able to reproduce it on x86-based EC2 instances using the ENA, Xen netfront, or Intel ixgbevf driver on the receiver, so it's not specific to ENA. We were able to easily reproduce it between 2 VMs running in VirtualBox on the same physical host, given the environment requirements I mentioned in my first e-mail.
>     > >
>     > > What's the RTT between the sender & receiver in your reproduction? Are you using bbr on the sender side?
>     >
>     >
>     > 100ms RTT
>     >
>     > Which exact version of linux kernel are you using ?
>
>     Thanks for testing this, Eric. Would you be able to share the MTU
>     config commands you used, and the tcpdump traces you get? I'm
>     surprised that receive buffer autotuning would work for advmss of
>     around 6500 or higher.
>
>     thanks,
>     neal

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections
  2020-12-07 16:34             ` Mohamed Abuelfotoh, Hazem
@ 2020-12-07 17:46               ` Greg KH
  2020-12-07 17:54                 ` Mohamed Abuelfotoh, Hazem
  0 siblings, 1 reply; 26+ messages in thread
From: Greg KH @ 2020-12-07 17:46 UTC (permalink / raw)
  To: Mohamed Abuelfotoh, Hazem
  Cc: Eric Dumazet, netdev, stable, ycheng, ncardwell, weiwan,
	Strohman, Andy, Herrenschmidt, Benjamin

On Mon, Dec 07, 2020 at 04:34:57PM +0000, Mohamed Abuelfotoh, Hazem wrote:
> 100ms RTT
> 
> >Which exact version of linux kernel are you using ?
> On the receiver side I could see the issue with any mainline kernel
> version >=4.19.86 which is the first kernel version that has patches
> [1] & [2] included.  On the sender I am using kernel 5.4.0-rc6.

5.4.0-rc6 is a very old and odd kernel to be doing anything with.  Are
you sure you don't mean "5.10-rc6" here?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections
  2020-12-07 17:46               ` Greg KH
@ 2020-12-07 17:54                 ` Mohamed Abuelfotoh, Hazem
  0 siblings, 0 replies; 26+ messages in thread
From: Mohamed Abuelfotoh, Hazem @ 2020-12-07 17:54 UTC (permalink / raw)
  To: Greg KH
  Cc: Eric Dumazet, netdev, stable, ycheng, ncardwell, weiwan,
	Strohman, Andy, Herrenschmidt, Benjamin

>5.4.0-rc6 is a very old and odd kernel to be doing anything with.  Are
>you sure you don't mean "5.10-rc6" here?

I was able to reproduce it on the latest mainline kernel as well, so anything newer than 4.19.85 is just broken.

Thank you.

Hazem

On 07/12/2020, 17:45, "Greg KH" <gregkh@linuxfoundation.org> wrote:


    On Mon, Dec 07, 2020 at 04:34:57PM +0000, Mohamed Abuelfotoh, Hazem wrote:
    > 100ms RTT
    >
    > >Which exact version of linux kernel are you using ?
    > On the receiver side I could see the issue with any mainline kernel
    > version >=4.19.86 which is the first kernel version that has patches
    > [1] & [2] included.  On the sender I am using kernel 5.4.0-rc6.

    5.4.0-rc6 is a very old and odd kernel to be doing anything with.  Are
    you sure you don't mean "5.10-rc6" here?

    thanks,

    greg k-h





^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net] tcp: fix receive buffer autotuning to trigger for any valid advertised MSS
  2020-12-07 11:46   ` [PATCH net] tcp: fix receive buffer autotuning to trigger for any valid advertised MSS Hazem Mohamed Abuelfotoh
@ 2020-12-07 18:53     ` Jakub Kicinski
  0 siblings, 0 replies; 26+ messages in thread
From: Jakub Kicinski @ 2020-12-07 18:53 UTC (permalink / raw)
  To: Hazem Mohamed Abuelfotoh
  Cc: netdev, stable, edumazet, ycheng, ncardwell, weiwan, astroh, benh

On Mon, 7 Dec 2020 11:46:25 +0000 Hazem Mohamed Abuelfotoh wrote:
>     Previously receiver buffer auto-tuning starts after receiving
>     one advertised window amount of data. After the initial
>     receiver buffer was raised by
>     commit a337531b942b ("tcp: up initial rmem to 128KB
>     and SYN rwin to around 64KB"), the receiver buffer may
>     take too long for TCP autotuning to start raising
>     the receiver buffer size.
>     commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
>     tried to decrease the threshold at which TCP auto-tuning starts,
>     but it doesn't work well in some environments
>     where the receiver has a large MTU (9001), especially with high-RTT
>     connections: in these environments rcvq_space.space will be the same
>     as rcv_wnd, so TCP autotuning will never start because the
>     sender can't send more than rcv_wnd bytes in one round trip.
>     To address this issue this patch decreases the initial
>     rcvq_space.space so TCP autotuning kicks in whenever the sender is
>     able to send more than 5360 bytes in one round trip, regardless of
>     the receiver's configured MTU.
> 
>     Fixes: a337531b942b ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB")
>     Fixes: 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
> 
> Signed-off-by: Hazem Mohamed Abuelfotoh <abuehaze@amazon.com>

If the discussion concludes in favor of this patch please un-indent
this commit message, remove the empty line after the fixes tag, and 
repost.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections
  2020-12-07 17:08               ` Eric Dumazet
@ 2020-12-07 20:09                 ` Mohamed Abuelfotoh, Hazem
  2020-12-07 23:22                   ` Eric Dumazet
  0 siblings, 1 reply; 26+ messages in thread
From: Mohamed Abuelfotoh, Hazem @ 2020-12-07 20:09 UTC (permalink / raw)
  To: Eric Dumazet, Neal Cardwell
  Cc: netdev, stable, ycheng, weiwan, Strohman, Andy, Herrenschmidt, Benjamin

    > I want to state again that using 536 bytes as a magic value makes no
    > sense to me.

    > autotuning might be delayed by one RTT; this does not match the
    > numbers given by Mohamed (flows stuck at low speed)

    > autotuning is a heuristic, and because it has one RTT of latency, it
    > is crucial to get proper initial rcvmem values.

    > People using MTU=9000 should know they have to tune tcp_rmem[1]
    > accordingly, especially when using drivers consuming one page per
    > incoming MSS.



The magic number would be 10 * rcv_mss = 5360 bytes, not 536, and in my opinion that is a large amount of data to have to send in a security attack. If we are talking about a DDoS attack triggering autotuning at 5360 bytes, I'd say the attacker will also be able to trigger it by sending 64KB, but I totally agree that it would be easier with a lower rcvq_space.space; it's always a tradeoff between security and performance.
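
For context, the gate we keep referring to lives in
tcp_rcv_space_adjust(); a simplified sketch of its entry checks
(paraphrased from net/ipv4/tcp_input.c, with the DRS rate and overhead
estimation that follows omitted):

	u32 copied;
	int time;

	/* Autotuning only grows sk_rcvbuf once the application has copied
	 * more than rcvq_space.space bytes within roughly one RTT; until
	 * then the receive buffer keeps its initial size.
	 */
	time = tcp_stamp_us_delta(tp->tcp_mstamp, tp->rcvq_space.time);
	if (time < (tp->rcv_rtt_est.rtt_us >> 3) ||	/* rtt_us is stored <<3 */
	    tp->rcv_rtt_est.rtt_us == 0)
		return;				/* less than one RTT elapsed */

	copied = tp->copied_seq - tp->rcvq_space.seq;	/* bytes consumed */
	if (copied <= tp->rcvq_space.space)
		goto new_measure;		/* threshold not crossed */

So when rcvq_space.space starts out equal to rcv_wnd, "copied" can never
exceed it within one round trip, and the buffer never grows; that is
exactly the failure mode the patch targets.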

Other options would be to either consider the configured MTU in the rcv_wnd calculation, or to check the MTU before calculating the initial receive space. We have to make sure that the initial receive space is lower than the initial receive window, so autotuning works regardless of the receiver's configured MTU, and only people using jumbo frames will be paying the price, if we agree that it's expected for jumbo-frame users to have machines with more memory. I'd say something like the below should work:

void tcp_init_buffer_space(struct sock *sk)
{
	int tcp_app_win = sock_net(sk)->ipv4.sysctl_tcp_app_win;
	struct inet_connection_sock *icsk = inet_csk(sk);
	struct tcp_sock *tp = tcp_sk(sk);
	int maxwin;

	if (!(sk->sk_userlocks & SOCK_SNDBUF_LOCK))
		tcp_sndbuf_expand(sk);

	/* Small MSS: keep the current advmss-based sizing. Jumbo frames:
	 * fall back to icsk_ack.rcv_mss (initially capped at 536 bytes) so
	 * the initial rcvq_space stays below the initial rcv_wnd.
	 */
	if (tp->advmss < 6000)
		tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * tp->advmss);
	else
		tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * icsk->icsk_ack.rcv_mss);
	tcp_mstamp_refresh(tp);
	tp->rcvq_space.time = tp->tcp_mstamp;
	tp->rcvq_space.seq = tp->copied_seq;
	/* ... rest of the function unchanged ... */
}
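
To make the effect concrete (illustrative numbers): with MTU 9001 the
advertised MSS is roughly 8900 bytes, so TCP_INIT_CWND * tp->advmss comes
to about 89000 bytes. That exceeds the ~64KB initial rcv_wnd, so today
rcvq_space.space is clamped to rcv_wnd, and the sender can never deliver
more than that in one round trip. With the rcv_mss-based branch the
initial space is 10 * 536 = 5360 bytes instead, which any healthy flow
exceeds within a few round trips.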



I don't think we should rely on admins manually tuning tcp_rmem[1] when jumbo frames are in use; Linux users also shouldn't expect performance degradation after a kernel upgrade. Although [1] is the only public report of this issue, I am pretty sure we will see more users reporting it as the main Linux distributions move to kernel 5.4 as their stable version. In summary, we should come up with something, either the proposed patch or something else, to save admins the manual work.


Links

[1] https://github.com/kubernetes/kops/issues/10206

On 07/12/2020, 17:08, "Eric Dumazet" <edumazet@google.com> wrote:


    On Mon, Dec 7, 2020 at 5:34 PM Neal Cardwell <ncardwell@google.com> wrote:
    >
    > On Mon, Dec 7, 2020 at 11:23 AM Eric Dumazet <edumazet@google.com> wrote:
    > >
    > > On Mon, Dec 7, 2020 at 5:09 PM Mohamed Abuelfotoh, Hazem
    > > <abuehaze@amazon.com> wrote:
    > > >
    > > >     >Since I can not reproduce this problem with another NIC on x86, I
    > > >     >really wonder if this is not an issue with ENA driver on PowerPC
    > > >     >perhaps ?
    > > >
    > > >
    > > > I am able to reproduce it on x86-based EC2 instances using the ENA, Xen netfront, or Intel ixgbevf driver on the receiver, so it's not specific to ENA. We were able to easily reproduce it between 2 VMs running in VirtualBox on the same physical host, given the environment requirements I mentioned in my first e-mail.
    > > >
    > > > What's the RTT between the sender & receiver in your reproduction? Are you using bbr on the sender side?
    > >
    > >
    > > 100ms RTT
    > >
    > > Which exact version of linux kernel are you using ?
    >
    > Thanks for testing this, Eric. Would you be able to share the MTU
    > config commands you used, and the tcpdump traces you get? I'm
    > surprised that receive buffer autotuning would work for advmss of
    > around 6500 or higher.

    autotuning might be delayed by one RTT, this does not match numbers
    given by Mohamed (flows stuck in low speed)

    autotuning is an heuristic, and because it has one RTT latency, it is
    crucial to get proper initial rcvmem values.

    People using MTU=9000 should know they have to tune tcp_rmem[1]
    accordingly, especially when using drivers consuming one page per
    incoming MSS.


    (mlx4 driver only uses one 2048-byte fragment for a 1500 MTU packet,
    even with MTU set to 9000)

    I want to state again that using 536 bytes as a magic value makes no
    sense to me.


    For the record, Google has increased tcp_rmem[1] when switching to a bigger MTU.

    The reason is simple: if we intend to receive 10 MSS, we should allow
    for 90000 bytes of payload, or tcp_rmem[1] set to 180,000.
    Because of autotuning latency, doubling the value is advised: 360000
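
    As a rough sketch of this sizing arithmetic (a standalone toy program,
    not kernel code; the constants are simply the figures given above):

    #include <stdio.h>

    int main(void)
    {
            int mss = 9000;          /* jumbo-frame payload per MSS */
            int payload = 10 * mss;  /* 10 full frames -> 90000 bytes */
            int rmem1 = 2 * payload; /* 50% factor rule -> 180000 */
            int advised = 2 * rmem1; /* one-RTT autotuning delay -> 360000 */

            printf("tcp_rmem[1] >= %d, advised %d\n", rmem1, advised);
            return 0;
    }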

    Another problem with kicking autotuning too fast is that it might
    allow bigger sk->sk_rcvbuf values even for small flows, opening more
    surface to malicious attacks.

    I _think_ that if we want to allow admins to set high MTU without
    having to tune tcp_rmem[], we need something different from the current
    proposal.







^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections
  2020-12-07 20:09                 ` Mohamed Abuelfotoh, Hazem
@ 2020-12-07 23:22                   ` Eric Dumazet
  0 siblings, 0 replies; 26+ messages in thread
From: Eric Dumazet @ 2020-12-07 23:22 UTC (permalink / raw)
  To: Mohamed Abuelfotoh, Hazem
  Cc: Neal Cardwell, netdev, stable, ycheng, weiwan, Strohman, Andy,
	Herrenschmidt, Benjamin

On Mon, Dec 7, 2020 at 9:09 PM Mohamed Abuelfotoh, Hazem
<abuehaze@amazon.com> wrote:
>
>     >I want to state again that using 536 bytes as a magic value makes no
>     sense to me.
>
>  >autotuning might be delayed by one RTT, this does not match numbers
>  >given by Mohamed (flows stuck in low speed)
>
>   >autotuning is an heuristic, and because it has one RTT latency, it is
>    >crucial to get proper initial rcvmem values.
>
>    >People using MTU=9000 should know they have to tune tcp_rmem[1]
>    >accordingly, especially when using drivers consuming one page per
>    >+incoming MSS.
>
>
>
> The magic number would be 10*rcv_mss=5360 not 536 and in my opinion it's a big amount of data to be sent in security attack so if we are talking about DDos attack triggering Autotuning at 5360 bytes I'd say he will also be able to trigger it sending 64KB but I totally agree that it would be easier with lower rcvq_space.space, it's always a tradeoff between security and performance.



>
> Other options would be to either consider the configured MTU in the rcv_wnd calculation or probably check the MTU before calculating the initial rcvspace. We have to make sure that initial receive space is lower than initial receive window so Autotuning would work regardless the configured MTU on the receiver and only people using Jumbo frames will be paying the price if we agreed that it's expected for Jumbo frame users to have machines with more memory,  I'd say something as below should work:
>
> void tcp_init_buffer_space(struct sock *sk)
> {
>         int tcp_app_win = sock_net(sk)->ipv4.sysctl_tcp_app_win;
>         struct inet_connection_sock *icsk = inet_csk(sk);
>         struct tcp_sock *tp = tcp_sk(sk);
>         int maxwin;
>
>         if (!(sk->sk_userlocks & SOCK_SNDBUF_LOCK))
>                 tcp_sndbuf_expand(sk);
>         if(tp->advmss < 6000)
>                 tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * tp->advmss);

This is just another hack, based on 'magic' numbers.

>         else
>                 tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * icsk->icsk_ack.rcv_mss);
>         tcp_mstamp_refresh(tp);
>         tp->rcvq_space.time = tp->tcp_mstamp;
>         tp->rcvq_space.seq = tp->copied_seq;
>
>
>
> I don't think that we should rely on Admins manually tuning this tcp_rmem[1] with Jumbo frame in use also Linux users shouldn't expect performance degradation after kernel upgrade. although [1] is the only public reporting of this issue, I am pretty sure we will see more users reporting this with Linux Main distributions moving to kernel 5.4 as stable version. In Summary we should come up with something either the proposed patch or something else to avoid admins doing the manual job.
>



Default MTU is 1500, not 9000.

I hinted in my very first reply to you that MTU 9000 is not easy and
needs tuning. We could argue and try to make this less of a pain in a
future kernel (net-next).

<quote>Also worth noting that if you set MTU to 9000 (instead of
standard 1500), you probably need to tweak a few sysctls.
</quote>

I think I have asked you multiple times to test appropriate
tcp_rmem[1] settings...

I gave the reason why tcp_rmem[1] set to 131072 is not good for MTU
9000. I would prefer a solution that involves no kernel patch and no
backports, just a matter of educating sysadmins, for increased TCP
performance,
especially when really using 9000 MTU...

Your patch would change the behavior of the TCP stack for standard
MTU=1500 flows, which are still the majority. This is very risky.

Anyway, _if_ we really wanted to change the kernel (keeping the stupid
tcp_rmem[1] value):

In the tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND *
tp->advmss);  formula, really the bug is in the tp->rcv_wnd term, not
the second one.

This is buggy, because tcp_init_buffer_space() ends up with
tp->window_clamp smaller than tp->rcv_wnd, so tcp_grow_window() is not
able to change tp->rcv_ssthresh.

The only mechanism allowing to change tp->window_clamp later would be
DRS, so we better use the proper limit when initializing
tp->rcvq_space.space

This issue disappears if tcp_rmem[1] is slightly above 131072, because
then the following is not needed.

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 9e8a6c1aa0190cc248b3b99b073a4c6e45884cf5..81b5d9375860ae583e08045fb25b089c456c60ab
100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -534,6 +534,7 @@ static void tcp_init_buffer_space(struct sock *sk)

        tp->rcv_ssthresh = min(tp->rcv_ssthresh, tp->window_clamp);
        tp->snd_cwnd_stamp = tcp_jiffies32;
+       tp->rcvq_space.space = min(tp->rcv_ssthresh, tp->rcvq_space.space);
 }

 /* 4. Recalculate window clamp after socket hit its memory bounds. */

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections
  2020-12-07 17:27                 ` Eric Dumazet
@ 2020-12-08 16:28                   ` Mohamed Abuelfotoh, Hazem
  2020-12-08 16:30                     ` Mohamed Abuelfotoh, Hazem
  2020-12-08 16:46                     ` Eric Dumazet
  0 siblings, 2 replies; 26+ messages in thread
From: Mohamed Abuelfotoh, Hazem @ 2020-12-08 16:28 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Neal Cardwell, netdev, stable, ycheng, weiwan, Strohman, Andy,
	Herrenschmidt, Benjamin

    >Please try again, with a fixed tcp_rmem[1] on receiver, taking into
    >account bigger memory requirement for MTU 9000

    >Rationale : TCP should be ready to receive 10 full frames before
    >autotuning takes place (these 10 MSS are typically in a single GRO
   > packet)

    >At 9000 MTU, one frame typically consumes 12KB (or 16KB on some arches/drivers)

   >TCP uses a 50% factor rule, accounting 18000 bytes of kernel memory per MSS.

    ->

    >echo "4096 180000 15728640" >/proc/sys/net/ipv4/tcp_rmem



>diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
>index 9e8a6c1aa0190cc248b3b99b073a4c6e45884cf5..81b5d9375860ae583e08045fb25b089c456c60ab
>100644
>--- a/net/ipv4/tcp_input.c
>+++ b/net/ipv4/tcp_input.c
>@@ -534,6 +534,7 @@ static void tcp_init_buffer_space(struct sock *sk)
>
>        tp->rcv_ssthresh = min(tp->rcv_ssthresh, tp->window_clamp);
>       tp->snd_cwnd_stamp = tcp_jiffies32;
>+       tp->rcvq_space.space = min(tp->rcv_ssthresh, tp->rcvq_space.space);
>}

Yes, this worked, and it looks like echo "4096 140000 15728640" >/proc/sys/net/ipv4/tcp_rmem is actually enough to trigger TCP autotuning. If the current default tcp_rmem[1] doesn't work well with a 9000 MTU, I am curious whether there is a specific reason behind having exactly 131072 as tcp_rmem[1]. I think the number itself has to be divisible by the page size (4K) and by 16KB, given what you said about each jumbo-frame packet consuming up to 16KB.

If the patch I proposed would be risky for users who have an MTU of 1500, because of its higher memory footprint, then in my opinion we should get the patch you proposed merged instead of asking admins to do the manual work.

Thank you.

Hazem

On 07/12/2020, 17:28, "Eric Dumazet" <edumazet@google.com> wrote:


    On Mon, Dec 7, 2020 at 6:17 PM Mohamed Abuelfotoh, Hazem
    <abuehaze@amazon.com> wrote:
    >
    >     >Thanks for testing this, Eric. Would you be able to share the MTU
    >     >config commands you used, and the tcpdump traces you get? I'm
    >     >surprised that receive buffer autotuning would work for advmss of
    >     >around 6500 or higher.
    >
    > Packet capture before applying the proposed patch
    >
    > https://tcpautotuningpcaps.s3.eu-west-1.amazonaws.com/sender-bbr-bad-unpatched.pcap?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJNMP5ZZ3I4FAQGAQ%2F20201207%2Feu-west-1%2Fs3%2Faws4_request&X-Amz-Date=20201207T170123Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=a599a0e0e6632a957e5619007ba5ce4f63c8e8535ea24470b7093fef440a8300
    >
    > Packet capture after applying the proposed patch
    >
    > https://tcpautotuningpcaps.s3.eu-west-1.amazonaws.com/sender-bbr-good-patched.pcap?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJNMP5ZZ3I4FAQGAQ%2F20201207%2Feu-west-1%2Fs3%2Faws4_request&X-Amz-Date=20201207T165831Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=f18ec7246107590e8ac35c24322af699e4c2a73d174067c51cf6b0a06bbbca77
    >
    > kernel version & MTU and configuration  from my receiver & sender is attached to this e-mail, please be aware that EC2 is doing MSS clamping so you need to configure MTU as 1500 on the sender side if you don’t have any MSS clamping between sender & receiver.
    >
    > Thank you.
    >
    > Hazem

    Please try again, with a fixed tcp_rmem[1] on receiver, taking into
    account bigger memory requirement for MTU 9000

    Rationale : TCP should be ready to receive 10 full frames before
    autotuning takes place (these 10 MSS are typically in a single GRO
    packet)

    At 9000 MTU, one frame typically consumes 12KB (or 16KB on some arches/drivers)

    TCP uses a 50% factor rule, accounting 18000 bytes of kernel memory per MSS.

    ->

    echo "4096 180000 15728640" >/proc/sys/net/ipv4/tcp_rmem



    >
    >
    > On 07/12/2020, 16:34, "Neal Cardwell" <ncardwell@google.com> wrote:
    >
    >
    >     On Mon, Dec 7, 2020 at 11:23 AM Eric Dumazet <edumazet@google.com> wrote:
    >     >
    >     > On Mon, Dec 7, 2020 at 5:09 PM Mohamed Abuelfotoh, Hazem
    >     > <abuehaze@amazon.com> wrote:
    >     > >
    >     > >     >Since I can not reproduce this problem with another NIC on x86, I
    >     > >     >really wonder if this is not an issue with ENA driver on PowerPC
    >     > >     >perhaps ?
    >     > >
    >     > >
    >     > > I am able to reproduce it on x86 based EC2 instances using ENA  or  Xen netfront or Intel ixgbevf driver on the receiver so it's not specific to ENA, we were able to easily reproduce it between 2 VMs running in virtual box on the same physical host considering the environment requirements I mentioned in my first e-mail.
    >     > >
    >     > > What's the RTT between the sender & receiver in your reproduction? Are you using bbr on the sender side?
    >     >
    >     >
    >     > 100ms RTT
    >     >
    >     > Which exact version of linux kernel are you using ?
    >
    >     Thanks for testing this, Eric. Would you be able to share the MTU
    >     config commands you used, and the tcpdump traces you get? I'm
    >     surprised that receive buffer autotuning would work for advmss of
    >     around 6500 or higher.
    >
    >     thanks,
    >     neal
    >
    >
    >
    >
    >
    >







^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections
  2020-12-08 16:28                   ` Mohamed Abuelfotoh, Hazem
@ 2020-12-08 16:30                     ` Mohamed Abuelfotoh, Hazem
  2020-12-08 16:46                     ` Eric Dumazet
  1 sibling, 0 replies; 26+ messages in thread
From: Mohamed Abuelfotoh, Hazem @ 2020-12-08 16:30 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Neal Cardwell, netdev, stable, ycheng, weiwan, Strohman, Andy,
	Herrenschmidt, Benjamin

Feel free to ignore this message, as I sent it before seeing your newly submitted patch.

Thank you.

Hazem






^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections
  2020-12-08 16:28                   ` Mohamed Abuelfotoh, Hazem
  2020-12-08 16:30                     ` Mohamed Abuelfotoh, Hazem
@ 2020-12-08 16:46                     ` Eric Dumazet
  1 sibling, 0 replies; 26+ messages in thread
From: Eric Dumazet @ 2020-12-08 16:46 UTC (permalink / raw)
  To: Mohamed Abuelfotoh, Hazem
  Cc: Neal Cardwell, netdev, stable, ycheng, weiwan, Strohman, Andy,
	Herrenschmidt, Benjamin

On Tue, Dec 8, 2020 at 5:28 PM Mohamed Abuelfotoh, Hazem
<abuehaze@amazon.com> wrote:
>
>     >Please try again, with a fixed tcp_rmem[1] on receiver, taking into
>     >account bigger memory requirement for MTU 9000
>
>     >Rationale : TCP should be ready to receive 10 full frames before
>     >autotuning takes place (these 10 MSS are typically in a single GRO
>    > packet)
>
>     >At 9000 MTU, one frame typically consumes 12KB (or 16KB on some arches/drivers)
>
>    >TCP uses a 50% factor rule, accounting 18000 bytes of kernel memory per MSS.
>
>     ->
>
>     >echo "4096 180000 15728640" >/proc/sys/net/ipv4/tcp_rmem
>
>
>
> >diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
> >index 9e8a6c1aa0190cc248b3b99b073a4c6e45884cf5..81b5d9375860ae583e08045fb25b089c456c60ab
> >100644
> >--- a/net/ipv4/tcp_input.c
> >+++ b/net/ipv4/tcp_input.c
> >@@ -534,6 +534,7 @@ static void tcp_init_buffer_space(struct sock *sk)
> >
> >        tp->rcv_ssthresh = min(tp->rcv_ssthresh, tp->window_clamp);
> >       tp->snd_cwnd_stamp = tcp_jiffies32;
> >+       tp->rcvq_space.space = min(tp->rcv_ssthresh, tp->rcvq_space.space);
> >}
>
> Yes this worked and it looks like echo "4096 140000 15728640" >/proc/sys/net/ipv4/tcp_rmem is actually enough to trigger TCP autotuning, if the current default tcp_rmem[1] doesn't work well with 9000 MTU I am curious to know  if there is specific reason behind having 131072 specifically   as  tcp_rmem[1]?I think the number itself has to be divisible by page size (4K) and 16KB given what you said that each Jumbo frame packet may consume up to 16KB.


I think the idea behind the value of 131072 was that because TCP RWIN
was set to 65535, we had to reserve twice this amount of memory ->
131072 bytes.
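
(A quick check of that arithmetic: the pre-autotuning advertised window
was at most 65535 bytes, and reserving roughly twice that for skb
overhead gives 2 * 65536 = 131072.)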

Assuming DRS works well, the exact value should matter only for
unresponsive applications (slow to read/drain the receive queue),
since DRS is delayed for them.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net] tcp: fix receive buffer autotuning to trigger for any valid advertised MSS
  2020-12-07 16:08     ` Eric Dumazet
@ 2020-12-07 17:49       ` Mohamed Abuelfotoh, Hazem
  0 siblings, 0 replies; 26+ messages in thread
From: Mohamed Abuelfotoh, Hazem @ 2020-12-07 17:49 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: netdev, stable, Yuchung Cheng, Neal Cardwell, Wei Wang, Strohman,
	Andy, Herrenschmidt, Benjamin

    > I find using icsk->icsk_ack.rcv_mss misleading.
    > I would either use TCP_MSS_DEFAULT , or maybe simply 0, since we had
    > no samples yet, there is little point to use a magic value.

My point is that, by definition, rcv_space is used in TCP's internal auto-tuning to grow the socket buffers based on how much data the kernel estimates the sender can send, so we are talking about an estimate anyway, and it won't be 100% accurate, especially during connection initialisation. I proposed using TCP_INIT_CWND * icsk->icsk_ack.rcv_mss as the initial receive space because it's the minimum that the sender can send, assuming they are filling up their initial congestion window. This shouldn't have a security impact, in my opinion, because the rcv_buf and receive window won't scale unless the sender sends more than 5360 bytes in one round trip, so anything lower than that won't trigger the TCP autotuning.
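
A minimal standalone model of that trigger (a sketch, not kernel code;
it assumes rcv_mss still holds its pre-sample default of
TCP_MSS_DEFAULT == 536, and the copied value is made up):

#include <stdio.h>

#define TCP_INIT_CWND	10
#define TCP_MSS_DEFAULT	536	/* rcv_mss before any sample is taken */

int main(void)
{
	unsigned int space = TCP_INIT_CWND * TCP_MSS_DEFAULT;	/* 5360 */
	unsigned int copied = 6000;	/* bytes copied to user in one RTT */

	if (copied > space)
		printf("copied=%u > space=%u: autotuning can start\n",
		       copied, space);
	else
		printf("copied=%u <= space=%u: no buffer growth this RTT\n",
		       copied, space);
	return 0;
}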

Another point is that the proposed patch only affects the initial rcv_space.space; it has no impact on how the receive window/receive buffer scale while the connection is ongoing.

I was actually considering the below options before proposing my final patch, but I was afraid they would either have a higher memory footprint or let the connection get stuck later under packet loss.


I am listing them below as well.


A) The below patch would be enough, as the user-copied data would usually be more than 50KB, so the window will scale:


# diff -u a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c 
--- a/net/ipv4/tcp_input.c	2020-11-18 19:54:23.624306129 +0000
+++ b/net/ipv4/tcp_input.c	2020-11-18 19:55:05.032259419 +0000
@@ -605,7 +605,7 @@
 
 	/* Number of bytes copied to user in last RTT */
 	copied = tp->copied_seq - tp->rcvq_space.seq;
-	if (copied <= tp->rcvq_space.space)
+	if (copied <= (tp->rcvq_space.space >> 1))
 		goto new_measure;


Pros:
-it will halve the threshold at which we scale up the receive buffers and the advertised window, so we shouldn't see the connection get stuck with the current default kernel configuration on high-RTT connections.
-its likelihood of getting stuck later is low because the change is dynamic and affects DRS during the whole connection lifetime, not just the initialisation phase.

Cons:
-This may have a higher memory footprint because we are increasing the receive buffer size at a lower threshold than before.


################################################################################################################################

B) Otherwise we could look into decreasing the initial rcv_wnd, and accordingly rcvq_space.space, by decreasing the default ipv4.sysctl_tcp_rmem[1] from 131072 to 87380 as it was before https://lore.kernel.org/patchwork/patch/1157936/.

#################################################################################################################################


C) Another solution would be to bound the current rcvq_space.space by the user-copied amount of the previous RTT; something like the below would also work:


--- a/net/ipv4/tcp_input.c      2020-11-19 15:43:10.441524021 +0000
+++ b/net/ipv4/tcp_input.c      2020-11-19 15:45:42.772614521 +0000
@@ -649,6 +649,7 @@
        tp->rcvq_space.space = copied;
 
 new_measure:
+       tp->rcvq_space.space = copied;
        tp->rcvq_space.seq = tp->copied_seq;
        tp->rcvq_space.time = tp->tcp_mstamp;
 }

Cons:
-When I emulated packet loss after connection establishment, I could see the connection stuck at low speed for a long time.


On 07/12/2020, 16:09, "Eric Dumazet" <edumazet@google.com> wrote:


    On Mon, Dec 7, 2020 at 4:37 PM Eric Dumazet <edumazet@google.com> wrote:
    >
    > On Mon, Dec 7, 2020 at 12:41 PM Hazem Mohamed Abuelfotoh
    > <abuehaze@amazon.com> wrote:
    > >
    > >     Previously receiver buffer auto-tuning starts after receiving
    > >     one advertised window amount of data.After the initial
    > >     receiver buffer was raised by
    > >     commit a337531b942b ("tcp: up initial rmem to 128KB
    > >     and SYN rwin to around 64KB"),the receiver buffer may
    > >     take too long for TCP autotuning to start raising
    > >     the receiver buffer size.
    > >     commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
    > >     tried to decrease the threshold at which TCP auto-tuning starts
    > >     but it's doesn't work well in some environments
    > >     where the receiver has large MTU (9001) especially with high RTT
    > >     connections as in these environments rcvq_space.space will be the same
    > >     as rcv_wnd so TCP autotuning will never start because
    > >     sender can't send more than rcv_wnd size in one round trip.
    > >     To address this issue this patch is decreasing the initial
    > >     rcvq_space.space so TCP autotuning kicks in whenever the sender is
    > >     able to send more than 5360 bytes in one round trip regardless the
    > >     receiver's configured MTU.
    > >
    > >     Fixes: a337531b942b ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB")
    > >     Fixes: 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
    > >
    > > Signed-off-by: Hazem Mohamed Abuelfotoh <abuehaze@amazon.com>
    > > ---
    > >  net/ipv4/tcp_input.c | 3 ++-
    > >  1 file changed, 2 insertions(+), 1 deletion(-)
    > >
    > > diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
    > > index 389d1b340248..f0ffac9e937b 100644
    > > --- a/net/ipv4/tcp_input.c
    > > +++ b/net/ipv4/tcp_input.c
    > > @@ -504,13 +504,14 @@ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb)
    > >  static void tcp_init_buffer_space(struct sock *sk)
    > >  {
    > >         int tcp_app_win = sock_net(sk)->ipv4.sysctl_tcp_app_win;
    > > +       struct inet_connection_sock *icsk = inet_csk(sk);
    > >         struct tcp_sock *tp = tcp_sk(sk);
    > >         int maxwin;
    > >
    > >         if (!(sk->sk_userlocks & SOCK_SNDBUF_LOCK))
    > >                 tcp_sndbuf_expand(sk);
    > >
    > > -       tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * tp->advmss);
    > > +       tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * icsk->icsk_ack.rcv_mss);
    >
    > I find using icsk->icsk_ack.rcv_mss misleading.
    >
    > I would either use TCP_MSS_DEFAULT, or maybe simply 0, since we had
    > no samples yet, there is little point to use a magic value.

    0 will not work, since we use a do_div(grow, tp->rcvq_space.space)

    >
    > Note that if a driver uses 16KB of memory to hold a 1500 bytes packet,
    > then a 10 MSS GRO packet is consuming 160 KB of memory,
    > which is bigger than tcp_rmem[1]. TCP could decide to drop these fat packets.
    >
    > I wonder if your patch does not work around a more fundamental issue,
    > I am still unable to reproduce the issue.







^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net] tcp: fix receive buffer autotuning to trigger for any valid advertised MSS
  2020-12-07 15:37   ` Eric Dumazet
@ 2020-12-07 16:08     ` Eric Dumazet
  2020-12-07 17:49       ` Mohamed Abuelfotoh, Hazem
  0 siblings, 1 reply; 26+ messages in thread
From: Eric Dumazet @ 2020-12-07 16:08 UTC (permalink / raw)
  To: Hazem Mohamed Abuelfotoh
  Cc: netdev, stable, Yuchung Cheng, Neal Cardwell, Wei Wang, Strohman,
	Andy, Benjamin Herrenschmidt

On Mon, Dec 7, 2020 at 4:37 PM Eric Dumazet <edumazet@google.com> wrote:
>
> On Mon, Dec 7, 2020 at 12:41 PM Hazem Mohamed Abuelfotoh
> <abuehaze@amazon.com> wrote:
> >
> >     Previously receiver buffer auto-tuning starts after receiving
> >     one advertised window amount of data.After the initial
> >     receiver buffer was raised by
> >     commit a337531b942b ("tcp: up initial rmem to 128KB
> >     and SYN rwin to around 64KB"),the receiver buffer may
> >     take too long for TCP autotuning to start raising
> >     the receiver buffer size.
> >     commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
> >     tried to decrease the threshold at which TCP auto-tuning starts
> >     but it's doesn't work well in some environments
> >     where the receiver has large MTU (9001) especially with high RTT
> >     connections as in these environments rcvq_space.space will be the same
> >     as rcv_wnd so TCP autotuning will never start because
> >     sender can't send more than rcv_wnd size in one round trip.
> >     To address this issue this patch is decreasing the initial
> >     rcvq_space.space so TCP autotuning kicks in whenever the sender is
> >     able to send more than 5360 bytes in one round trip regardless the
> >     receiver's configured MTU.
> >
> >     Fixes: a337531b942b ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB")
> >     Fixes: 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
> >
> > Signed-off-by: Hazem Mohamed Abuelfotoh <abuehaze@amazon.com>
> > ---
> >  net/ipv4/tcp_input.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
> > index 389d1b340248..f0ffac9e937b 100644
> > --- a/net/ipv4/tcp_input.c
> > +++ b/net/ipv4/tcp_input.c
> > @@ -504,13 +504,14 @@ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb)
> >  static void tcp_init_buffer_space(struct sock *sk)
> >  {
> >         int tcp_app_win = sock_net(sk)->ipv4.sysctl_tcp_app_win;
> > +       struct inet_connection_sock *icsk = inet_csk(sk);
> >         struct tcp_sock *tp = tcp_sk(sk);
> >         int maxwin;
> >
> >         if (!(sk->sk_userlocks & SOCK_SNDBUF_LOCK))
> >                 tcp_sndbuf_expand(sk);
> >
> > -       tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * tp->advmss);
> > +       tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * icsk->icsk_ack.rcv_mss);
>
> I find using icsk->icsk_ack.rcv_mss misleading.
>
> I would either use TCP_MSS_DEFAULT , or maybe simply 0, since we had
> no samples yet, there is little point to use a magic value.

0 will not work, since we use a do_div(grow, tp->rcvq_space.space)
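
(A userspace sketch of why: the growth estimate divides by the previous
space. The rcvwin formula below paraphrases tcp_rcv_space_adjust() in
recent kernels; the copied/advmss values are made up.)

#include <stdio.h>

int main(void)
{
	unsigned long long copied = 20000;	/* bytes copied last RTT */
	unsigned long long space = 0;		/* hypothetical initial value */
	unsigned long long rcvwin;

	rcvwin = (copied << 1) + 16 * 1460;	/* cushion, as in the kernel */
	if (space == 0) {
		/* this is where do_div(grow, tp->rcvq_space.space) would trap */
		printf("space == 0: division by zero\n");
		return 1;
	}
	printf("grow = %llu\n", rcvwin * (copied - space) / space);
	return 0;
}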

>
> Note that if a driver uses 16KB of memory to hold a 1500 bytes packet,
> then a 10 MSS GRO packet is consuming 160 KB of memory,
> which is bigger than tcp_rmem[1]. TCP could decide to drop these fat packets.
>
> I wonder if your patch does not work around a more fundamental issue,
> I am still unable to reproduce the issue.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net] tcp: fix receive buffer autotuning to trigger for any valid advertised MSS
  2020-12-07 11:40 ` Hazem Mohamed Abuelfotoh
@ 2020-12-07 15:37   ` Eric Dumazet
  2020-12-07 16:08     ` Eric Dumazet
  0 siblings, 1 reply; 26+ messages in thread
From: Eric Dumazet @ 2020-12-07 15:37 UTC (permalink / raw)
  To: Hazem Mohamed Abuelfotoh
  Cc: netdev, stable, Yuchung Cheng, Neal Cardwell, Wei Wang, Strohman,
	Andy, Benjamin Herrenschmidt

On Mon, Dec 7, 2020 at 12:41 PM Hazem Mohamed Abuelfotoh
<abuehaze@amazon.com> wrote:
>
>     Previously receiver buffer auto-tuning starts after receiving
>     one advertised window amount of data.After the initial
>     receiver buffer was raised by
>     commit a337531b942b ("tcp: up initial rmem to 128KB
>     and SYN rwin to around 64KB"),the receiver buffer may
>     take too long for TCP autotuning to start raising
>     the receiver buffer size.
>     commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
>     tried to decrease the threshold at which TCP auto-tuning starts
>     but it's doesn't work well in some environments
>     where the receiver has large MTU (9001) especially with high RTT
>     connections as in these environments rcvq_space.space will be the same
>     as rcv_wnd so TCP autotuning will never start because
>     sender can't send more than rcv_wnd size in one round trip.
>     To address this issue this patch is decreasing the initial
>     rcvq_space.space so TCP autotuning kicks in whenever the sender is
>     able to send more than 5360 bytes in one round trip regardless the
>     receiver's configured MTU.
>
>     Fixes: a337531b942b ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB")
>     Fixes: 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
>
> Signed-off-by: Hazem Mohamed Abuelfotoh <abuehaze@amazon.com>
> ---
>  net/ipv4/tcp_input.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
> index 389d1b340248..f0ffac9e937b 100644
> --- a/net/ipv4/tcp_input.c
> +++ b/net/ipv4/tcp_input.c
> @@ -504,13 +504,14 @@ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb)
>  static void tcp_init_buffer_space(struct sock *sk)
>  {
>         int tcp_app_win = sock_net(sk)->ipv4.sysctl_tcp_app_win;
> +       struct inet_connection_sock *icsk = inet_csk(sk);
>         struct tcp_sock *tp = tcp_sk(sk);
>         int maxwin;
>
>         if (!(sk->sk_userlocks & SOCK_SNDBUF_LOCK))
>                 tcp_sndbuf_expand(sk);
>
> -       tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * tp->advmss);
> +       tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * icsk->icsk_ack.rcv_mss);

I find using icsk->icsk_ack.rcv_mss misleading.

I would either use TCP_MSS_DEFAULT, or maybe simply 0, since we had
no samples yet; there is little point in using a magic value.

Note that if a driver uses 16KB of memory to hold a 1500 bytes packet,
then a 10 MSS GRO packet is consuming 160 KB of memory,
which is bigger than tcp_rmem[1]. TCP could decide to drop these fat packets.
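
(Back-of-envelope for that figure, assuming 16KB of skb memory per
1500-byte frame: 10 MSS * 16384 bytes = 163840 bytes, i.e. more than
the 131072-byte default tcp_rmem[1].)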

I wonder if your patch does not work around a more fundamental issue,
I am still unable to reproduce the issue.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH net] tcp: fix receive buffer autotuning to trigger for any valid advertised MSS
       [not found] <4ABEB85B-262F-4657-BB69-4F37ABC0AE3D@amazon.com>
@ 2020-12-07 11:40 ` Hazem Mohamed Abuelfotoh
  2020-12-07 15:37   ` Eric Dumazet
  0 siblings, 1 reply; 26+ messages in thread
From: Hazem Mohamed Abuelfotoh @ 2020-12-07 11:40 UTC (permalink / raw)
  To: netdev
  Cc: stable, edumazet, ycheng, ncardwell, weiwan, astroh, benh,
	Hazem Mohamed Abuelfotoh

    Previously, receiver buffer auto-tuning started after receiving
    one advertised window's worth of data. After the initial
    receiver buffer was raised by
    commit a337531b942b ("tcp: up initial rmem to 128KB
    and SYN rwin to around 64KB"), it may take too long
    for TCP autotuning to start raising
    the receiver buffer size.
    commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
    tried to decrease the threshold at which TCP auto-tuning starts,
    but it doesn't work well in some environments
    where the receiver has a large MTU (9001), especially on high-RTT
    connections, as in these environments rcvq_space.space will be the
    same as rcv_wnd, so TCP autotuning will never start because the
    sender can't send more than rcv_wnd bytes in one round trip.
    To address this issue, this patch decreases the initial
    rcvq_space.space so that TCP autotuning kicks in whenever the sender
    is able to send more than 5360 bytes in one round trip, regardless of
    the receiver's configured MTU.

    Fixes: a337531b942b ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB")
    Fixes: 041a14d26715 ("tcp: start receiver buffer autotuning sooner")

Signed-off-by: Hazem Mohamed Abuelfotoh <abuehaze@amazon.com>
---
 net/ipv4/tcp_input.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 389d1b340248..f0ffac9e937b 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -504,13 +504,14 @@ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb)
 static void tcp_init_buffer_space(struct sock *sk)
 {
 	int tcp_app_win = sock_net(sk)->ipv4.sysctl_tcp_app_win;
+	struct inet_connection_sock *icsk = inet_csk(sk);
 	struct tcp_sock *tp = tcp_sk(sk);
 	int maxwin;
 
 	if (!(sk->sk_userlocks & SOCK_SNDBUF_LOCK))
 		tcp_sndbuf_expand(sk);
 
-	tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * tp->advmss);
+	tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * icsk->icsk_ack.rcv_mss);
 	tcp_mstamp_refresh(tp);
 	tp->rcvq_space.time = tp->tcp_mstamp;
 	tp->rcvq_space.seq = tp->copied_seq;
-- 
2.16.6








^ permalink raw reply related	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2020-12-08 16:47 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-12-04 18:06 [PATCH net-next] tcp: optimise receiver buffer autotuning initialisation for high latency connections Hazem Mohamed Abuelfotoh
2020-12-04 18:19 ` Mohamed Abuelfotoh, Hazem
2020-12-04 18:41   ` Eric Dumazet
     [not found]     ` <3F02FF08-EDA6-4DFD-8D93-479A5B05E25A@amazon.com>
2020-12-07 15:25       ` Eric Dumazet
2020-12-07 16:09         ` Mohamed Abuelfotoh, Hazem
2020-12-07 16:22           ` Eric Dumazet
2020-12-07 16:33             ` Neal Cardwell
2020-12-07 17:08               ` Eric Dumazet
2020-12-07 20:09                 ` Mohamed Abuelfotoh, Hazem
2020-12-07 23:22                   ` Eric Dumazet
2020-12-07 17:16               ` Mohamed Abuelfotoh, Hazem
2020-12-07 17:27                 ` Eric Dumazet
2020-12-08 16:28                   ` Mohamed Abuelfotoh, Hazem
2020-12-08 16:30                     ` Mohamed Abuelfotoh, Hazem
2020-12-08 16:46                     ` Eric Dumazet
2020-12-07 16:34             ` Mohamed Abuelfotoh, Hazem
2020-12-07 17:46               ` Greg KH
2020-12-07 17:54                 ` Mohamed Abuelfotoh, Hazem
2020-12-04 19:10 ` Eric Dumazet
2020-12-04 21:28 ` Neal Cardwell
2020-12-07 11:46   ` [PATCH net] tcp: fix receive buffer autotuning to trigger for any valid advertised MSS Hazem Mohamed Abuelfotoh
2020-12-07 18:53     ` Jakub Kicinski
     [not found] <4ABEB85B-262F-4657-BB69-4F37ABC0AE3D@amazon.com>
2020-12-07 11:40 ` Hazem Mohamed Abuelfotoh
2020-12-07 15:37   ` Eric Dumazet
2020-12-07 16:08     ` Eric Dumazet
2020-12-07 17:49       ` Mohamed Abuelfotoh, Hazem
