* [PATCH v3 0/1] gro: decrease size of CB
@ 2023-06-01 16:09 Richard Gobert
  2023-06-01 16:14 ` [PATCH v3 1/1] " Richard Gobert
                   ` (2 more replies)
  0 siblings, 3 replies; 21+ messages in thread
From: Richard Gobert @ 2023-06-01 16:09 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, aleksander.lobakin, lixiaoyan,
	lucien.xin, richardbgobert, alexanderduyck, netdev, linux-kernel

This patch frees up space in the GRO CB, which is currently at its maximum
size. It was previously submitted and reviewed as part of a patch series,
but is now reposted as a standalone patch, as suggested by Paolo.
(https://lore.kernel.org/netdev/889f2dc5e646992033e0d9b0951d5a42f1907e07.camel@redhat.com/)

Changelog:

v2 -> v3:
  * add comment

v1 -> v2:
  * remove inline keyword

Richard Gobert (1):
  gro: decrease size of CB

 include/net/gro.h | 26 ++++++++++++++++----------
 net/core/gro.c    | 19 ++++++++++++-------
 2 files changed, 28 insertions(+), 17 deletions(-)

-- 
2.36.1



* [PATCH v3 1/1] gro: decrease size of CB
  2023-06-01 16:09 [PATCH v3 0/1] gro: decrease size of CB Richard Gobert
@ 2023-06-01 16:14 ` Richard Gobert
  2023-06-06  7:25   ` Eric Dumazet
  2023-06-26  8:55   ` Gal Pressman
  2023-06-02 14:22 ` [PATCH v3 0/1] " Alexander Lobakin
  2023-06-06  9:20 ` patchwork-bot+netdevbpf
  2 siblings, 2 replies; 21+ messages in thread
From: Richard Gobert @ 2023-06-01 16:14 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, aleksander.lobakin, lixiaoyan,
	lucien.xin, alexanderduyck, netdev, linux-kernel

The GRO control block (NAPI_GRO_CB) is currently at its maximum size.
This commit reduces its size by putting two groups of fields that are
used only at different times into a union.

Specifically, the fields frag0 and frag0_len are the fields that make up
the frag0 optimisation mechanism, which is used during the initial
parsing of the SKB.

The fields last and age are used after the initial parsing, while the
SKB is stored in the GRO list, waiting for other packets to arrive.

There was one location in dev_gro_receive that modified the frag0 fields
after setting last and age. I changed this accordingly without altering
the code behaviour.

Signed-off-by: Richard Gobert <richardbgobert@gmail.com>
---
 include/net/gro.h | 26 ++++++++++++++++----------
 net/core/gro.c    | 19 ++++++++++++-------
 2 files changed, 28 insertions(+), 17 deletions(-)

diff --git a/include/net/gro.h b/include/net/gro.h
index a4fab706240d..7b47dd6ce94f 100644
--- a/include/net/gro.h
+++ b/include/net/gro.h
@@ -11,11 +11,23 @@
 #include <net/udp.h>
 
 struct napi_gro_cb {
-	/* Virtual address of skb_shinfo(skb)->frags[0].page + offset. */
-	void	*frag0;
+	union {
+		struct {
+			/* Virtual address of skb_shinfo(skb)->frags[0].page + offset. */
+			void	*frag0;
 
-	/* Length of frag0. */
-	unsigned int frag0_len;
+			/* Length of frag0. */
+			unsigned int frag0_len;
+		};
+
+		struct {
+			/* used in skb_gro_receive() slow path */
+			struct sk_buff *last;
+
+			/* jiffies when first packet was created/queued */
+			unsigned long age;
+		};
+	};
 
 	/* This indicates where we are processing relative to skb->data. */
 	int	data_offset;
@@ -32,9 +44,6 @@ struct napi_gro_cb {
 	/* Used in ipv6_gro_receive() and foo-over-udp */
 	u16	proto;
 
-	/* jiffies when first packet was created/queued */
-	unsigned long age;
-
 /* Used in napi_gro_cb::free */
 #define NAPI_GRO_FREE             1
 #define NAPI_GRO_FREE_STOLEN_HEAD 2
@@ -77,9 +86,6 @@ struct napi_gro_cb {
 
 	/* used to support CHECKSUM_COMPLETE for tunneling protocols */
 	__wsum	csum;
-
-	/* used in skb_gro_receive() slow path */
-	struct sk_buff *last;
 };
 
 #define NAPI_GRO_CB(skb) ((struct napi_gro_cb *)(skb)->cb)
diff --git a/net/core/gro.c b/net/core/gro.c
index 2d84165cb4f1..a709155994ad 100644
--- a/net/core/gro.c
+++ b/net/core/gro.c
@@ -460,6 +460,14 @@ static void gro_pull_from_frag0(struct sk_buff *skb, int grow)
 	}
 }
 
+static void gro_try_pull_from_frag0(struct sk_buff *skb)
+{
+	int grow = skb_gro_offset(skb) - skb_headlen(skb);
+
+	if (grow > 0)
+		gro_pull_from_frag0(skb, grow);
+}
+
 static void gro_flush_oldest(struct napi_struct *napi, struct list_head *head)
 {
 	struct sk_buff *oldest;
@@ -489,7 +497,6 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff
 	struct sk_buff *pp = NULL;
 	enum gro_result ret;
 	int same_flow;
-	int grow;
 
 	if (netif_elide_gro(skb->dev))
 		goto normal;
@@ -564,17 +571,14 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff
 	else
 		gro_list->count++;
 
+	/* Must be called before setting NAPI_GRO_CB(skb)->{age|last} */
+	gro_try_pull_from_frag0(skb);
 	NAPI_GRO_CB(skb)->age = jiffies;
 	NAPI_GRO_CB(skb)->last = skb;
 	if (!skb_is_gso(skb))
 		skb_shinfo(skb)->gso_size = skb_gro_len(skb);
 	list_add(&skb->list, &gro_list->list);
 	ret = GRO_HELD;
-
-pull:
-	grow = skb_gro_offset(skb) - skb_headlen(skb);
-	if (grow > 0)
-		gro_pull_from_frag0(skb, grow);
 ok:
 	if (gro_list->count) {
 		if (!test_bit(bucket, &napi->gro_bitmask))
@@ -587,7 +591,8 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff
 
 normal:
 	ret = GRO_NORMAL;
-	goto pull;
+	gro_try_pull_from_frag0(skb);
+	goto ok;
 }
 
 struct packet_offload *gro_find_receive_by_type(__be16 type)
-- 
2.36.1
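
For context on the "maximum size" constraint the commit message mentions:
NAPI_GRO_CB() overlays struct napi_gro_cb on skb->cb, which is 48 bytes, so
the struct must never outgrow that buffer. A minimal compile-time sketch of
the invariant (illustrative only, not part of the patch) could look like:

    /* assumes the mainline definitions of sk_buff and napi_gro_cb */
    #include <linux/build_bug.h>
    #include <linux/skbuff.h>
    #include <net/gro.h>

    /* napi_gro_cb lives in skb->cb; unioning frag0/frag0_len with last/age
     * keeps the struct within the 48-byte control buffer. */
    static_assert(sizeof(struct napi_gro_cb) <= sizeof_field(struct sk_buff, cb),
                  "struct napi_gro_cb must fit inside skb->cb");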



* Re: [PATCH v3 0/1] gro: decrease size of CB
  2023-06-01 16:09 [PATCH v3 0/1] gro: decrease size of CB Richard Gobert
  2023-06-01 16:14 ` [PATCH v3 1/1] " Richard Gobert
@ 2023-06-02 14:22 ` Alexander Lobakin
  2023-06-05 13:58   ` Richard Gobert
  2023-06-06  9:20 ` patchwork-bot+netdevbpf
  2 siblings, 1 reply; 21+ messages in thread
From: Alexander Lobakin @ 2023-06-02 14:22 UTC (permalink / raw)
  To: Richard Gobert
  Cc: davem, edumazet, kuba, pabeni, lixiaoyan, lucien.xin,
	alexanderduyck, netdev, linux-kernel

From: Richard Gobert <richardbgobert@gmail.com>
Date: Thu, 1 Jun 2023 18:09:28 +0200

> This patch frees up space in the GRO CB, which is currently at its maximum
> size. It was previously submitted and reviewed as part of a patch series,
> but is now reposted as a standalone patch, as suggested by Paolo.
> (https://lore.kernel.org/netdev/889f2dc5e646992033e0d9b0951d5a42f1907e07.camel@redhat.com/)
> 
> Changelog:
> 
> v2 -> v3:
>   * add comment
> 
> v1 -> v2:
>   * remove inline keyword

I hope you've checked that there's no difference in object code with and
w/o `inline`? Sometimes the compilers do weird things and stop inlining
oneliners if they're used more than once. skb_gro_reset_offset() is
marked `inline` exactly due to that =\

> 
> Richard Gobert (1):
>   gro: decrease size of CB
> 
>  include/net/gro.h | 26 ++++++++++++++++----------
>  net/core/gro.c    | 19 ++++++++++++-------
>  2 files changed, 28 insertions(+), 17 deletions(-)
Thanks,
Olek


* Re: [PATCH v3 0/1] gro: decrease size of CB
  2023-06-02 14:22 ` [PATCH v3 0/1] " Alexander Lobakin
@ 2023-06-05 13:58   ` Richard Gobert
  2023-06-06 13:24     ` Alexander Lobakin
  0 siblings, 1 reply; 21+ messages in thread
From: Richard Gobert @ 2023-06-05 13:58 UTC (permalink / raw)
  To: Alexander Lobakin
  Cc: davem, edumazet, kuba, pabeni, lixiaoyan, lucien.xin,
	alexanderduyck, netdev, linux-kernel

> I hope you've checked that there's no difference in object code with and
> w/o `inline`? Sometimes the compilers do weird things and stop inlining
> oneliners if they're used more than once. skb_gro_reset_offset() is
> marked `inline` exactly due to that =\

I checked with the standard x86-64 and arm64 gcc compilers.
Are there any other cases you'd like me to check?
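
For illustration, one way such a check could be done, by building
net/core/gro.o with and without the change and comparing the two objects
(scripts/bloat-o-meter and objdump are assumed to be available):

    make net/core/gro.o                                # build each variant, keep a copy
    ./scripts/bloat-o-meter gro.o.before gro.o.after   # per-symbol size changes
    objdump -d gro.o.before > before.s
    objdump -d gro.o.after  > after.s
    diff -u before.s after.s                           # an empty diff means identical code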


* Re: [PATCH v3 1/1] gro: decrease size of CB
  2023-06-01 16:14 ` [PATCH v3 1/1] " Richard Gobert
@ 2023-06-06  7:25   ` Eric Dumazet
  2023-06-26  8:55   ` Gal Pressman
  1 sibling, 0 replies; 21+ messages in thread
From: Eric Dumazet @ 2023-06-06  7:25 UTC (permalink / raw)
  To: Richard Gobert
  Cc: davem, kuba, pabeni, aleksander.lobakin, lixiaoyan, lucien.xin,
	alexanderduyck, netdev, linux-kernel

On Thu, Jun 1, 2023 at 6:14 PM Richard Gobert <richardbgobert@gmail.com> wrote:
>
> The GRO control block (NAPI_GRO_CB) is currently at its maximum size.
> This commit reduces its size by putting two groups of fields that are
> used only at different times into a union.
>
> Specifically, the fields frag0 and frag0_len are the fields that make up
> the frag0 optimisation mechanism, which is used during the initial
> parsing of the SKB.
>
> The fields last and age are used after the initial parsing, while the
> SKB is stored in the GRO list, waiting for other packets to arrive.
>
> There was one location in dev_gro_receive that modified the frag0 fields
> after setting last and age. I changed this accordingly without altering
> the code behaviour.
>
> Signed-off-by: Richard Gobert <richardbgobert@gmail.com>

Reviewed-by: Eric Dumazet <edumazet@google.com>


* Re: [PATCH v3 0/1] gro: decrease size of CB
  2023-06-01 16:09 [PATCH v3 0/1] gro: decrease size of CB Richard Gobert
  2023-06-01 16:14 ` [PATCH v3 1/1] " Richard Gobert
  2023-06-02 14:22 ` [PATCH v3 0/1] " Alexander Lobakin
@ 2023-06-06  9:20 ` patchwork-bot+netdevbpf
  2 siblings, 0 replies; 21+ messages in thread
From: patchwork-bot+netdevbpf @ 2023-06-06  9:20 UTC (permalink / raw)
  To: Richard Gobert
  Cc: davem, edumazet, kuba, pabeni, aleksander.lobakin, lixiaoyan,
	lucien.xin, alexanderduyck, netdev, linux-kernel

Hello:

This patch was applied to netdev/net-next.git (main)
by Paolo Abeni <pabeni@redhat.com>:

On Thu, 1 Jun 2023 18:09:28 +0200 you wrote:
> This patch frees up space in the GRO CB, which is currently at its maximum
> size. It was previously submitted and reviewed as part of a patch series,
> but is now reposted as a standalone patch, as suggested by Paolo.
> (https://lore.kernel.org/netdev/889f2dc5e646992033e0d9b0951d5a42f1907e07.camel@redhat.com/)
> 
> Changelog:
> 
> [...]

Here is the summary with links:
  - [v3,1/1] gro: decrease size of CB
    https://git.kernel.org/netdev/net-next/c/7b355b76e2b3

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




* Re: [PATCH v3 0/1] gro: decrease size of CB
  2023-06-05 13:58   ` Richard Gobert
@ 2023-06-06 13:24     ` Alexander Lobakin
  0 siblings, 0 replies; 21+ messages in thread
From: Alexander Lobakin @ 2023-06-06 13:24 UTC (permalink / raw)
  To: Richard Gobert
  Cc: davem, edumazet, kuba, pabeni, lixiaoyan, lucien.xin,
	alexanderduyck, netdev, linux-kernel

From: Richard Gobert <richardbgobert@gmail.com>
Date: Mon, 5 Jun 2023 15:58:23 +0200

>> I hope you've checked that there's no difference in object code with and
>> w/o `inline`? Sometimes the compilers do weird things and stop inlining
>> oneliners if they're used more than once. skb_gro_reset_offset() is
>> marked `inline` exactly due to that =\
> 
> I checked with the standard x86-64 and arm64 gcc compilers.
> Are there any other cases you'd like me to check?
I'd say that's enough, so no problem. Sometimes odd things happen,
but you can never predict all of them.
I can't give a Reviewed-by now that it has already been applied :D Had a
long weekend and all that. But the change is really good.

Thanks,
Olek


* Re: [PATCH v3 1/1] gro: decrease size of CB
  2023-06-01 16:14 ` [PATCH v3 1/1] " Richard Gobert
  2023-06-06  7:25   ` Eric Dumazet
@ 2023-06-26  8:55   ` Gal Pressman
  2023-06-27 14:21     ` David Ahern
  2023-06-29 12:36     ` Richard Gobert
  1 sibling, 2 replies; 21+ messages in thread
From: Gal Pressman @ 2023-06-26  8:55 UTC (permalink / raw)
  To: Richard Gobert, davem, edumazet, kuba, pabeni,
	aleksander.lobakin, lixiaoyan, lucien.xin, alexanderduyck,
	netdev, linux-kernel

On 01/06/2023 19:14, Richard Gobert wrote:
> The GRO control block (NAPI_GRO_CB) is currently at its maximum size.
> This commit reduces its size by putting two groups of fields that are
> used only at different times into a union.
> 
> Specifically, the fields frag0 and frag0_len are the fields that make up
> the frag0 optimisation mechanism, which is used during the initial
> parsing of the SKB.
> 
> The fields last and age are used after the initial parsing, while the
> SKB is stored in the GRO list, waiting for other packets to arrive.
> 
> There was one location in dev_gro_receive that modified the frag0 fields
> after setting last and age. I changed this accordingly without altering
> the code behaviour.
> 
> Signed-off-by: Richard Gobert <richardbgobert@gmail.com>

Hello Richard,

I believe this commit broke gro over udp tunnels.
I'm running iperf tcp traffic over geneve interfaces and the bandwidth
is pretty much zero.

Turning off gro on the receiving side (or reverting this commit)
resolves the issue.
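
A minimal sketch of this kind of setup, assuming geneve over the eth2
underlay (device names, the geneve ID and the addresses are illustrative,
not the exact configuration used):

    # geneve tunnel over the eth2 underlay on each host
    ip link add gnv0 type geneve id 42 remote 192.168.1.2
    ip addr add 10.0.0.1/24 dev gnv0
    ip link set gnv0 up

    # TCP traffic through the tunnel
    iperf -c 10.0.0.2

    # on the receiving side, toggling GRO on the underlay NIC
    ethtool -K eth2 gro off     # works around the stall
    ethtool -K eth2 gro on      # throughput collapses again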


* Re: [PATCH v3 1/1] gro: decrease size of CB
  2023-06-26  8:55   ` Gal Pressman
@ 2023-06-27 14:21     ` David Ahern
  2023-06-28 12:42       ` Gal Pressman
  2023-06-29 12:36     ` Richard Gobert
  1 sibling, 1 reply; 21+ messages in thread
From: David Ahern @ 2023-06-27 14:21 UTC (permalink / raw)
  To: Gal Pressman, Richard Gobert, davem, edumazet, kuba, pabeni,
	aleksander.lobakin, lixiaoyan, lucien.xin, alexanderduyck,
	netdev, linux-kernel

On 6/26/23 2:55 AM, Gal Pressman wrote:
> I believe this commit broke gro over udp tunnels.
> I'm running iperf tcp traffic over geneve interfaces and the bandwidth
> is pretty much zero.
> 

Could you add a test script to tools/testing/selftests/net? It will help
catch future regressions.



* Re: [PATCH v3 1/1] gro: decrease size of CB
  2023-06-27 14:21     ` David Ahern
@ 2023-06-28 12:42       ` Gal Pressman
  2023-06-28 14:19         ` David Ahern
  0 siblings, 1 reply; 21+ messages in thread
From: Gal Pressman @ 2023-06-28 12:42 UTC (permalink / raw)
  To: David Ahern, Richard Gobert, davem, edumazet, kuba, pabeni,
	aleksander.lobakin, lixiaoyan, lucien.xin, alexanderduyck,
	netdev, linux-kernel

On 27/06/2023 17:21, David Ahern wrote:
> On 6/26/23 2:55 AM, Gal Pressman wrote:
>> I believe this commit broke gro over udp tunnels.
>> I'm running iperf tcp traffic over geneve interfaces and the bandwidth
>> is pretty much zero.
>>
> 
> Could you add a test script to tools/testing/selftests/net? It will help
> catch future regressions.
> 

I'm checking internally; someone from the team might be able to work on
this, though I'm not sure that a test which verifies bandwidth makes much
sense as a selftest.

Richard, did you get a chance to look at this issue? Maybe we should
revert this patch until it is properly fixed?


* Re: [PATCH v3 1/1] gro: decrease size of CB
  2023-06-28 12:42       ` Gal Pressman
@ 2023-06-28 14:19         ` David Ahern
  2023-08-23 14:43           ` Gal Pressman
  0 siblings, 1 reply; 21+ messages in thread
From: David Ahern @ 2023-06-28 14:19 UTC (permalink / raw)
  To: Gal Pressman, Richard Gobert, davem, edumazet, kuba, pabeni,
	aleksander.lobakin, lixiaoyan, lucien.xin, alexanderduyck,
	netdev, linux-kernel

On 6/28/23 6:42 AM, Gal Pressman wrote:
> On 27/06/2023 17:21, David Ahern wrote:
>> On 6/26/23 2:55 AM, Gal Pressman wrote:
>>> I believe this commit broke gro over udp tunnels.
>>> I'm running iperf tcp traffic over geneve interfaces and the bandwidth
>>> is pretty much zero.
>>>
>>
>> Could you add a test script to tools/testing/selftests/net? It will help
>> catch future regressions.
>>
> 
> I'm checking internally, someone from the team might be able to work on
> this, though I'm not sure that a test that verifies bandwidth makes much
> sense as a selftest.
> 

With veth and namespaces I expect performance levels of up to 25-30G,
depending on the test. When something fundamental breaks, as it did with
this patch, a drop to < 1G would be a red flag, so there is value in the test.
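
A rough sketch of what such a selftest could look like (interface names,
addresses and the threshold are illustrative; iperf3 and jq are assumed to
be available):

    #!/bin/sh
    # two namespaces joined by a veth pair, geneve on top, TCP throughput through it
    ip netns add ns1; ip netns add ns2
    ip link add v1 type veth peer name v2
    ip link set v1 netns ns1; ip link set v2 netns ns2
    ip -n ns1 addr add 192.168.1.1/24 dev v1; ip -n ns1 link set v1 up
    ip -n ns2 addr add 192.168.1.2/24 dev v2; ip -n ns2 link set v2 up
    ip -n ns1 link add gnv0 type geneve id 1 remote 192.168.1.2
    ip -n ns2 link add gnv0 type geneve id 1 remote 192.168.1.1
    ip -n ns1 addr add 10.0.0.1/24 dev gnv0; ip -n ns1 link set gnv0 up
    ip -n ns2 addr add 10.0.0.2/24 dev gnv0; ip -n ns2 link set gnv0 up

    ip netns exec ns2 iperf3 -s -D
    sleep 1
    bps=$(ip netns exec ns1 iperf3 -c 10.0.0.2 -t 5 -J | \
          jq '.end.sum_received.bits_per_second')

    # veth + GSO/GRO normally gives tens of Gbit/s; a collapse below ~1 Gbit/s
    # is the kind of red flag described above
    awk -v b="$bps" 'BEGIN { exit (b < 1e9) }' || echo "FAIL: suspicious GRO throughput"

    ip netns pids ns2 | xargs -r kill
    ip netns del ns1; ip netns del ns2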


* Re: [PATCH v3 1/1] gro: decrease size of CB
  2023-06-26  8:55   ` Gal Pressman
  2023-06-27 14:21     ` David Ahern
@ 2023-06-29 12:36     ` Richard Gobert
  2023-06-29 13:04       ` Gal Pressman
  1 sibling, 1 reply; 21+ messages in thread
From: Richard Gobert @ 2023-06-29 12:36 UTC (permalink / raw)
  To: Gal Pressman
  Cc: davem, edumazet, kuba, pabeni, aleksander.lobakin, lixiaoyan,
	lucien.xin, alexanderduyck, netdev, linux-kernel

> On 01/06/2023 19:14, Richard Gobert wrote:
> > The GRO control block (NAPI_GRO_CB) is currently at its maximum size.
> > This commit reduces its size by putting two groups of fields that are
> > used only at different times into a union.
> > 
> > Specifically, the fields frag0 and frag0_len are the fields that make up
> > the frag0 optimisation mechanism, which is used during the initial
> > parsing of the SKB.
> > 
> > The fields last and age are used after the initial parsing, while the
> > SKB is stored in the GRO list, waiting for other packets to arrive.
> > 
> > There was one location in dev_gro_receive that modified the frag0 fields
> > after setting last and age. I changed this accordingly without altering
> > the code behaviour.
> > 
> > Signed-off-by: Richard Gobert <richardbgobert@gmail.com>
> 
> Hello Richard,
> 
> I believe this commit broke gro over udp tunnels.
> I'm running iperf tcp traffic over geneve interfaces and the bandwidth
> is pretty much zero.
> 
> Turning off gro on the receiving side (or reverting this commit)
> resolves the issue.

Sorry for the late response.
I am starting to look into it right now. Can you please share more details about your setup?
- I'd like to see the output of these commands:
  ethtool -k
  sysctl net
- The iperf command
- Your network topology


* Re: [PATCH v3 1/1] gro: decrease size of CB
  2023-06-29 12:36     ` Richard Gobert
@ 2023-06-29 13:04       ` Gal Pressman
  2023-06-30 15:39         ` Richard Gobert
  0 siblings, 1 reply; 21+ messages in thread
From: Gal Pressman @ 2023-06-29 13:04 UTC (permalink / raw)
  To: Richard Gobert
  Cc: davem, edumazet, kuba, pabeni, aleksander.lobakin, lixiaoyan,
	lucien.xin, alexanderduyck, netdev, linux-kernel

On 29/06/2023 15:36, Richard Gobert wrote:
>> On 01/06/2023 19:14, Richard Gobert wrote:
>>> The GRO control block (NAPI_GRO_CB) is currently at its maximum size.
>>> This commit reduces its size by putting two groups of fields that are
>>> used only at different times into a union.
>>>
>>> Specifically, the fields frag0 and frag0_len are the fields that make up
>>> the frag0 optimisation mechanism, which is used during the initial
>>> parsing of the SKB.
>>>
>>> The fields last and age are used after the initial parsing, while the
>>> SKB is stored in the GRO list, waiting for other packets to arrive.
>>>
>>> There was one location in dev_gro_receive that modified the frag0 fields
>>> after setting last and age. I changed this accordingly without altering
>>> the code behaviour.
>>>
>>> Signed-off-by: Richard Gobert <richardbgobert@gmail.com>
>>
>> Hello Richard,
>>
>> I believe this commit broke gro over udp tunnels.
>> I'm running iperf tcp traffic over geneve interfaces and the bandwidth
>> is pretty much zero.
>>
>> Turning off gro on the receiving side (or reverting this commit)
>> resolves the issue.
> 
> Sorry for the late response.
> I am starting to look into it right now. Can you please share more details about your setup?
> - I'd like to see the output of these commands:

Sure!

>   ethtool -k

Features for eth2:
rx-checksumming: on
tx-checksumming: on
        tx-checksum-ipv4: off [fixed]
        tx-checksum-ip-generic: on
        tx-checksum-ipv6: off [fixed]
        tx-checksum-fcoe-crc: off [fixed]
        tx-checksum-sctp: off [fixed]
scatter-gather: on
        tx-scatter-gather: on
        tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
        tx-tcp-segmentation: on
        tx-tcp-ecn-segmentation: off [fixed]
        tx-tcp-mangleid-segmentation: off
        tx-tcp6-segmentation: on
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
highdma: on [fixed]
rx-vlan-filter: on
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: on
tx-gre-csum-segmentation: on
tx-ipxip4-segmentation: on
tx-ipxip6-segmentation: on
tx-udp_tnl-segmentation: on
tx-udp_tnl-csum-segmentation: on
tx-gso-partial: on
tx-tunnel-remcsum-segmentation: off [fixed]
tx-sctp-segmentation: off [fixed]
tx-esp-segmentation: on
tx-udp-segmentation: on
tx-gso-list: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off
rx-all: on
tx-vlan-stag-hw-insert: on
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: on [fixed]
l2-fwd-offload: off [fixed]
hw-tc-offload: on
esp-hw-offload: on [fixed]
esp-tx-csum-hw-offload: on [fixed]
rx-udp_tunnel-port-offload: on
tls-hw-tx-offload: on
tls-hw-rx-offload: off
rx-gro-hw: off [fixed]
tls-hw-record: off [fixed]
rx-gro-list: off
macsec-hw-offload: on
rx-udp-gro-forwarding: off
hsr-tag-ins-offload: off [fixed]
hsr-tag-rm-offload: off [fixed]
hsr-fwd-offload: off [fixed]
hsr-dup-offload: off [fixed]

>   sysctl net

net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-filter-pppoe-tagged = 0
net.bridge.bridge-nf-filter-vlan-tagged = 0
net.bridge.bridge-nf-pass-vlan-input-dev = 0
net.core.bpf_jit_enable = 1
net.core.bpf_jit_harden = 0
net.core.bpf_jit_kallsyms = 1
net.core.bpf_jit_limit = 796917760
net.core.busy_poll = 0
net.core.busy_read = 0
net.core.default_qdisc = pfifo_fast
net.core.dev_weight = 64
net.core.dev_weight_rx_bias = 1
net.core.dev_weight_tx_bias = 1
net.core.devconf_inherit_init_net = 0
net.core.fb_tunnels_only_for_init_net = 0
net.core.flow_limit_cpu_bitmap = 000
net.core.flow_limit_table_len = 4096
net.core.gro_normal_batch = 8
net.core.high_order_alloc_disable = 0
net.core.max_skb_frags = 17
net.core.message_burst = 10
net.core.message_cost = 5
net.core.netdev_budget = 300
net.core.netdev_budget_usecs = 8000
net.core.netdev_max_backlog = 1000
net.core.netdev_rss_key =
60:d8:27:5f:2d:0b:db:ad:3f:6c:8f:8b:e3:18:ca:3a:78:83:24:cb:8c:a4:f4:77:5d:d5:31:82:44:2e:a3:10:6a:00:25:ec:1a:b2:81:43:5b:45:3c:ef:bc:49:02:93:a9:bf:a4:e0
net.core.netdev_tstamp_prequeue = 1
net.core.netdev_unregister_timeout_secs = 10
net.core.optmem_max = 81920
net.core.rmem_default = 212992
net.core.rmem_max = 212992
net.core.rps_default_mask = 000
net.core.rps_sock_flow_entries = 0
net.core.skb_defer_max = 64
net.core.somaxconn = 4096
net.core.tstamp_allow_data = 1
net.core.txrehash = 1
net.core.warnings = 0
net.core.wmem_default = 212992
net.core.wmem_max = 212992
net.core.xfrm_acq_expires = 30
net.core.xfrm_aevent_etime = 10
net.core.xfrm_aevent_rseqth = 2
net.core.xfrm_larval_drop = 1
net.dccp.default.request_retries = 6
net.dccp.default.retries1 = 3
net.dccp.default.retries2 = 15
net.dccp.default.rx_ccid = 2
net.dccp.default.seq_window = 100
net.dccp.default.sync_ratelimit = 124
net.dccp.default.tx_ccid = 2
net.dccp.default.tx_qlen = 5
net.ipv4.conf.all.accept_local = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.arp_accept = 0
net.ipv4.conf.all.arp_announce = 0
net.ipv4.conf.all.arp_evict_nocarrier = 1
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.arp_ignore = 0
net.ipv4.conf.all.arp_notify = 0
net.ipv4.conf.all.bc_forwarding = 0
net.ipv4.conf.all.bootp_relay = 0
net.ipv4.conf.all.disable_policy = 0
net.ipv4.conf.all.disable_xfrm = 0
net.ipv4.conf.all.drop_gratuitous_arp = 0
net.ipv4.conf.all.drop_unicast_in_l2_multicast = 0
net.ipv4.conf.all.force_igmp_version = 0
net.ipv4.conf.all.forwarding = 1
net.ipv4.conf.all.igmpv2_unsolicited_report_interval = 10000
net.ipv4.conf.all.igmpv3_unsolicited_report_interval = 1000
net.ipv4.conf.all.ignore_routes_with_linkdown = 0
net.ipv4.conf.all.log_martians = 0
net.ipv4.conf.all.mc_forwarding = 0
net.ipv4.conf.all.medium_id = 0
net.ipv4.conf.all.promote_secondaries = 0
net.ipv4.conf.all.proxy_arp = 0
net.ipv4.conf.all.proxy_arp_pvlan = 0
net.ipv4.conf.all.route_localnet = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.secure_redirects = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.shared_media = 1
net.ipv4.conf.all.src_valid_mark = 0
net.ipv4.conf.all.tag = 0
net.ipv4.conf.default.accept_local = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.default.arp_accept = 0
net.ipv4.conf.default.arp_announce = 0
net.ipv4.conf.default.arp_evict_nocarrier = 1
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.arp_ignore = 0
net.ipv4.conf.default.arp_notify = 0
net.ipv4.conf.default.bc_forwarding = 0
net.ipv4.conf.default.bootp_relay = 0
net.ipv4.conf.default.disable_policy = 0
net.ipv4.conf.default.disable_xfrm = 0
net.ipv4.conf.default.drop_gratuitous_arp = 0
net.ipv4.conf.default.drop_unicast_in_l2_multicast = 0
net.ipv4.conf.default.force_igmp_version = 0
net.ipv4.conf.default.forwarding = 1
net.ipv4.conf.default.igmpv2_unsolicited_report_interval = 10000
net.ipv4.conf.default.igmpv3_unsolicited_report_interval = 1000
net.ipv4.conf.default.ignore_routes_with_linkdown = 0
net.ipv4.conf.default.log_martians = 0
net.ipv4.conf.default.mc_forwarding = 0
net.ipv4.conf.default.medium_id = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.conf.default.proxy_arp_pvlan = 0
net.ipv4.conf.default.route_localnet = 0
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.default.secure_redirects = 1
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.default.shared_media = 1
net.ipv4.conf.default.src_valid_mark = 0
net.ipv4.conf.default.tag = 0
net.ipv4.conf.docker0.accept_local = 0
net.ipv4.conf.docker0.accept_redirects = 0
net.ipv4.conf.docker0.accept_source_route = 0
net.ipv4.conf.docker0.arp_accept = 0
net.ipv4.conf.docker0.arp_announce = 0
net.ipv4.conf.docker0.arp_evict_nocarrier = 1
net.ipv4.conf.docker0.arp_filter = 0
net.ipv4.conf.docker0.arp_ignore = 0
net.ipv4.conf.docker0.arp_notify = 0
net.ipv4.conf.docker0.bc_forwarding = 0
net.ipv4.conf.docker0.bootp_relay = 0
net.ipv4.conf.docker0.disable_policy = 0
net.ipv4.conf.docker0.disable_xfrm = 0
net.ipv4.conf.docker0.drop_gratuitous_arp = 0
net.ipv4.conf.docker0.drop_unicast_in_l2_multicast = 0
net.ipv4.conf.docker0.force_igmp_version = 0
net.ipv4.conf.docker0.forwarding = 1
net.ipv4.conf.docker0.igmpv2_unsolicited_report_interval = 10000
net.ipv4.conf.docker0.igmpv3_unsolicited_report_interval = 1000
net.ipv4.conf.docker0.ignore_routes_with_linkdown = 0
net.ipv4.conf.docker0.log_martians = 0
net.ipv4.conf.docker0.mc_forwarding = 0
net.ipv4.conf.docker0.medium_id = 0
net.ipv4.conf.docker0.promote_secondaries = 1
net.ipv4.conf.docker0.proxy_arp = 0
net.ipv4.conf.docker0.proxy_arp_pvlan = 0
net.ipv4.conf.docker0.route_localnet = 0
net.ipv4.conf.docker0.rp_filter = 2
net.ipv4.conf.docker0.secure_redirects = 1
net.ipv4.conf.docker0.send_redirects = 0
net.ipv4.conf.docker0.shared_media = 1
net.ipv4.conf.docker0.src_valid_mark = 0
net.ipv4.conf.docker0.tag = 0
net.ipv4.conf.eth0.accept_local = 0
net.ipv4.conf.eth0.accept_redirects = 0
net.ipv4.conf.eth0.accept_source_route = 0
net.ipv4.conf.eth0.arp_accept = 0
net.ipv4.conf.eth0.arp_announce = 0
net.ipv4.conf.eth0.arp_evict_nocarrier = 1
net.ipv4.conf.eth0.arp_filter = 0
net.ipv4.conf.eth0.arp_ignore = 0
net.ipv4.conf.eth0.arp_notify = 0
net.ipv4.conf.eth0.bc_forwarding = 0
net.ipv4.conf.eth0.bootp_relay = 0
net.ipv4.conf.eth0.disable_policy = 0
net.ipv4.conf.eth0.disable_xfrm = 0
net.ipv4.conf.eth0.drop_gratuitous_arp = 0
net.ipv4.conf.eth0.drop_unicast_in_l2_multicast = 0
net.ipv4.conf.eth0.force_igmp_version = 0
net.ipv4.conf.eth0.forwarding = 1
net.ipv4.conf.eth0.igmpv2_unsolicited_report_interval = 10000
net.ipv4.conf.eth0.igmpv3_unsolicited_report_interval = 1000
net.ipv4.conf.eth0.ignore_routes_with_linkdown = 0
net.ipv4.conf.eth0.log_martians = 0
net.ipv4.conf.eth0.mc_forwarding = 0
net.ipv4.conf.eth0.medium_id = 0
net.ipv4.conf.eth0.promote_secondaries = 1
net.ipv4.conf.eth0.proxy_arp = 0
net.ipv4.conf.eth0.proxy_arp_pvlan = 0
net.ipv4.conf.eth0.route_localnet = 0
net.ipv4.conf.eth0.rp_filter = 2
net.ipv4.conf.eth0.secure_redirects = 1
net.ipv4.conf.eth0.send_redirects = 0
net.ipv4.conf.eth0.shared_media = 1
net.ipv4.conf.eth0.src_valid_mark = 0
net.ipv4.conf.eth0.tag = 0
net.ipv4.conf.eth1.accept_local = 0
net.ipv4.conf.eth1.accept_redirects = 0
net.ipv4.conf.eth1.accept_source_route = 0
net.ipv4.conf.eth1.arp_accept = 0
net.ipv4.conf.eth1.arp_announce = 0
net.ipv4.conf.eth1.arp_evict_nocarrier = 1
net.ipv4.conf.eth1.arp_filter = 0
net.ipv4.conf.eth1.arp_ignore = 0
net.ipv4.conf.eth1.arp_notify = 0
net.ipv4.conf.eth1.bc_forwarding = 0
net.ipv4.conf.eth1.bootp_relay = 0
net.ipv4.conf.eth1.disable_policy = 0
net.ipv4.conf.eth1.disable_xfrm = 0
net.ipv4.conf.eth1.drop_gratuitous_arp = 0
net.ipv4.conf.eth1.drop_unicast_in_l2_multicast = 0
net.ipv4.conf.eth1.force_igmp_version = 0
net.ipv4.conf.eth1.forwarding = 1
net.ipv4.conf.eth1.igmpv2_unsolicited_report_interval = 10000
net.ipv4.conf.eth1.igmpv3_unsolicited_report_interval = 1000
net.ipv4.conf.eth1.ignore_routes_with_linkdown = 0
net.ipv4.conf.eth1.log_martians = 0
net.ipv4.conf.eth1.mc_forwarding = 0
net.ipv4.conf.eth1.medium_id = 0
net.ipv4.conf.eth1.promote_secondaries = 1
net.ipv4.conf.eth1.proxy_arp = 0
net.ipv4.conf.eth1.proxy_arp_pvlan = 0
net.ipv4.conf.eth1.route_localnet = 0
net.ipv4.conf.eth1.rp_filter = 2
net.ipv4.conf.eth1.secure_redirects = 1
net.ipv4.conf.eth1.send_redirects = 0
net.ipv4.conf.eth1.shared_media = 1
net.ipv4.conf.eth1.src_valid_mark = 0
net.ipv4.conf.eth1.tag = 0
net.ipv4.conf.eth2.accept_local = 0
net.ipv4.conf.eth2.accept_redirects = 0
net.ipv4.conf.eth2.accept_source_route = 0
net.ipv4.conf.eth2.arp_accept = 0
net.ipv4.conf.eth2.arp_announce = 0
net.ipv4.conf.eth2.arp_evict_nocarrier = 1
net.ipv4.conf.eth2.arp_filter = 0
net.ipv4.conf.eth2.arp_ignore = 0
net.ipv4.conf.eth2.arp_notify = 0
net.ipv4.conf.eth2.bc_forwarding = 0
net.ipv4.conf.eth2.bootp_relay = 0
net.ipv4.conf.eth2.disable_policy = 0
net.ipv4.conf.eth2.disable_xfrm = 0
net.ipv4.conf.eth2.drop_gratuitous_arp = 0
net.ipv4.conf.eth2.drop_unicast_in_l2_multicast = 0
net.ipv4.conf.eth2.force_igmp_version = 0
net.ipv4.conf.eth2.forwarding = 1
net.ipv4.conf.eth2.igmpv2_unsolicited_report_interval = 10000
net.ipv4.conf.eth2.igmpv3_unsolicited_report_interval = 1000
net.ipv4.conf.eth2.ignore_routes_with_linkdown = 0
net.ipv4.conf.eth2.log_martians = 0
net.ipv4.conf.eth2.mc_forwarding = 0
net.ipv4.conf.eth2.medium_id = 0
net.ipv4.conf.eth2.promote_secondaries = 1
net.ipv4.conf.eth2.proxy_arp = 0
net.ipv4.conf.eth2.proxy_arp_pvlan = 0
net.ipv4.conf.eth2.route_localnet = 0
net.ipv4.conf.eth2.rp_filter = 2
net.ipv4.conf.eth2.secure_redirects = 1
net.ipv4.conf.eth2.send_redirects = 0
net.ipv4.conf.eth2.shared_media = 1
net.ipv4.conf.eth2.src_valid_mark = 0
net.ipv4.conf.eth2.tag = 0
net.ipv4.conf.eth3.accept_local = 0
net.ipv4.conf.eth3.accept_redirects = 0
net.ipv4.conf.eth3.accept_source_route = 0
net.ipv4.conf.eth3.arp_accept = 0
net.ipv4.conf.eth3.arp_announce = 0
net.ipv4.conf.eth3.arp_evict_nocarrier = 1
net.ipv4.conf.eth3.arp_filter = 0
net.ipv4.conf.eth3.arp_ignore = 0
net.ipv4.conf.eth3.arp_notify = 0
net.ipv4.conf.eth3.bc_forwarding = 0
net.ipv4.conf.eth3.bootp_relay = 0
net.ipv4.conf.eth3.disable_policy = 0
net.ipv4.conf.eth3.disable_xfrm = 0
net.ipv4.conf.eth3.drop_gratuitous_arp = 0
net.ipv4.conf.eth3.drop_unicast_in_l2_multicast = 0
net.ipv4.conf.eth3.force_igmp_version = 0
net.ipv4.conf.eth3.forwarding = 1
net.ipv4.conf.eth3.igmpv2_unsolicited_report_interval = 10000
net.ipv4.conf.eth3.igmpv3_unsolicited_report_interval = 1000
net.ipv4.conf.eth3.ignore_routes_with_linkdown = 0
net.ipv4.conf.eth3.log_martians = 0
net.ipv4.conf.eth3.mc_forwarding = 0
net.ipv4.conf.eth3.medium_id = 0
net.ipv4.conf.eth3.promote_secondaries = 1
net.ipv4.conf.eth3.proxy_arp = 0
net.ipv4.conf.eth3.proxy_arp_pvlan = 0
net.ipv4.conf.eth3.route_localnet = 0
net.ipv4.conf.eth3.rp_filter = 2
net.ipv4.conf.eth3.secure_redirects = 1
net.ipv4.conf.eth3.send_redirects = 0
net.ipv4.conf.eth3.shared_media = 1
net.ipv4.conf.eth3.src_valid_mark = 0
net.ipv4.conf.eth3.tag = 0
net.ipv4.conf.lo.accept_local = 0
net.ipv4.conf.lo.accept_redirects = 1
net.ipv4.conf.lo.accept_source_route = 0
net.ipv4.conf.lo.arp_accept = 0
net.ipv4.conf.lo.arp_announce = 0
net.ipv4.conf.lo.arp_evict_nocarrier = 1
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.arp_ignore = 0
net.ipv4.conf.lo.arp_notify = 0
net.ipv4.conf.lo.bc_forwarding = 0
net.ipv4.conf.lo.bootp_relay = 0
net.ipv4.conf.lo.disable_policy = 1
net.ipv4.conf.lo.disable_xfrm = 1
net.ipv4.conf.lo.drop_gratuitous_arp = 0
net.ipv4.conf.lo.drop_unicast_in_l2_multicast = 0
net.ipv4.conf.lo.force_igmp_version = 0
net.ipv4.conf.lo.forwarding = 1
net.ipv4.conf.lo.igmpv2_unsolicited_report_interval = 10000
net.ipv4.conf.lo.igmpv3_unsolicited_report_interval = 1000
net.ipv4.conf.lo.ignore_routes_with_linkdown = 0
net.ipv4.conf.lo.log_martians = 0
net.ipv4.conf.lo.mc_forwarding = 0
net.ipv4.conf.lo.medium_id = 0
net.ipv4.conf.lo.promote_secondaries = 1
net.ipv4.conf.lo.proxy_arp = 0
net.ipv4.conf.lo.proxy_arp_pvlan = 0
net.ipv4.conf.lo.route_localnet = 0
net.ipv4.conf.lo.rp_filter = 2
net.ipv4.conf.lo.secure_redirects = 1
net.ipv4.conf.lo.send_redirects = 1
net.ipv4.conf.lo.shared_media = 1
net.ipv4.conf.lo.src_valid_mark = 0
net.ipv4.conf.lo.tag = 0
net.ipv4.fib_multipath_hash_fields = 7
net.ipv4.fib_multipath_hash_policy = 0
net.ipv4.fib_multipath_use_neigh = 0
net.ipv4.fib_notify_on_flag_change = 0
net.ipv4.fib_sync_mem = 524288
net.ipv4.fwmark_reflect = 0
net.ipv4.icmp_echo_enable_probe = 0
net.ipv4.icmp_echo_ignore_all = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_errors_use_inbound_ifaddr = 0
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.icmp_msgs_burst = 50
net.ipv4.icmp_msgs_per_sec = 1000
net.ipv4.icmp_ratelimit = 1000
net.ipv4.icmp_ratemask = 6168
net.ipv4.igmp_link_local_mcast_reports = 1
net.ipv4.igmp_max_memberships = 20
net.ipv4.igmp_max_msf = 10
net.ipv4.igmp_qrv = 2
net.ipv4.inet_peer_maxttl = 600
net.ipv4.inet_peer_minttl = 120
net.ipv4.inet_peer_threshold = 65664
net.ipv4.ip_autobind_reuse = 0
net.ipv4.ip_default_ttl = 64
net.ipv4.ip_dynaddr = 0
net.ipv4.ip_early_demux = 1
net.ipv4.ip_forward = 1
net.ipv4.ip_forward_update_priority = 1
net.ipv4.ip_forward_use_pmtu = 0
net.ipv4.ip_local_port_range = 32768    60999
net.ipv4.ip_local_reserved_ports =
net.ipv4.ip_no_pmtu_disc = 0
net.ipv4.ip_nonlocal_bind = 0
net.ipv4.ip_unprivileged_port_start = 1024
net.ipv4.ipfrag_high_thresh = 4194304
net.ipv4.ipfrag_low_thresh = 3145728
net.ipv4.ipfrag_max_dist = 64
net.ipv4.ipfrag_secret_interval = 0
net.ipv4.ipfrag_time = 30
net.ipv4.neigh.default.anycast_delay = 100
net.ipv4.neigh.default.app_solicit = 0
net.ipv4.neigh.default.base_reachable_time_ms = 30000
net.ipv4.neigh.default.delay_first_probe_time = 5
net.ipv4.neigh.default.gc_interval = 30
net.ipv4.neigh.default.gc_stale_time = 60
net.ipv4.neigh.default.gc_thresh1 = 128
net.ipv4.neigh.default.gc_thresh2 = 512
net.ipv4.neigh.default.gc_thresh3 = 1024
net.ipv4.neigh.default.interval_probe_time_ms = 5000
net.ipv4.neigh.default.locktime = 100
net.ipv4.neigh.default.mcast_resolicit = 0
net.ipv4.neigh.default.mcast_solicit = 3
net.ipv4.neigh.default.proxy_delay = 80
net.ipv4.neigh.default.proxy_qlen = 64
net.ipv4.neigh.default.retrans_time_ms = 1000
net.ipv4.neigh.default.ucast_solicit = 3
net.ipv4.neigh.default.unres_qlen = 101
net.ipv4.neigh.default.unres_qlen_bytes = 212992
net.ipv4.neigh.docker0.anycast_delay = 100
net.ipv4.neigh.docker0.app_solicit = 0
net.ipv4.neigh.docker0.base_reachable_time_ms = 30000
net.ipv4.neigh.docker0.delay_first_probe_time = 5
net.ipv4.neigh.docker0.gc_stale_time = 60
net.ipv4.neigh.docker0.interval_probe_time_ms = 5000
net.ipv4.neigh.docker0.locktime = 100
net.ipv4.neigh.docker0.mcast_resolicit = 0
net.ipv4.neigh.docker0.mcast_solicit = 3
net.ipv4.neigh.docker0.proxy_delay = 80
net.ipv4.neigh.docker0.proxy_qlen = 64
net.ipv4.neigh.docker0.retrans_time_ms = 1000
net.ipv4.neigh.docker0.ucast_solicit = 3
net.ipv4.neigh.docker0.unres_qlen = 101
net.ipv4.neigh.docker0.unres_qlen_bytes = 212992
net.ipv4.neigh.eth0.anycast_delay = 100
net.ipv4.neigh.eth0.app_solicit = 0
net.ipv4.neigh.eth0.base_reachable_time_ms = 30000
net.ipv4.neigh.eth0.delay_first_probe_time = 5
net.ipv4.neigh.eth0.gc_stale_time = 60
net.ipv4.neigh.eth0.interval_probe_time_ms = 5000
net.ipv4.neigh.eth0.locktime = 100
net.ipv4.neigh.eth0.mcast_resolicit = 0
net.ipv4.neigh.eth0.mcast_solicit = 3
net.ipv4.neigh.eth0.proxy_delay = 80
net.ipv4.neigh.eth0.proxy_qlen = 64
net.ipv4.neigh.eth0.retrans_time_ms = 1000
net.ipv4.neigh.eth0.ucast_solicit = 3
net.ipv4.neigh.eth0.unres_qlen = 101
net.ipv4.neigh.eth0.unres_qlen_bytes = 212992
net.ipv4.neigh.eth1.anycast_delay = 100
net.ipv4.neigh.eth1.app_solicit = 0
net.ipv4.neigh.eth1.base_reachable_time_ms = 30000
net.ipv4.neigh.eth1.delay_first_probe_time = 5
net.ipv4.neigh.eth1.gc_stale_time = 60
net.ipv4.neigh.eth1.interval_probe_time_ms = 5000
net.ipv4.neigh.eth1.locktime = 100
net.ipv4.neigh.eth1.mcast_resolicit = 0
net.ipv4.neigh.eth1.mcast_solicit = 3
net.ipv4.neigh.eth1.proxy_delay = 80
net.ipv4.neigh.eth1.proxy_qlen = 64
net.ipv4.neigh.eth1.retrans_time_ms = 1000
net.ipv4.neigh.eth1.ucast_solicit = 3
net.ipv4.neigh.eth1.unres_qlen = 101
net.ipv4.neigh.eth1.unres_qlen_bytes = 212992
net.ipv4.neigh.eth2.anycast_delay = 100
net.ipv4.neigh.eth2.app_solicit = 0
net.ipv4.neigh.eth2.base_reachable_time_ms = 30000
net.ipv4.neigh.eth2.delay_first_probe_time = 5
net.ipv4.neigh.eth2.gc_stale_time = 60
net.ipv4.neigh.eth2.interval_probe_time_ms = 5000
net.ipv4.neigh.eth2.locktime = 100
net.ipv4.neigh.eth2.mcast_resolicit = 0
net.ipv4.neigh.eth2.mcast_solicit = 3
net.ipv4.neigh.eth2.proxy_delay = 80
net.ipv4.neigh.eth2.proxy_qlen = 64
net.ipv4.neigh.eth2.retrans_time_ms = 1000
net.ipv4.neigh.eth2.ucast_solicit = 3
net.ipv4.neigh.eth2.unres_qlen = 101
net.ipv4.neigh.eth2.unres_qlen_bytes = 212992
net.ipv4.neigh.eth3.anycast_delay = 100
net.ipv4.neigh.eth3.app_solicit = 0
net.ipv4.neigh.eth3.base_reachable_time_ms = 30000
net.ipv4.neigh.eth3.delay_first_probe_time = 5
net.ipv4.neigh.eth3.gc_stale_time = 60
net.ipv4.neigh.eth3.interval_probe_time_ms = 5000
net.ipv4.neigh.eth3.locktime = 100
net.ipv4.neigh.eth3.mcast_resolicit = 0
net.ipv4.neigh.eth3.mcast_solicit = 3
net.ipv4.neigh.eth3.proxy_delay = 80
net.ipv4.neigh.eth3.proxy_qlen = 64
net.ipv4.neigh.eth3.retrans_time_ms = 1000
net.ipv4.neigh.eth3.ucast_solicit = 3
net.ipv4.neigh.eth3.unres_qlen = 101
net.ipv4.neigh.eth3.unres_qlen_bytes = 212992
net.ipv4.neigh.lo.anycast_delay = 100
net.ipv4.neigh.lo.app_solicit = 0
net.ipv4.neigh.lo.base_reachable_time_ms = 30000
net.ipv4.neigh.lo.delay_first_probe_time = 5
net.ipv4.neigh.lo.gc_stale_time = 60
net.ipv4.neigh.lo.interval_probe_time_ms = 5000
net.ipv4.neigh.lo.locktime = 100
net.ipv4.neigh.lo.mcast_resolicit = 0
net.ipv4.neigh.lo.mcast_solicit = 3
net.ipv4.neigh.lo.proxy_delay = 80
net.ipv4.neigh.lo.proxy_qlen = 64
net.ipv4.neigh.lo.retrans_time_ms = 1000
net.ipv4.neigh.lo.ucast_solicit = 3
net.ipv4.neigh.lo.unres_qlen = 101
net.ipv4.neigh.lo.unres_qlen_bytes = 212992
net.ipv4.nexthop_compat_mode = 1
net.ipv4.ping_group_range = 0   2147483647
net.ipv4.raw_l3mdev_accept = 1
net.ipv4.route.error_burst = 1250
net.ipv4.route.error_cost = 250
net.ipv4.route.gc_elasticity = 8
net.ipv4.route.gc_interval = 60
net.ipv4.route.gc_min_interval = 0
net.ipv4.route.gc_min_interval_ms = 500
net.ipv4.route.gc_thresh = -1
net.ipv4.route.gc_timeout = 300
net.ipv4.route.max_size = 2147483647
net.ipv4.route.min_adv_mss = 256
net.ipv4.route.min_pmtu = 552
net.ipv4.route.mtu_expires = 600
net.ipv4.route.redirect_load = 5
net.ipv4.route.redirect_number = 9
net.ipv4.route.redirect_silence = 5120
net.ipv4.tcp_abort_on_overflow = 0
net.ipv4.tcp_adv_win_scale = 1
net.ipv4.tcp_allowed_congestion_control = reno cubic
net.ipv4.tcp_app_win = 31
net.ipv4.tcp_autocorking = 1
net.ipv4.tcp_available_congestion_control = reno bic cubic westwood vegas
net.ipv4.tcp_available_ulp = tls
net.ipv4.tcp_base_mss = 1024
net.ipv4.tcp_challenge_ack_limit = 2147483647
net.ipv4.tcp_child_ehash_entries = 0
net.ipv4.tcp_comp_sack_delay_ns = 1000000
net.ipv4.tcp_comp_sack_nr = 44
net.ipv4.tcp_comp_sack_slack_ns = 100000
net.ipv4.tcp_congestion_control = cubic
net.ipv4.tcp_dsack = 1
net.ipv4.tcp_early_demux = 1
net.ipv4.tcp_early_retrans = 3
net.ipv4.tcp_ecn = 2
net.ipv4.tcp_ecn_fallback = 1
net.ipv4.tcp_ehash_entries = 262144
net.ipv4.tcp_fack = 0
net.ipv4.tcp_fastopen = 1
net.ipv4.tcp_fastopen_blackhole_timeout_sec = 0
net.ipv4.tcp_fastopen_key = c8240665-597927c0-7915b17e-b650917f
net.ipv4.tcp_fin_timeout = 60
net.ipv4.tcp_frto = 2
net.ipv4.tcp_fwmark_accept = 0
net.ipv4.tcp_invalid_ratelimit = 500
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_l3mdev_accept = 0
net.ipv4.tcp_limit_output_bytes = 1048576
net.ipv4.tcp_low_latency = 0
net.ipv4.tcp_max_orphans = 131072
net.ipv4.tcp_max_reordering = 300
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_max_tw_buckets = 131072
net.ipv4.tcp_mem = 226611       302148  453222
net.ipv4.tcp_migrate_req = 0
net.ipv4.tcp_min_rtt_wlen = 300
net.ipv4.tcp_min_snd_mss = 48
net.ipv4.tcp_min_tso_segs = 2
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_mtu_probe_floor = 48
net.ipv4.tcp_mtu_probing = 0
net.ipv4.tcp_no_metrics_save = 0
net.ipv4.tcp_no_ssthresh_metrics_save = 1
net.ipv4.tcp_notsent_lowat = 4294967295
net.ipv4.tcp_orphan_retries = 0
net.ipv4.tcp_pacing_ca_ratio = 120
net.ipv4.tcp_pacing_ss_ratio = 200
net.ipv4.tcp_plb_cong_thresh = 128
net.ipv4.tcp_plb_enabled = 0
net.ipv4.tcp_plb_idle_rehash_rounds = 3
net.ipv4.tcp_plb_rehash_rounds = 12
net.ipv4.tcp_plb_suspend_rto_sec = 60
net.ipv4.tcp_probe_interval = 600
net.ipv4.tcp_probe_threshold = 8
net.ipv4.tcp_recovery = 1
net.ipv4.tcp_reflect_tos = 0
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_retrans_collapse = 1
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_rfc1337 = 0
net.ipv4.tcp_rmem = 4096        131072  6291456
net.ipv4.tcp_sack = 1
net.ipv4.tcp_slow_start_after_idle = 1
net.ipv4.tcp_stdurg = 0
net.ipv4.tcp_syn_linear_timeouts = 4
net.ipv4.tcp_syn_retries = 6
net.ipv4.tcp_synack_retries = 5
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_thin_linear_timeouts = 0
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tso_rtt_log = 9
net.ipv4.tcp_tso_win_divisor = 3
net.ipv4.tcp_tw_reuse = 2
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_wmem = 4096        16384   4194304
net.ipv4.tcp_workaround_signed_windows = 0
net.ipv4.udp_child_hash_entries = 0
net.ipv4.udp_early_demux = 1
net.ipv4.udp_hash_entries = 16384
net.ipv4.udp_l3mdev_accept = 0
net.ipv4.udp_mem = 453222       604296  906444
net.ipv4.udp_rmem_min = 4096
net.ipv4.udp_wmem_min = 4096
net.ipv4.xfrm4_gc_thresh = 32768
net.ipv6.anycast_src_echo_reply = 0
net.ipv6.auto_flowlabels = 1
net.ipv6.bindv6only = 0
net.ipv6.conf.all.accept_dad = 0
net.ipv6.conf.all.accept_ra = 1
net.ipv6.conf.all.accept_ra_defrtr = 1
net.ipv6.conf.all.accept_ra_from_local = 0
net.ipv6.conf.all.accept_ra_min_hop_limit = 1
net.ipv6.conf.all.accept_ra_mtu = 1
net.ipv6.conf.all.accept_ra_pinfo = 1
net.ipv6.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_source_route = 0
net.ipv6.conf.all.accept_untracked_na = 0
net.ipv6.conf.all.addr_gen_mode = 0
net.ipv6.conf.all.autoconf = 1
net.ipv6.conf.all.dad_transmits = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.all.disable_policy = 0
net.ipv6.conf.all.drop_unicast_in_l2_multicast = 0
net.ipv6.conf.all.drop_unsolicited_na = 0
net.ipv6.conf.all.enhanced_dad = 1
net.ipv6.conf.all.force_mld_version = 0
net.ipv6.conf.all.force_tllao = 0
net.ipv6.conf.all.forwarding = 0
net.ipv6.conf.all.hop_limit = 64
net.ipv6.conf.all.ignore_routes_with_linkdown = 0
net.ipv6.conf.all.ioam6_enabled = 0
net.ipv6.conf.all.ioam6_id = 65535
net.ipv6.conf.all.ioam6_id_wide = 4294967295
net.ipv6.conf.all.keep_addr_on_down = 0
net.ipv6.conf.all.max_addresses = 16
net.ipv6.conf.all.max_desync_factor = 600
net.ipv6.conf.all.mc_forwarding = 0
net.ipv6.conf.all.mldv1_unsolicited_report_interval = 10000
net.ipv6.conf.all.mldv2_unsolicited_report_interval = 1000
net.ipv6.conf.all.mtu = 1280
net.ipv6.conf.all.ndisc_evict_nocarrier = 1
net.ipv6.conf.all.ndisc_notify = 0
net.ipv6.conf.all.ndisc_tclass = 0
net.ipv6.conf.all.proxy_ndp = 0
net.ipv6.conf.all.ra_defrtr_metric = 1024
net.ipv6.conf.all.regen_max_retry = 3
net.ipv6.conf.all.router_solicitation_delay = 1
net.ipv6.conf.all.router_solicitation_interval = 4
net.ipv6.conf.all.router_solicitation_max_interval = 3600
net.ipv6.conf.all.router_solicitations = -1
net.ipv6.conf.all.rpl_seg_enabled = 0
net.ipv6.conf.all.seg6_enabled = 0
net.ipv6.conf.all.suppress_frag_ndisc = 1
net.ipv6.conf.all.temp_prefered_lft = 86400
net.ipv6.conf.all.temp_valid_lft = 604800
net.ipv6.conf.all.use_oif_addrs_only = 0
net.ipv6.conf.all.use_tempaddr = 0
net.ipv6.conf.default.accept_dad = 1
net.ipv6.conf.default.accept_ra = 1
net.ipv6.conf.default.accept_ra_defrtr = 1
net.ipv6.conf.default.accept_ra_from_local = 0
net.ipv6.conf.default.accept_ra_min_hop_limit = 1
net.ipv6.conf.default.accept_ra_mtu = 1
net.ipv6.conf.default.accept_ra_pinfo = 1
net.ipv6.conf.default.accept_redirects = 0
net.ipv6.conf.default.accept_source_route = 0
net.ipv6.conf.default.accept_untracked_na = 0
net.ipv6.conf.default.addr_gen_mode = 0
net.ipv6.conf.default.autoconf = 1
net.ipv6.conf.default.dad_transmits = 1
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.default.disable_policy = 0
net.ipv6.conf.default.drop_unicast_in_l2_multicast = 0
net.ipv6.conf.default.drop_unsolicited_na = 0
net.ipv6.conf.default.enhanced_dad = 1
net.ipv6.conf.default.force_mld_version = 0
net.ipv6.conf.default.force_tllao = 0
net.ipv6.conf.default.forwarding = 0
net.ipv6.conf.default.hop_limit = 64
net.ipv6.conf.default.ignore_routes_with_linkdown = 0
net.ipv6.conf.default.ioam6_enabled = 0
net.ipv6.conf.default.ioam6_id = 65535
net.ipv6.conf.default.ioam6_id_wide = 4294967295
net.ipv6.conf.default.keep_addr_on_down = 0
net.ipv6.conf.default.max_addresses = 16
net.ipv6.conf.default.max_desync_factor = 600
net.ipv6.conf.default.mc_forwarding = 0
net.ipv6.conf.default.mldv1_unsolicited_report_interval = 10000
net.ipv6.conf.default.mldv2_unsolicited_report_interval = 1000
net.ipv6.conf.default.mtu = 1280
net.ipv6.conf.default.ndisc_evict_nocarrier = 1
net.ipv6.conf.default.ndisc_notify = 0
net.ipv6.conf.default.ndisc_tclass = 0
net.ipv6.conf.default.proxy_ndp = 0
net.ipv6.conf.default.ra_defrtr_metric = 1024
net.ipv6.conf.default.regen_max_retry = 3
net.ipv6.conf.default.router_solicitation_delay = 1
net.ipv6.conf.default.router_solicitation_interval = 4
net.ipv6.conf.default.router_solicitation_max_interval = 3600
net.ipv6.conf.default.router_solicitations = -1
net.ipv6.conf.default.rpl_seg_enabled = 0
net.ipv6.conf.default.seg6_enabled = 0
net.ipv6.conf.default.suppress_frag_ndisc = 1
net.ipv6.conf.default.temp_prefered_lft = 86400
net.ipv6.conf.default.temp_valid_lft = 604800
net.ipv6.conf.default.use_oif_addrs_only = 0
net.ipv6.conf.default.use_tempaddr = 0
net.ipv6.conf.docker0.accept_dad = 1
net.ipv6.conf.docker0.accept_ra = 0
net.ipv6.conf.docker0.accept_ra_defrtr = 1
net.ipv6.conf.docker0.accept_ra_from_local = 0
net.ipv6.conf.docker0.accept_ra_min_hop_limit = 1
net.ipv6.conf.docker0.accept_ra_mtu = 1
net.ipv6.conf.docker0.accept_ra_pinfo = 1
net.ipv6.conf.docker0.accept_redirects = 0
net.ipv6.conf.docker0.accept_source_route = 0
net.ipv6.conf.docker0.accept_untracked_na = 0
net.ipv6.conf.docker0.addr_gen_mode = 0
net.ipv6.conf.docker0.autoconf = 1
net.ipv6.conf.docker0.dad_transmits = 1
net.ipv6.conf.docker0.disable_ipv6 = 0
net.ipv6.conf.docker0.disable_policy = 0
net.ipv6.conf.docker0.drop_unicast_in_l2_multicast = 0
net.ipv6.conf.docker0.drop_unsolicited_na = 0
net.ipv6.conf.docker0.enhanced_dad = 1
net.ipv6.conf.docker0.force_mld_version = 0
net.ipv6.conf.docker0.force_tllao = 0
net.ipv6.conf.docker0.forwarding = 0
net.ipv6.conf.docker0.hop_limit = 64
net.ipv6.conf.docker0.ignore_routes_with_linkdown = 0
net.ipv6.conf.docker0.ioam6_enabled = 0
net.ipv6.conf.docker0.ioam6_id = 65535
net.ipv6.conf.docker0.ioam6_id_wide = 4294967295
net.ipv6.conf.docker0.keep_addr_on_down = 0
net.ipv6.conf.docker0.max_addresses = 16
net.ipv6.conf.docker0.max_desync_factor = 600
net.ipv6.conf.docker0.mc_forwarding = 0
net.ipv6.conf.docker0.mldv1_unsolicited_report_interval = 10000
net.ipv6.conf.docker0.mldv2_unsolicited_report_interval = 1000
net.ipv6.conf.docker0.mtu = 1500
net.ipv6.conf.docker0.ndisc_evict_nocarrier = 1
net.ipv6.conf.docker0.ndisc_notify = 0
net.ipv6.conf.docker0.ndisc_tclass = 0
net.ipv6.conf.docker0.proxy_ndp = 0
net.ipv6.conf.docker0.ra_defrtr_metric = 1024
net.ipv6.conf.docker0.regen_max_retry = 3
net.ipv6.conf.docker0.router_solicitation_delay = 1
net.ipv6.conf.docker0.router_solicitation_interval = 4
net.ipv6.conf.docker0.router_solicitation_max_interval = 3600
net.ipv6.conf.docker0.router_solicitations = -1
net.ipv6.conf.docker0.rpl_seg_enabled = 0
net.ipv6.conf.docker0.seg6_enabled = 0
net.ipv6.conf.docker0.suppress_frag_ndisc = 1
net.ipv6.conf.docker0.temp_prefered_lft = 86400
net.ipv6.conf.docker0.temp_valid_lft = 604800
net.ipv6.conf.docker0.use_oif_addrs_only = 0
net.ipv6.conf.docker0.use_tempaddr = 0
net.ipv6.conf.eth0.accept_dad = 1
net.ipv6.conf.eth0.accept_ra = 0
net.ipv6.conf.eth0.accept_ra_defrtr = 1
net.ipv6.conf.eth0.accept_ra_from_local = 0
net.ipv6.conf.eth0.accept_ra_min_hop_limit = 1
net.ipv6.conf.eth0.accept_ra_mtu = 1
net.ipv6.conf.eth0.accept_ra_pinfo = 1
net.ipv6.conf.eth0.accept_redirects = 1
net.ipv6.conf.eth0.accept_source_route = 0
net.ipv6.conf.eth0.accept_untracked_na = 0
net.ipv6.conf.eth0.addr_gen_mode = 1
net.ipv6.conf.eth0.autoconf = 1
net.ipv6.conf.eth0.dad_transmits = 1
net.ipv6.conf.eth0.disable_ipv6 = 0
net.ipv6.conf.eth0.disable_policy = 0
net.ipv6.conf.eth0.drop_unicast_in_l2_multicast = 0
net.ipv6.conf.eth0.drop_unsolicited_na = 0
net.ipv6.conf.eth0.enhanced_dad = 1
net.ipv6.conf.eth0.force_mld_version = 0
net.ipv6.conf.eth0.force_tllao = 0
net.ipv6.conf.eth0.forwarding = 0
net.ipv6.conf.eth0.hop_limit = 64
net.ipv6.conf.eth0.ignore_routes_with_linkdown = 0
net.ipv6.conf.eth0.ioam6_enabled = 0
net.ipv6.conf.eth0.ioam6_id = 65535
net.ipv6.conf.eth0.ioam6_id_wide = 4294967295
net.ipv6.conf.eth0.keep_addr_on_down = 0
net.ipv6.conf.eth0.max_addresses = 16
net.ipv6.conf.eth0.max_desync_factor = 600
net.ipv6.conf.eth0.mc_forwarding = 0
net.ipv6.conf.eth0.mldv1_unsolicited_report_interval = 10000
net.ipv6.conf.eth0.mldv2_unsolicited_report_interval = 1000
net.ipv6.conf.eth0.mtu = 1500
net.ipv6.conf.eth0.ndisc_evict_nocarrier = 1
net.ipv6.conf.eth0.ndisc_notify = 0
net.ipv6.conf.eth0.ndisc_tclass = 0
net.ipv6.conf.eth0.proxy_ndp = 0
net.ipv6.conf.eth0.ra_defrtr_metric = 1024
net.ipv6.conf.eth0.regen_max_retry = 3
net.ipv6.conf.eth0.router_solicitation_delay = 1
net.ipv6.conf.eth0.router_solicitation_interval = 4
net.ipv6.conf.eth0.router_solicitation_max_interval = 3600
net.ipv6.conf.eth0.router_solicitations = -1
net.ipv6.conf.eth0.rpl_seg_enabled = 0
net.ipv6.conf.eth0.seg6_enabled = 0
net.ipv6.conf.eth0.suppress_frag_ndisc = 1
net.ipv6.conf.eth0.temp_prefered_lft = 86400
net.ipv6.conf.eth0.temp_valid_lft = 604800
net.ipv6.conf.eth0.use_oif_addrs_only = 0
net.ipv6.conf.eth0.use_tempaddr = 0
net.ipv6.conf.eth1.accept_dad = 1
net.ipv6.conf.eth1.accept_ra = 1
net.ipv6.conf.eth1.accept_ra_defrtr = 1
net.ipv6.conf.eth1.accept_ra_from_local = 0
net.ipv6.conf.eth1.accept_ra_min_hop_limit = 1
net.ipv6.conf.eth1.accept_ra_mtu = 1
net.ipv6.conf.eth1.accept_ra_pinfo = 1
net.ipv6.conf.eth1.accept_redirects = 1
net.ipv6.conf.eth1.accept_source_route = 0
net.ipv6.conf.eth1.accept_untracked_na = 0
net.ipv6.conf.eth1.addr_gen_mode = 0
net.ipv6.conf.eth1.autoconf = 1
net.ipv6.conf.eth1.dad_transmits = 1
net.ipv6.conf.eth1.disable_ipv6 = 0
net.ipv6.conf.eth1.disable_policy = 0
net.ipv6.conf.eth1.drop_unicast_in_l2_multicast = 0
net.ipv6.conf.eth1.drop_unsolicited_na = 0
net.ipv6.conf.eth1.enhanced_dad = 1
net.ipv6.conf.eth1.force_mld_version = 0
net.ipv6.conf.eth1.force_tllao = 0
net.ipv6.conf.eth1.forwarding = 0
net.ipv6.conf.eth1.hop_limit = 64
net.ipv6.conf.eth1.ignore_routes_with_linkdown = 0
net.ipv6.conf.eth1.ioam6_enabled = 0
net.ipv6.conf.eth1.ioam6_id = 65535
net.ipv6.conf.eth1.ioam6_id_wide = 4294967295
net.ipv6.conf.eth1.keep_addr_on_down = 0
net.ipv6.conf.eth1.max_addresses = 16
net.ipv6.conf.eth1.max_desync_factor = 600
net.ipv6.conf.eth1.mc_forwarding = 0
net.ipv6.conf.eth1.mldv1_unsolicited_report_interval = 10000
net.ipv6.conf.eth1.mldv2_unsolicited_report_interval = 1000
net.ipv6.conf.eth1.mtu = 1500
net.ipv6.conf.eth1.ndisc_evict_nocarrier = 1
net.ipv6.conf.eth1.ndisc_notify = 0
net.ipv6.conf.eth1.ndisc_tclass = 0
net.ipv6.conf.eth1.proxy_ndp = 0
net.ipv6.conf.eth1.ra_defrtr_metric = 1024
net.ipv6.conf.eth1.regen_max_retry = 3
net.ipv6.conf.eth1.router_solicitation_delay = 1
net.ipv6.conf.eth1.router_solicitation_interval = 4
net.ipv6.conf.eth1.router_solicitation_max_interval = 3600
net.ipv6.conf.eth1.router_solicitations = -1
net.ipv6.conf.eth1.rpl_seg_enabled = 0
net.ipv6.conf.eth1.seg6_enabled = 0
net.ipv6.conf.eth1.suppress_frag_ndisc = 1
net.ipv6.conf.eth1.temp_prefered_lft = 86400
net.ipv6.conf.eth1.temp_valid_lft = 604800
net.ipv6.conf.eth1.use_oif_addrs_only = 0
net.ipv6.conf.eth1.use_tempaddr = 0
net.ipv6.conf.eth2.accept_dad = 1
net.ipv6.conf.eth2.accept_ra = 1
net.ipv6.conf.eth2.accept_ra_defrtr = 1
net.ipv6.conf.eth2.accept_ra_from_local = 0
net.ipv6.conf.eth2.accept_ra_min_hop_limit = 1
net.ipv6.conf.eth2.accept_ra_mtu = 1
net.ipv6.conf.eth2.accept_ra_pinfo = 1
net.ipv6.conf.eth2.accept_redirects = 0
net.ipv6.conf.eth2.accept_source_route = 0
net.ipv6.conf.eth2.accept_untracked_na = 0
net.ipv6.conf.eth2.addr_gen_mode = 0
net.ipv6.conf.eth2.autoconf = 1
net.ipv6.conf.eth2.dad_transmits = 1
net.ipv6.conf.eth2.disable_ipv6 = 0
net.ipv6.conf.eth2.disable_policy = 0
net.ipv6.conf.eth2.drop_unicast_in_l2_multicast = 0
net.ipv6.conf.eth2.drop_unsolicited_na = 0
net.ipv6.conf.eth2.enhanced_dad = 1
net.ipv6.conf.eth2.force_mld_version = 0
net.ipv6.conf.eth2.force_tllao = 0
net.ipv6.conf.eth2.forwarding = 0
net.ipv6.conf.eth2.hop_limit = 64
net.ipv6.conf.eth2.ignore_routes_with_linkdown = 0
net.ipv6.conf.eth2.ioam6_enabled = 0
net.ipv6.conf.eth2.ioam6_id = 65535
net.ipv6.conf.eth2.ioam6_id_wide = 4294967295
net.ipv6.conf.eth2.keep_addr_on_down = 0
net.ipv6.conf.eth2.max_addresses = 16
net.ipv6.conf.eth2.max_desync_factor = 600
net.ipv6.conf.eth2.mc_forwarding = 0
net.ipv6.conf.eth2.mldv1_unsolicited_report_interval = 10000
net.ipv6.conf.eth2.mldv2_unsolicited_report_interval = 1000
net.ipv6.conf.eth2.mtu = 1500
net.ipv6.conf.eth2.ndisc_evict_nocarrier = 1
net.ipv6.conf.eth2.ndisc_notify = 0
net.ipv6.conf.eth2.ndisc_tclass = 0
net.ipv6.conf.eth2.proxy_ndp = 0
net.ipv6.conf.eth2.ra_defrtr_metric = 1024
net.ipv6.conf.eth2.regen_max_retry = 3
net.ipv6.conf.eth2.router_solicitation_delay = 1
net.ipv6.conf.eth2.router_solicitation_interval = 4
net.ipv6.conf.eth2.router_solicitation_max_interval = 3600
net.ipv6.conf.eth2.router_solicitations = -1
net.ipv6.conf.eth2.rpl_seg_enabled = 0
net.ipv6.conf.eth2.seg6_enabled = 0
net.ipv6.conf.eth2.suppress_frag_ndisc = 1
net.ipv6.conf.eth2.temp_prefered_lft = 86400
net.ipv6.conf.eth2.temp_valid_lft = 604800
net.ipv6.conf.eth2.use_oif_addrs_only = 0
net.ipv6.conf.eth2.use_tempaddr = 0
net.ipv6.conf.eth3.accept_dad = 1
net.ipv6.conf.eth3.accept_ra = 1
net.ipv6.conf.eth3.accept_ra_defrtr = 1
net.ipv6.conf.eth3.accept_ra_from_local = 0
net.ipv6.conf.eth3.accept_ra_min_hop_limit = 1
net.ipv6.conf.eth3.accept_ra_mtu = 1
net.ipv6.conf.eth3.accept_ra_pinfo = 1
net.ipv6.conf.eth3.accept_redirects = 0
net.ipv6.conf.eth3.accept_source_route = 0
net.ipv6.conf.eth3.accept_untracked_na = 0
net.ipv6.conf.eth3.addr_gen_mode = 0
net.ipv6.conf.eth3.autoconf = 1
net.ipv6.conf.eth3.dad_transmits = 1
net.ipv6.conf.eth3.disable_ipv6 = 0
net.ipv6.conf.eth3.disable_policy = 0
net.ipv6.conf.eth3.drop_unicast_in_l2_multicast = 0
net.ipv6.conf.eth3.drop_unsolicited_na = 0
net.ipv6.conf.eth3.enhanced_dad = 1
net.ipv6.conf.eth3.force_mld_version = 0
net.ipv6.conf.eth3.force_tllao = 0
net.ipv6.conf.eth3.forwarding = 0
net.ipv6.conf.eth3.hop_limit = 64
net.ipv6.conf.eth3.ignore_routes_with_linkdown = 0
net.ipv6.conf.eth3.ioam6_enabled = 0
net.ipv6.conf.eth3.ioam6_id = 65535
net.ipv6.conf.eth3.ioam6_id_wide = 4294967295
net.ipv6.conf.eth3.keep_addr_on_down = 0
net.ipv6.conf.eth3.max_addresses = 16
net.ipv6.conf.eth3.max_desync_factor = 600
net.ipv6.conf.eth3.mc_forwarding = 0
net.ipv6.conf.eth3.mldv1_unsolicited_report_interval = 10000
net.ipv6.conf.eth3.mldv2_unsolicited_report_interval = 1000
net.ipv6.conf.eth3.mtu = 1500
net.ipv6.conf.eth3.ndisc_evict_nocarrier = 1
net.ipv6.conf.eth3.ndisc_notify = 0
net.ipv6.conf.eth3.ndisc_tclass = 0
net.ipv6.conf.eth3.proxy_ndp = 0
net.ipv6.conf.eth3.ra_defrtr_metric = 1024
net.ipv6.conf.eth3.regen_max_retry = 3
net.ipv6.conf.eth3.router_solicitation_delay = 1
net.ipv6.conf.eth3.router_solicitation_interval = 4
net.ipv6.conf.eth3.router_solicitation_max_interval = 3600
net.ipv6.conf.eth3.router_solicitations = -1
net.ipv6.conf.eth3.rpl_seg_enabled = 0
net.ipv6.conf.eth3.seg6_enabled = 0
net.ipv6.conf.eth3.suppress_frag_ndisc = 1
net.ipv6.conf.eth3.temp_prefered_lft = 86400
net.ipv6.conf.eth3.temp_valid_lft = 604800
net.ipv6.conf.eth3.use_oif_addrs_only = 0
net.ipv6.conf.eth3.use_tempaddr = 0
net.ipv6.conf.lo.accept_dad = -1
net.ipv6.conf.lo.accept_ra = 1
net.ipv6.conf.lo.accept_ra_defrtr = 1
net.ipv6.conf.lo.accept_ra_from_local = 0
net.ipv6.conf.lo.accept_ra_min_hop_limit = 1
net.ipv6.conf.lo.accept_ra_mtu = 1
net.ipv6.conf.lo.accept_ra_pinfo = 1
net.ipv6.conf.lo.accept_redirects = 1
net.ipv6.conf.lo.accept_source_route = 0
net.ipv6.conf.lo.accept_untracked_na = 0
net.ipv6.conf.lo.addr_gen_mode = 0
net.ipv6.conf.lo.autoconf = 1
net.ipv6.conf.lo.dad_transmits = 1
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.lo.disable_policy = 0
net.ipv6.conf.lo.drop_unicast_in_l2_multicast = 0
net.ipv6.conf.lo.drop_unsolicited_na = 0
net.ipv6.conf.lo.enhanced_dad = 1
net.ipv6.conf.lo.force_mld_version = 0
net.ipv6.conf.lo.force_tllao = 0
net.ipv6.conf.lo.forwarding = 0
net.ipv6.conf.lo.hop_limit = 64
net.ipv6.conf.lo.ignore_routes_with_linkdown = 0
net.ipv6.conf.lo.ioam6_enabled = 0
net.ipv6.conf.lo.ioam6_id = 65535
net.ipv6.conf.lo.ioam6_id_wide = 4294967295
net.ipv6.conf.lo.keep_addr_on_down = 0
net.ipv6.conf.lo.max_addresses = 16
net.ipv6.conf.lo.max_desync_factor = 600
net.ipv6.conf.lo.mc_forwarding = 0
net.ipv6.conf.lo.mldv1_unsolicited_report_interval = 10000
net.ipv6.conf.lo.mldv2_unsolicited_report_interval = 1000
net.ipv6.conf.lo.mtu = 65536
net.ipv6.conf.lo.ndisc_evict_nocarrier = 1
net.ipv6.conf.lo.ndisc_notify = 0
net.ipv6.conf.lo.ndisc_tclass = 0
net.ipv6.conf.lo.proxy_ndp = 0
net.ipv6.conf.lo.ra_defrtr_metric = 1024
net.ipv6.conf.lo.regen_max_retry = 3
net.ipv6.conf.lo.router_solicitation_delay = 1
net.ipv6.conf.lo.router_solicitation_interval = 4
net.ipv6.conf.lo.router_solicitation_max_interval = 3600
net.ipv6.conf.lo.router_solicitations = -1
net.ipv6.conf.lo.rpl_seg_enabled = 0
net.ipv6.conf.lo.seg6_enabled = 0
net.ipv6.conf.lo.suppress_frag_ndisc = 1
net.ipv6.conf.lo.temp_prefered_lft = 86400
net.ipv6.conf.lo.temp_valid_lft = 604800
net.ipv6.conf.lo.use_oif_addrs_only = 0
net.ipv6.conf.lo.use_tempaddr = -1
net.ipv6.fib_multipath_hash_fields = 7
net.ipv6.fib_multipath_hash_policy = 0
net.ipv6.fib_notify_on_flag_change = 0
net.ipv6.flowlabel_consistency = 1
net.ipv6.flowlabel_reflect = 0
net.ipv6.flowlabel_state_ranges = 0
net.ipv6.fwmark_reflect = 0
net.ipv6.icmp.echo_ignore_all = 0
net.ipv6.icmp.echo_ignore_anycast = 0
net.ipv6.icmp.echo_ignore_multicast = 0
net.ipv6.icmp.error_anycast_as_unicast = 0
net.ipv6.icmp.ratelimit = 1000
net.ipv6.icmp.ratemask = 0-1,3-127
net.ipv6.idgen_delay = 1
net.ipv6.idgen_retries = 3
net.ipv6.ioam6_id = 16777215
net.ipv6.ioam6_id_wide = 72057594037927935
net.ipv6.ip6frag_high_thresh = 4194304
net.ipv6.ip6frag_low_thresh = 3145728
net.ipv6.ip6frag_secret_interval = 0
net.ipv6.ip6frag_time = 60
net.ipv6.ip_nonlocal_bind = 0
net.ipv6.max_dst_opts_length = 2147483647
net.ipv6.max_dst_opts_number = 8
net.ipv6.max_hbh_length = 2147483647
net.ipv6.max_hbh_opts_number = 8
net.ipv6.mld_max_msf = 64
net.ipv6.mld_qrv = 2
net.ipv6.neigh.default.anycast_delay = 100
net.ipv6.neigh.default.app_solicit = 0
net.ipv6.neigh.default.base_reachable_time_ms = 30000
net.ipv6.neigh.default.delay_first_probe_time = 5
net.ipv6.neigh.default.gc_interval = 30
net.ipv6.neigh.default.gc_stale_time = 60
net.ipv6.neigh.default.gc_thresh1 = 128
net.ipv6.neigh.default.gc_thresh2 = 512
net.ipv6.neigh.default.gc_thresh3 = 1024
net.ipv6.neigh.default.interval_probe_time_ms = 5000
net.ipv6.neigh.default.locktime = 0
net.ipv6.neigh.default.mcast_resolicit = 0
net.ipv6.neigh.default.mcast_solicit = 3
net.ipv6.neigh.default.proxy_delay = 80
net.ipv6.neigh.default.proxy_qlen = 64
net.ipv6.neigh.default.retrans_time_ms = 1000
net.ipv6.neigh.default.ucast_solicit = 3
net.ipv6.neigh.default.unres_qlen = 101
net.ipv6.neigh.default.unres_qlen_bytes = 212992
net.ipv6.neigh.docker0.anycast_delay = 100
net.ipv6.neigh.docker0.app_solicit = 0
net.ipv6.neigh.docker0.base_reachable_time_ms = 30000
net.ipv6.neigh.docker0.delay_first_probe_time = 5
net.ipv6.neigh.docker0.gc_stale_time = 60
net.ipv6.neigh.docker0.interval_probe_time_ms = 5000
net.ipv6.neigh.docker0.locktime = 0
net.ipv6.neigh.docker0.mcast_resolicit = 0
net.ipv6.neigh.docker0.mcast_solicit = 3
net.ipv6.neigh.docker0.proxy_delay = 80
net.ipv6.neigh.docker0.proxy_qlen = 64
net.ipv6.neigh.docker0.retrans_time_ms = 1000
net.ipv6.neigh.docker0.ucast_solicit = 3
net.ipv6.neigh.docker0.unres_qlen = 101
net.ipv6.neigh.docker0.unres_qlen_bytes = 212992
net.ipv6.neigh.eth0.anycast_delay = 100
net.ipv6.neigh.eth0.app_solicit = 0
net.ipv6.neigh.eth0.base_reachable_time_ms = 30000
net.ipv6.neigh.eth0.delay_first_probe_time = 5
net.ipv6.neigh.eth0.gc_stale_time = 60
net.ipv6.neigh.eth0.interval_probe_time_ms = 5000
net.ipv6.neigh.eth0.locktime = 0
net.ipv6.neigh.eth0.mcast_resolicit = 0
net.ipv6.neigh.eth0.mcast_solicit = 3
net.ipv6.neigh.eth0.proxy_delay = 80
net.ipv6.neigh.eth0.proxy_qlen = 64
net.ipv6.neigh.eth0.retrans_time_ms = 1000
net.ipv6.neigh.eth0.ucast_solicit = 3
net.ipv6.neigh.eth0.unres_qlen = 101
net.ipv6.neigh.eth0.unres_qlen_bytes = 212992
net.ipv6.neigh.eth1.anycast_delay = 100
net.ipv6.neigh.eth1.app_solicit = 0
net.ipv6.neigh.eth1.base_reachable_time_ms = 30000
net.ipv6.neigh.eth1.delay_first_probe_time = 5
net.ipv6.neigh.eth1.gc_stale_time = 60
net.ipv6.neigh.eth1.interval_probe_time_ms = 5000
net.ipv6.neigh.eth1.locktime = 0
net.ipv6.neigh.eth1.mcast_resolicit = 0
net.ipv6.neigh.eth1.mcast_solicit = 3
net.ipv6.neigh.eth1.proxy_delay = 80
net.ipv6.neigh.eth1.proxy_qlen = 64
net.ipv6.neigh.eth1.retrans_time_ms = 1000
net.ipv6.neigh.eth1.ucast_solicit = 3
net.ipv6.neigh.eth1.unres_qlen = 101
net.ipv6.neigh.eth1.unres_qlen_bytes = 212992
net.ipv6.neigh.eth2.anycast_delay = 100
net.ipv6.neigh.eth2.app_solicit = 0
net.ipv6.neigh.eth2.base_reachable_time_ms = 30000
net.ipv6.neigh.eth2.delay_first_probe_time = 5
net.ipv6.neigh.eth2.gc_stale_time = 60
net.ipv6.neigh.eth2.interval_probe_time_ms = 5000
net.ipv6.neigh.eth2.locktime = 0
net.ipv6.neigh.eth2.mcast_resolicit = 0
net.ipv6.neigh.eth2.mcast_solicit = 3
net.ipv6.neigh.eth2.proxy_delay = 80
net.ipv6.neigh.eth2.proxy_qlen = 64
net.ipv6.neigh.eth2.retrans_time_ms = 1000
net.ipv6.neigh.eth2.ucast_solicit = 3
net.ipv6.neigh.eth2.unres_qlen = 101
net.ipv6.neigh.eth2.unres_qlen_bytes = 212992
net.ipv6.neigh.eth3.anycast_delay = 100
net.ipv6.neigh.eth3.app_solicit = 0
net.ipv6.neigh.eth3.base_reachable_time_ms = 30000
net.ipv6.neigh.eth3.delay_first_probe_time = 5
net.ipv6.neigh.eth3.gc_stale_time = 60
net.ipv6.neigh.eth3.interval_probe_time_ms = 5000
net.ipv6.neigh.eth3.locktime = 0
net.ipv6.neigh.eth3.mcast_resolicit = 0
net.ipv6.neigh.eth3.mcast_solicit = 3
net.ipv6.neigh.eth3.proxy_delay = 80
net.ipv6.neigh.eth3.proxy_qlen = 64
net.ipv6.neigh.eth3.retrans_time_ms = 1000
net.ipv6.neigh.eth3.ucast_solicit = 3
net.ipv6.neigh.eth3.unres_qlen = 101
net.ipv6.neigh.eth3.unres_qlen_bytes = 212992
net.ipv6.neigh.lo.anycast_delay = 100
net.ipv6.neigh.lo.app_solicit = 0
net.ipv6.neigh.lo.base_reachable_time_ms = 30000
net.ipv6.neigh.lo.delay_first_probe_time = 5
net.ipv6.neigh.lo.gc_stale_time = 60
net.ipv6.neigh.lo.interval_probe_time_ms = 5000
net.ipv6.neigh.lo.locktime = 0
net.ipv6.neigh.lo.mcast_resolicit = 0
net.ipv6.neigh.lo.mcast_solicit = 3
net.ipv6.neigh.lo.proxy_delay = 80
net.ipv6.neigh.lo.proxy_qlen = 64
net.ipv6.neigh.lo.retrans_time_ms = 1000
net.ipv6.neigh.lo.ucast_solicit = 3
net.ipv6.neigh.lo.unres_qlen = 101
net.ipv6.neigh.lo.unres_qlen_bytes = 212992
net.ipv6.route.gc_elasticity = 9
net.ipv6.route.gc_interval = 30
net.ipv6.route.gc_min_interval = 0
net.ipv6.route.gc_min_interval_ms = 500
net.ipv6.route.gc_thresh = 1024
net.ipv6.route.gc_timeout = 60
net.ipv6.route.max_size = 2147483647
net.ipv6.route.min_adv_mss = 1220
net.ipv6.route.mtu_expires = 600
net.ipv6.route.skip_notify_on_dev_down = 0
net.ipv6.seg6_flowlabel = 0
net.ipv6.xfrm6_gc_thresh = 32768
net.iw_cm.default_backlog = 256
net.netfilter.nf_conntrack_acct = 0
net.netfilter.nf_conntrack_buckets = 262144
net.netfilter.nf_conntrack_checksum = 1
net.netfilter.nf_conntrack_count = 112
net.netfilter.nf_conntrack_dccp_loose = 1
net.netfilter.nf_conntrack_dccp_timeout_closereq = 64
net.netfilter.nf_conntrack_dccp_timeout_closing = 64
net.netfilter.nf_conntrack_dccp_timeout_open = 43200
net.netfilter.nf_conntrack_dccp_timeout_partopen = 480
net.netfilter.nf_conntrack_dccp_timeout_request = 240
net.netfilter.nf_conntrack_dccp_timeout_respond = 480
net.netfilter.nf_conntrack_dccp_timeout_timewait = 240
net.netfilter.nf_conntrack_events = 2
net.netfilter.nf_conntrack_expect_max = 4096
net.netfilter.nf_conntrack_frag6_high_thresh = 4194304
net.netfilter.nf_conntrack_frag6_low_thresh = 3145728
net.netfilter.nf_conntrack_frag6_timeout = 60
net.netfilter.nf_conntrack_generic_timeout = 600
net.netfilter.nf_conntrack_gre_timeout = 30
net.netfilter.nf_conntrack_gre_timeout_stream = 180
net.netfilter.nf_conntrack_icmp_timeout = 30
net.netfilter.nf_conntrack_icmpv6_timeout = 30
net.netfilter.nf_conntrack_log_invalid = 0
net.netfilter.nf_conntrack_max = 262144
net.netfilter.nf_conntrack_sctp_timeout_closed = 10
net.netfilter.nf_conntrack_sctp_timeout_cookie_echoed = 3
net.netfilter.nf_conntrack_sctp_timeout_cookie_wait = 3
net.netfilter.nf_conntrack_sctp_timeout_established = 210
net.netfilter.nf_conntrack_sctp_timeout_heartbeat_sent = 30
net.netfilter.nf_conntrack_sctp_timeout_shutdown_ack_sent = 3
net.netfilter.nf_conntrack_sctp_timeout_shutdown_recd = 0
net.netfilter.nf_conntrack_sctp_timeout_shutdown_sent = 0
net.netfilter.nf_conntrack_tcp_be_liberal = 0
net.netfilter.nf_conntrack_tcp_ignore_invalid_rst = 0
net.netfilter.nf_conntrack_tcp_loose = 1
net.netfilter.nf_conntrack_tcp_max_retrans = 3
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_established = 432000
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 300
net.netfilter.nf_conntrack_timestamp = 0
net.netfilter.nf_conntrack_udp_timeout = 30
net.netfilter.nf_conntrack_udp_timeout_stream = 120
net.netfilter.nf_flowtable_tcp_timeout = 30
net.netfilter.nf_flowtable_udp_timeout = 30
net.netfilter.nf_hooks_lwtunnel = 0
net.netfilter.nf_log.0 = NONE
net.netfilter.nf_log.1 = NONE
net.netfilter.nf_log.10 = NONE
net.netfilter.nf_log.2 = NONE
net.netfilter.nf_log.3 = NONE
net.netfilter.nf_log.4 = NONE
net.netfilter.nf_log.5 = NONE
net.netfilter.nf_log.6 = NONE
net.netfilter.nf_log.7 = NONE
net.netfilter.nf_log.8 = NONE
net.netfilter.nf_log.9 = NONE
net.netfilter.nf_log_all_netns = 0
net.nf_conntrack_max = 262144
net.rdma_ucm.max_backlog = 1024
net.sctp.addip_enable = 0
net.sctp.addip_noauth_enable = 0
net.sctp.addr_scope_policy = 1
net.sctp.association_max_retrans = 10
net.sctp.auth_enable = 0
net.sctp.cookie_hmac_alg = md5
net.sctp.cookie_preserve_enable = 1
net.sctp.default_auto_asconf = 0
net.sctp.ecn_enable = 1
net.sctp.encap_port = 0
net.sctp.hb_interval = 30000
net.sctp.intl_enable = 0
net.sctp.l3mdev_accept = 1
net.sctp.max_autoclose = 8589934
net.sctp.max_burst = 4
net.sctp.max_init_retransmits = 8
net.sctp.path_max_retrans = 5
net.sctp.pf_enable = 1
net.sctp.pf_expose = 0
net.sctp.pf_retrans = 0
net.sctp.plpmtud_probe_interval = 0
net.sctp.prsctp_enable = 1
net.sctp.ps_retrans = 65535
net.sctp.rcvbuf_policy = 0
net.sctp.reconf_enable = 0
net.sctp.rto_alpha_exp_divisor = 3
net.sctp.rto_beta_exp_divisor = 2
net.sctp.rto_initial = 3000
net.sctp.rto_max = 60000
net.sctp.rto_min = 1000
net.sctp.rwnd_update_shift = 4
net.sctp.sack_timeout = 200
net.sctp.sctp_mem = 453222      604296  906444
net.sctp.sctp_rmem = 4096       865500  4194304
net.sctp.sctp_wmem = 4096       16384   4194304
net.sctp.sndbuf_policy = 0
net.sctp.udp_port = 0
net.sctp.valid_cookie_life = 60000
net.unix.max_dgram_qlen = 512

> - The iperf command

Just sending traffic from one geneve interface to the other.
iperf -c <ip of remote geneve> -i1 -t10
iperf -s # on the server side

The geneve tunnel is created using:
ip link add <dev> type geneve id 464 remote <ip>

> - Your network topology

Just back-to-back NICs on two VMs.

Please let me know if you need anything else.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/1] gro: decrease size of CB
  2023-06-29 13:04       ` Gal Pressman
@ 2023-06-30 15:39         ` Richard Gobert
  2023-07-02 14:41           ` Gal Pressman
  0 siblings, 1 reply; 21+ messages in thread
From: Richard Gobert @ 2023-06-30 15:39 UTC (permalink / raw)
  To: Gal Pressman
  Cc: davem, edumazet, kuba, pabeni, aleksander.lobakin, lixiaoyan,
	lucien.xin, alexanderduyck, netdev, linux-kernel

I haven't been able to reproduce it yet; I tried two different setups:
    - 2 VMs running locally on my PC, with a geneve interface on each. Over
      these geneve interfaces, I sent TCP traffic with an iperf command
      similar to yours.
    - A geneve tunnel over veth peers inside two separate namespaces as
      David suggested.

The throughput looked fine and identical with and without my patch in both
setups.

Although I did validate it while working on the patch, a problem may arise
from:
    - Packing CB members into a union, which could've led to some sort of
      corruption (a minimal sketch of this hazard follows below).
    - Calling `gro_pull_from_frag0` on the current skb before inserting it
      into `gro_list`.
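
To make the first point concrete, here is a minimal, self-contained sketch
(plain userspace C, not kernel code; the struct and the values are
hypothetical stand-ins that only mirror the two field groups under
discussion):
```
/*
 * Illustrative only -- plain userspace C, not kernel code.  The field
 * names merely mirror the two groups packed into a union; the struct
 * itself is a hypothetical stand-in.
 */
#include <stdio.h>

struct cb_demo {
	union {
		struct {            /* group used while parsing headers */
			void *frag0;
			unsigned int frag0_len;
		};
		struct {            /* group used while the skb sits on gro_list */
			void *last;
			unsigned long age;
		};
	};
};

int main(void)
{
	/* Arbitrary non-NULL values, as if frag0 had been set up. */
	struct cb_demo cb = { .frag0 = (void *)0x1000, .frag0_len = 64 };

	printf("before: frag0=%p frag0_len=%u\n", cb.frag0, cb.frag0_len);

	/* Switching to the second group reuses the same bytes ... */
	cb.last = NULL;
	cb.age  = 12345;

	/*
	 * ... so a later read of the first group sees whatever the
	 * second group wrote there, not the original values.
	 */
	printf("after:  frag0=%p frag0_len=%u\n", cb.frag0, cb.frag0_len);
	return 0;
}
```
On a typical 64-bit little-endian build the write to `age` lands on the
bytes that previously held `frag0_len`, which is the kind of silent
clobbering the partial revert requested below should help rule in or out.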

Could I ask you to run some tests:
    - Running the script I attached here on one machine and checking whether
      it reproduces the problem. 
    - Reverting part of my commit: 
        - Reverting the change to CB struct while keeping the changes to
          `gro_pull_from_frag0`.
        - Checking whether the regression remains.

Also, could you give me some more details:
    - The VMs' NIC and driver. Are you using Qemu? 
    - iperf results.
    - The exact kernel versions (commit hashes) you are using.
    - Did you run the commands (sysctl/ethtool) on the receiving VM?


Here are the commands I used to set up the namespace-based test:
```
ip netns add ns1

ip link add veth0 type veth peer name veth1
ip link set veth1 netns ns1

ip a add 192.168.1.1/32 dev veth0
ip link set veth0 up
ip r add 192.168.1.0/24 dev veth0

ip netns exec ns1 ip a add 192.168.1.2/32 dev veth1
ip netns exec ns1 ip link set veth1 up
ip netns exec ns1 ip r add 192.168.1.0/24 dev veth1

ip link add name gnv0 type geneve id 1000 remote 192.168.1.2
ip a add 10.0.0.1/32 dev gnv0
ip link set gnv0 up
ip r add 10.0.1.1/32 dev gnv0

ip netns exec ns1 ip link add name gnv0 type geneve id 1000 remote 192.168.1.1
ip netns exec ns1 ip a add 10.0.1.1/32 dev gnv0
ip netns exec ns1 ip link set gnv0 up
ip netns exec ns1 ip r add 10.0.0.1/32 dev gnv0

ethtool -K veth0 generic-receive-offload off
ip netns exec ns1 ethtool -K veth1 generic-receive-offload off

# quick way to enable gro on veth devices
ethtool -K veth0 tcp-segmentation-offload off
ip netns exec ns1 ethtool -K veth1 tcp-segmentation-offload off
```

I'll continue looking into it on Monday. It would be great if someone from
your team could write a test that reproduces this issue.

Thanks.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/1] gro: decrease size of CB
  2023-06-30 15:39         ` Richard Gobert
@ 2023-07-02 14:41           ` Gal Pressman
  2023-07-02 14:46             ` Gal Pressman
  0 siblings, 1 reply; 21+ messages in thread
From: Gal Pressman @ 2023-07-02 14:41 UTC (permalink / raw)
  To: Richard Gobert
  Cc: davem, edumazet, kuba, pabeni, aleksander.lobakin, lixiaoyan,
	lucien.xin, alexanderduyck, netdev, linux-kernel

On 30/06/2023 18:39, Richard Gobert wrote:
> I haven't been able to reproduce it yet, I tried two different setups:
>     - 2 VMs running locally on my PC, and a geneve interface for each. Over
>       these geneve interfaces, I sent tcp traffic with a similar iperf
>       command as yours.
>     - A geneve tunnel over veth peers inside two separate namespaces as
>       David suggested.
> 
> The throughput looked fine and identical with and without my patch in both
> setups.
> 
> Although I did validate it while working on the patch, a problem may arise
> from:
>     - Packing CB members into a union, which could've led to some sort of
>       corruption.
>     - Calling `gro_pull_from_frag0` on the current skb before inserting it
>       into `gro_list`.
> 
> Could I ask you to run some tests:
>     - Running the script I attached here on one machine and checking whether
>       it reproduces the problem. 
>     - Reverting part of my commit: 
>         - Reverting the change to CB struct while keeping the changes to
>           `gro_pull_from_frag0`.
>         - Checking whether the regression remains.
> 
> Also, could you give me some more details:
>     - The VMs' NIC and driver. Are you using Qemu? 
>     - iperf results.
>     - The exact kernel versions (commit hashes) you are using.
>     - Did you run the commands (sysctl/ethtool) on the receiving VM?
> 
> 
> Here are the commands I used for the namespaces test's setup:
> ```
> ip netns add ns1
> 
> ip link add veth0 type veth peer name veth1
> ip link set veth1 netns ns1
> 
> ip a add 192.168.1.1/32 dev veth0
> ip link set veth0 up
> ip r add 192.168.1.0/24 dev veth0
> 
> ip netns exec ns1 ip a add 192.168.1.2/32 dev veth1
> ip netns exec ns1 ip link set veth1 up
> ip netns exec ns1 ip r add 192.168.1.0/24 dev veth1
> 
> ip link add name gnv0 type geneve id 1000 remote 192.168.1.2
> ip a add 10.0.0.1/32 dev gnv0
> ip link set gnv0 up
> ip r add 10.0.1.1/32 dev gnv0
> 
> ip netns exec ns1 ip link add name gnv0 type geneve id 1000 remote 192.168.1.1
> ip netns exec ns1 ip a add 10.0.1.1/32 dev gnv0
> ip netns exec ns1 ip link set gnv0 up
> ip netns exec ns1 ip r add 10.0.0.1/32 dev gnv0
> 
> ethtool -K veth0 generic-receive-offload off
> ip netns exec ns1 ethtool -K veth1 generic-receive-offload off
> 
> # quick way to enable gro on veth devices
> ethtool -K veth0 tcp-segmentation-offload off
> ip netns exec ns1 ethtool -K veth1 tcp-segmentation-offload off
> ```
> 
> I'll continue looking into it on Monday. It would be great if someone from
> your team can write a test that reproduces this issue.
> 
> Thanks.

Hey,

I don't have an answer for all of your questions yet, but it turns out I
left out an important detail: the issue reproduces when outer IPv6 is used.

I'm using ConnectX-6 Dx, with these scripts:

Server:
ip addr add 194.236.5.246/16 dev eth2
ip addr add ::12:236:5:246/96 dev eth2
ip link set dev eth2 up

ip link add p1_g464 type geneve id 464 remote ::12:236:4:245
ip link set dev p1_g464 up
ip addr add 196.236.5.1/16 dev p1_g464

Client:
ip addr add 194.236.4.245/16 dev eth2
ip addr add ::12:236:4:245/96 dev eth2
ip link set dev eth2 up

ip link add p0_g464 type geneve id 464 remote ::12:236:5:246
ip link set dev p0_g464 up
ip addr add 196.236.4.2/16 dev p0_g464

Once everything is set up, running `iperf -s` on the server and
`iperf -c 196.236.5.1 -i1 -t1000` on the client should do the trick.

Unfortunately, I haven't been able to reproduce the same issue with veth
interfaces.

Reverting the napi_gro_cb part indeed resolves the issue.

Thanks for taking a look!

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/1] gro: decrease size of CB
  2023-07-02 14:41           ` Gal Pressman
@ 2023-07-02 14:46             ` Gal Pressman
  2023-07-03 14:23               ` Richard Gobert
  0 siblings, 1 reply; 21+ messages in thread
From: Gal Pressman @ 2023-07-02 14:46 UTC (permalink / raw)
  To: Richard Gobert
  Cc: davem, edumazet, kuba, pabeni, aleksander.lobakin, lixiaoyan,
	lucien.xin, alexanderduyck, netdev, linux-kernel

On 02/07/2023 17:41, Gal Pressman wrote:
> On 30/06/2023 18:39, Richard Gobert wrote:
>> I haven't been able to reproduce it yet, I tried two different setups:
>>     - 2 VMs running locally on my PC, and a geneve interface for each. Over
>>       these geneve interfaces, I sent tcp traffic with a similar iperf
>>       command as yours.
>>     - A geneve tunnel over veth peers inside two separate namespaces as
>>       David suggested.
>>
>> The throughput looked fine and identical with and without my patch in both
>> setups.
>>
>> Although I did validate it while working on the patch, a problem may arise
>> from:
>>     - Packing CB members into a union, which could've led to some sort of
>>       corruption.
>>     - Calling `gro_pull_from_frag0` on the current skb before inserting it
>>       into `gro_list`.
>>
>> Could I ask you to run some tests:
>>     - Running the script I attached here on one machine and checking whether
>>       it reproduces the problem. 
>>     - Reverting part of my commit: 
>>         - Reverting the change to CB struct while keeping the changes to
>>           `gro_pull_from_frag0`.
>>         - Checking whether the regression remains.
>>
>> Also, could you give me some more details:
>>     - The VMs' NIC and driver. Are you using Qemu? 
>>     - iperf results.
>>     - The exact kernel versions (commit hashes) you are using.
>>     - Did you run the commands (sysctl/ethtool) on the receiving VM?
>>
>>
>> Here are the commands I used for the namespaces test's setup:
>> ```
>> ip netns add ns1
>>
>> ip link add veth0 type veth peer name veth1
>> ip link set veth1 netns ns1
>>
>> ip a add 192.168.1.1/32 dev veth0
>> ip link set veth0 up
>> ip r add 192.168.1.0/24 dev veth0
>>
>> ip netns exec ns1 ip a add 192.168.1.2/32 dev veth1
>> ip netns exec ns1 ip link set veth1 up
>> ip netns exec ns1 ip r add 192.168.1.0/24 dev veth1
>>
>> ip link add name gnv0 type geneve id 1000 remote 192.168.1.2
>> ip a add 10.0.0.1/32 dev gnv0
>> ip link set gnv0 up
>> ip r add 10.0.1.1/32 dev gnv0
>>
>> ip netns exec ns1 ip link add name gnv0 type geneve id 1000 remote 192.168.1.1
>> ip netns exec ns1 ip a add 10.0.1.1/32 dev gnv0
>> ip netns exec ns1 ip link set gnv0 up
>> ip netns exec ns1 ip r add 10.0.0.1/32 dev gnv0
>>
>> ethtool -K veth0 generic-receive-offload off
>> ip netns exec ns1 ethtool -K veth1 generic-receive-offload off
>>
>> # quick way to enable gro on veth devices
>> ethtool -K veth0 tcp-segmentation-offload off
>> ip netns exec ns1 ethtool -K veth1 tcp-segmentation-offload off
>> ```
>>
>> I'll continue looking into it on Monday. It would be great if someone from
>> your team can write a test that reproduces this issue.
>>
>> Thanks.
> 
> Hey,
> 
> I don't have an answer for all of your questions yet, but it turns out I
> left out an important detail, the issue reproduces when outer ipv6 is used.
> 
> I'm using ConnectX-6 Dx, with these scripts:
> 
> Server:
> ip addr add 194.236.5.246/16 dev eth2
> ip addr add ::12:236:5:246/96 dev eth2
> ip link set dev eth2 up
> 
> ip link add p1_g464 type geneve id 464 remote ::12:236:4:245
> ip link set dev p1_g464 up
> ip addr add 196.236.5.1/16 dev p1_g464
> 
> Client:
> ip addr add 194.236.4.245/16 dev eth2
> ip addr add ::12:236:4:245/96 dev eth2
> ip link set dev eth2 up
> 
> ip link add p0_g464 type geneve id 464 remote ::12:236:5:246
> ip link set dev p0_g464 up
> ip addr add 196.236.4.2/16 dev p0_g464
> 
> Once everything is set up, iperf -s on the server and
> iperf -c 196.236.5.1 -i1 -t1000
> On the client, should do the work.
> 
> Unfortunately, I haven't been able to reproduce the same issue with veth
> interfaces.
> 
> Reverting the napi_gro_cb part indeed resolves the issue.
> 
> Thanks for taking a look!

BTW, all testing was done after checking out your commit:
7b355b76e2b3 ("gro: decrease size of CB")

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/1] gro: decrease size of CB
  2023-07-02 14:46             ` Gal Pressman
@ 2023-07-03 14:23               ` Richard Gobert
  2023-07-07 12:31                 ` Richard Gobert
  0 siblings, 1 reply; 21+ messages in thread
From: Richard Gobert @ 2023-07-03 14:23 UTC (permalink / raw)
  To: Gal Pressman
  Cc: davem, edumazet, kuba, pabeni, aleksander.lobakin, lixiaoyan,
	lucien.xin, alexanderduyck, netdev, linux-kernel

Thank you for replying.
I will check it out and update once there is something new.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/1] gro: decrease size of CB
  2023-07-03 14:23               ` Richard Gobert
@ 2023-07-07 12:31                 ` Richard Gobert
  2023-07-09  6:55                   ` Gal Pressman
  0 siblings, 1 reply; 21+ messages in thread
From: Richard Gobert @ 2023-07-07 12:31 UTC (permalink / raw)
  To: Gal Pressman
  Cc: davem, edumazet, kuba, pabeni, aleksander.lobakin, lixiaoyan,
	lucien.xin, alexanderduyck, netdev, linux-kernel

I managed to reproduce it and found the bug that explains the problem
you're experiencing.
I submitted a bugfix here: https://lore.kernel.org/netdev/20230707121650.GA17677@debian/
Thanks!

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/1] gro: decrease size of CB
  2023-07-07 12:31                 ` Richard Gobert
@ 2023-07-09  6:55                   ` Gal Pressman
  0 siblings, 0 replies; 21+ messages in thread
From: Gal Pressman @ 2023-07-09  6:55 UTC (permalink / raw)
  To: Richard Gobert
  Cc: davem, edumazet, kuba, pabeni, aleksander.lobakin, lixiaoyan,
	lucien.xin, alexanderduyck, netdev, linux-kernel

On 07/07/2023 15:31, Richard Gobert wrote:
> I managed to reproduce it and found the bug that explains the problem
> you're experiencing.
> I submitted a bugfix here: https://lore.kernel.org/netdev/20230707121650.GA17677@debian/
> Thanks!

Thanks Richard!
Will test it and update.

BTW, did you manage to reproduce the issue with veth?

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/1] gro: decrease size of CB
  2023-06-28 14:19         ` David Ahern
@ 2023-08-23 14:43           ` Gal Pressman
  2023-08-24  3:31             ` David Ahern
  0 siblings, 1 reply; 21+ messages in thread
From: Gal Pressman @ 2023-08-23 14:43 UTC (permalink / raw)
  To: David Ahern, Richard Gobert, davem, edumazet, kuba, pabeni,
	aleksander.lobakin, lixiaoyan, lucien.xin, alexanderduyck,
	netdev, linux-kernel

On 28/06/2023 17:19, David Ahern wrote:
> On 6/28/23 6:42 AM, Gal Pressman wrote:
>> On 27/06/2023 17:21, David Ahern wrote:
>>> On 6/26/23 2:55 AM, Gal Pressman wrote:
>>>> I believe this commit broke gro over udp tunnels.
>>>> I'm running iperf tcp traffic over geneve interfaces and the bandwidth
>>>> is pretty much zero.
>>>>
>>>
>>> Could you add a test script to tools/testing/selftests/net? It will help
>>> catch future regressions.
>>>
>>
>> I'm checking internally, someone from the team might be able to work on
>> this, though I'm not sure that a test that verifies bandwidth makes much
>> sense as a selftest.
>>
> 
> With veth and namespaces I expect up to 25-30G performance levels,
> depending on the test. When something fundamental breaks like this patch
> a drop to < 1G would be a red flag, so there is value to the test.

Circling back to this, I believe such a test already exists:
tools/testing/selftests/net/udpgro_fwd.sh

And it indeed fails before Richard's fix.

I guess all that's left is to actually run these tests :)?

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/1] gro: decrease size of CB
  2023-08-23 14:43           ` Gal Pressman
@ 2023-08-24  3:31             ` David Ahern
  0 siblings, 0 replies; 21+ messages in thread
From: David Ahern @ 2023-08-24  3:31 UTC (permalink / raw)
  To: Gal Pressman, Richard Gobert, davem, edumazet, kuba, pabeni,
	aleksander.lobakin, lixiaoyan, lucien.xin, alexanderduyck,
	netdev, linux-kernel

On 8/23/23 7:43 AM, Gal Pressman wrote:
>> With veth and namespaces I expect up to 25-30G performance levels,
>> depending on the test. When something fundamental breaks like this patch
>> a drop to < 1G would be a red flag, so there is value to the test.
> Circling back to this, I believe such test already exists:
> tools/testing/selftests/net/udpgro_fwd.sh
> 
> And it indeed fails before Richard's fix.
> 
> I guess all that's left is to actually run these tests 😄?

hmmm... if that is the case, the Makefile shows:

TEST_PROGS += udpgro_fwd.sh

so it should be run. I wonder why one of the many bots did not flag it.

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2023-08-24  3:32 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-06-01 16:09 [PATCH v3 0/1] gro: decrease size of CB Richard Gobert
2023-06-01 16:14 ` [PATCH v3 1/1] " Richard Gobert
2023-06-06  7:25   ` Eric Dumazet
2023-06-26  8:55   ` Gal Pressman
2023-06-27 14:21     ` David Ahern
2023-06-28 12:42       ` Gal Pressman
2023-06-28 14:19         ` David Ahern
2023-08-23 14:43           ` Gal Pressman
2023-08-24  3:31             ` David Ahern
2023-06-29 12:36     ` Richard Gobert
2023-06-29 13:04       ` Gal Pressman
2023-06-30 15:39         ` Richard Gobert
2023-07-02 14:41           ` Gal Pressman
2023-07-02 14:46             ` Gal Pressman
2023-07-03 14:23               ` Richard Gobert
2023-07-07 12:31                 ` Richard Gobert
2023-07-09  6:55                   ` Gal Pressman
2023-06-02 14:22 ` [PATCH v3 0/1] " Alexander Lobakin
2023-06-05 13:58   ` Richard Gobert
2023-06-06 13:24     ` Alexander Lobakin
2023-06-06  9:20 ` patchwork-bot+netdevbpf

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).