* Kernel 4.19 network performance - forwarding/routing normal users traffic
@ 2018-10-31 21:57 Paweł Staszewski
From: Paweł Staszewski @ 2018-10-31 21:57 UTC
To: netdev
Hi
So maybe someone will be interested in how the Linux kernel handles normal
traffic (not pktgen :) )
Server HW configuration:
CPU : Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
NIC's: 2x 100G Mellanox ConnectX-4 (connected to x16 pcie 8GT)
Server software:
FRR - as routing daemon
enp175s0f0 (100G) - 16 vlans from upstreams (28 RSS queues bound to the
local NUMA node)
enp175s0f1 (100G) - 343 vlans to clients (28 RSS queues bound to the local
NUMA node)
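For reference, a minimal sketch of how that RSS/IRQ-to-local-NUMA binding can be done (this is not from the original post; Mellanox OFED also ships helper scripts like set_irq_affinity_cpulist.sh that do the same thing). The device names and node1 cpulist match the setup described here:

```shell
# Hedged sketch: pin each NIC IRQ to the CPUs of the NIC-local NUMA node.
NODE_CPUS=$(cat /sys/devices/system/node/node1/cpulist 2>/dev/null)  # 14-27,42-55
for dev in enp175s0f0 enp175s0f1; do
    # Find this device's IRQ numbers in /proc/interrupts and write the
    # cpulist into each IRQ's smp_affinity_list.
    for irq in $(grep "$dev" /proc/interrupts | awk -F: '{print $1}'); do
        echo "$NODE_CPUS" > /proc/irq/$irq/smp_affinity_list 2>/dev/null
    done
done
```

Requires root; irqbalance should be stopped first or it may rewrite the affinities.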
Maximum traffic the server can handle:
Bandwidth
bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
input: /proc/net/dev type: rate
\ iface Rx Tx Total
==============================================================================
enp175s0f1: 28.51 Gb/s 37.24 Gb/s 65.74 Gb/s
enp175s0f0: 38.07 Gb/s 28.44 Gb/s 66.51 Gb/s
------------------------------------------------------------------------------
total: 66.58 Gb/s 65.67 Gb/s 132.25 Gb/s
Packets per second:
bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
input: /proc/net/dev type: rate
- iface Rx Tx Total
==============================================================================
enp175s0f1: 5248589.00 P/s 3486617.75 P/s 8735207.00 P/s
enp175s0f0: 3557944.25 P/s 5232516.00 P/s 8790460.00 P/s
------------------------------------------------------------------------------
total: 8806533.00 P/s 8719134.00 P/s 17525668.00 P/s
After reaching those limits, the NICs on the upstream side (which carries
more RX traffic) start to drop packets.
I just don't understand why the server can't handle more bandwidth
(~40 Gbit/s is the limit where all CPUs are at 100% utilization) while pps
on the RX side keeps increasing.
I was thinking that maybe I had reached some PCIe x16 limit - but x16 at
8 GT/s is about 126 Gbit/s, and when testing with pktgen I can also reach
more bandwidth and pps (roughly 4x more compared to normal internet traffic).
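The 126 Gbit/s figure checks out: PCIe Gen3 runs 8 GT/s per lane with 128b/130b line encoding, so a quick sanity calculation (before TLP/DLLP protocol overhead, which costs a further few percent) gives:

```python
# Theoretical PCIe 3.0 x16 link bandwidth after line encoding.
GT_PER_LANE = 8.0          # PCIe Gen3: 8 GT/s per lane
ENCODING = 128.0 / 130.0   # 128b/130b encoding overhead (~1.5%)
LANES = 16

gbps = GT_PER_LANE * ENCODING * LANES
print(round(gbps, 1))      # ~126.0 Gbit/s, before TLP/DLLP overhead
```

So at ~66 Gbit/s per direction the link itself is nowhere near saturated, which points at a CPU/software bottleneck rather than the bus.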
And wondering if there is something that can be improved here.
Some more information / counters / stats and perf top below:
Perf top flame graph:
https://uploadfiles.io/7zo6u
System configuration(long):
cat /sys/devices/system/node/node1/cpulist
14-27,42-55
cat /sys/class/net/enp175s0f0/device/numa_node
1
cat /sys/class/net/enp175s0f1/device/numa_node
1
ip -s -d link ls dev enp175s0f0
6: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state
UP mode DEFAULT group default qlen 8192
link/ether 0c:c4:7a:d8:5d:1c brd ff:ff:ff:ff:ff:ff promiscuity 0
addrgenmode eui64 numtxqueues 448 numrxqueues 56 gso_max_size 65536
gso_max_segs 65535
RX: bytes packets errors dropped overrun mcast
184142375840858 141347715974 2 2806325 0 85050528
TX: bytes packets errors dropped carrier collsns
99270697277430 172227994003 0 0 0 0
ip -s -d link ls dev enp175s0f1
7: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state
UP mode DEFAULT group default qlen 8192
link/ether 0c:c4:7a:d8:5d:1d brd ff:ff:ff:ff:ff:ff promiscuity 0
addrgenmode eui64 numtxqueues 448 numrxqueues 56 gso_max_size 65536
gso_max_segs 65535
RX: bytes packets errors dropped overrun mcast
99686284170801 173507590134 61 669685 0 100304421
TX: bytes packets errors dropped carrier collsns
184435107970545 142383178304 0 0 0 0
./softnet.sh
cpu total dropped squeezed collision rps flow_limit
0 3961392822 0 1221478 0 0 0
1 3701952251 0 1258234 0 0 0
2 3879522030 0 1584282 0 0 0
3 3731349789 0 1529029 0 0 0
4 1323956701 0 2176371 0 0 0
5 420528963 0 1880146 0 0 0
6 348720322 0 1830142 0 0 0
7 372736328 0 1820891 0 0 0
8 567888751 0 1414763 0 0 0
9 476075775 0 1868150 0 0 0
10 468946725 0 1841428 0 0 0
11 676591958 0 1900160 0 0 0
12 346803472 0 1834600 0 0 0
13 457960872 0 1874529 0 0 0
14 1990279665 0 4699000 0 0 0
15 1211873601 0 4541281 0 0 0
16 1123871928 0 4544712 0 0 0
17 1014957263 0 4152355 0 0 0
18 2603779724 0 4593869 0 0 0
19 2181924054 0 4930618 0 0 0
20 2273502182 0 4894627 0 0 0
21 2232030947 0 4860048 0 0 0
22 2203555394 0 4603830 0 0 0
23 2194756800 0 4921294 0 0 0
24 2347158294 0 4818354 0 0 0
25 2291097883 0 4744469 0 0 0
26 2206945011 0 4836483 0 0 0
27 2318530217 0 4917617 0 0 0
28 512797543 0 1895200 0 0 0
29 597279474 0 1532134 0 0 0
30 475317503 0 1451523 0 0 0
31 499172796 0 1901207 0 0 0
32 493874745 0 1915382 0 0 0
33 296056288 0 1865535 0 0 0
34 3905097041 0 1580822 0 0 0
35 3905112345 0 1536105 0 0 0
36 3900358950 0 1166319 0 0 0
37 3940978093 0 1600219 0 0 0
38 3878632215 0 1180389 0 0 0
39 3814804736 0 1584925 0 0 0
40 4152934337 0 1663660 0 0 0
41 3855273904 0 1552219 0 0 0
42 2319538182 0 4884480 0 0 0
43 2448606991 0 4387456 0 0 0
44 1436136753 0 4485073 0 0 0
45 1200500141 0 4537284 0 0 0
46 1307799923 0 4534156 0 0 0
47 1586575293 0 4272997 0 0 0
48 3852574 0 4162653 0 0 0
49 391449390 0 3935202 0 0 0
50 791388200 0 4290738 0 0 0
51 127107573 0 3907750 0 0 0
52 115622148 0 4012843 0 0 0
53 71098871 0 4200625 0 0 0
54 305121466 0 4365614 0 0 0
55 10914257 0 4369426 0 0 0
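The softnet.sh script itself isn't included in the post; a minimal sketch of what such a script presumably does — decoding the hex columns of /proc/net/softnet_stat, assuming the field layout used by kernel 4.19 (processed, dropped, time_squeeze, five legacy zero fields including the old cpu_collision slot, received_rps, flow_limit_count):

```python
import os

def parse_softnet(text):
    """Parse /proc/net/softnet_stat contents; one line of hex fields per CPU."""
    rows = []
    for cpu, line in enumerate(text.strip().splitlines()):
        f = [int(x, 16) for x in line.split()]
        rows.append({
            "cpu": cpu,
            "total": f[0],       # packets processed
            "dropped": f[1],     # backlog queue drops
            "squeezed": f[2],    # times net_rx_action ran out of budget/time
            "collision": f[8],   # legacy cpu_collision slot (always 0 on 4.19)
            "rps": f[9],         # received_rps IPIs
            "flow_limit": f[10] if len(f) > 10 else 0,
        })
    return rows

if os.path.exists("/proc/net/softnet_stat"):
    with open("/proc/net/softnet_stat") as fh:
        for r in parse_softnet(fh.read()):
            print(r)
```

The consistently high "squeezed" counts above mean NAPI polling is regularly exhausting its budget, consistent with CPU-bound softirq processing.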
PerfTop: 108490 irqs/sec kernel:99.6% exact: 0.0% [4000Hz cycles], (all, 56 CPUs)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
26.78% [kernel] [k] queued_spin_lock_slowpath
9.09% [kernel] [k] mlx5e_skb_from_cqe_linear
4.94% [kernel] [k] mlx5e_sq_xmit
3.63% [kernel] [k] memcpy_erms
3.30% [kernel] [k] fib_table_lookup
3.26% [kernel] [k] build_skb
2.41% [kernel] [k] mlx5e_poll_tx_cq
2.11% [kernel] [k] get_page_from_freelist
1.51% [kernel] [k] vlan_do_receive
1.51% [kernel] [k] _raw_spin_lock
1.43% [kernel] [k] __dev_queue_xmit
1.41% [kernel] [k] dev_gro_receive
1.34% [kernel] [k] mlx5e_poll_rx_cq
1.26% [kernel] [k] tcp_gro_receive
1.21% [kernel] [k] free_one_page
1.13% [kernel] [k] swiotlb_map_page
1.13% [kernel] [k] mlx5e_post_rx_wqes
1.05% [kernel] [k] pfifo_fast_dequeue
1.05% [kernel] [k] mlx5e_handle_rx_cqe
1.03% [kernel] [k] ip_finish_output2
1.02% [kernel] [k] ipt_do_table
0.96% [kernel] [k] inet_gro_receive
0.91% [kernel] [k] mlx5_eq_int
0.88% [kernel] [k] __slab_free.isra.79
0.86% [kernel] [k] __build_skb
0.84% [kernel] [k] page_frag_free
0.76% [kernel] [k] skb_release_data
0.75% [kernel] [k] __netif_receive_skb_core
0.75% [kernel] [k] irq_entries_start
0.71% [kernel] [k] ip_route_input_rcu
0.65% [kernel] [k] vlan_dev_hard_start_xmit
0.56% [kernel] [k] ip_forward
0.56% [kernel] [k] __memcpy
0.52% [kernel] [k] kmem_cache_alloc
0.52% [kernel] [k] kmem_cache_free_bulk
0.49% [kernel] [k] mlx5e_page_release
0.47% [kernel] [k] netif_skb_features
0.47% [kernel] [k] mlx5e_build_rx_skb
0.47% [kernel] [k] dev_hard_start_xmit
0.43% [kernel] [k] __page_pool_put_page
0.43% [kernel] [k] __netif_schedule
0.43% [kernel] [k] mlx5e_xmit
0.41% [kernel] [k] __qdisc_run
0.41% [kernel] [k] validate_xmit_skb.isra.142
0.41% [kernel] [k] swiotlb_unmap_page
0.40% [kernel] [k] inet_lookup_ifaddr_rcu
0.34% [kernel] [k] ip_rcv_core.isra.20.constprop.25
0.34% [kernel] [k] tcp4_gro_receive
0.29% [kernel] [k] _raw_spin_lock_irqsave
0.29% [kernel] [k] napi_consume_skb
0.29% [kernel] [k] skb_gro_receive
0.29% [kernel] [k] ___slab_alloc.isra.80
0.27% [kernel] [k] eth_type_trans
0.26% [kernel] [k] __free_pages_ok
0.26% [kernel] [k] __get_xps_queue_idx
0.24% [kernel] [k] _raw_spin_trylock
0.23% [kernel] [k] __local_bh_enable_ip
0.22% [kernel] [k] pfifo_fast_enqueue
0.21% [kernel] [k] tasklet_action_common.isra.21
0.21% [kernel] [k] sch_direct_xmit
0.21% [kernel] [k] skb_network_protocol
0.21% [kernel] [k] kmem_cache_free
0.20% [kernel] [k] netdev_pick_tx
0.18% [kernel] [k] napi_gro_complete
0.18% [kernel] [k] __sched_text_start
0.18% [kernel] [k] mlx5e_xdp_handle
0.17% [kernel] [k] ip_finish_output
0.16% [kernel] [k] napi_gro_flush
0.16% [kernel] [k] vlan_passthru_hard_header
0.16% [kernel] [k] skb_segment
0.15% [kernel] [k] __alloc_pages_nodemask
0.15% [kernel] [k] mlx5e_features_check
0.15% [kernel] [k] mlx5e_napi_poll
0.15% [kernel] [k] napi_gro_receive
0.14% [kernel] [k] fib_validate_source
0.14% [kernel] [k] _raw_spin_lock_irq
0.14% [kernel] [k] inet_gro_complete
0.14% [kernel] [k] get_partial_node.isra.78
0.13% [kernel] [k] napi_complete_done
0.13% [kernel] [k] ip_rcv_finish_core.isra.17
0.13% [kernel] [k] cmd_exec
ethtool -S enp175s0f1
NIC statistics:
rx_packets: 173730800927
rx_bytes: 99827422751332
tx_packets: 142532009512
tx_bytes: 184633045911222
tx_tso_packets: 25989113891
tx_tso_bytes: 132933363384458
tx_tso_inner_packets: 0
tx_tso_inner_bytes: 0
tx_added_vlan_packets: 74630239613
tx_nop: 2029817748
rx_lro_packets: 0
rx_lro_bytes: 0
rx_ecn_mark: 0
rx_removed_vlan_packets: 173730800927
rx_csum_unnecessary: 0
rx_csum_none: 434357
rx_csum_complete: 173730366570
rx_csum_unnecessary_inner: 0
rx_xdp_drop: 0
rx_xdp_redirect: 0
rx_xdp_tx_xmit: 0
rx_xdp_tx_full: 0
rx_xdp_tx_err: 0
rx_xdp_tx_cqe: 0
tx_csum_none: 38260960853
tx_csum_partial: 36369278774
tx_csum_partial_inner: 0
tx_queue_stopped: 1
tx_queue_dropped: 0
tx_xmit_more: 748638099
tx_recover: 0
tx_cqes: 73881645031
tx_queue_wake: 1
tx_udp_seg_rem: 0
tx_cqe_err: 0
tx_xdp_xmit: 0
tx_xdp_full: 0
tx_xdp_err: 0
tx_xdp_cqes: 0
rx_wqe_err: 0
rx_mpwqe_filler_cqes: 0
rx_mpwqe_filler_strides: 0
rx_buff_alloc_err: 0
rx_cqe_compress_blks: 0
rx_cqe_compress_pkts: 0
rx_page_reuse: 0
rx_cache_reuse: 14441066823
rx_cache_full: 51126004413
rx_cache_empty: 21297344082
rx_cache_busy: 51127247487
rx_cache_waive: 21298322293
rx_congst_umr: 0
rx_arfs_err: 0
ch_events: 24603119858
ch_poll: 25180949074
ch_arm: 24480437587
ch_aff_change: 75
ch_eq_rearm: 0
rx_out_of_buffer: 669685
rx_if_down_packets: 61
rx_vport_unicast_packets: 173731641945
rx_vport_unicast_bytes: 100522745036693
tx_vport_unicast_packets: 142531901313
tx_vport_unicast_bytes: 185189071776429
rx_vport_multicast_packets: 100360886
rx_vport_multicast_bytes: 6639236688
tx_vport_multicast_packets: 32837
tx_vport_multicast_bytes: 2978810
rx_vport_broadcast_packets: 44854
rx_vport_broadcast_bytes: 6313510
tx_vport_broadcast_packets: 72258
tx_vport_broadcast_bytes: 4335480
rx_vport_rdma_unicast_packets: 0
rx_vport_rdma_unicast_bytes: 0
tx_vport_rdma_unicast_packets: 0
tx_vport_rdma_unicast_bytes: 0
rx_vport_rdma_multicast_packets: 0
rx_vport_rdma_multicast_bytes: 0
tx_vport_rdma_multicast_packets: 0
tx_vport_rdma_multicast_bytes: 0
tx_packets_phy: 142532004669
rx_packets_phy: 173980375752
rx_crc_errors_phy: 0
tx_bytes_phy: 185759204762903
rx_bytes_phy: 101326109361379
tx_multicast_phy: 32837
tx_broadcast_phy: 72258
rx_multicast_phy: 100360885
rx_broadcast_phy: 44854
rx_in_range_len_errors_phy: 2
rx_out_of_range_len_phy: 0
rx_oversize_pkts_phy: 59
rx_symbol_err_phy: 0
tx_mac_control_phy: 0
rx_mac_control_phy: 0
rx_unsupported_op_phy: 0
rx_pause_ctrl_phy: 0
tx_pause_ctrl_phy: 0
rx_discards_phy: 148328738
tx_discards_phy: 0
tx_errors_phy: 0
rx_undersize_pkts_phy: 0
rx_fragments_phy: 0
rx_jabbers_phy: 0
rx_64_bytes_phy: 36551843112
rx_65_to_127_bytes_phy: 65102131735
rx_128_to_255_bytes_phy: 5755731137
rx_256_to_511_bytes_phy: 2475619839
rx_512_to_1023_bytes_phy: 2826971156
rx_1024_to_1518_bytes_phy: 42474023107
rx_1519_to_2047_bytes_phy: 18794051270
rx_2048_to_4095_bytes_phy: 0
rx_4096_to_8191_bytes_phy: 0
rx_8192_to_10239_bytes_phy: 0
link_down_events_phy: 0
rx_pcs_symbol_err_phy: 0
rx_corrected_bits_phy: 0
rx_pci_signal_integrity: 0
tx_pci_signal_integrity: 48
rx_prio0_bytes: 101316322498995
rx_prio0_packets: 173711151686
tx_prio0_bytes: 185759176566814
tx_prio0_packets: 142531983704
rx_prio1_bytes: 47062768
rx_prio1_packets: 228932
tx_prio1_bytes: 0
tx_prio1_packets: 0
rx_prio2_bytes: 12434759
rx_prio2_packets: 83773
tx_prio2_bytes: 0
tx_prio2_packets: 0
rx_prio3_bytes: 288843134
rx_prio3_packets: 982102
tx_prio3_bytes: 0
tx_prio3_packets: 0
rx_prio4_bytes: 699797236
rx_prio4_packets: 8109231
tx_prio4_bytes: 0
tx_prio4_packets: 0
rx_prio5_bytes: 1385386738
rx_prio5_packets: 9661187
tx_prio5_bytes: 0
tx_prio5_packets: 0
rx_prio6_bytes: 317092102
rx_prio6_packets: 1951538
tx_prio6_bytes: 0
tx_prio6_packets: 0
rx_prio7_bytes: 7015734695
rx_prio7_packets: 99847456
tx_prio7_bytes: 0
tx_prio7_packets: 0
module_unplug: 0
module_bus_stuck: 0
module_high_temp: 0
module_bad_shorted: 0
ch0_events: 936264703
ch0_poll: 963766474
ch0_arm: 930246079
ch0_aff_change: 0
ch0_eq_rearm: 0
ch1_events: 869408429
ch1_poll: 896099392
ch1_arm: 864336861
ch1_aff_change: 0
ch1_eq_rearm: 0
ch2_events: 843345698
ch2_poll: 869749522
ch2_arm: 838186113
ch2_aff_change: 2
ch2_eq_rearm: 0
ch3_events: 850261340
ch3_poll: 876721111
ch3_arm: 845295235
ch3_aff_change: 3
ch3_eq_rearm: 0
ch4_events: 974985780
ch4_poll: 997781915
ch4_arm: 969618250
ch4_aff_change: 3
ch4_eq_rearm: 0
ch5_events: 888559089
ch5_poll: 912783615
ch5_arm: 883826078
ch5_aff_change: 2
ch5_eq_rearm: 0
ch6_events: 873730730
ch6_poll: 899635752
ch6_arm: 868677574
ch6_aff_change: 4
ch6_eq_rearm: 0
ch7_events: 873478411
ch7_poll: 899216716
ch7_arm: 868693645
ch7_aff_change: 3
ch7_eq_rearm: 0
ch8_events: 871900967
ch8_poll: 898575518
ch8_arm: 866763693
ch8_aff_change: 3
ch8_eq_rearm: 0
ch9_events: 880325565
ch9_poll: 904983269
ch9_arm: 875643922
ch9_aff_change: 2
ch9_eq_rearm: 0
ch10_events: 889919775
ch10_poll: 915335809
ch10_arm: 885110225
ch10_aff_change: 4
ch10_eq_rearm: 0
ch11_events: 962709175
ch11_poll: 983963451
ch11_arm: 958117526
ch11_aff_change: 2
ch11_eq_rearm: 0
ch12_events: 941333837
ch12_poll: 964625523
ch12_arm: 936409706
ch12_aff_change: 2
ch12_eq_rearm: 0
ch13_events: 914996974
ch13_poll: 937441049
ch13_arm: 910478393
ch13_aff_change: 4
ch13_eq_rearm: 0
ch14_events: 888050001
ch14_poll: 911818008
ch14_arm: 883465035
ch14_aff_change: 4
ch14_eq_rearm: 0
ch15_events: 947547704
ch15_poll: 969073194
ch15_arm: 942686515
ch15_aff_change: 4
ch15_eq_rearm: 0
ch16_events: 825804904
ch16_poll: 840630747
ch16_arm: 822227488
ch16_aff_change: 2
ch16_eq_rearm: 0
ch17_events: 861673823
ch17_poll: 874754041
ch17_arm: 858520448
ch17_aff_change: 2
ch17_eq_rearm: 0
ch18_events: 879413440
ch18_poll: 893962529
ch18_arm: 875983204
ch18_aff_change: 4
ch18_eq_rearm: 0
ch19_events: 896073709
ch19_poll: 909216857
ch19_arm: 893022121
ch19_aff_change: 4
ch19_eq_rearm: 0
ch20_events: 865188535
ch20_poll: 880692345
ch20_arm: 861440265
ch20_aff_change: 3
ch20_eq_rearm: 0
ch21_events: 862709303
ch21_poll: 878104242
ch21_arm: 859041767
ch21_aff_change: 2
ch21_eq_rearm: 0
ch22_events: 887720551
ch22_poll: 904122074
ch22_arm: 883983794
ch22_aff_change: 2
ch22_eq_rearm: 0
ch23_events: 813355027
ch23_poll: 828074467
ch23_arm: 809912398
ch23_aff_change: 4
ch23_eq_rearm: 0
ch24_events: 822366675
ch24_poll: 839917937
ch24_arm: 818422754
ch24_aff_change: 2
ch24_eq_rearm: 0
ch25_events: 826642292
ch25_poll: 842630121
ch25_arm: 822642618
ch25_aff_change: 2
ch25_eq_rearm: 0
ch26_events: 826392584
ch26_poll: 843406973
ch26_arm: 822455000
ch26_aff_change: 3
ch26_eq_rearm: 0
ch27_events: 828960899
ch27_poll: 843866518
ch27_arm: 825230937
ch27_aff_change: 3
ch27_eq_rearm: 0
ch28_events: 7
ch28_poll: 7
ch28_arm: 7
ch28_aff_change: 0
ch28_eq_rearm: 0
ch29_events: 4
ch29_poll: 4
ch29_arm: 4
ch29_aff_change: 0
ch29_eq_rearm: 0
ch30_events: 4
ch30_poll: 4
ch30_arm: 4
ch30_aff_change: 0
ch30_eq_rearm: 0
ch31_events: 4
ch31_poll: 4
ch31_arm: 4
ch31_aff_change: 0
ch31_eq_rearm: 0
ch32_events: 4
ch32_poll: 4
ch32_arm: 4
ch32_aff_change: 0
ch32_eq_rearm: 0
ch33_events: 4
ch33_poll: 4
ch33_arm: 4
ch33_aff_change: 0
ch33_eq_rearm: 0
ch34_events: 4
ch34_poll: 4
ch34_arm: 4
ch34_aff_change: 0
ch34_eq_rearm: 0
ch35_events: 4
ch35_poll: 4
ch35_arm: 4
ch35_aff_change: 0
ch35_eq_rearm: 0
ch36_events: 4
ch36_poll: 4
ch36_arm: 4
ch36_aff_change: 0
ch36_eq_rearm: 0
ch37_events: 4
ch37_poll: 4
ch37_arm: 4
ch37_aff_change: 0
ch37_eq_rearm: 0
ch38_events: 4
ch38_poll: 4
ch38_arm: 4
ch38_aff_change: 0
ch38_eq_rearm: 0
ch39_events: 4
ch39_poll: 4
ch39_arm: 4
ch39_aff_change: 0
ch39_eq_rearm: 0
ch40_events: 4
ch40_poll: 4
ch40_arm: 4
ch40_aff_change: 0
ch40_eq_rearm: 0
ch41_events: 4
ch41_poll: 4
ch41_arm: 4
ch41_aff_change: 0
ch41_eq_rearm: 0
ch42_events: 4
ch42_poll: 4
ch42_arm: 4
ch42_aff_change: 0
ch42_eq_rearm: 0
ch43_events: 4
ch43_poll: 4
ch43_arm: 4
ch43_aff_change: 0
ch43_eq_rearm: 0
ch44_events: 4
ch44_poll: 4
ch44_arm: 4
ch44_aff_change: 0
ch44_eq_rearm: 0
ch45_events: 4
ch45_poll: 4
ch45_arm: 4
ch45_aff_change: 0
ch45_eq_rearm: 0
ch46_events: 4
ch46_poll: 4
ch46_arm: 4
ch46_aff_change: 0
ch46_eq_rearm: 0
ch47_events: 4
ch47_poll: 4
ch47_arm: 4
ch47_aff_change: 0
ch47_eq_rearm: 0
ch48_events: 4
ch48_poll: 4
ch48_arm: 4
ch48_aff_change: 0
ch48_eq_rearm: 0
ch49_events: 4
ch49_poll: 4
ch49_arm: 4
ch49_aff_change: 0
ch49_eq_rearm: 0
ch50_events: 4
ch50_poll: 4
ch50_arm: 4
ch50_aff_change: 0
ch50_eq_rearm: 0
ch51_events: 4
ch51_poll: 4
ch51_arm: 4
ch51_aff_change: 0
ch51_eq_rearm: 0
ch52_events: 4
ch52_poll: 4
ch52_arm: 4
ch52_aff_change: 0
ch52_eq_rearm: 0
ch53_events: 4
ch53_poll: 4
ch53_arm: 4
ch53_aff_change: 0
ch53_eq_rearm: 0
ch54_events: 4
ch54_poll: 4
ch54_arm: 4
ch54_aff_change: 0
ch54_eq_rearm: 0
ch55_events: 4
ch55_poll: 4
ch55_arm: 4
ch55_aff_change: 0
ch55_eq_rearm: 0
rx0_packets: 7284057433
rx0_bytes: 4330611281319
rx0_csum_complete: 7283623076
rx0_csum_unnecessary: 0
rx0_csum_unnecessary_inner: 0
rx0_csum_none: 434357
rx0_xdp_drop: 0
rx0_xdp_redirect: 0
rx0_lro_packets: 0
rx0_lro_bytes: 0
rx0_ecn_mark: 0
rx0_removed_vlan_packets: 7284057433
rx0_wqe_err: 0
rx0_mpwqe_filler_cqes: 0
rx0_mpwqe_filler_strides: 0
rx0_buff_alloc_err: 0
rx0_cqe_compress_blks: 0
rx0_cqe_compress_pkts: 0
rx0_page_reuse: 0
rx0_cache_reuse: 1989731589
rx0_cache_full: 28213297
rx0_cache_empty: 1624089822
rx0_cache_busy: 28213961
rx0_cache_waive: 1624083610
rx0_congst_umr: 0
rx0_arfs_err: 0
rx0_xdp_tx_xmit: 0
rx0_xdp_tx_full: 0
rx0_xdp_tx_err: 0
rx0_xdp_tx_cqes: 0
rx1_packets: 6691319211
rx1_bytes: 3799580210608
rx1_csum_complete: 6691319211
rx1_csum_unnecessary: 0
rx1_csum_unnecessary_inner: 0
rx1_csum_none: 0
rx1_xdp_drop: 0
rx1_xdp_redirect: 0
rx1_lro_packets: 0
rx1_lro_bytes: 0
rx1_ecn_mark: 0
rx1_removed_vlan_packets: 6691319211
rx1_wqe_err: 0
rx1_mpwqe_filler_cqes: 0
rx1_mpwqe_filler_strides: 0
rx1_buff_alloc_err: 0
rx1_cqe_compress_blks: 0
rx1_cqe_compress_pkts: 0
rx1_page_reuse: 0
rx1_cache_reuse: 2270019
rx1_cache_full: 3343389331
rx1_cache_empty: 6656
rx1_cache_busy: 3343389585
rx1_cache_waive: 0
rx1_congst_umr: 0
rx1_arfs_err: 0
rx1_xdp_tx_xmit: 0
rx1_xdp_tx_full: 0
rx1_xdp_tx_err: 0
rx1_xdp_tx_cqes: 0
rx2_packets: 6618370416
rx2_bytes: 3762508364015
rx2_csum_complete: 6618370416
rx2_csum_unnecessary: 0
rx2_csum_unnecessary_inner: 0
rx2_csum_none: 0
rx2_xdp_drop: 0
rx2_xdp_redirect: 0
rx2_lro_packets: 0
rx2_lro_bytes: 0
rx2_ecn_mark: 0
rx2_removed_vlan_packets: 6618370416
rx2_wqe_err: 0
rx2_mpwqe_filler_cqes: 0
rx2_mpwqe_filler_strides: 0
rx2_buff_alloc_err: 0
rx2_cqe_compress_blks: 0
rx2_cqe_compress_pkts: 0
rx2_page_reuse: 0
rx2_cache_reuse: 111419328
rx2_cache_full: 1807563903
rx2_cache_empty: 1390208158
rx2_cache_busy: 1807564378
rx2_cache_waive: 1390201722
rx2_congst_umr: 0
rx2_arfs_err: 0
rx2_xdp_tx_xmit: 0
rx2_xdp_tx_full: 0
rx2_xdp_tx_err: 0
rx2_xdp_tx_cqes: 0
rx3_packets: 6665308976
rx3_bytes: 3828546206006
rx3_csum_complete: 6665308976
rx3_csum_unnecessary: 0
rx3_csum_unnecessary_inner: 0
rx3_csum_none: 0
rx3_xdp_drop: 0
rx3_xdp_redirect: 0
rx3_lro_packets: 0
rx3_lro_bytes: 0
rx3_ecn_mark: 0
rx3_removed_vlan_packets: 6665308976
rx3_wqe_err: 0
rx3_mpwqe_filler_cqes: 0
rx3_mpwqe_filler_strides: 0
rx3_buff_alloc_err: 0
rx3_cqe_compress_blks: 0
rx3_cqe_compress_pkts: 0
rx3_page_reuse: 0
rx3_cache_reuse: 215779091
rx3_cache_full: 1720040649
rx3_cache_empty: 1396840926
rx3_cache_busy: 1720041127
rx3_cache_waive: 1396834493
rx3_congst_umr: 0
rx3_arfs_err: 0
rx3_xdp_tx_xmit: 0
rx3_xdp_tx_full: 0
rx3_xdp_tx_err: 0
rx3_xdp_tx_cqes: 0
rx4_packets: 6764448165
rx4_bytes: 3883101339142
rx4_csum_complete: 6764448165
rx4_csum_unnecessary: 0
rx4_csum_unnecessary_inner: 0
rx4_csum_none: 0
rx4_xdp_drop: 0
rx4_xdp_redirect: 0
rx4_lro_packets: 0
rx4_lro_bytes: 0
rx4_ecn_mark: 0
rx4_removed_vlan_packets: 6764448165
rx4_wqe_err: 0
rx4_mpwqe_filler_cqes: 0
rx4_mpwqe_filler_strides: 0
rx4_buff_alloc_err: 0
rx4_cqe_compress_blks: 0
rx4_cqe_compress_pkts: 0
rx4_page_reuse: 0
rx4_cache_reuse: 1930710653
rx4_cache_full: 6490815
rx4_cache_empty: 1445028605
rx4_cache_busy: 6491478
rx4_cache_waive: 1445022392
rx4_congst_umr: 0
rx4_arfs_err: 0
rx4_xdp_tx_xmit: 0
rx4_xdp_tx_full: 0
rx4_xdp_tx_err: 0
rx4_xdp_tx_cqes: 0
rx5_packets: 6736853264
rx5_bytes: 3925186068552
rx5_csum_complete: 6736853264
rx5_csum_unnecessary: 0
rx5_csum_unnecessary_inner: 0
rx5_csum_none: 0
rx5_xdp_drop: 0
rx5_xdp_redirect: 0
rx5_lro_packets: 0
rx5_lro_bytes: 0
rx5_ecn_mark: 0
rx5_removed_vlan_packets: 6736853264
rx5_wqe_err: 0
rx5_mpwqe_filler_cqes: 0
rx5_mpwqe_filler_strides: 0
rx5_buff_alloc_err: 0
rx5_cqe_compress_blks: 0
rx5_cqe_compress_pkts: 0
rx5_page_reuse: 0
rx5_cache_reuse: 7283914
rx5_cache_full: 3361142463
rx5_cache_empty: 6656
rx5_cache_busy: 3361142718
rx5_cache_waive: 0
rx5_congst_umr: 0
rx5_arfs_err: 0
rx5_xdp_tx_xmit: 0
rx5_xdp_tx_full: 0
rx5_xdp_tx_err: 0
rx5_xdp_tx_cqes: 0
rx6_packets: 6751588828
rx6_bytes: 3860537598885
rx6_csum_complete: 6751588828
rx6_csum_unnecessary: 0
rx6_csum_unnecessary_inner: 0
rx6_csum_none: 0
rx6_xdp_drop: 0
rx6_xdp_redirect: 0
rx6_lro_packets: 0
rx6_lro_bytes: 0
rx6_ecn_mark: 0
rx6_removed_vlan_packets: 6751588828
rx6_wqe_err: 0
rx6_mpwqe_filler_cqes: 0
rx6_mpwqe_filler_strides: 0
rx6_buff_alloc_err: 0
rx6_cqe_compress_blks: 0
rx6_cqe_compress_pkts: 0
rx6_page_reuse: 0
rx6_cache_reuse: 96032126
rx6_cache_full: 1857890923
rx6_cache_empty: 1421877543
rx6_cache_busy: 1857891399
rx6_cache_waive: 1421871110
rx6_congst_umr: 0
rx6_arfs_err: 0
rx6_xdp_tx_xmit: 0
rx6_xdp_tx_full: 0
rx6_xdp_tx_err: 0
rx6_xdp_tx_cqes: 0
rx7_packets: 6935300074
rx7_bytes: 4004713524388
rx7_csum_complete: 6935300074
rx7_csum_unnecessary: 0
rx7_csum_unnecessary_inner: 0
rx7_csum_none: 0
rx7_xdp_drop: 0
rx7_xdp_redirect: 0
rx7_lro_packets: 0
rx7_lro_bytes: 0
rx7_ecn_mark: 0
rx7_removed_vlan_packets: 6935300074
rx7_wqe_err: 0
rx7_mpwqe_filler_cqes: 0
rx7_mpwqe_filler_strides: 0
rx7_buff_alloc_err: 0
rx7_cqe_compress_blks: 0
rx7_cqe_compress_pkts: 0
rx7_page_reuse: 0
rx7_cache_reuse: 17555187
rx7_cache_full: 3450094595
rx7_cache_empty: 6656
rx7_cache_busy: 3450094849
rx7_cache_waive: 0
rx7_congst_umr: 0
rx7_arfs_err: 0
rx7_xdp_tx_xmit: 0
rx7_xdp_tx_full: 0
rx7_xdp_tx_err: 0
rx7_xdp_tx_cqes: 0
rx8_packets: 6678640094
rx8_bytes: 3783722686028
rx8_csum_complete: 6678640094
rx8_csum_unnecessary: 0
rx8_csum_unnecessary_inner: 0
rx8_csum_none: 0
rx8_xdp_drop: 0
rx8_xdp_redirect: 0
rx8_lro_packets: 0
rx8_lro_bytes: 0
rx8_ecn_mark: 0
rx8_removed_vlan_packets: 6678640094
rx8_wqe_err: 0
rx8_mpwqe_filler_cqes: 0
rx8_mpwqe_filler_strides: 0
rx8_buff_alloc_err: 0
rx8_cqe_compress_blks: 0
rx8_cqe_compress_pkts: 0
rx8_page_reuse: 0
rx8_cache_reuse: 71006578
rx8_cache_full: 1879380649
rx8_cache_empty: 1388938999
rx8_cache_busy: 1879381123
rx8_cache_waive: 1388932565
rx8_congst_umr: 0
rx8_arfs_err: 0
rx8_xdp_tx_xmit: 0
rx8_xdp_tx_full: 0
rx8_xdp_tx_err: 0
rx8_xdp_tx_cqes: 0
rx9_packets: 6709855557
rx9_bytes: 3849522227880
rx9_csum_complete: 6709855557
rx9_csum_unnecessary: 0
rx9_csum_unnecessary_inner: 0
rx9_csum_none: 0
rx9_xdp_drop: 0
rx9_xdp_redirect: 0
rx9_lro_packets: 0
rx9_lro_bytes: 0
rx9_ecn_mark: 0
rx9_removed_vlan_packets: 6709855557
rx9_wqe_err: 0
rx9_mpwqe_filler_cqes: 0
rx9_mpwqe_filler_strides: 0
rx9_buff_alloc_err: 0
rx9_cqe_compress_blks: 0
rx9_cqe_compress_pkts: 0
rx9_page_reuse: 0
rx9_cache_reuse: 108980215
rx9_cache_full: 1822730121
rx9_cache_empty: 1423223623
rx9_cache_busy: 1822730594
rx9_cache_waive: 1423217187
rx9_congst_umr: 0
rx9_arfs_err: 0
rx9_xdp_tx_xmit: 0
rx9_xdp_tx_full: 0
rx9_xdp_tx_err: 0
rx9_xdp_tx_cqes: 0
rx10_packets: 6761861066
rx10_bytes: 3816266733385
rx10_csum_complete: 6761861066
rx10_csum_unnecessary: 0
rx10_csum_unnecessary_inner: 0
rx10_csum_none: 0
rx10_xdp_drop: 0
rx10_xdp_redirect: 0
rx10_lro_packets: 0
rx10_lro_bytes: 0
rx10_ecn_mark: 0
rx10_removed_vlan_packets: 6761861066
rx10_wqe_err: 0
rx10_mpwqe_filler_cqes: 0
rx10_mpwqe_filler_strides: 0
rx10_buff_alloc_err: 0
rx10_cqe_compress_blks: 0
rx10_cqe_compress_pkts: 0
rx10_page_reuse: 0
rx10_cache_reuse: 3489300
rx10_cache_full: 3377440977
rx10_cache_empty: 6656
rx10_cache_busy: 3377441216
rx10_cache_waive: 0
rx10_congst_umr: 0
rx10_arfs_err: 0
rx10_xdp_tx_xmit: 0
rx10_xdp_tx_full: 0
rx10_xdp_tx_err: 0
rx10_xdp_tx_cqes: 0
rx11_packets: 6868113938
rx11_bytes: 4048196300710
rx11_csum_complete: 6868113938
rx11_csum_unnecessary: 0
rx11_csum_unnecessary_inner: 0
rx11_csum_none: 0
rx11_xdp_drop: 0
rx11_xdp_redirect: 0
rx11_lro_packets: 0
rx11_lro_bytes: 0
rx11_ecn_mark: 0
rx11_removed_vlan_packets: 6868113938
rx11_wqe_err: 0
rx11_mpwqe_filler_cqes: 0
rx11_mpwqe_filler_strides: 0
rx11_buff_alloc_err: 0
rx11_cqe_compress_blks: 0
rx11_cqe_compress_pkts: 0
rx11_page_reuse: 0
rx11_cache_reuse: 1948516819
rx11_cache_full: 17132157
rx11_cache_empty: 1468413985
rx11_cache_busy: 17132820
rx11_cache_waive: 1468407772
rx11_congst_umr: 0
rx11_arfs_err: 0
rx11_xdp_tx_xmit: 0
rx11_xdp_tx_full: 0
rx11_xdp_tx_err: 0
rx11_xdp_tx_cqes: 0
rx12_packets: 6742955386
rx12_bytes: 3865747629271
rx12_csum_complete: 6742955386
rx12_csum_unnecessary: 0
rx12_csum_unnecessary_inner: 0
rx12_csum_none: 0
rx12_xdp_drop: 0
rx12_xdp_redirect: 0
rx12_lro_packets: 0
rx12_lro_bytes: 0
rx12_ecn_mark: 0
rx12_removed_vlan_packets: 6742955386
rx12_wqe_err: 0
rx12_mpwqe_filler_cqes: 0
rx12_mpwqe_filler_strides: 0
rx12_buff_alloc_err: 0
rx12_cqe_compress_blks: 0
rx12_cqe_compress_pkts: 0
rx12_page_reuse: 0
rx12_cache_reuse: 30809331
rx12_cache_full: 3340668106
rx12_cache_empty: 6656
rx12_cache_busy: 3340668333
rx12_cache_waive: 0
rx12_congst_umr: 0
rx12_arfs_err: 0
rx12_xdp_tx_xmit: 0
rx12_xdp_tx_full: 0
rx12_xdp_tx_err: 0
rx12_xdp_tx_cqes: 0
rx13_packets: 6707028036
rx13_bytes: 3813462190623
rx13_csum_complete: 6707028036
rx13_csum_unnecessary: 0
rx13_csum_unnecessary_inner: 0
rx13_csum_none: 0
rx13_xdp_drop: 0
rx13_xdp_redirect: 0
rx13_lro_packets: 0
rx13_lro_bytes: 0
rx13_ecn_mark: 0
rx13_removed_vlan_packets: 6707028036
rx13_wqe_err: 0
rx13_mpwqe_filler_cqes: 0
rx13_mpwqe_filler_strides: 0
rx13_buff_alloc_err: 0
rx13_cqe_compress_blks: 0
rx13_cqe_compress_pkts: 0
rx13_page_reuse: 0
rx13_cache_reuse: 14951053
rx13_cache_full: 3338562710
rx13_cache_empty: 6656
rx13_cache_busy: 3338562963
rx13_cache_waive: 0
rx13_congst_umr: 0
rx13_arfs_err: 0
rx13_xdp_tx_xmit: 0
rx13_xdp_tx_full: 0
rx13_xdp_tx_err: 0
rx13_xdp_tx_cqes: 0
rx14_packets: 6737074410
rx14_bytes: 3868905276119
rx14_csum_complete: 6737074410
rx14_csum_unnecessary: 0
rx14_csum_unnecessary_inner: 0
rx14_csum_none: 0
rx14_xdp_drop: 0
rx14_xdp_redirect: 0
rx14_lro_packets: 0
rx14_lro_bytes: 0
rx14_ecn_mark: 0
rx14_removed_vlan_packets: 6737074410
rx14_wqe_err: 0
rx14_mpwqe_filler_cqes: 0
rx14_mpwqe_filler_strides: 0
rx14_buff_alloc_err: 0
rx14_cqe_compress_blks: 0
rx14_cqe_compress_pkts: 0
rx14_page_reuse: 0
rx14_cache_reuse: 967799432
rx14_cache_full: 982704312
rx14_cache_empty: 1418039639
rx14_cache_busy: 982704789
rx14_cache_waive: 1418033206
rx14_congst_umr: 0
rx14_arfs_err: 0
rx14_xdp_tx_xmit: 0
rx14_xdp_tx_full: 0
rx14_xdp_tx_err: 0
rx14_xdp_tx_cqes: 0
rx15_packets: 6641887441
rx15_bytes: 3742874400402
rx15_csum_complete: 6641887441
rx15_csum_unnecessary: 0
rx15_csum_unnecessary_inner: 0
rx15_csum_none: 0
rx15_xdp_drop: 0
rx15_xdp_redirect: 0
rx15_lro_packets: 0
rx15_lro_bytes: 0
rx15_ecn_mark: 0
rx15_removed_vlan_packets: 6641887441
rx15_wqe_err: 0
rx15_mpwqe_filler_cqes: 0
rx15_mpwqe_filler_strides: 0
rx15_buff_alloc_err: 0
rx15_cqe_compress_blks: 0
rx15_cqe_compress_pkts: 0
rx15_page_reuse: 0
rx15_cache_reuse: 1920227538
rx15_cache_full: 19386129
rx15_cache_empty: 1381335137
rx15_cache_busy: 19387693
rx15_cache_waive: 1381329825
rx15_congst_umr: 0
rx15_arfs_err: 0
rx15_xdp_tx_xmit: 0
rx15_xdp_tx_full: 0
rx15_xdp_tx_err: 0
rx15_xdp_tx_cqes: 0
rx16_packets: 5420472874
rx16_bytes: 3079293332581
rx16_csum_complete: 5420472874
rx16_csum_unnecessary: 0
rx16_csum_unnecessary_inner: 0
rx16_csum_none: 0
rx16_xdp_drop: 0
rx16_xdp_redirect: 0
rx16_lro_packets: 0
rx16_lro_bytes: 0
rx16_ecn_mark: 0
rx16_removed_vlan_packets: 5420472874
rx16_wqe_err: 0
rx16_mpwqe_filler_cqes: 0
rx16_mpwqe_filler_strides: 0
rx16_buff_alloc_err: 0
rx16_cqe_compress_blks: 0
rx16_cqe_compress_pkts: 0
rx16_page_reuse: 0
rx16_cache_reuse: 2361079
rx16_cache_full: 2707875103
rx16_cache_empty: 6656
rx16_cache_busy: 2707875349
rx16_cache_waive: 0
rx16_congst_umr: 0
rx16_arfs_err: 0
rx16_xdp_tx_xmit: 0
rx16_xdp_tx_full: 0
rx16_xdp_tx_err: 0
rx16_xdp_tx_cqes: 0
rx17_packets: 5428380986
rx17_bytes: 3080981893118
rx17_csum_complete: 5428380986
rx17_csum_unnecessary: 0
rx17_csum_unnecessary_inner: 0
rx17_csum_none: 0
rx17_xdp_drop: 0
rx17_xdp_redirect: 0
rx17_lro_packets: 0
rx17_lro_bytes: 0
rx17_ecn_mark: 0
rx17_removed_vlan_packets: 5428380986
rx17_wqe_err: 0
rx17_mpwqe_filler_cqes: 0
rx17_mpwqe_filler_strides: 0
rx17_buff_alloc_err: 0
rx17_cqe_compress_blks: 0
rx17_cqe_compress_pkts: 0
rx17_page_reuse: 0
rx17_cache_reuse: 1552266402
rx17_cache_full: 5947505
rx17_cache_empty: 1155981856
rx17_cache_busy: 5948870
rx17_cache_waive: 1155976345
rx17_congst_umr: 0
rx17_arfs_err: 0
rx17_xdp_tx_xmit: 0
rx17_xdp_tx_full: 0
rx17_xdp_tx_err: 0
rx17_xdp_tx_cqes: 0
rx18_packets: 5529118410
rx18_bytes: 3254749573833
rx18_csum_complete: 5529118410
rx18_csum_unnecessary: 0
rx18_csum_unnecessary_inner: 0
rx18_csum_none: 0
rx18_xdp_drop: 0
rx18_xdp_redirect: 0
rx18_lro_packets: 0
rx18_lro_bytes: 0
rx18_ecn_mark: 0
rx18_removed_vlan_packets: 5529118410
rx18_wqe_err: 0
rx18_mpwqe_filler_cqes: 0
rx18_mpwqe_filler_strides: 0
rx18_buff_alloc_err: 0
rx18_cqe_compress_blks: 0
rx18_cqe_compress_pkts: 0
rx18_page_reuse: 0
rx18_cache_reuse: 67438840
rx18_cache_full: 1536718472
rx18_cache_empty: 1160408072
rx18_cache_busy: 1536718932
rx18_cache_waive: 1160401638
rx18_congst_umr: 0
rx18_arfs_err: 0
rx18_xdp_tx_xmit: 0
rx18_xdp_tx_full: 0
rx18_xdp_tx_err: 0
rx18_xdp_tx_cqes: 0
rx19_packets: 5449932653
rx19_bytes: 3148726579411
rx19_csum_complete: 5449932653
rx19_csum_unnecessary: 0
rx19_csum_unnecessary_inner: 0
rx19_csum_none: 0
rx19_xdp_drop: 0
rx19_xdp_redirect: 0
rx19_lro_packets: 0
rx19_lro_bytes: 0
rx19_ecn_mark: 0
rx19_removed_vlan_packets: 5449932653
rx19_wqe_err: 0
rx19_mpwqe_filler_cqes: 0
rx19_mpwqe_filler_strides: 0
rx19_buff_alloc_err: 0
rx19_cqe_compress_blks: 0
rx19_cqe_compress_pkts: 0
rx19_page_reuse: 0
rx19_cache_reuse: 1537841743
rx19_cache_full: 9920960
rx19_cache_empty: 1177208938
rx19_cache_busy: 9922299
rx19_cache_waive: 1177203401
rx19_congst_umr: 0
rx19_arfs_err: 0
rx19_xdp_tx_xmit: 0
rx19_xdp_tx_full: 0
rx19_xdp_tx_err: 0
rx19_xdp_tx_cqes: 0
rx20_packets: 5407910071
rx20_bytes: 3123560861922
rx20_csum_complete: 5407910071
rx20_csum_unnecessary: 0
rx20_csum_unnecessary_inner: 0
rx20_csum_none: 0
rx20_xdp_drop: 0
rx20_xdp_redirect: 0
rx20_lro_packets: 0
rx20_lro_bytes: 0
rx20_ecn_mark: 0
rx20_removed_vlan_packets: 5407910071
rx20_wqe_err: 0
rx20_mpwqe_filler_cqes: 0
rx20_mpwqe_filler_strides: 0
rx20_buff_alloc_err: 0
rx20_cqe_compress_blks: 0
rx20_cqe_compress_pkts: 0
rx20_page_reuse: 0
rx20_cache_reuse: 10255209
rx20_cache_full: 2693699571
rx20_cache_empty: 6656
rx20_cache_busy: 2693699823
rx20_cache_waive: 0
rx20_congst_umr: 0
rx20_arfs_err: 0
rx20_xdp_tx_xmit: 0
rx20_xdp_tx_full: 0
rx20_xdp_tx_err: 0
rx20_xdp_tx_cqes: 0
rx21_packets: 5417498508
rx21_bytes: 3131335892379
rx21_csum_complete: 5417498508
rx21_csum_unnecessary: 0
rx21_csum_unnecessary_inner: 0
rx21_csum_none: 0
rx21_xdp_drop: 0
rx21_xdp_redirect: 0
rx21_lro_packets: 0
rx21_lro_bytes: 0
rx21_ecn_mark: 0
rx21_removed_vlan_packets: 5417498508
rx21_wqe_err: 0
rx21_mpwqe_filler_cqes: 0
rx21_mpwqe_filler_strides: 0
rx21_buff_alloc_err: 0
rx21_cqe_compress_blks: 0
rx21_cqe_compress_pkts: 0
rx21_page_reuse: 0
rx21_cache_reuse: 192662917
rx21_cache_full: 1374120417
rx21_cache_empty: 1141972100
rx21_cache_busy: 1374120891
rx21_cache_waive: 1141965665
rx21_congst_umr: 0
rx21_arfs_err: 0
rx21_xdp_tx_xmit: 0
rx21_xdp_tx_full: 0
rx21_xdp_tx_err: 0
rx21_xdp_tx_cqes: 0
rx22_packets: 5613634706
rx22_bytes: 3240055099058
rx22_csum_complete: 5613634706
rx22_csum_unnecessary: 0
rx22_csum_unnecessary_inner: 0
rx22_csum_none: 0
rx22_xdp_drop: 0
rx22_xdp_redirect: 0
rx22_lro_packets: 0
rx22_lro_bytes: 0
rx22_ecn_mark: 0
rx22_removed_vlan_packets: 5613634706
rx22_wqe_err: 0
rx22_mpwqe_filler_cqes: 0
rx22_mpwqe_filler_strides: 0
rx22_buff_alloc_err: 0
rx22_cqe_compress_blks: 0
rx22_cqe_compress_pkts: 0
rx22_page_reuse: 0
rx22_cache_reuse: 12161531
rx22_cache_full: 2794655567
rx22_cache_empty: 6656
rx22_cache_busy: 2794655821
rx22_cache_waive: 0
rx22_congst_umr: 0
rx22_arfs_err: 0
rx22_xdp_tx_xmit: 0
rx22_xdp_tx_full: 0
rx22_xdp_tx_err: 0
rx22_xdp_tx_cqes: 0
rx23_packets: 5389977167
rx23_bytes: 3054270771559
rx23_csum_complete: 5389977167
rx23_csum_unnecessary: 0
rx23_csum_unnecessary_inner: 0
rx23_csum_none: 0
rx23_xdp_drop: 0
rx23_xdp_redirect: 0
rx23_lro_packets: 0
rx23_lro_bytes: 0
rx23_ecn_mark: 0
rx23_removed_vlan_packets: 5389977167
rx23_wqe_err: 0
rx23_mpwqe_filler_cqes: 0
rx23_mpwqe_filler_strides: 0
rx23_buff_alloc_err: 0
rx23_cqe_compress_blks: 0
rx23_cqe_compress_pkts: 0
rx23_page_reuse: 0
rx23_cache_reuse: 709328
rx23_cache_full: 2694279000
rx23_cache_empty: 6656
rx23_cache_busy: 2694279252
rx23_cache_waive: 0
rx23_congst_umr: 0
rx23_arfs_err: 0
rx23_xdp_tx_xmit: 0
rx23_xdp_tx_full: 0
rx23_xdp_tx_err: 0
rx23_xdp_tx_cqes: 0
rx24_packets: 5547561932
rx24_bytes: 3166602453443
rx24_csum_complete: 5547561932
rx24_csum_unnecessary: 0
rx24_csum_unnecessary_inner: 0
rx24_csum_none: 0
rx24_xdp_drop: 0
rx24_xdp_redirect: 0
rx24_lro_packets: 0
rx24_lro_bytes: 0
rx24_ecn_mark: 0
rx24_removed_vlan_packets: 5547561932
rx24_wqe_err: 0
rx24_mpwqe_filler_cqes: 0
rx24_mpwqe_filler_strides: 0
rx24_buff_alloc_err: 0
rx24_cqe_compress_blks: 0
rx24_cqe_compress_pkts: 0
rx24_page_reuse: 0
rx24_cache_reuse: 57885119
rx24_cache_full: 1529450077
rx24_cache_empty: 1186451948
rx24_cache_busy: 1529450553
rx24_cache_waive: 1186445515
rx24_congst_umr: 0
rx24_arfs_err: 0
rx24_xdp_tx_xmit: 0
rx24_xdp_tx_full: 0
rx24_xdp_tx_err: 0
rx24_xdp_tx_cqes: 0
rx25_packets: 5414569326
rx25_bytes: 3184757708091
rx25_csum_complete: 5414569326
rx25_csum_unnecessary: 0
rx25_csum_unnecessary_inner: 0
rx25_csum_none: 0
rx25_xdp_drop: 0
rx25_xdp_redirect: 0
rx25_lro_packets: 0
rx25_lro_bytes: 0
rx25_ecn_mark: 0
rx25_removed_vlan_packets: 5414569326
rx25_wqe_err: 0
rx25_mpwqe_filler_cqes: 0
rx25_mpwqe_filler_strides: 0
rx25_buff_alloc_err: 0
rx25_cqe_compress_blks: 0
rx25_cqe_compress_pkts: 0
rx25_page_reuse: 0
rx25_cache_reuse: 5080853
rx25_cache_full: 2702203555
rx25_cache_empty: 6656
rx25_cache_busy: 2702203807
rx25_cache_waive: 0
rx25_congst_umr: 0
rx25_arfs_err: 0
rx25_xdp_tx_xmit: 0
rx25_xdp_tx_full: 0
rx25_xdp_tx_err: 0
rx25_xdp_tx_cqes: 0
rx26_packets: 5479972151
rx26_bytes: 3110642276239
rx26_csum_complete: 5479972151
rx26_csum_unnecessary: 0
rx26_csum_unnecessary_inner: 0
rx26_csum_none: 0
rx26_xdp_drop: 0
rx26_xdp_redirect: 0
rx26_lro_packets: 0
rx26_lro_bytes: 0
rx26_ecn_mark: 0
rx26_removed_vlan_packets: 5479972151
rx26_wqe_err: 0
rx26_mpwqe_filler_cqes: 0
rx26_mpwqe_filler_strides: 0
rx26_buff_alloc_err: 0
rx26_cqe_compress_blks: 0
rx26_cqe_compress_pkts: 0
rx26_page_reuse: 0
rx26_cache_reuse: 26543335
rx26_cache_full: 2713442485
rx26_cache_empty: 6656
rx26_cache_busy: 2713442737
rx26_cache_waive: 0
rx26_congst_umr: 0
rx26_arfs_err: 0
rx26_xdp_tx_xmit: 0
rx26_xdp_tx_full: 0
rx26_xdp_tx_err: 0
rx26_xdp_tx_cqes: 0
rx27_packets: 5337113900
rx27_bytes: 3068966906075
rx27_csum_complete: 5337113900
rx27_csum_unnecessary: 0
rx27_csum_unnecessary_inner: 0
rx27_csum_none: 0
rx27_xdp_drop: 0
rx27_xdp_redirect: 0
rx27_lro_packets: 0
rx27_lro_bytes: 0
rx27_ecn_mark: 0
rx27_removed_vlan_packets: 5337113900
rx27_wqe_err: 0
rx27_mpwqe_filler_cqes: 0
rx27_mpwqe_filler_strides: 0
rx27_buff_alloc_err: 0
rx27_cqe_compress_blks: 0
rx27_cqe_compress_pkts: 0
rx27_page_reuse: 0
rx27_cache_reuse: 1539298962
rx27_cache_full: 10861919
rx27_cache_empty: 1117173179
rx27_cache_busy: 12091463
rx27_cache_waive: 1118395847
rx27_congst_umr: 0
rx27_arfs_err: 0
rx27_xdp_tx_xmit: 0
rx27_xdp_tx_full: 0
rx27_xdp_tx_err: 0
rx27_xdp_tx_cqes: 0
rx28_packets: 0
rx28_bytes: 0
rx28_csum_complete: 0
rx28_csum_unnecessary: 0
rx28_csum_unnecessary_inner: 0
rx28_csum_none: 0
rx28_xdp_drop: 0
rx28_xdp_redirect: 0
rx28_lro_packets: 0
rx28_lro_bytes: 0
rx28_ecn_mark: 0
rx28_removed_vlan_packets: 0
rx28_wqe_err: 0
rx28_mpwqe_filler_cqes: 0
rx28_mpwqe_filler_strides: 0
rx28_buff_alloc_err: 0
rx28_cqe_compress_blks: 0
rx28_cqe_compress_pkts: 0
rx28_page_reuse: 0
rx28_cache_reuse: 0
rx28_cache_full: 0
rx28_cache_empty: 2560
rx28_cache_busy: 0
rx28_cache_waive: 0
rx28_congst_umr: 0
rx28_arfs_err: 0
rx28_xdp_tx_xmit: 0
rx28_xdp_tx_full: 0
rx28_xdp_tx_err: 0
rx28_xdp_tx_cqes: 0
[rx29 .. rx55 trimmed: every counter is 0 except rxN_cache_empty: 2560, identical to the rx28 block above - these queues saw no traffic]
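(Not part of the original post - a minimal sketch of how raw `ethtool -S` snapshots like the dump above can be diffed to see which counters are actually moving, e.g. the rxN_cache_full / rxN_cache_waive page-cache counters. The snapshot strings below are made-up sample values, not numbers from this report.)

```python
# Diff two `ethtool -S <iface>` snapshots to get per-counter deltas.
def parse_ethtool_stats(text):
    """Parse 'name: value' lines as emitted by `ethtool -S` into a dict."""
    stats = {}
    for line in text.splitlines():
        if ':' not in line:
            continue
        name, _, value = line.partition(':')
        try:
            stats[name.strip()] = int(value.strip())
        except ValueError:
            pass  # skip non-counter lines such as the "NIC statistics:" header
    return stats

def stats_delta(before, after):
    """Return only the counters that changed between two snapshots."""
    return {k: v - before.get(k, 0)
            for k, v in after.items() if v != before.get(k, 0)}

# Hypothetical sample snapshots taken a second apart:
snap1 = "rx20_cache_reuse: 10255209\nrx20_cache_full: 2693699571\n"
snap2 = "rx20_cache_reuse: 10255900\nrx20_cache_full: 2693801000\n"
delta = stats_delta(parse_ethtool_stats(snap1), parse_ethtool_stats(snap2))
# delta now holds the per-interval increase for each counter that moved
```

Running this against periodic snapshots makes it easy to spot queues where cache_full grows while cache_reuse stays flat, as in several of the queues above.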
tx0_packets: 5868971166
tx0_bytes: 7384241881537
tx0_tso_packets: 1005089669
tx0_tso_bytes: 5138882499687
tx0_tso_inner_packets: 0
tx0_tso_inner_bytes: 0
tx0_csum_partial: 1405330470
tx0_csum_partial_inner: 0
tx0_added_vlan_packets: 3247061022
tx0_nop: 83925216
tx0_csum_none: 1841730552
tx0_stopped: 0
tx0_dropped: 0
tx0_xmit_more: 29664303
tx0_recover: 0
tx0_cqes: 3217398842
tx0_wake: 0
tx0_cqe_err: 0
tx1_packets: 5599378674
tx1_bytes: 7272236466962
tx1_tso_packets: 1024612268
tx1_tso_bytes: 5244192050917
tx1_tso_inner_packets: 0
tx1_tso_inner_bytes: 0
tx1_csum_partial: 1438007932
tx1_csum_partial_inner: 0
tx1_added_vlan_packets: 2919765857
tx1_nop: 79661231
tx1_csum_none: 1481757925
tx1_stopped: 0
tx1_dropped: 0
tx1_xmit_more: 29485355
tx1_recover: 0
tx1_cqes: 2890282176
tx1_wake: 0
tx1_cqe_err: 0
tx2_packets: 5413821094
tx2_bytes: 7033951631334
tx2_tso_packets: 1002868589
tx2_tso_bytes: 5089549008985
tx2_tso_inner_packets: 0
tx2_tso_inner_bytes: 0
tx2_csum_partial: 1404186175
tx2_csum_partial_inner: 0
tx2_added_vlan_packets: 2822670460
tx2_nop: 77115408
tx2_csum_none: 1418484285
tx2_stopped: 0
tx2_dropped: 0
tx2_xmit_more: 29321129
tx2_recover: 0
tx2_cqes: 2793351019
tx2_wake: 0
tx2_cqe_err: 0
tx3_packets: 5479609727
tx3_bytes: 7116904107659
tx3_tso_packets: 1002992639
tx3_tso_bytes: 5154225081979
tx3_tso_inner_packets: 0
tx3_tso_inner_bytes: 0
tx3_csum_partial: 1415739849
tx3_csum_partial_inner: 0
tx3_added_vlan_packets: 2842823811
tx3_nop: 78060813
tx3_csum_none: 1427083971
tx3_stopped: 0
tx3_dropped: 0
tx3_xmit_more: 28575040
tx3_recover: 0
tx3_cqes: 2814250785
tx3_wake: 0
tx3_cqe_err: 0
tx4_packets: 5508297397
tx4_bytes: 7127659369902
tx4_tso_packets: 1007356432
tx4_tso_bytes: 5145975736034
tx4_tso_inner_packets: 0
tx4_tso_inner_bytes: 0
tx4_csum_partial: 1411271000
tx4_csum_partial_inner: 0
tx4_added_vlan_packets: 2882086825
tx4_nop: 78433610
tx4_csum_none: 1470815825
tx4_stopped: 0
tx4_dropped: 0
tx4_xmit_more: 28632444
tx4_recover: 0
tx4_cqes: 2853456464
tx4_wake: 0
tx4_cqe_err: 0
tx5_packets: 5513864156
tx5_bytes: 7165864145517
tx5_tso_packets: 1014046485
tx5_tso_bytes: 5192635614477
tx5_tso_inner_packets: 0
tx5_tso_inner_bytes: 0
tx5_csum_partial: 1420810473
tx5_csum_partial_inner: 0
tx5_added_vlan_packets: 2861370556
tx5_nop: 78481355
tx5_csum_none: 1440560083
tx5_stopped: 0
tx5_dropped: 0
tx5_xmit_more: 28222467
tx5_recover: 0
tx5_cqes: 2833149758
tx5_wake: 0
tx5_cqe_err: 0
tx6_packets: 5560724761
tx6_bytes: 7210309972086
tx6_tso_packets: 994050514
tx6_tso_bytes: 5171393741595
tx6_tso_inner_packets: 0
tx6_tso_inner_bytes: 0
tx6_csum_partial: 1414303265
tx6_csum_partial_inner: 0
tx6_added_vlan_packets: 2905794177
tx6_nop: 79353318
tx6_csum_none: 1491490912
tx6_stopped: 0
tx6_dropped: 0
tx6_xmit_more: 31246664
tx6_recover: 0
tx6_cqes: 2874549217
tx6_wake: 0
tx6_cqe_err: 0
tx7_packets: 5557594170
tx7_bytes: 7223138778685
tx7_tso_packets: 1013475396
tx7_tso_bytes: 5241530065484
tx7_tso_inner_packets: 0
tx7_tso_inner_bytes: 0
tx7_csum_partial: 1438604314
tx7_csum_partial_inner: 0
tx7_added_vlan_packets: 2873917552
tx7_nop: 79057059
tx7_csum_none: 1435313239
tx7_stopped: 0
tx7_dropped: 0
tx7_xmit_more: 29258761
tx7_recover: 0
tx7_cqes: 2844660578
tx7_wake: 0
tx7_cqe_err: 0
tx8_packets: 5521254733
tx8_bytes: 7208043146297
tx8_tso_packets: 1014670801
tx8_tso_bytes: 5185842447246
tx8_tso_inner_packets: 0
tx8_tso_inner_bytes: 0
tx8_csum_partial: 1431631562
tx8_csum_partial_inner: 0
tx8_added_vlan_packets: 2872641129
tx8_nop: 78545776
tx8_csum_none: 1441009567
tx8_stopped: 0
tx8_dropped: 0
tx8_xmit_more: 29106291
tx8_recover: 0
tx8_cqes: 2843536748
tx8_wake: 0
tx8_cqe_err: 0
tx9_packets: 5528889957
tx9_bytes: 7191793816058
tx9_tso_packets: 1015955476
tx9_tso_bytes: 5207232047828
tx9_tso_inner_packets: 0
tx9_tso_inner_bytes: 0
tx9_csum_partial: 1421266796
tx9_csum_partial_inner: 0
tx9_added_vlan_packets: 2869523921
tx9_nop: 78586218
tx9_csum_none: 1448257125
tx9_stopped: 0
tx9_dropped: 0
tx9_xmit_more: 29483347
tx9_recover: 0
tx9_cqes: 2840042245
tx9_wake: 0
tx9_cqe_err: 0
tx10_packets: 5556351222
tx10_bytes: 7254798330757
tx10_tso_packets: 1028554460
tx10_tso_bytes: 5246179615774
tx10_tso_inner_packets: 0
tx10_tso_inner_bytes: 0
tx10_csum_partial: 1430459021
tx10_csum_partial_inner: 0
tx10_added_vlan_packets: 2881683382
tx10_nop: 79139584
tx10_csum_none: 1451224361
tx10_stopped: 0
tx10_dropped: 0
tx10_xmit_more: 29217190
tx10_recover: 0
tx10_cqes: 2852467898
tx10_wake: 0
tx10_cqe_err: 0
tx11_packets: 5455631854
tx11_bytes: 7061121713772
tx11_tso_packets: 992133383
tx11_tso_bytes: 5089419722682
tx11_tso_inner_packets: 0
tx11_tso_inner_bytes: 0
tx11_csum_partial: 1395542033
tx11_csum_partial_inner: 0
tx11_added_vlan_packets: 2852589093
tx11_nop: 77799857
tx11_csum_none: 1457047060
tx11_stopped: 0
tx11_dropped: 0
tx11_xmit_more: 29559927
tx11_recover: 0
tx11_cqes: 2823031110
tx11_wake: 0
tx11_cqe_err: 0
tx12_packets: 5488286808
tx12_bytes: 7137087569303
tx12_tso_packets: 1006435537
tx12_tso_bytes: 5163371416750
tx12_tso_inner_packets: 0
tx12_tso_inner_bytes: 0
tx12_csum_partial: 1414799411
tx12_csum_partial_inner: 0
tx12_added_vlan_packets: 2841679543
tx12_nop: 78387039
tx12_csum_none: 1426880132
tx12_stopped: 0
tx12_dropped: 0
tx12_xmit_more: 28607526
tx12_recover: 0
tx12_cqes: 2813073557
tx12_wake: 0
tx12_cqe_err: 0
tx13_packets: 5594132290
tx13_bytes: 7251106284829
tx13_tso_packets: 1035172061
tx13_tso_bytes: 5251200286298
tx13_tso_inner_packets: 0
tx13_tso_inner_bytes: 0
tx13_csum_partial: 1443665981
tx13_csum_partial_inner: 0
tx13_added_vlan_packets: 2916604799
tx13_nop: 79670465
tx13_csum_none: 1472938818
tx13_stopped: 0
tx13_dropped: 0
tx13_xmit_more: 27797067
tx13_recover: 0
tx13_cqes: 2888809352
tx13_wake: 0
tx13_cqe_err: 0
tx14_packets: 5548790952
tx14_bytes: 7194211868411
tx14_tso_packets: 1021015561
tx14_tso_bytes: 5231483708869
tx14_tso_inner_packets: 0
tx14_tso_inner_bytes: 0
tx14_csum_partial: 1427711576
tx14_csum_partial_inner: 0
tx14_added_vlan_packets: 2875288572
tx14_nop: 78900224
tx14_csum_none: 1447576996
tx14_stopped: 0
tx14_dropped: 0
tx14_xmit_more: 30003496
tx14_recover: 0
tx14_cqes: 2845286732
tx14_wake: 0
tx14_cqe_err: 0
tx15_packets: 5609310963
tx15_bytes: 7271380831798
tx15_tso_packets: 1027830118
tx15_tso_bytes: 5229697431506
tx15_tso_inner_packets: 0
tx15_tso_inner_bytes: 0
tx15_csum_partial: 1429209941
tx15_csum_partial_inner: 0
tx15_added_vlan_packets: 2940315402
tx15_nop: 79950883
tx15_csum_none: 1511105462
tx15_stopped: 0
tx15_dropped: 0
tx15_xmit_more: 28820740
tx15_recover: 0
tx15_cqes: 2911496633
tx15_wake: 0
tx15_cqe_err: 0
tx16_packets: 4465363036
tx16_bytes: 5769771803704
tx16_tso_packets: 817101913
tx16_tso_bytes: 4180172833814
tx16_tso_inner_packets: 0
tx16_tso_inner_bytes: 0
tx16_csum_partial: 1136731404
tx16_csum_partial_inner: 0
tx16_added_vlan_packets: 2332178232
tx16_nop: 63458573
tx16_csum_none: 1195446828
tx16_stopped: 0
tx16_dropped: 0
tx16_xmit_more: 23756254
tx16_recover: 0
tx16_cqes: 2308423025
tx16_wake: 0
tx16_cqe_err: 0
tx17_packets: 4380386348
tx17_bytes: 5708702994526
tx17_tso_packets: 813638023
tx17_tso_bytes: 4130806014947
tx17_tso_inner_packets: 0
tx17_tso_inner_bytes: 0
tx17_csum_partial: 1133007164
tx17_csum_partial_inner: 0
tx17_added_vlan_packets: 2277314787
tx17_nop: 62377372
tx17_csum_none: 1144307623
tx17_stopped: 0
tx17_dropped: 0
tx17_xmit_more: 23731361
tx17_recover: 0
tx17_cqes: 2253584638
tx17_wake: 0
tx17_cqe_err: 0
tx18_packets: 4450359743
tx18_bytes: 5758968674820
tx18_tso_packets: 815791601
tx18_tso_bytes: 4179942688909
tx18_tso_inner_packets: 0
tx18_tso_inner_bytes: 0
tx18_csum_partial: 1137649257
tx18_csum_partial_inner: 0
tx18_added_vlan_packets: 2314556550
tx18_nop: 63271085
tx18_csum_none: 1176907293
tx18_stopped: 0
tx18_dropped: 0
tx18_xmit_more: 23055770
tx18_recover: 0
tx18_cqes: 2291501928
tx18_wake: 0
tx18_cqe_err: 0
tx19_packets: 4596064378
tx19_bytes: 5916675706535
tx19_tso_packets: 825788649
tx19_tso_bytes: 4208046929921
tx19_tso_inner_packets: 0
tx19_tso_inner_bytes: 0
tx19_csum_partial: 1150666569
tx19_csum_partial_inner: 0
tx19_added_vlan_packets: 2450567026
tx19_nop: 65468504
tx19_csum_none: 1299900457
tx19_stopped: 0
tx19_dropped: 0
tx19_xmit_more: 23846250
tx19_recover: 0
tx19_cqes: 2426722127
tx19_wake: 0
tx19_cqe_err: 0
tx20_packets: 4424935388
tx20_bytes: 5757631205901
tx20_tso_packets: 804875006
tx20_tso_bytes: 4156262736109
tx20_tso_inner_packets: 0
tx20_tso_inner_bytes: 0
tx20_csum_partial: 1134144916
tx20_csum_partial_inner: 0
tx20_added_vlan_packets: 2294839665
tx20_nop: 63023986
tx20_csum_none: 1160694749
tx20_stopped: 0
tx20_dropped: 0
tx20_xmit_more: 23393201
tx20_recover: 0
tx20_cqes: 2271447623
tx20_wake: 0
tx20_cqe_err: 0
tx21_packets: 4595062285
tx21_bytes: 5958671993467
tx21_tso_packets: 821936215
tx21_tso_bytes: 4187977870684
tx21_tso_inner_packets: 0
tx21_tso_inner_bytes: 0
tx21_csum_partial: 1143339787
tx21_csum_partial_inner: 0
tx21_added_vlan_packets: 2457167412
tx21_nop: 65697763
tx21_csum_none: 1313827625
tx21_stopped: 0
tx21_dropped: 0
tx21_xmit_more: 23858345
tx21_recover: 0
tx21_cqes: 2433310348
tx21_wake: 0
tx21_cqe_err: 0
tx22_packets: 4664446513
tx22_bytes: 5931429292082
tx22_tso_packets: 814457881
tx22_tso_bytes: 4148607956533
tx22_tso_inner_packets: 0
tx22_tso_inner_bytes: 0
tx22_csum_partial: 1127284783
tx22_csum_partial_inner: 0
tx22_added_vlan_packets: 2548650146
tx22_nop: 66299909
tx22_csum_none: 1421365363
tx22_stopped: 0
tx22_dropped: 0
tx22_xmit_more: 23800911
tx22_recover: 0
tx22_cqes: 2524850415
tx22_wake: 0
tx22_cqe_err: 0
tx23_packets: 4416221747
tx23_bytes: 5721472587985
tx23_tso_packets: 823538520
tx23_tso_bytes: 4163520218617
tx23_tso_inner_packets: 0
tx23_tso_inner_bytes: 0
tx23_csum_partial: 1135996006
tx23_csum_partial_inner: 0
tx23_added_vlan_packets: 2292404120
tx23_nop: 62709432
tx23_csum_none: 1156408114
tx23_stopped: 0
tx23_dropped: 0
tx23_xmit_more: 22299889
tx23_recover: 0
tx23_cqes: 2270105487
tx23_wake: 0
tx23_cqe_err: 0
tx24_packets: 4420014824
tx24_bytes: 5740767318521
tx24_tso_packets: 820838072
tx24_tso_bytes: 4183722948422
tx24_tso_inner_packets: 0
tx24_tso_inner_bytes: 0
tx24_csum_partial: 1138070059
tx24_csum_partial_inner: 0
tx24_added_vlan_packets: 2289043946
tx24_nop: 62797341
tx24_csum_none: 1150973887
tx24_stopped: 0
tx24_dropped: 0
tx24_xmit_more: 22744690
tx24_recover: 0
tx24_cqes: 2266300568
tx24_wake: 0
tx24_cqe_err: 0
tx25_packets: 4413225545
tx25_bytes: 5716162617155
tx25_tso_packets: 808274341
tx25_tso_bytes: 4138408857714
tx25_tso_inner_packets: 0
tx25_tso_inner_bytes: 0
tx25_csum_partial: 1134587898
tx25_csum_partial_inner: 0
tx25_added_vlan_packets: 2297149310
tx25_nop: 62958238
tx25_csum_none: 1162561412
tx25_stopped: 0
tx25_dropped: 0
tx25_xmit_more: 24463552
tx25_recover: 0
tx25_cqes: 2272686971
tx25_wake: 0
tx25_cqe_err: 0
tx26_packets: 4524907591
tx26_bytes: 5865394280699
tx26_tso_packets: 807270022
tx26_tso_bytes: 4148754705317
tx26_tso_inner_packets: 0
tx26_tso_inner_bytes: 0
tx26_csum_partial: 1130306933
tx26_csum_partial_inner: 0
tx26_added_vlan_packets: 2402682460
tx26_nop: 64474322
tx26_csum_none: 1272375527
tx26_stopped: 1
tx26_dropped: 0
tx26_xmit_more: 23316186
tx26_recover: 0
tx26_cqes: 2379367502
tx26_wake: 1
tx26_cqe_err: 0
tx27_packets: 4376114969
tx27_bytes: 5683551238304
tx27_tso_packets: 809344829
tx27_tso_bytes: 4124331859270
tx27_tso_inner_packets: 0
tx27_tso_inner_bytes: 0
tx27_csum_partial: 1124954937
tx27_csum_partial_inner: 0
tx27_added_vlan_packets: 2267871300
tx27_nop: 62213214
tx27_csum_none: 1142916363
tx27_stopped: 0
tx27_dropped: 0
tx27_xmit_more: 23369974
tx27_recover: 0
tx27_cqes: 2244502686
tx27_wake: 0
tx27_cqe_err: 0
tx28_packets: 3
tx28_bytes: 266
tx28_tso_packets: 0
tx28_tso_bytes: 0
tx28_tso_inner_packets: 0
tx28_tso_inner_bytes: 0
tx28_csum_partial: 0
tx28_csum_partial_inner: 0
tx28_added_vlan_packets: 0
tx28_nop: 0
tx28_csum_none: 3
tx28_stopped: 0
tx28_dropped: 0
tx28_xmit_more: 0
tx28_recover: 0
tx28_cqes: 3
tx28_wake: 0
tx28_cqe_err: 0
tx29_packets: 0
tx29_bytes: 0
tx29_tso_packets: 0
tx29_tso_bytes: 0
tx29_tso_inner_packets: 0
tx29_tso_inner_bytes: 0
tx29_csum_partial: 0
tx29_csum_partial_inner: 0
tx29_added_vlan_packets: 0
tx29_nop: 0
tx29_csum_none: 0
tx29_stopped: 0
tx29_dropped: 0
tx29_xmit_more: 0
tx29_recover: 0
tx29_cqes: 0
tx29_wake: 0
tx29_cqe_err: 0
tx30_packets: 0
tx30_bytes: 0
tx30_tso_packets: 0
tx30_tso_bytes: 0
tx30_tso_inner_packets: 0
tx30_tso_inner_bytes: 0
tx30_csum_partial: 0
tx30_csum_partial_inner: 0
tx30_added_vlan_packets: 0
tx30_nop: 0
tx30_csum_none: 0
tx30_stopped: 0
tx30_dropped: 0
tx30_xmit_more: 0
tx30_recover: 0
tx30_cqes: 0
tx30_wake: 0
tx30_cqe_err: 0
tx31_packets: 0
tx31_bytes: 0
tx31_tso_packets: 0
tx31_tso_bytes: 0
tx31_tso_inner_packets: 0
tx31_tso_inner_bytes: 0
tx31_csum_partial: 0
tx31_csum_partial_inner: 0
tx31_added_vlan_packets: 0
tx31_nop: 0
tx31_csum_none: 0
tx31_stopped: 0
tx31_dropped: 0
tx31_xmit_more: 0
tx31_recover: 0
tx31_cqes: 0
tx31_wake: 0
tx31_cqe_err: 0
tx32_packets: 0
tx32_bytes: 0
tx32_tso_packets: 0
tx32_tso_bytes: 0
tx32_tso_inner_packets: 0
tx32_tso_inner_bytes: 0
tx32_csum_partial: 0
tx32_csum_partial_inner: 0
tx32_added_vlan_packets: 0
tx32_nop: 0
tx32_csum_none: 0
tx32_stopped: 0
tx32_dropped: 0
tx32_xmit_more: 0
tx32_recover: 0
tx32_cqes: 0
tx32_wake: 0
tx32_cqe_err: 0
tx33_packets: 0
tx33_bytes: 0
tx33_tso_packets: 0
tx33_tso_bytes: 0
tx33_tso_inner_packets: 0
tx33_tso_inner_bytes: 0
tx33_csum_partial: 0
tx33_csum_partial_inner: 0
tx33_added_vlan_packets: 0
tx33_nop: 0
tx33_csum_none: 0
tx33_stopped: 0
tx33_dropped: 0
tx33_xmit_more: 0
tx33_recover: 0
tx33_cqes: 0
tx33_wake: 0
tx33_cqe_err: 0
tx34_packets: 0
tx34_bytes: 0
tx34_tso_packets: 0
tx34_tso_bytes: 0
tx34_tso_inner_packets: 0
tx34_tso_inner_bytes: 0
tx34_csum_partial: 0
tx34_csum_partial_inner: 0
tx34_added_vlan_packets: 0
tx34_nop: 0
tx34_csum_none: 0
tx34_stopped: 0
tx34_dropped: 0
tx34_xmit_more: 0
tx34_recover: 0
tx34_cqes: 0
tx34_wake: 0
tx34_cqe_err: 0
tx35_packets: 0
tx35_bytes: 0
tx35_tso_packets: 0
tx35_tso_bytes: 0
tx35_tso_inner_packets: 0
tx35_tso_inner_bytes: 0
tx35_csum_partial: 0
tx35_csum_partial_inner: 0
tx35_added_vlan_packets: 0
tx35_nop: 0
tx35_csum_none: 0
tx35_stopped: 0
tx35_dropped: 0
tx35_xmit_more: 0
tx35_recover: 0
tx35_cqes: 0
tx35_wake: 0
tx35_cqe_err: 0
tx36_packets: 0
tx36_bytes: 0
tx36_tso_packets: 0
tx36_tso_bytes: 0
tx36_tso_inner_packets: 0
tx36_tso_inner_bytes: 0
tx36_csum_partial: 0
tx36_csum_partial_inner: 0
tx36_added_vlan_packets: 0
tx36_nop: 0
tx36_csum_none: 0
tx36_stopped: 0
tx36_dropped: 0
tx36_xmit_more: 0
tx36_recover: 0
tx36_cqes: 0
tx36_wake: 0
tx36_cqe_err: 0
tx37_packets: 0
tx37_bytes: 0
tx37_tso_packets: 0
tx37_tso_bytes: 0
tx37_tso_inner_packets: 0
tx37_tso_inner_bytes: 0
tx37_csum_partial: 0
tx37_csum_partial_inner: 0
tx37_added_vlan_packets: 0
tx37_nop: 0
tx37_csum_none: 0
tx37_stopped: 0
tx37_dropped: 0
tx37_xmit_more: 0
tx37_recover: 0
tx37_cqes: 0
tx37_wake: 0
tx37_cqe_err: 0
tx38_packets: 0
tx38_bytes: 0
tx38_tso_packets: 0
tx38_tso_bytes: 0
tx38_tso_inner_packets: 0
tx38_tso_inner_bytes: 0
tx38_csum_partial: 0
tx38_csum_partial_inner: 0
tx38_added_vlan_packets: 0
tx38_nop: 0
tx38_csum_none: 0
tx38_stopped: 0
tx38_dropped: 0
tx38_xmit_more: 0
tx38_recover: 0
tx38_cqes: 0
tx38_wake: 0
tx38_cqe_err: 0
tx39_packets: 0
tx39_bytes: 0
tx39_tso_packets: 0
tx39_tso_bytes: 0
tx39_tso_inner_packets: 0
tx39_tso_inner_bytes: 0
tx39_csum_partial: 0
tx39_csum_partial_inner: 0
tx39_added_vlan_packets: 0
tx39_nop: 0
tx39_csum_none: 0
tx39_stopped: 0
tx39_dropped: 0
tx39_xmit_more: 0
tx39_recover: 0
tx39_cqes: 0
tx39_wake: 0
tx39_cqe_err: 0
tx40_packets: 0
tx40_bytes: 0
tx40_tso_packets: 0
tx40_tso_bytes: 0
tx40_tso_inner_packets: 0
tx40_tso_inner_bytes: 0
tx40_csum_partial: 0
tx40_csum_partial_inner: 0
tx40_added_vlan_packets: 0
tx40_nop: 0
tx40_csum_none: 0
tx40_stopped: 0
tx40_dropped: 0
tx40_xmit_more: 0
tx40_recover: 0
tx40_cqes: 0
tx40_wake: 0
tx40_cqe_err: 0
tx41_packets: 0
tx41_bytes: 0
tx41_tso_packets: 0
tx41_tso_bytes: 0
tx41_tso_inner_packets: 0
tx41_tso_inner_bytes: 0
tx41_csum_partial: 0
tx41_csum_partial_inner: 0
tx41_added_vlan_packets: 0
tx41_nop: 0
tx41_csum_none: 0
tx41_stopped: 0
tx41_dropped: 0
tx41_xmit_more: 0
tx41_recover: 0
tx41_cqes: 0
tx41_wake: 0
tx41_cqe_err: 0
tx42_packets: 0
tx42_bytes: 0
tx42_tso_packets: 0
tx42_tso_bytes: 0
tx42_tso_inner_packets: 0
tx42_tso_inner_bytes: 0
tx42_csum_partial: 0
tx42_csum_partial_inner: 0
tx42_added_vlan_packets: 0
tx42_nop: 0
tx42_csum_none: 0
tx42_stopped: 0
tx42_dropped: 0
tx42_xmit_more: 0
tx42_recover: 0
tx42_cqes: 0
tx42_wake: 0
tx42_cqe_err: 0
tx43_packets: 0
tx43_bytes: 0
tx43_tso_packets: 0
tx43_tso_bytes: 0
tx43_tso_inner_packets: 0
tx43_tso_inner_bytes: 0
tx43_csum_partial: 0
tx43_csum_partial_inner: 0
tx43_added_vlan_packets: 0
tx43_nop: 0
tx43_csum_none: 0
tx43_stopped: 0
tx43_dropped: 0
tx43_xmit_more: 0
tx43_recover: 0
tx43_cqes: 0
tx43_wake: 0
tx43_cqe_err: 0
tx44_packets: 0
tx44_bytes: 0
tx44_tso_packets: 0
tx44_tso_bytes: 0
tx44_tso_inner_packets: 0
tx44_tso_inner_bytes: 0
tx44_csum_partial: 0
tx44_csum_partial_inner: 0
tx44_added_vlan_packets: 0
tx44_nop: 0
tx44_csum_none: 0
tx44_stopped: 0
tx44_dropped: 0
tx44_xmit_more: 0
tx44_recover: 0
tx44_cqes: 0
tx44_wake: 0
tx44_cqe_err: 0
tx45_packets: 0
tx45_bytes: 0
tx45_tso_packets: 0
tx45_tso_bytes: 0
tx45_tso_inner_packets: 0
tx45_tso_inner_bytes: 0
tx45_csum_partial: 0
tx45_csum_partial_inner: 0
tx45_added_vlan_packets: 0
tx45_nop: 0
tx45_csum_none: 0
tx45_stopped: 0
tx45_dropped: 0
tx45_xmit_more: 0
tx45_recover: 0
tx45_cqes: 0
tx45_wake: 0
tx45_cqe_err: 0
tx46_packets: 0
tx46_bytes: 0
tx46_tso_packets: 0
tx46_tso_bytes: 0
tx46_tso_inner_packets: 0
tx46_tso_inner_bytes: 0
tx46_csum_partial: 0
tx46_csum_partial_inner: 0
tx46_added_vlan_packets: 0
tx46_nop: 0
tx46_csum_none: 0
tx46_stopped: 0
tx46_dropped: 0
tx46_xmit_more: 0
tx46_recover: 0
tx46_cqes: 0
tx46_wake: 0
tx46_cqe_err: 0
tx47_packets: 0
tx47_bytes: 0
tx47_tso_packets: 0
tx47_tso_bytes: 0
tx47_tso_inner_packets: 0
tx47_tso_inner_bytes: 0
tx47_csum_partial: 0
tx47_csum_partial_inner: 0
tx47_added_vlan_packets: 0
tx47_nop: 0
tx47_csum_none: 0
tx47_stopped: 0
tx47_dropped: 0
tx47_xmit_more: 0
tx47_recover: 0
tx47_cqes: 0
tx47_wake: 0
tx47_cqe_err: 0
tx48_packets: 0
tx48_bytes: 0
tx48_tso_packets: 0
tx48_tso_bytes: 0
tx48_tso_inner_packets: 0
tx48_tso_inner_bytes: 0
tx48_csum_partial: 0
tx48_csum_partial_inner: 0
tx48_added_vlan_packets: 0
tx48_nop: 0
tx48_csum_none: 0
tx48_stopped: 0
tx48_dropped: 0
tx48_xmit_more: 0
tx48_recover: 0
tx48_cqes: 0
tx48_wake: 0
tx48_cqe_err: 0
tx49_packets: 0
tx49_bytes: 0
tx49_tso_packets: 0
tx49_tso_bytes: 0
tx49_tso_inner_packets: 0
tx49_tso_inner_bytes: 0
tx49_csum_partial: 0
tx49_csum_partial_inner: 0
tx49_added_vlan_packets: 0
tx49_nop: 0
tx49_csum_none: 0
tx49_stopped: 0
tx49_dropped: 0
tx49_xmit_more: 0
tx49_recover: 0
tx49_cqes: 0
tx49_wake: 0
tx49_cqe_err: 0
tx50_packets: 0
tx50_bytes: 0
tx50_tso_packets: 0
tx50_tso_bytes: 0
tx50_tso_inner_packets: 0
tx50_tso_inner_bytes: 0
tx50_csum_partial: 0
tx50_csum_partial_inner: 0
tx50_added_vlan_packets: 0
tx50_nop: 0
tx50_csum_none: 0
tx50_stopped: 0
tx50_dropped: 0
tx50_xmit_more: 0
tx50_recover: 0
tx50_cqes: 0
tx50_wake: 0
tx50_cqe_err: 0
tx51_packets: 0
tx51_bytes: 0
tx51_tso_packets: 0
tx51_tso_bytes: 0
tx51_tso_inner_packets: 0
tx51_tso_inner_bytes: 0
tx51_csum_partial: 0
tx51_csum_partial_inner: 0
tx51_added_vlan_packets: 0
tx51_nop: 0
tx51_csum_none: 0
tx51_stopped: 0
tx51_dropped: 0
tx51_xmit_more: 0
tx51_recover: 0
tx51_cqes: 0
tx51_wake: 0
tx51_cqe_err: 0
tx52_packets: 0
tx52_bytes: 0
tx52_tso_packets: 0
tx52_tso_bytes: 0
tx52_tso_inner_packets: 0
tx52_tso_inner_bytes: 0
tx52_csum_partial: 0
tx52_csum_partial_inner: 0
tx52_added_vlan_packets: 0
tx52_nop: 0
tx52_csum_none: 0
tx52_stopped: 0
tx52_dropped: 0
tx52_xmit_more: 0
tx52_recover: 0
tx52_cqes: 0
tx52_wake: 0
tx52_cqe_err: 0
tx53_packets: 0
tx53_bytes: 0
tx53_tso_packets: 0
tx53_tso_bytes: 0
tx53_tso_inner_packets: 0
tx53_tso_inner_bytes: 0
tx53_csum_partial: 0
tx53_csum_partial_inner: 0
tx53_added_vlan_packets: 0
tx53_nop: 0
tx53_csum_none: 0
tx53_stopped: 0
tx53_dropped: 0
tx53_xmit_more: 0
tx53_recover: 0
tx53_cqes: 0
tx53_wake: 0
tx53_cqe_err: 0
tx54_packets: 0
tx54_bytes: 0
tx54_tso_packets: 0
tx54_tso_bytes: 0
tx54_tso_inner_packets: 0
tx54_tso_inner_bytes: 0
tx54_csum_partial: 0
tx54_csum_partial_inner: 0
tx54_added_vlan_packets: 0
tx54_nop: 0
tx54_csum_none: 0
tx54_stopped: 0
tx54_dropped: 0
tx54_xmit_more: 0
tx54_recover: 0
tx54_cqes: 0
tx54_wake: 0
tx54_cqe_err: 0
tx55_packets: 0
tx55_bytes: 0
tx55_tso_packets: 0
tx55_tso_bytes: 0
tx55_tso_inner_packets: 0
tx55_tso_inner_bytes: 0
tx55_csum_partial: 0
tx55_csum_partial_inner: 0
tx55_added_vlan_packets: 0
tx55_nop: 0
tx55_csum_none: 0
tx55_stopped: 0
tx55_dropped: 0
tx55_xmit_more: 0
tx55_recover: 0
tx55_cqes: 0
tx55_wake: 0
tx55_cqe_err: 0
tx0_xdp_xmit: 0
tx0_xdp_full: 0
tx0_xdp_err: 0
tx0_xdp_cqes: 0
tx1_xdp_xmit: 0
tx1_xdp_full: 0
tx1_xdp_err: 0
tx1_xdp_cqes: 0
tx2_xdp_xmit: 0
tx2_xdp_full: 0
tx2_xdp_err: 0
tx2_xdp_cqes: 0
tx3_xdp_xmit: 0
tx3_xdp_full: 0
tx3_xdp_err: 0
tx3_xdp_cqes: 0
tx4_xdp_xmit: 0
tx4_xdp_full: 0
tx4_xdp_err: 0
tx4_xdp_cqes: 0
tx5_xdp_xmit: 0
tx5_xdp_full: 0
tx5_xdp_err: 0
tx5_xdp_cqes: 0
tx6_xdp_xmit: 0
tx6_xdp_full: 0
tx6_xdp_err: 0
tx6_xdp_cqes: 0
tx7_xdp_xmit: 0
tx7_xdp_full: 0
tx7_xdp_err: 0
tx7_xdp_cqes: 0
tx8_xdp_xmit: 0
tx8_xdp_full: 0
tx8_xdp_err: 0
tx8_xdp_cqes: 0
tx9_xdp_xmit: 0
tx9_xdp_full: 0
tx9_xdp_err: 0
tx9_xdp_cqes: 0
tx10_xdp_xmit: 0
tx10_xdp_full: 0
tx10_xdp_err: 0
tx10_xdp_cqes: 0
tx11_xdp_xmit: 0
tx11_xdp_full: 0
tx11_xdp_err: 0
tx11_xdp_cqes: 0
tx12_xdp_xmit: 0
tx12_xdp_full: 0
tx12_xdp_err: 0
tx12_xdp_cqes: 0
tx13_xdp_xmit: 0
tx13_xdp_full: 0
tx13_xdp_err: 0
tx13_xdp_cqes: 0
tx14_xdp_xmit: 0
tx14_xdp_full: 0
tx14_xdp_err: 0
tx14_xdp_cqes: 0
tx15_xdp_xmit: 0
tx15_xdp_full: 0
tx15_xdp_err: 0
tx15_xdp_cqes: 0
tx16_xdp_xmit: 0
tx16_xdp_full: 0
tx16_xdp_err: 0
tx16_xdp_cqes: 0
tx17_xdp_xmit: 0
tx17_xdp_full: 0
tx17_xdp_err: 0
tx17_xdp_cqes: 0
tx18_xdp_xmit: 0
tx18_xdp_full: 0
tx18_xdp_err: 0
tx18_xdp_cqes: 0
tx19_xdp_xmit: 0
tx19_xdp_full: 0
tx19_xdp_err: 0
tx19_xdp_cqes: 0
tx20_xdp_xmit: 0
tx20_xdp_full: 0
tx20_xdp_err: 0
tx20_xdp_cqes: 0
tx21_xdp_xmit: 0
tx21_xdp_full: 0
tx21_xdp_err: 0
tx21_xdp_cqes: 0
tx22_xdp_xmit: 0
tx22_xdp_full: 0
tx22_xdp_err: 0
tx22_xdp_cqes: 0
tx23_xdp_xmit: 0
tx23_xdp_full: 0
tx23_xdp_err: 0
tx23_xdp_cqes: 0
tx24_xdp_xmit: 0
tx24_xdp_full: 0
tx24_xdp_err: 0
tx24_xdp_cqes: 0
tx25_xdp_xmit: 0
tx25_xdp_full: 0
tx25_xdp_err: 0
tx25_xdp_cqes: 0
tx26_xdp_xmit: 0
tx26_xdp_full: 0
tx26_xdp_err: 0
tx26_xdp_cqes: 0
tx27_xdp_xmit: 0
tx27_xdp_full: 0
tx27_xdp_err: 0
tx27_xdp_cqes: 0
tx28_xdp_xmit: 0
tx28_xdp_full: 0
tx28_xdp_err: 0
tx28_xdp_cqes: 0
tx29_xdp_xmit: 0
tx29_xdp_full: 0
tx29_xdp_err: 0
tx29_xdp_cqes: 0
tx30_xdp_xmit: 0
tx30_xdp_full: 0
tx30_xdp_err: 0
tx30_xdp_cqes: 0
tx31_xdp_xmit: 0
tx31_xdp_full: 0
tx31_xdp_err: 0
tx31_xdp_cqes: 0
tx32_xdp_xmit: 0
tx32_xdp_full: 0
tx32_xdp_err: 0
tx32_xdp_cqes: 0
tx33_xdp_xmit: 0
tx33_xdp_full: 0
tx33_xdp_err: 0
tx33_xdp_cqes: 0
tx34_xdp_xmit: 0
tx34_xdp_full: 0
tx34_xdp_err: 0
tx34_xdp_cqes: 0
tx35_xdp_xmit: 0
tx35_xdp_full: 0
tx35_xdp_err: 0
tx35_xdp_cqes: 0
tx36_xdp_xmit: 0
tx36_xdp_full: 0
tx36_xdp_err: 0
tx36_xdp_cqes: 0
tx37_xdp_xmit: 0
tx37_xdp_full: 0
tx37_xdp_err: 0
tx37_xdp_cqes: 0
tx38_xdp_xmit: 0
tx38_xdp_full: 0
tx38_xdp_err: 0
tx38_xdp_cqes: 0
tx39_xdp_xmit: 0
tx39_xdp_full: 0
tx39_xdp_err: 0
tx39_xdp_cqes: 0
tx40_xdp_xmit: 0
tx40_xdp_full: 0
tx40_xdp_err: 0
tx40_xdp_cqes: 0
tx41_xdp_xmit: 0
tx41_xdp_full: 0
tx41_xdp_err: 0
tx41_xdp_cqes: 0
tx42_xdp_xmit: 0
tx42_xdp_full: 0
tx42_xdp_err: 0
tx42_xdp_cqes: 0
tx43_xdp_xmit: 0
tx43_xdp_full: 0
tx43_xdp_err: 0
tx43_xdp_cqes: 0
tx44_xdp_xmit: 0
tx44_xdp_full: 0
tx44_xdp_err: 0
tx44_xdp_cqes: 0
tx45_xdp_xmit: 0
tx45_xdp_full: 0
tx45_xdp_err: 0
tx45_xdp_cqes: 0
tx46_xdp_xmit: 0
tx46_xdp_full: 0
tx46_xdp_err: 0
tx46_xdp_cqes: 0
tx47_xdp_xmit: 0
tx47_xdp_full: 0
tx47_xdp_err: 0
tx47_xdp_cqes: 0
tx48_xdp_xmit: 0
tx48_xdp_full: 0
tx48_xdp_err: 0
tx48_xdp_cqes: 0
tx49_xdp_xmit: 0
tx49_xdp_full: 0
tx49_xdp_err: 0
tx49_xdp_cqes: 0
tx50_xdp_xmit: 0
tx50_xdp_full: 0
tx50_xdp_err: 0
tx50_xdp_cqes: 0
tx51_xdp_xmit: 0
tx51_xdp_full: 0
tx51_xdp_err: 0
tx51_xdp_cqes: 0
tx52_xdp_xmit: 0
tx52_xdp_full: 0
tx52_xdp_err: 0
tx52_xdp_cqes: 0
tx53_xdp_xmit: 0
tx53_xdp_full: 0
tx53_xdp_err: 0
tx53_xdp_cqes: 0
tx54_xdp_xmit: 0
tx54_xdp_full: 0
tx54_xdp_err: 0
tx54_xdp_cqes: 0
tx55_xdp_xmit: 0
tx55_xdp_full: 0
tx55_xdp_err: 0
tx55_xdp_cqes: 0
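(Aside for anyone replaying this: absolute `ethtool -S` counters like the dump above and below are hard to read directly; what matters for spotting the drops is the per-second delta between two snapshots, e.g. of `rx_discards_phy` and `rx_out_of_buffer`. A minimal sketch of that diffing, assuming plain `name: value` output; the parser and the sample values here are mine, not taken from this dump:)

```python
# Hypothetical helper: parse `ethtool -S <iface>` text output into a dict,
# then diff two snapshots taken some seconds apart to get per-second rates.

def parse_ethtool_stats(text):
    """Turn 'name: value' lines from `ethtool -S` into {name: int}."""
    stats = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip the "NIC statistics:" header and anything non-numeric.
        if ':' not in line or line.endswith('statistics:'):
            continue
        name, _, value = line.partition(':')
        value = value.strip()
        if value.lstrip('-').isdigit():
            stats[name.strip()] = int(value)
    return stats

def counter_rates(before, after, interval_s):
    """Per-second rate of every counter present in both snapshots."""
    return {k: (after[k] - before[k]) / interval_s
            for k in after if k in before}

# Example with two fabricated snapshots taken 1 second apart:
snap1 = "NIC statistics:\n    rx_discards_phy: 920161423\n    rx_out_of_buffer: 2791690\n"
snap2 = "NIC statistics:\n    rx_discards_phy: 920165423\n    rx_out_of_buffer: 2791890\n"
rates = counter_rates(parse_ethtool_stats(snap1), parse_ethtool_stats(snap2), 1.0)
# rates["rx_discards_phy"] -> 4000.0 drops/s in this made-up example
```

In practice you would capture the two snapshots with `ethtool -S enp175s0f0` a fixed interval apart; a sustained nonzero rate on `rx_discards_phy` points at the NIC dropping on ingress before the host ever sees the packets.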
ethtool -S enp175s0f0
NIC statistics:
rx_packets: 141574897253
rx_bytes: 184445040406258
tx_packets: 172569543894
tx_bytes: 99486882076365
tx_tso_packets: 9367664195
tx_tso_bytes: 56435233992948
tx_tso_inner_packets: 0
tx_tso_inner_bytes: 0
tx_added_vlan_packets: 141297671626
tx_nop: 2102916272
rx_lro_packets: 0
rx_lro_bytes: 0
rx_ecn_mark: 0
rx_removed_vlan_packets: 141574897252
rx_csum_unnecessary: 0
rx_csum_none: 23135854
rx_csum_complete: 141551761398
rx_csum_unnecessary_inner: 0
rx_xdp_drop: 0
rx_xdp_redirect: 0
rx_xdp_tx_xmit: 0
rx_xdp_tx_full: 0
rx_xdp_tx_err: 0
rx_xdp_tx_cqe: 0
tx_csum_none: 127934791664
tx_csum_partial: 13362879974
tx_csum_partial_inner: 0
tx_queue_stopped: 232561
tx_queue_dropped: 0
tx_xmit_more: 1266021946
tx_recover: 0
tx_cqes: 140031716469
tx_queue_wake: 232561
tx_udp_seg_rem: 0
tx_cqe_err: 0
tx_xdp_xmit: 0
tx_xdp_full: 0
tx_xdp_err: 0
tx_xdp_cqes: 0
rx_wqe_err: 0
rx_mpwqe_filler_cqes: 0
rx_mpwqe_filler_strides: 0
rx_buff_alloc_err: 0
rx_cqe_compress_blks: 0
rx_cqe_compress_pkts: 0
rx_page_reuse: 0
rx_cache_reuse: 16625975793
rx_cache_full: 54161465914
rx_cache_empty: 258048
rx_cache_busy: 54161472735
rx_cache_waive: 0
rx_congst_umr: 0
rx_arfs_err: 0
ch_events: 40572621887
ch_poll: 40885650979
ch_arm: 40429276692
ch_aff_change: 0
ch_eq_rearm: 0
rx_out_of_buffer: 2791690
rx_if_down_packets: 74
rx_vport_unicast_packets: 141843476308
rx_vport_unicast_bytes: 185421265403318
tx_vport_unicast_packets: 172569484005
tx_vport_unicast_bytes: 100019940094298
rx_vport_multicast_packets: 85122935
rx_vport_multicast_bytes: 5761316431
tx_vport_multicast_packets: 6452
tx_vport_multicast_bytes: 643540
rx_vport_broadcast_packets: 22423624
rx_vport_broadcast_bytes: 1390127090
tx_vport_broadcast_packets: 22024
tx_vport_broadcast_bytes: 1321440
rx_vport_rdma_unicast_packets: 0
rx_vport_rdma_unicast_bytes: 0
tx_vport_rdma_unicast_packets: 0
tx_vport_rdma_unicast_bytes: 0
rx_vport_rdma_multicast_packets: 0
rx_vport_rdma_multicast_bytes: 0
tx_vport_rdma_multicast_packets: 0
tx_vport_rdma_multicast_bytes: 0
tx_packets_phy: 172569501577
rx_packets_phy: 142871314588
rx_crc_errors_phy: 0
tx_bytes_phy: 100710212814151
rx_bytes_phy: 187209224289564
tx_multicast_phy: 6452
tx_broadcast_phy: 22024
rx_multicast_phy: 85122933
rx_broadcast_phy: 22423623
rx_in_range_len_errors_phy: 2
rx_out_of_range_len_phy: 0
rx_oversize_pkts_phy: 0
rx_symbol_err_phy: 0
tx_mac_control_phy: 0
rx_mac_control_phy: 0
rx_unsupported_op_phy: 0
rx_pause_ctrl_phy: 0
tx_pause_ctrl_phy: 0
rx_discards_phy: 920161423
tx_discards_phy: 0
tx_errors_phy: 0
rx_undersize_pkts_phy: 0
rx_fragments_phy: 0
rx_jabbers_phy: 0
rx_64_bytes_phy: 412006326
rx_65_to_127_bytes_phy: 11934371453
rx_128_to_255_bytes_phy: 3415281165
rx_256_to_511_bytes_phy: 2072955511
rx_512_to_1023_bytes_phy: 2415393005
rx_1024_to_1518_bytes_phy: 72182391608
rx_1519_to_2047_bytes_phy: 50438902587
rx_2048_to_4095_bytes_phy: 0
rx_4096_to_8191_bytes_phy: 0
rx_8192_to_10239_bytes_phy: 0
link_down_events_phy: 0
rx_pcs_symbol_err_phy: 0
rx_corrected_bits_phy: 0
rx_pci_signal_integrity: 0
tx_pci_signal_integrity: 48
rx_prio0_bytes: 186709842592642
rx_prio0_packets: 141481966007
tx_prio0_bytes: 100710171118138
tx_prio0_packets: 172569437949
rx_prio1_bytes: 492288152326
rx_prio1_packets: 385996045
tx_prio1_bytes: 0
tx_prio1_packets: 0
rx_prio2_bytes: 22119952
rx_prio2_packets: 70788
tx_prio2_bytes: 0
tx_prio2_packets: 0
rx_prio3_bytes: 546141102
rx_prio3_packets: 681608
tx_prio3_bytes: 0
tx_prio3_packets: 0
rx_prio4_bytes: 14665067
rx_prio4_packets: 29486
tx_prio4_bytes: 0
tx_prio4_packets: 0
rx_prio5_bytes: 158862504
rx_prio5_packets: 965307
tx_prio5_bytes: 0
tx_prio5_packets: 0
rx_prio6_bytes: 669337783
rx_prio6_packets: 1475775
tx_prio6_bytes: 0
tx_prio6_packets: 0
rx_prio7_bytes: 5623481349
rx_prio7_packets: 79926412
tx_prio7_bytes: 0
tx_prio7_packets: 0
module_unplug: 0
module_bus_stuck: 0
module_high_temp: 0
module_bad_shorted: 0
ch0_events: 1446162630
ch0_poll: 1463312972
ch0_arm: 1440728278
ch0_aff_change: 0
ch0_eq_rearm: 0
ch1_events: 1384301405
ch1_poll: 1399210915
ch1_arm: 1378636486
ch1_aff_change: 0
ch1_eq_rearm: 0
ch2_events: 1382788887
ch2_poll: 1397231470
ch2_arm: 1377058116
ch2_aff_change: 0
ch2_eq_rearm: 0
ch3_events: 1461956995
ch3_poll: 1475553146
ch3_arm: 1456571625
ch3_aff_change: 0
ch3_eq_rearm: 0
ch4_events: 1497359109
ch4_poll: 1511021037
ch4_arm: 1491733757
ch4_aff_change: 0
ch4_eq_rearm: 0
ch5_events: 1387736262
ch5_poll: 1400964615
ch5_arm: 1382382834
ch5_aff_change: 0
ch5_eq_rearm: 0
ch6_events: 1376772405
ch6_poll: 1390851449
ch6_arm: 1371551764
ch6_aff_change: 0
ch6_eq_rearm: 0
ch7_events: 1431271514
ch7_poll: 1445049729
ch7_arm: 1425753718
ch7_aff_change: 0
ch7_eq_rearm: 0
ch8_events: 1426976374
ch8_poll: 1439938692
ch8_arm: 1421392984
ch8_aff_change: 0
ch8_eq_rearm: 0
ch9_events: 1456160031
ch9_poll: 1468922870
ch9_arm: 1450930446
ch9_aff_change: 0
ch9_eq_rearm: 0
ch10_events: 1443640165
ch10_poll: 1456812203
ch10_arm: 1438425101
ch10_aff_change: 0
ch10_eq_rearm: 0
ch11_events: 1381104776
ch11_poll: 1393811057
ch11_arm: 1376059326
ch11_aff_change: 0
ch11_eq_rearm: 0
ch12_events: 1365223276
ch12_poll: 1378406059
ch12_arm: 1359950494
ch12_aff_change: 0
ch12_eq_rearm: 0
ch13_events: 1421622259
ch13_poll: 1434670996
ch13_arm: 1416241801
ch13_aff_change: 0
ch13_eq_rearm: 0
ch14_events: 1379084590
ch14_poll: 1392425015
ch14_arm: 1373675179
ch14_aff_change: 0
ch14_eq_rearm: 0
ch15_events: 1531217338
ch15_poll: 1543353833
ch15_arm: 1526350453
ch15_aff_change: 0
ch15_eq_rearm: 0
ch16_events: 1460469776
ch16_poll: 1467995928
ch16_arm: 1456010194
ch16_aff_change: 0
ch16_eq_rearm: 0
ch17_events: 1494067670
ch17_poll: 1500856680
ch17_arm: 1489232674
ch17_aff_change: 0
ch17_eq_rearm: 0
ch18_events: 1530126866
ch18_poll: 1537293620
ch18_arm: 1525476123
ch18_aff_change: 0
ch18_eq_rearm: 0
ch19_events: 1499526149
ch19_poll: 1506789309
ch19_arm: 1495161602
ch19_aff_change: 0
ch19_eq_rearm: 0
ch20_events: 1451479763
ch20_poll: 1459767921
ch20_arm: 1446360801
ch20_aff_change: 0
ch20_eq_rearm: 0
ch21_events: 1521413613
ch21_poll: 1529345146
ch21_arm: 1517229314
ch21_aff_change: 0
ch21_eq_rearm: 0
ch22_events: 1471950045
ch22_poll: 1479746764
ch22_arm: 1467681629
ch22_aff_change: 0
ch22_eq_rearm: 0
ch23_events: 1502968393
ch23_poll: 1510419909
ch23_arm: 1498168438
ch23_aff_change: 0
ch23_eq_rearm: 0
ch24_events: 1473451639
ch24_poll: 1482606899
ch24_arm: 1468212489
ch24_aff_change: 0
ch24_eq_rearm: 0
ch25_events: 1440399182
ch25_poll: 1448897475
ch25_arm: 1435044786
ch25_aff_change: 0
ch25_eq_rearm: 0
ch26_events: 1436831565
ch26_poll: 1445485731
ch26_arm: 1431827527
ch26_aff_change: 0
ch26_eq_rearm: 0
ch27_events: 1516560621
ch27_poll: 1524911010
ch27_arm: 1511430164
ch27_aff_change: 0
ch27_eq_rearm: 0
ch28_events: 4
ch28_poll: 4
ch28_arm: 4
ch28_aff_change: 0
ch28_eq_rearm: 0
ch29_events: 6
ch29_poll: 6
ch29_arm: 6
ch29_aff_change: 0
ch29_eq_rearm: 0
ch30_events: 4
ch30_poll: 4
ch30_arm: 4
ch30_aff_change: 0
ch30_eq_rearm: 0
ch31_events: 4
ch31_poll: 4
ch31_arm: 4
ch31_aff_change: 0
ch31_eq_rearm: 0
ch32_events: 4
ch32_poll: 4
ch32_arm: 4
ch32_aff_change: 0
ch32_eq_rearm: 0
ch33_events: 4
ch33_poll: 4
ch33_arm: 4
ch33_aff_change: 0
ch33_eq_rearm: 0
ch34_events: 4
ch34_poll: 4
ch34_arm: 4
ch34_aff_change: 0
ch34_eq_rearm: 0
ch35_events: 4
ch35_poll: 4
ch35_arm: 4
ch35_aff_change: 0
ch35_eq_rearm: 0
ch36_events: 4
ch36_poll: 4
ch36_arm: 4
ch36_aff_change: 0
ch36_eq_rearm: 0
ch37_events: 4
ch37_poll: 4
ch37_arm: 4
ch37_aff_change: 0
ch37_eq_rearm: 0
ch38_events: 4
ch38_poll: 4
ch38_arm: 4
ch38_aff_change: 0
ch38_eq_rearm: 0
ch39_events: 4
ch39_poll: 4
ch39_arm: 4
ch39_aff_change: 0
ch39_eq_rearm: 0
ch40_events: 4
ch40_poll: 4
ch40_arm: 4
ch40_aff_change: 0
ch40_eq_rearm: 0
ch41_events: 4
ch41_poll: 4
ch41_arm: 4
ch41_aff_change: 0
ch41_eq_rearm: 0
ch42_events: 4
ch42_poll: 4
ch42_arm: 4
ch42_aff_change: 0
ch42_eq_rearm: 0
ch43_events: 4
ch43_poll: 4
ch43_arm: 4
ch43_aff_change: 0
ch43_eq_rearm: 0
ch44_events: 4
ch44_poll: 4
ch44_arm: 4
ch44_aff_change: 0
ch44_eq_rearm: 0
ch45_events: 4
ch45_poll: 4
ch45_arm: 4
ch45_aff_change: 0
ch45_eq_rearm: 0
ch46_events: 4
ch46_poll: 4
ch46_arm: 4
ch46_aff_change: 0
ch46_eq_rearm: 0
ch47_events: 4
ch47_poll: 4
ch47_arm: 4
ch47_aff_change: 0
ch47_eq_rearm: 0
ch48_events: 4
ch48_poll: 4
ch48_arm: 4
ch48_aff_change: 0
ch48_eq_rearm: 0
ch49_events: 4
ch49_poll: 4
ch49_arm: 4
ch49_aff_change: 0
ch49_eq_rearm: 0
ch50_events: 4
ch50_poll: 4
ch50_arm: 4
ch50_aff_change: 0
ch50_eq_rearm: 0
ch51_events: 4
ch51_poll: 4
ch51_arm: 4
ch51_aff_change: 0
ch51_eq_rearm: 0
ch52_events: 4
ch52_poll: 4
ch52_arm: 4
ch52_aff_change: 0
ch52_eq_rearm: 0
ch53_events: 4
ch53_poll: 4
ch53_arm: 4
ch53_aff_change: 0
ch53_eq_rearm: 0
ch54_events: 4
ch54_poll: 4
ch54_arm: 4
ch54_aff_change: 0
ch54_eq_rearm: 0
ch55_events: 4
ch55_poll: 4
ch55_arm: 4
ch55_aff_change: 0
ch55_eq_rearm: 0
rx0_packets: 5861448653
rx0_bytes: 7389128595728
rx0_csum_complete: 5838312798
rx0_csum_unnecessary: 0
rx0_csum_unnecessary_inner: 0
rx0_csum_none: 23135855
rx0_xdp_drop: 0
rx0_xdp_redirect: 0
rx0_lro_packets: 0
rx0_lro_bytes: 0
rx0_ecn_mark: 0
rx0_removed_vlan_packets: 5861448653
rx0_wqe_err: 0
rx0_mpwqe_filler_cqes: 0
rx0_mpwqe_filler_strides: 0
rx0_buff_alloc_err: 0
rx0_cqe_compress_blks: 0
rx0_cqe_compress_pkts: 0
rx0_page_reuse: 0
rx0_cache_reuse: 2559
rx0_cache_full: 2930721512
rx0_cache_empty: 6656
rx0_cache_busy: 2930721765
rx0_cache_waive: 0
rx0_congst_umr: 0
rx0_arfs_err: 0
rx0_xdp_tx_xmit: 0
rx0_xdp_tx_full: 0
rx0_xdp_tx_err: 0
rx0_xdp_tx_cqes: 0
rx1_packets: 5550585106
rx1_bytes: 7255635262803
rx1_csum_complete: 5550585106
rx1_csum_unnecessary: 0
rx1_csum_unnecessary_inner: 0
rx1_csum_none: 0
rx1_xdp_drop: 0
rx1_xdp_redirect: 0
rx1_lro_packets: 0
rx1_lro_bytes: 0
rx1_ecn_mark: 0
rx1_removed_vlan_packets: 5550585106
rx1_wqe_err: 0
rx1_mpwqe_filler_cqes: 0
rx1_mpwqe_filler_strides: 0
rx1_buff_alloc_err: 0
rx1_cqe_compress_blks: 0
rx1_cqe_compress_pkts: 0
rx1_page_reuse: 0
rx1_cache_reuse: 2918845
rx1_cache_full: 2772373453
rx1_cache_empty: 6656
rx1_cache_busy: 2772373707
rx1_cache_waive: 0
rx1_congst_umr: 0
rx1_arfs_err: 0
rx1_xdp_tx_xmit: 0
rx1_xdp_tx_full: 0
rx1_xdp_tx_err: 0
rx1_xdp_tx_cqes: 0
rx2_packets: 5383874739
rx2_bytes: 7031545423967
rx2_csum_complete: 5383874739
rx2_csum_unnecessary: 0
rx2_csum_unnecessary_inner: 0
rx2_csum_none: 0
rx2_xdp_drop: 0
rx2_xdp_redirect: 0
rx2_lro_packets: 0
rx2_lro_bytes: 0
rx2_ecn_mark: 0
rx2_removed_vlan_packets: 5383874739
rx2_wqe_err: 0
rx2_mpwqe_filler_cqes: 0
rx2_mpwqe_filler_strides: 0
rx2_buff_alloc_err: 0
rx2_cqe_compress_blks: 0
rx2_cqe_compress_pkts: 0
rx2_page_reuse: 0
rx2_cache_reuse: 2173370
rx2_cache_full: 2689763744
rx2_cache_empty: 6656
rx2_cache_busy: 2689763998
rx2_cache_waive: 0
rx2_congst_umr: 0
rx2_arfs_err: 0
rx2_xdp_tx_xmit: 0
rx2_xdp_tx_full: 0
rx2_xdp_tx_err: 0
rx2_xdp_tx_cqes: 0
rx3_packets: 5456494012
rx3_bytes: 7120241119485
rx3_csum_complete: 5456494012
rx3_csum_unnecessary: 0
rx3_csum_unnecessary_inner: 0
rx3_csum_none: 0
rx3_xdp_drop: 0
rx3_xdp_redirect: 0
rx3_lro_packets: 0
rx3_lro_bytes: 0
rx3_ecn_mark: 0
rx3_removed_vlan_packets: 5456494012
rx3_wqe_err: 0
rx3_mpwqe_filler_cqes: 0
rx3_mpwqe_filler_strides: 0
rx3_buff_alloc_err: 0
rx3_cqe_compress_blks: 0
rx3_cqe_compress_pkts: 0
rx3_page_reuse: 0
rx3_cache_reuse: 2120123
rx3_cache_full: 2726126628
rx3_cache_empty: 6656
rx3_cache_busy: 2726126881
rx3_cache_waive: 0
rx3_congst_umr: 0
rx3_arfs_err: 0
rx3_xdp_tx_xmit: 0
rx3_xdp_tx_full: 0
rx3_xdp_tx_err: 0
rx3_xdp_tx_cqes: 0
rx4_packets: 5475216251
rx4_bytes: 7123129170196
rx4_csum_complete: 5475216251
rx4_csum_unnecessary: 0
rx4_csum_unnecessary_inner: 0
rx4_csum_none: 0
rx4_xdp_drop: 0
rx4_xdp_redirect: 0
rx4_lro_packets: 0
rx4_lro_bytes: 0
rx4_ecn_mark: 0
rx4_removed_vlan_packets: 5475216251
rx4_wqe_err: 0
rx4_mpwqe_filler_cqes: 0
rx4_mpwqe_filler_strides: 0
rx4_buff_alloc_err: 0
rx4_cqe_compress_blks: 0
rx4_cqe_compress_pkts: 0
rx4_page_reuse: 0
rx4_cache_reuse: 2668296355
rx4_cache_full: 69311549
rx4_cache_empty: 6656
rx4_cache_busy: 69311769
rx4_cache_waive: 0
rx4_congst_umr: 0
rx4_arfs_err: 0
rx4_xdp_tx_xmit: 0
rx4_xdp_tx_full: 0
rx4_xdp_tx_err: 0
rx4_xdp_tx_cqes: 0
rx5_packets: 5474372232
rx5_bytes: 7159146801926
rx5_csum_complete: 5474372232
rx5_csum_unnecessary: 0
rx5_csum_unnecessary_inner: 0
rx5_csum_none: 0
rx5_xdp_drop: 0
rx5_xdp_redirect: 0
rx5_lro_packets: 0
rx5_lro_bytes: 0
rx5_ecn_mark: 0
rx5_removed_vlan_packets: 5474372232
rx5_wqe_err: 0
rx5_mpwqe_filler_cqes: 0
rx5_mpwqe_filler_strides: 0
rx5_buff_alloc_err: 0
rx5_cqe_compress_blks: 0
rx5_cqe_compress_pkts: 0
rx5_page_reuse: 0
rx5_cache_reuse: 626187
rx5_cache_full: 2736559674
rx5_cache_empty: 6656
rx5_cache_busy: 2736559929
rx5_cache_waive: 0
rx5_congst_umr: 0
rx5_arfs_err: 0
rx5_xdp_tx_xmit: 0
rx5_xdp_tx_full: 0
rx5_xdp_tx_err: 0
rx5_xdp_tx_cqes: 0
rx6_packets: 5533622456
rx6_bytes: 7207308809081
rx6_csum_complete: 5533622456
rx6_csum_unnecessary: 0
rx6_csum_unnecessary_inner: 0
rx6_csum_none: 0
rx6_xdp_drop: 0
rx6_xdp_redirect: 0
rx6_lro_packets: 0
rx6_lro_bytes: 0
rx6_ecn_mark: 0
rx6_removed_vlan_packets: 5533622456
rx6_wqe_err: 0
rx6_mpwqe_filler_cqes: 0
rx6_mpwqe_filler_strides: 0
rx6_buff_alloc_err: 0
rx6_cqe_compress_blks: 0
rx6_cqe_compress_pkts: 0
rx6_page_reuse: 0
rx6_cache_reuse: 2325217
rx6_cache_full: 2764485756
rx6_cache_empty: 6656
rx6_cache_busy: 2764486011
rx6_cache_waive: 0
rx6_congst_umr: 0
rx6_arfs_err: 0
rx6_xdp_tx_xmit: 0
rx6_xdp_tx_full: 0
rx6_xdp_tx_err: 0
rx6_xdp_tx_cqes: 0
rx7_packets: 5533901822
rx7_bytes: 7227441240536
rx7_csum_complete: 5533901822
rx7_csum_unnecessary: 0
rx7_csum_unnecessary_inner: 0
rx7_csum_none: 0
rx7_xdp_drop: 0
rx7_xdp_redirect: 0
rx7_lro_packets: 0
rx7_lro_bytes: 0
rx7_ecn_mark: 0
rx7_removed_vlan_packets: 5533901822
rx7_wqe_err: 0
rx7_mpwqe_filler_cqes: 0
rx7_mpwqe_filler_strides: 0
rx7_buff_alloc_err: 0
rx7_cqe_compress_blks: 0
rx7_cqe_compress_pkts: 0
rx7_page_reuse: 0
rx7_cache_reuse: 2372505
rx7_cache_full: 2764578151
rx7_cache_empty: 6656
rx7_cache_busy: 2764578403
rx7_cache_waive: 0
rx7_congst_umr: 0
rx7_arfs_err: 0
rx7_xdp_tx_xmit: 0
rx7_xdp_tx_full: 0
rx7_xdp_tx_err: 0
rx7_xdp_tx_cqes: 0
rx8_packets: 5485670137
rx8_bytes: 7203339989013
rx8_csum_complete: 5485670137
rx8_csum_unnecessary: 0
rx8_csum_unnecessary_inner: 0
rx8_csum_none: 0
rx8_xdp_drop: 0
rx8_xdp_redirect: 0
rx8_lro_packets: 0
rx8_lro_bytes: 0
rx8_ecn_mark: 0
rx8_removed_vlan_packets: 5485670137
rx8_wqe_err: 0
rx8_mpwqe_filler_cqes: 0
rx8_mpwqe_filler_strides: 0
rx8_buff_alloc_err: 0
rx8_cqe_compress_blks: 0
rx8_cqe_compress_pkts: 0
rx8_page_reuse: 0
rx8_cache_reuse: 7522232
rx8_cache_full: 2735312581
rx8_cache_empty: 6656
rx8_cache_busy: 2735312836
rx8_cache_waive: 0
rx8_congst_umr: 0
rx8_arfs_err: 0
rx8_xdp_tx_xmit: 0
rx8_xdp_tx_full: 0
rx8_xdp_tx_err: 0
rx8_xdp_tx_cqes: 0
rx9_packets: 5482212354
rx9_bytes: 7169663341718
rx9_csum_complete: 5482212354
rx9_csum_unnecessary: 0
rx9_csum_unnecessary_inner: 0
rx9_csum_none: 0
rx9_xdp_drop: 0
rx9_xdp_redirect: 0
rx9_lro_packets: 0
rx9_lro_bytes: 0
rx9_ecn_mark: 0
rx9_removed_vlan_packets: 5482212354
rx9_wqe_err: 0
rx9_mpwqe_filler_cqes: 0
rx9_mpwqe_filler_strides: 0
rx9_buff_alloc_err: 0
rx9_cqe_compress_blks: 0
rx9_cqe_compress_pkts: 0
rx9_page_reuse: 0
rx9_cache_reuse: 37279961
rx9_cache_full: 2703825961
rx9_cache_empty: 6656
rx9_cache_busy: 2703826215
rx9_cache_waive: 0
rx9_congst_umr: 0
rx9_arfs_err: 0
rx9_xdp_tx_xmit: 0
rx9_xdp_tx_full: 0
rx9_xdp_tx_err: 0
rx9_xdp_tx_cqes: 0
rx10_packets: 5524679952
rx10_bytes: 7248301275181
rx10_csum_complete: 5524679952
rx10_csum_unnecessary: 0
rx10_csum_unnecessary_inner: 0
rx10_csum_none: 0
rx10_xdp_drop: 0
rx10_xdp_redirect: 0
rx10_lro_packets: 0
rx10_lro_bytes: 0
rx10_ecn_mark: 0
rx10_removed_vlan_packets: 5524679952
rx10_wqe_err: 0
rx10_mpwqe_filler_cqes: 0
rx10_mpwqe_filler_strides: 0
rx10_buff_alloc_err: 0
rx10_cqe_compress_blks: 0
rx10_cqe_compress_pkts: 0
rx10_page_reuse: 0
rx10_cache_reuse: 2049666
rx10_cache_full: 2760290055
rx10_cache_empty: 6656
rx10_cache_busy: 2760290310
rx10_cache_waive: 0
rx10_congst_umr: 0
rx10_arfs_err: 0
rx10_xdp_tx_xmit: 0
rx10_xdp_tx_full: 0
rx10_xdp_tx_err: 0
rx10_xdp_tx_cqes: 0
rx11_packets: 5394633545
rx11_bytes: 7033509636092
rx11_csum_complete: 5394633545
rx11_csum_unnecessary: 0
rx11_csum_unnecessary_inner: 0
rx11_csum_none: 0
rx11_xdp_drop: 0
rx11_xdp_redirect: 0
rx11_lro_packets: 0
rx11_lro_bytes: 0
rx11_ecn_mark: 0
rx11_removed_vlan_packets: 5394633545
rx11_wqe_err: 0
rx11_mpwqe_filler_cqes: 0
rx11_mpwqe_filler_strides: 0
rx11_buff_alloc_err: 0
rx11_cqe_compress_blks: 0
rx11_cqe_compress_pkts: 0
rx11_page_reuse: 0
rx11_cache_reuse: 2617466268
rx11_cache_full: 79850284
rx11_cache_empty: 6656
rx11_cache_busy: 79850504
rx11_cache_waive: 0
rx11_congst_umr: 0
rx11_arfs_err: 0
rx11_xdp_tx_xmit: 0
rx11_xdp_tx_full: 0
rx11_xdp_tx_err: 0
rx11_xdp_tx_cqes: 0
rx12_packets: 5458907385
rx12_bytes: 7134867867515
rx12_csum_complete: 5458907385
rx12_csum_unnecessary: 0
rx12_csum_unnecessary_inner: 0
rx12_csum_none: 0
rx12_xdp_drop: 0
rx12_xdp_redirect: 0
rx12_lro_packets: 0
rx12_lro_bytes: 0
rx12_ecn_mark: 0
rx12_removed_vlan_packets: 5458907385
rx12_wqe_err: 0
rx12_mpwqe_filler_cqes: 0
rx12_mpwqe_filler_strides: 0
rx12_buff_alloc_err: 0
rx12_cqe_compress_blks: 0
rx12_cqe_compress_pkts: 0
rx12_page_reuse: 0
rx12_cache_reuse: 2650214169
rx12_cache_full: 79239303
rx12_cache_empty: 6656
rx12_cache_busy: 79239523
rx12_cache_waive: 0
rx12_congst_umr: 0
rx12_arfs_err: 0
rx12_xdp_tx_xmit: 0
rx12_xdp_tx_full: 0
rx12_xdp_tx_err: 0
rx12_xdp_tx_cqes: 0
rx13_packets: 5549932912
rx13_bytes: 7232548705586
rx13_csum_complete: 5549932912
rx13_csum_unnecessary: 0
rx13_csum_unnecessary_inner: 0
rx13_csum_none: 0
rx13_xdp_drop: 0
rx13_xdp_redirect: 0
rx13_lro_packets: 0
rx13_lro_bytes: 0
rx13_ecn_mark: 0
rx13_removed_vlan_packets: 5549932912
rx13_wqe_err: 0
rx13_mpwqe_filler_cqes: 0
rx13_mpwqe_filler_strides: 0
rx13_buff_alloc_err: 0
rx13_cqe_compress_blks: 0
rx13_cqe_compress_pkts: 0
rx13_page_reuse: 0
rx13_cache_reuse: 2417696
rx13_cache_full: 2772548505
rx13_cache_empty: 6656
rx13_cache_busy: 2772548760
rx13_cache_waive: 0
rx13_congst_umr: 0
rx13_arfs_err: 0
rx13_xdp_tx_xmit: 0
rx13_xdp_tx_full: 0
rx13_xdp_tx_err: 0
rx13_xdp_tx_cqes: 0
rx14_packets: 5517712329
rx14_bytes: 7192111965227
rx14_csum_complete: 5517712329
rx14_csum_unnecessary: 0
rx14_csum_unnecessary_inner: 0
rx14_csum_none: 0
rx14_xdp_drop: 0
rx14_xdp_redirect: 0
rx14_lro_packets: 0
rx14_lro_bytes: 0
rx14_ecn_mark: 0
rx14_removed_vlan_packets: 5517712329
rx14_wqe_err: 0
rx14_mpwqe_filler_cqes: 0
rx14_mpwqe_filler_strides: 0
rx14_buff_alloc_err: 0
rx14_cqe_compress_blks: 0
rx14_cqe_compress_pkts: 0
rx14_page_reuse: 0
rx14_cache_reuse: 1830206
rx14_cache_full: 2757025703
rx14_cache_empty: 6656
rx14_cache_busy: 2757025958
rx14_cache_waive: 0
rx14_congst_umr: 0
rx14_arfs_err: 0
rx14_xdp_tx_xmit: 0
rx14_xdp_tx_full: 0
rx14_xdp_tx_err: 0
rx14_xdp_tx_cqes: 0
rx15_packets: 5578343373
rx15_bytes: 7268484501219
rx15_csum_complete: 5578343373
rx15_csum_unnecessary: 0
rx15_csum_unnecessary_inner: 0
rx15_csum_none: 0
rx15_xdp_drop: 0
rx15_xdp_redirect: 0
rx15_lro_packets: 0
rx15_lro_bytes: 0
rx15_ecn_mark: 0
rx15_removed_vlan_packets: 5578343373
rx15_wqe_err: 0
rx15_mpwqe_filler_cqes: 0
rx15_mpwqe_filler_strides: 0
rx15_buff_alloc_err: 0
rx15_cqe_compress_blks: 0
rx15_cqe_compress_pkts: 0
rx15_page_reuse: 0
rx15_cache_reuse: 2317165
rx15_cache_full: 2786854266
rx15_cache_empty: 6656
rx15_cache_busy: 2786854519
rx15_cache_waive: 0
rx15_congst_umr: 0
rx15_arfs_err: 0
rx15_xdp_tx_xmit: 0
rx15_xdp_tx_full: 0
rx15_xdp_tx_err: 0
rx15_xdp_tx_cqes: 0
rx16_packets: 4435773951
rx16_bytes: 5766665272007
rx16_csum_complete: 4435773951
rx16_csum_unnecessary: 0
rx16_csum_unnecessary_inner: 0
rx16_csum_none: 0
rx16_xdp_drop: 0
rx16_xdp_redirect: 0
rx16_lro_packets: 0
rx16_lro_bytes: 0
rx16_ecn_mark: 0
rx16_removed_vlan_packets: 4435773951
rx16_wqe_err: 0
rx16_mpwqe_filler_cqes: 0
rx16_mpwqe_filler_strides: 0
rx16_buff_alloc_err: 0
rx16_cqe_compress_blks: 0
rx16_cqe_compress_pkts: 0
rx16_page_reuse: 0
rx16_cache_reuse: 2033793
rx16_cache_full: 2215852927
rx16_cache_empty: 6656
rx16_cache_busy: 2215853179
rx16_cache_waive: 0
rx16_congst_umr: 0
rx16_arfs_err: 0
rx16_xdp_tx_xmit: 0
rx16_xdp_tx_full: 0
rx16_xdp_tx_err: 0
rx16_xdp_tx_cqes: 0
rx17_packets: 4344087587
rx17_bytes: 5695006496323
rx17_csum_complete: 4344087587
rx17_csum_unnecessary: 0
rx17_csum_unnecessary_inner: 0
rx17_csum_none: 0
rx17_xdp_drop: 0
rx17_xdp_redirect: 0
rx17_lro_packets: 0
rx17_lro_bytes: 0
rx17_ecn_mark: 0
rx17_removed_vlan_packets: 4344087587
rx17_wqe_err: 0
rx17_mpwqe_filler_cqes: 0
rx17_mpwqe_filler_strides: 0
rx17_buff_alloc_err: 0
rx17_cqe_compress_blks: 0
rx17_cqe_compress_pkts: 0
rx17_page_reuse: 0
rx17_cache_reuse: 2652127
rx17_cache_full: 2169391411
rx17_cache_empty: 6656
rx17_cache_busy: 2169391665
rx17_cache_waive: 0
rx17_congst_umr: 0
rx17_arfs_err: 0
rx17_xdp_tx_xmit: 0
rx17_xdp_tx_full: 0
rx17_xdp_tx_err: 0
rx17_xdp_tx_cqes: 0
rx18_packets: 4407422804
rx18_bytes: 5741134634177
rx18_csum_complete: 4407422804
rx18_csum_unnecessary: 0
rx18_csum_unnecessary_inner: 0
rx18_csum_none: 0
rx18_xdp_drop: 0
rx18_xdp_redirect: 0
rx18_lro_packets: 0
rx18_lro_bytes: 0
rx18_ecn_mark: 0
rx18_removed_vlan_packets: 4407422804
rx18_wqe_err: 0
rx18_mpwqe_filler_cqes: 0
rx18_mpwqe_filler_strides: 0
rx18_buff_alloc_err: 0
rx18_cqe_compress_blks: 0
rx18_cqe_compress_pkts: 0
rx18_page_reuse: 0
rx18_cache_reuse: 2156080239
rx18_cache_full: 47630941
rx18_cache_empty: 6656
rx18_cache_busy: 47631161
rx18_cache_waive: 0
rx18_congst_umr: 0
rx18_arfs_err: 0
rx18_xdp_tx_xmit: 0
rx18_xdp_tx_full: 0
rx18_xdp_tx_err: 0
rx18_xdp_tx_cqes: 0
rx19_packets: 4545554180
rx19_bytes: 5905277503466
rx19_csum_complete: 4545554180
rx19_csum_unnecessary: 0
rx19_csum_unnecessary_inner: 0
rx19_csum_none: 0
rx19_xdp_drop: 0
rx19_xdp_redirect: 0
rx19_lro_packets: 0
rx19_lro_bytes: 0
rx19_ecn_mark: 0
rx19_removed_vlan_packets: 4545554180
rx19_wqe_err: 0
rx19_mpwqe_filler_cqes: 0
rx19_mpwqe_filler_strides: 0
rx19_buff_alloc_err: 0
rx19_cqe_compress_blks: 0
rx19_cqe_compress_pkts: 0
rx19_page_reuse: 0
rx19_cache_reuse: 11112455
rx19_cache_full: 2261664379
rx19_cache_empty: 6656
rx19_cache_busy: 2261664601
rx19_cache_waive: 0
rx19_congst_umr: 0
rx19_arfs_err: 0
rx19_xdp_tx_xmit: 0
rx19_xdp_tx_full: 0
rx19_xdp_tx_err: 0
rx19_xdp_tx_cqes: 0
rx20_packets: 4397428553
rx20_bytes: 5757329184301
rx20_csum_complete: 4397428553
rx20_csum_unnecessary: 0
rx20_csum_unnecessary_inner: 0
rx20_csum_none: 0
rx20_xdp_drop: 0
rx20_xdp_redirect: 0
rx20_lro_packets: 0
rx20_lro_bytes: 0
rx20_ecn_mark: 0
rx20_removed_vlan_packets: 4397428553
rx20_wqe_err: 0
rx20_mpwqe_filler_cqes: 0
rx20_mpwqe_filler_strides: 0
rx20_buff_alloc_err: 0
rx20_cqe_compress_blks: 0
rx20_cqe_compress_pkts: 0
rx20_page_reuse: 0
rx20_cache_reuse: 2168116995
rx20_cache_full: 30597061
rx20_cache_empty: 6656
rx20_cache_busy: 30597281
rx20_cache_waive: 0
rx20_congst_umr: 0
rx20_arfs_err: 0
rx20_xdp_tx_xmit: 0
rx20_xdp_tx_full: 0
rx20_xdp_tx_err: 0
rx20_xdp_tx_cqes: 0
rx21_packets: 4552564821
rx21_bytes: 5944840329249
rx21_csum_complete: 4552564821
rx21_csum_unnecessary: 0
rx21_csum_unnecessary_inner: 0
rx21_csum_none: 0
rx21_xdp_drop: 0
rx21_xdp_redirect: 0
rx21_lro_packets: 0
rx21_lro_bytes: 0
rx21_ecn_mark: 0
rx21_removed_vlan_packets: 4552564821
rx21_wqe_err: 0
rx21_mpwqe_filler_cqes: 0
rx21_mpwqe_filler_strides: 0
rx21_buff_alloc_err: 0
rx21_cqe_compress_blks: 0
rx21_cqe_compress_pkts: 0
rx21_page_reuse: 0
rx21_cache_reuse: 2295681
rx21_cache_full: 2273986474
rx21_cache_empty: 6656
rx21_cache_busy: 2273986727
rx21_cache_waive: 0
rx21_congst_umr: 0
rx21_arfs_err: 0
rx21_xdp_tx_xmit: 0
rx21_xdp_tx_full: 0
rx21_xdp_tx_err: 0
rx21_xdp_tx_cqes: 0
rx22_packets: 4629499740
rx22_bytes: 5924206566499
rx22_csum_complete: 4629499740
rx22_csum_unnecessary: 0
rx22_csum_unnecessary_inner: 0
rx22_csum_none: 0
rx22_xdp_drop: 0
rx22_xdp_redirect: 0
rx22_lro_packets: 0
rx22_lro_bytes: 0
rx22_ecn_mark: 0
rx22_removed_vlan_packets: 4629499740
rx22_wqe_err: 0
rx22_mpwqe_filler_cqes: 0
rx22_mpwqe_filler_strides: 0
rx22_buff_alloc_err: 0
rx22_cqe_compress_blks: 0
rx22_cqe_compress_pkts: 0
rx22_page_reuse: 0
rx22_cache_reuse: 1407527
rx22_cache_full: 2313342088
rx22_cache_empty: 6656
rx22_cache_busy: 2313342341
rx22_cache_waive: 0
rx22_congst_umr: 0
rx22_arfs_err: 0
rx22_xdp_tx_xmit: 0
rx22_xdp_tx_full: 0
rx22_xdp_tx_err: 0
rx22_xdp_tx_cqes: 0
rx23_packets: 4387124505
rx23_bytes: 5718118678470
rx23_csum_complete: 4387124505
rx23_csum_unnecessary: 0
rx23_csum_unnecessary_inner: 0
rx23_csum_none: 0
rx23_xdp_drop: 0
rx23_xdp_redirect: 0
rx23_lro_packets: 0
rx23_lro_bytes: 0
rx23_ecn_mark: 0
rx23_removed_vlan_packets: 4387124505
rx23_wqe_err: 0
rx23_mpwqe_filler_cqes: 0
rx23_mpwqe_filler_strides: 0
rx23_buff_alloc_err: 0
rx23_cqe_compress_blks: 0
rx23_cqe_compress_pkts: 0
rx23_page_reuse: 0
rx23_cache_reuse: 2013280
rx23_cache_full: 2191548717
rx23_cache_empty: 6656
rx23_cache_busy: 2191548972
rx23_cache_waive: 0
rx23_congst_umr: 0
rx23_arfs_err: 0
rx23_xdp_tx_xmit: 0
rx23_xdp_tx_full: 0
rx23_xdp_tx_err: 0
rx23_xdp_tx_cqes: 0
rx24_packets: 4398791634
rx24_bytes: 5744875564632
rx24_csum_complete: 4398791634
rx24_csum_unnecessary: 0
rx24_csum_unnecessary_inner: 0
rx24_csum_none: 0
rx24_xdp_drop: 0
rx24_xdp_redirect: 0
rx24_lro_packets: 0
rx24_lro_bytes: 0
rx24_ecn_mark: 0
rx24_removed_vlan_packets: 4398791634
rx24_wqe_err: 0
rx24_mpwqe_filler_cqes: 0
rx24_mpwqe_filler_strides: 0
rx24_buff_alloc_err: 0
rx24_cqe_compress_blks: 0
rx24_cqe_compress_pkts: 0
rx24_page_reuse: 0
rx24_cache_reuse: 2143926100
rx24_cache_full: 55469496
rx24_cache_empty: 6656
rx24_cache_busy: 55469716
rx24_cache_waive: 0
rx24_congst_umr: 0
rx24_arfs_err: 0
rx24_xdp_tx_xmit: 0
rx24_xdp_tx_full: 0
rx24_xdp_tx_err: 0
rx24_xdp_tx_cqes: 0
rx25_packets: 4377204935
rx25_bytes: 5710369124105
rx25_csum_complete: 4377204935
rx25_csum_unnecessary: 0
rx25_csum_unnecessary_inner: 0
rx25_csum_none: 0
rx25_xdp_drop: 0
rx25_xdp_redirect: 0
rx25_lro_packets: 0
rx25_lro_bytes: 0
rx25_ecn_mark: 0
rx25_removed_vlan_packets: 4377204935
rx25_wqe_err: 0
rx25_mpwqe_filler_cqes: 0
rx25_mpwqe_filler_strides: 0
rx25_buff_alloc_err: 0
rx25_cqe_compress_blks: 0
rx25_cqe_compress_pkts: 0
rx25_page_reuse: 0
rx25_cache_reuse: 2132658660
rx25_cache_full: 55943584
rx25_cache_empty: 6656
rx25_cache_busy: 55943804
rx25_cache_waive: 0
rx25_congst_umr: 0
rx25_arfs_err: 0
rx25_xdp_tx_xmit: 0
rx25_xdp_tx_full: 0
rx25_xdp_tx_err: 0
rx25_xdp_tx_cqes: 0
rx26_packets: 4496003688
rx26_bytes: 5862180715503
rx26_csum_complete: 4496003688
rx26_csum_unnecessary: 0
rx26_csum_unnecessary_inner: 0
rx26_csum_none: 0
rx26_xdp_drop: 0
rx26_xdp_redirect: 0
rx26_lro_packets: 0
rx26_lro_bytes: 0
rx26_ecn_mark: 0
rx26_removed_vlan_packets: 4496003688
rx26_wqe_err: 0
rx26_mpwqe_filler_cqes: 0
rx26_mpwqe_filler_strides: 0
rx26_buff_alloc_err: 0
rx26_cqe_compress_blks: 0
rx26_cqe_compress_pkts: 0
rx26_page_reuse: 0
rx26_cache_reuse: 8
rx26_cache_full: 2248001581
rx26_cache_empty: 6656
rx26_cache_busy: 2248001836
rx26_cache_waive: 0
rx26_congst_umr: 0
rx26_arfs_err: 0
rx26_xdp_tx_xmit: 0
rx26_xdp_tx_full: 0
rx26_xdp_tx_err: 0
rx26_xdp_tx_cqes: 0
rx27_packets: 4341849333
rx27_bytes: 5678653545018
rx27_csum_complete: 4341849333
rx27_csum_unnecessary: 0
rx27_csum_unnecessary_inner: 0
rx27_csum_none: 0
rx27_xdp_drop: 0
rx27_xdp_redirect: 0
rx27_lro_packets: 0
rx27_lro_bytes: 0
rx27_ecn_mark: 0
rx27_removed_vlan_packets: 4341849333
rx27_wqe_err: 0
rx27_mpwqe_filler_cqes: 0
rx27_mpwqe_filler_strides: 0
rx27_buff_alloc_err: 0
rx27_cqe_compress_blks: 0
rx27_cqe_compress_pkts: 0
rx27_page_reuse: 0
rx27_cache_reuse: 1748188
rx27_cache_full: 2169176223
rx27_cache_empty: 6656
rx27_cache_busy: 2169176476
rx27_cache_waive: 0
rx27_congst_umr: 0
rx27_arfs_err: 0
rx27_xdp_tx_xmit: 0
rx27_xdp_tx_full: 0
rx27_xdp_tx_err: 0
rx27_xdp_tx_cqes: 0
rx28_packets: 0
rx28_bytes: 0
rx28_csum_complete: 0
rx28_csum_unnecessary: 0
rx28_csum_unnecessary_inner: 0
rx28_csum_none: 0
rx28_xdp_drop: 0
rx28_xdp_redirect: 0
rx28_lro_packets: 0
rx28_lro_bytes: 0
rx28_ecn_mark: 0
rx28_removed_vlan_packets: 0
rx28_wqe_err: 0
rx28_mpwqe_filler_cqes: 0
rx28_mpwqe_filler_strides: 0
rx28_buff_alloc_err: 0
rx28_cqe_compress_blks: 0
rx28_cqe_compress_pkts: 0
rx28_page_reuse: 0
rx28_cache_reuse: 0
rx28_cache_full: 0
rx28_cache_empty: 2560
rx28_cache_busy: 0
rx28_cache_waive: 0
rx28_congst_umr: 0
rx28_arfs_err: 0
rx28_xdp_tx_xmit: 0
rx28_xdp_tx_full: 0
rx28_xdp_tx_err: 0
rx28_xdp_tx_cqes: 0
rx29_packets: 0
rx29_bytes: 0
rx29_csum_complete: 0
rx29_csum_unnecessary: 0
rx29_csum_unnecessary_inner: 0
rx29_csum_none: 0
rx29_xdp_drop: 0
rx29_xdp_redirect: 0
rx29_lro_packets: 0
rx29_lro_bytes: 0
rx29_ecn_mark: 0
rx29_removed_vlan_packets: 0
rx29_wqe_err: 0
rx29_mpwqe_filler_cqes: 0
rx29_mpwqe_filler_strides: 0
rx29_buff_alloc_err: 0
rx29_cqe_compress_blks: 0
rx29_cqe_compress_pkts: 0
rx29_page_reuse: 0
rx29_cache_reuse: 0
rx29_cache_full: 0
rx29_cache_empty: 2560
rx29_cache_busy: 0
rx29_cache_waive: 0
rx29_congst_umr: 0
rx29_arfs_err: 0
rx29_xdp_tx_xmit: 0
rx29_xdp_tx_full: 0
rx29_xdp_tx_err: 0
rx29_xdp_tx_cqes: 0
rx30_packets: 0
rx30_bytes: 0
rx30_csum_complete: 0
rx30_csum_unnecessary: 0
rx30_csum_unnecessary_inner: 0
rx30_csum_none: 0
rx30_xdp_drop: 0
rx30_xdp_redirect: 0
rx30_lro_packets: 0
rx30_lro_bytes: 0
rx30_ecn_mark: 0
rx30_removed_vlan_packets: 0
rx30_wqe_err: 0
rx30_mpwqe_filler_cqes: 0
rx30_mpwqe_filler_strides: 0
rx30_buff_alloc_err: 0
rx30_cqe_compress_blks: 0
rx30_cqe_compress_pkts: 0
rx30_page_reuse: 0
rx30_cache_reuse: 0
rx30_cache_full: 0
rx30_cache_empty: 2560
rx30_cache_busy: 0
rx30_cache_waive: 0
rx30_congst_umr: 0
rx30_arfs_err: 0
rx30_xdp_tx_xmit: 0
rx30_xdp_tx_full: 0
rx30_xdp_tx_err: 0
rx30_xdp_tx_cqes: 0
rx31_packets: 0
rx31_bytes: 0
rx31_csum_complete: 0
rx31_csum_unnecessary: 0
rx31_csum_unnecessary_inner: 0
rx31_csum_none: 0
rx31_xdp_drop: 0
rx31_xdp_redirect: 0
rx31_lro_packets: 0
rx31_lro_bytes: 0
rx31_ecn_mark: 0
rx31_removed_vlan_packets: 0
rx31_wqe_err: 0
rx31_mpwqe_filler_cqes: 0
rx31_mpwqe_filler_strides: 0
rx31_buff_alloc_err: 0
rx31_cqe_compress_blks: 0
rx31_cqe_compress_pkts: 0
rx31_page_reuse: 0
rx31_cache_reuse: 0
rx31_cache_full: 0
rx31_cache_empty: 2560
rx31_cache_busy: 0
rx31_cache_waive: 0
rx31_congst_umr: 0
rx31_arfs_err: 0
rx31_xdp_tx_xmit: 0
rx31_xdp_tx_full: 0
rx31_xdp_tx_err: 0
rx31_xdp_tx_cqes: 0
rx32_packets: 0
rx32_bytes: 0
rx32_csum_complete: 0
rx32_csum_unnecessary: 0
rx32_csum_unnecessary_inner: 0
rx32_csum_none: 0
rx32_xdp_drop: 0
rx32_xdp_redirect: 0
rx32_lro_packets: 0
rx32_lro_bytes: 0
rx32_ecn_mark: 0
rx32_removed_vlan_packets: 0
rx32_wqe_err: 0
rx32_mpwqe_filler_cqes: 0
rx32_mpwqe_filler_strides: 0
rx32_buff_alloc_err: 0
rx32_cqe_compress_blks: 0
rx32_cqe_compress_pkts: 0
rx32_page_reuse: 0
rx32_cache_reuse: 0
rx32_cache_full: 0
rx32_cache_empty: 2560
rx32_cache_busy: 0
rx32_cache_waive: 0
rx32_congst_umr: 0
rx32_arfs_err: 0
rx32_xdp_tx_xmit: 0
rx32_xdp_tx_full: 0
rx32_xdp_tx_err: 0
rx32_xdp_tx_cqes: 0
rx33_packets: 0
rx33_bytes: 0
rx33_csum_complete: 0
rx33_csum_unnecessary: 0
rx33_csum_unnecessary_inner: 0
rx33_csum_none: 0
rx33_xdp_drop: 0
rx33_xdp_redirect: 0
rx33_lro_packets: 0
rx33_lro_bytes: 0
rx33_ecn_mark: 0
rx33_removed_vlan_packets: 0
rx33_wqe_err: 0
rx33_mpwqe_filler_cqes: 0
rx33_mpwqe_filler_strides: 0
rx33_buff_alloc_err: 0
rx33_cqe_compress_blks: 0
rx33_cqe_compress_pkts: 0
rx33_page_reuse: 0
rx33_cache_reuse: 0
rx33_cache_full: 0
rx33_cache_empty: 2560
rx33_cache_busy: 0
rx33_cache_waive: 0
rx33_congst_umr: 0
rx33_arfs_err: 0
rx33_xdp_tx_xmit: 0
rx33_xdp_tx_full: 0
rx33_xdp_tx_err: 0
rx33_xdp_tx_cqes: 0
rx34_packets: 0
rx34_bytes: 0
rx34_csum_complete: 0
rx34_csum_unnecessary: 0
rx34_csum_unnecessary_inner: 0
rx34_csum_none: 0
rx34_xdp_drop: 0
rx34_xdp_redirect: 0
rx34_lro_packets: 0
rx34_lro_bytes: 0
rx34_ecn_mark: 0
rx34_removed_vlan_packets: 0
rx34_wqe_err: 0
rx34_mpwqe_filler_cqes: 0
rx34_mpwqe_filler_strides: 0
rx34_buff_alloc_err: 0
rx34_cqe_compress_blks: 0
rx34_cqe_compress_pkts: 0
rx34_page_reuse: 0
rx34_cache_reuse: 0
rx34_cache_full: 0
rx34_cache_empty: 2560
rx34_cache_busy: 0
rx34_cache_waive: 0
rx34_congst_umr: 0
rx34_arfs_err: 0
rx34_xdp_tx_xmit: 0
rx34_xdp_tx_full: 0
rx34_xdp_tx_err: 0
rx34_xdp_tx_cqes: 0
rx35_packets: 0
rx35_bytes: 0
rx35_csum_complete: 0
rx35_csum_unnecessary: 0
rx35_csum_unnecessary_inner: 0
rx35_csum_none: 0
rx35_xdp_drop: 0
rx35_xdp_redirect: 0
rx35_lro_packets: 0
rx35_lro_bytes: 0
rx35_ecn_mark: 0
rx35_removed_vlan_packets: 0
rx35_wqe_err: 0
rx35_mpwqe_filler_cqes: 0
rx35_mpwqe_filler_strides: 0
rx35_buff_alloc_err: 0
rx35_cqe_compress_blks: 0
rx35_cqe_compress_pkts: 0
rx35_page_reuse: 0
rx35_cache_reuse: 0
rx35_cache_full: 0
rx35_cache_empty: 2560
rx35_cache_busy: 0
rx35_cache_waive: 0
rx35_congst_umr: 0
rx35_arfs_err: 0
rx35_xdp_tx_xmit: 0
rx35_xdp_tx_full: 0
rx35_xdp_tx_err: 0
rx35_xdp_tx_cqes: 0
rx36_packets: 0
rx36_bytes: 0
rx36_csum_complete: 0
rx36_csum_unnecessary: 0
rx36_csum_unnecessary_inner: 0
rx36_csum_none: 0
rx36_xdp_drop: 0
rx36_xdp_redirect: 0
rx36_lro_packets: 0
rx36_lro_bytes: 0
rx36_ecn_mark: 0
rx36_removed_vlan_packets: 0
rx36_wqe_err: 0
rx36_mpwqe_filler_cqes: 0
rx36_mpwqe_filler_strides: 0
rx36_buff_alloc_err: 0
rx36_cqe_compress_blks: 0
rx36_cqe_compress_pkts: 0
rx36_page_reuse: 0
rx36_cache_reuse: 0
rx36_cache_full: 0
rx36_cache_empty: 2560
rx36_cache_busy: 0
rx36_cache_waive: 0
rx36_congst_umr: 0
rx36_arfs_err: 0
rx36_xdp_tx_xmit: 0
rx36_xdp_tx_full: 0
rx36_xdp_tx_err: 0
rx36_xdp_tx_cqes: 0
rx37_packets: 0
rx37_bytes: 0
rx37_csum_complete: 0
rx37_csum_unnecessary: 0
rx37_csum_unnecessary_inner: 0
rx37_csum_none: 0
rx37_xdp_drop: 0
rx37_xdp_redirect: 0
rx37_lro_packets: 0
rx37_lro_bytes: 0
rx37_ecn_mark: 0
rx37_removed_vlan_packets: 0
rx37_wqe_err: 0
rx37_mpwqe_filler_cqes: 0
rx37_mpwqe_filler_strides: 0
rx37_buff_alloc_err: 0
rx37_cqe_compress_blks: 0
rx37_cqe_compress_pkts: 0
rx37_page_reuse: 0
rx37_cache_reuse: 0
rx37_cache_full: 0
rx37_cache_empty: 2560
rx37_cache_busy: 0
rx37_cache_waive: 0
rx37_congst_umr: 0
rx37_arfs_err: 0
rx37_xdp_tx_xmit: 0
rx37_xdp_tx_full: 0
rx37_xdp_tx_err: 0
rx37_xdp_tx_cqes: 0
rx38_packets: 0
rx38_bytes: 0
rx38_csum_complete: 0
rx38_csum_unnecessary: 0
rx38_csum_unnecessary_inner: 0
rx38_csum_none: 0
rx38_xdp_drop: 0
rx38_xdp_redirect: 0
rx38_lro_packets: 0
rx38_lro_bytes: 0
rx38_ecn_mark: 0
rx38_removed_vlan_packets: 0
rx38_wqe_err: 0
rx38_mpwqe_filler_cqes: 0
rx38_mpwqe_filler_strides: 0
rx38_buff_alloc_err: 0
rx38_cqe_compress_blks: 0
rx38_cqe_compress_pkts: 0
rx38_page_reuse: 0
rx38_cache_reuse: 0
rx38_cache_full: 0
rx38_cache_empty: 2560
rx38_cache_busy: 0
rx38_cache_waive: 0
rx38_congst_umr: 0
rx38_arfs_err: 0
rx38_xdp_tx_xmit: 0
rx38_xdp_tx_full: 0
rx38_xdp_tx_err: 0
rx38_xdp_tx_cqes: 0
rx39_packets: 0
rx39_bytes: 0
rx39_csum_complete: 0
rx39_csum_unnecessary: 0
rx39_csum_unnecessary_inner: 0
rx39_csum_none: 0
rx39_xdp_drop: 0
rx39_xdp_redirect: 0
rx39_lro_packets: 0
rx39_lro_bytes: 0
rx39_ecn_mark: 0
rx39_removed_vlan_packets: 0
rx39_wqe_err: 0
rx39_mpwqe_filler_cqes: 0
rx39_mpwqe_filler_strides: 0
rx39_buff_alloc_err: 0
rx39_cqe_compress_blks: 0
rx39_cqe_compress_pkts: 0
rx39_page_reuse: 0
rx39_cache_reuse: 0
rx39_cache_full: 0
rx39_cache_empty: 2560
rx39_cache_busy: 0
rx39_cache_waive: 0
rx39_congst_umr: 0
rx39_arfs_err: 0
rx39_xdp_tx_xmit: 0
rx39_xdp_tx_full: 0
rx39_xdp_tx_err: 0
rx39_xdp_tx_cqes: 0
rx40_packets: 0
rx40_bytes: 0
rx40_csum_complete: 0
rx40_csum_unnecessary: 0
rx40_csum_unnecessary_inner: 0
rx40_csum_none: 0
rx40_xdp_drop: 0
rx40_xdp_redirect: 0
rx40_lro_packets: 0
rx40_lro_bytes: 0
rx40_ecn_mark: 0
rx40_removed_vlan_packets: 0
rx40_wqe_err: 0
rx40_mpwqe_filler_cqes: 0
rx40_mpwqe_filler_strides: 0
rx40_buff_alloc_err: 0
rx40_cqe_compress_blks: 0
rx40_cqe_compress_pkts: 0
rx40_page_reuse: 0
rx40_cache_reuse: 0
rx40_cache_full: 0
rx40_cache_empty: 2560
rx40_cache_busy: 0
rx40_cache_waive: 0
rx40_congst_umr: 0
rx40_arfs_err: 0
rx40_xdp_tx_xmit: 0
rx40_xdp_tx_full: 0
rx40_xdp_tx_err: 0
rx40_xdp_tx_cqes: 0
rx41_packets: 0
rx41_bytes: 0
rx41_csum_complete: 0
rx41_csum_unnecessary: 0
rx41_csum_unnecessary_inner: 0
rx41_csum_none: 0
rx41_xdp_drop: 0
rx41_xdp_redirect: 0
rx41_lro_packets: 0
rx41_lro_bytes: 0
rx41_ecn_mark: 0
rx41_removed_vlan_packets: 0
rx41_wqe_err: 0
rx41_mpwqe_filler_cqes: 0
rx41_mpwqe_filler_strides: 0
rx41_buff_alloc_err: 0
rx41_cqe_compress_blks: 0
rx41_cqe_compress_pkts: 0
rx41_page_reuse: 0
rx41_cache_reuse: 0
rx41_cache_full: 0
rx41_cache_empty: 2560
rx41_cache_busy: 0
rx41_cache_waive: 0
rx41_congst_umr: 0
rx41_arfs_err: 0
rx41_xdp_tx_xmit: 0
rx41_xdp_tx_full: 0
rx41_xdp_tx_err: 0
rx41_xdp_tx_cqes: 0
rx42_packets: 0
rx42_bytes: 0
rx42_csum_complete: 0
rx42_csum_unnecessary: 0
rx42_csum_unnecessary_inner: 0
rx42_csum_none: 0
rx42_xdp_drop: 0
rx42_xdp_redirect: 0
rx42_lro_packets: 0
rx42_lro_bytes: 0
rx42_ecn_mark: 0
rx42_removed_vlan_packets: 0
rx42_wqe_err: 0
rx42_mpwqe_filler_cqes: 0
rx42_mpwqe_filler_strides: 0
rx42_buff_alloc_err: 0
rx42_cqe_compress_blks: 0
rx42_cqe_compress_pkts: 0
rx42_page_reuse: 0
rx42_cache_reuse: 0
rx42_cache_full: 0
rx42_cache_empty: 2560
rx42_cache_busy: 0
rx42_cache_waive: 0
rx42_congst_umr: 0
rx42_arfs_err: 0
rx42_xdp_tx_xmit: 0
rx42_xdp_tx_full: 0
rx42_xdp_tx_err: 0
rx42_xdp_tx_cqes: 0
rx43_packets: 0
rx43_bytes: 0
rx43_csum_complete: 0
rx43_csum_unnecessary: 0
rx43_csum_unnecessary_inner: 0
rx43_csum_none: 0
rx43_xdp_drop: 0
rx43_xdp_redirect: 0
rx43_lro_packets: 0
rx43_lro_bytes: 0
rx43_ecn_mark: 0
rx43_removed_vlan_packets: 0
rx43_wqe_err: 0
rx43_mpwqe_filler_cqes: 0
rx43_mpwqe_filler_strides: 0
rx43_buff_alloc_err: 0
rx43_cqe_compress_blks: 0
rx43_cqe_compress_pkts: 0
rx43_page_reuse: 0
rx43_cache_reuse: 0
rx43_cache_full: 0
rx43_cache_empty: 2560
rx43_cache_busy: 0
rx43_cache_waive: 0
rx43_congst_umr: 0
rx43_arfs_err: 0
rx43_xdp_tx_xmit: 0
rx43_xdp_tx_full: 0
rx43_xdp_tx_err: 0
rx43_xdp_tx_cqes: 0
rx44_packets: 0
rx44_bytes: 0
rx44_csum_complete: 0
rx44_csum_unnecessary: 0
rx44_csum_unnecessary_inner: 0
rx44_csum_none: 0
rx44_xdp_drop: 0
rx44_xdp_redirect: 0
rx44_lro_packets: 0
rx44_lro_bytes: 0
rx44_ecn_mark: 0
rx44_removed_vlan_packets: 0
rx44_wqe_err: 0
rx44_mpwqe_filler_cqes: 0
rx44_mpwqe_filler_strides: 0
rx44_buff_alloc_err: 0
rx44_cqe_compress_blks: 0
rx44_cqe_compress_pkts: 0
rx44_page_reuse: 0
rx44_cache_reuse: 0
rx44_cache_full: 0
rx44_cache_empty: 2560
rx44_cache_busy: 0
rx44_cache_waive: 0
rx44_congst_umr: 0
rx44_arfs_err: 0
rx44_xdp_tx_xmit: 0
rx44_xdp_tx_full: 0
rx44_xdp_tx_err: 0
rx44_xdp_tx_cqes: 0
rx45_packets: 0
rx45_bytes: 0
rx45_csum_complete: 0
rx45_csum_unnecessary: 0
rx45_csum_unnecessary_inner: 0
rx45_csum_none: 0
rx45_xdp_drop: 0
rx45_xdp_redirect: 0
rx45_lro_packets: 0
rx45_lro_bytes: 0
rx45_ecn_mark: 0
rx45_removed_vlan_packets: 0
rx45_wqe_err: 0
rx45_mpwqe_filler_cqes: 0
rx45_mpwqe_filler_strides: 0
rx45_buff_alloc_err: 0
rx45_cqe_compress_blks: 0
rx45_cqe_compress_pkts: 0
rx45_page_reuse: 0
rx45_cache_reuse: 0
rx45_cache_full: 0
rx45_cache_empty: 2560
rx45_cache_busy: 0
rx45_cache_waive: 0
rx45_congst_umr: 0
rx45_arfs_err: 0
rx45_xdp_tx_xmit: 0
rx45_xdp_tx_full: 0
rx45_xdp_tx_err: 0
rx45_xdp_tx_cqes: 0
rx46_packets: 0
rx46_bytes: 0
rx46_csum_complete: 0
rx46_csum_unnecessary: 0
rx46_csum_unnecessary_inner: 0
rx46_csum_none: 0
rx46_xdp_drop: 0
rx46_xdp_redirect: 0
rx46_lro_packets: 0
rx46_lro_bytes: 0
rx46_ecn_mark: 0
rx46_removed_vlan_packets: 0
rx46_wqe_err: 0
rx46_mpwqe_filler_cqes: 0
rx46_mpwqe_filler_strides: 0
rx46_buff_alloc_err: 0
rx46_cqe_compress_blks: 0
rx46_cqe_compress_pkts: 0
rx46_page_reuse: 0
rx46_cache_reuse: 0
rx46_cache_full: 0
rx46_cache_empty: 2560
rx46_cache_busy: 0
rx46_cache_waive: 0
rx46_congst_umr: 0
rx46_arfs_err: 0
rx46_xdp_tx_xmit: 0
rx46_xdp_tx_full: 0
rx46_xdp_tx_err: 0
rx46_xdp_tx_cqes: 0
rx47_packets: 0
rx47_bytes: 0
rx47_csum_complete: 0
rx47_csum_unnecessary: 0
rx47_csum_unnecessary_inner: 0
rx47_csum_none: 0
rx47_xdp_drop: 0
rx47_xdp_redirect: 0
rx47_lro_packets: 0
rx47_lro_bytes: 0
rx47_ecn_mark: 0
rx47_removed_vlan_packets: 0
rx47_wqe_err: 0
rx47_mpwqe_filler_cqes: 0
rx47_mpwqe_filler_strides: 0
rx47_buff_alloc_err: 0
rx47_cqe_compress_blks: 0
rx47_cqe_compress_pkts: 0
rx47_page_reuse: 0
rx47_cache_reuse: 0
rx47_cache_full: 0
rx47_cache_empty: 2560
rx47_cache_busy: 0
rx47_cache_waive: 0
rx47_congst_umr: 0
rx47_arfs_err: 0
rx47_xdp_tx_xmit: 0
rx47_xdp_tx_full: 0
rx47_xdp_tx_err: 0
rx47_xdp_tx_cqes: 0
rx48_packets: 0
rx48_bytes: 0
rx48_csum_complete: 0
rx48_csum_unnecessary: 0
rx48_csum_unnecessary_inner: 0
rx48_csum_none: 0
rx48_xdp_drop: 0
rx48_xdp_redirect: 0
rx48_lro_packets: 0
rx48_lro_bytes: 0
rx48_ecn_mark: 0
rx48_removed_vlan_packets: 0
rx48_wqe_err: 0
rx48_mpwqe_filler_cqes: 0
rx48_mpwqe_filler_strides: 0
rx48_buff_alloc_err: 0
rx48_cqe_compress_blks: 0
rx48_cqe_compress_pkts: 0
rx48_page_reuse: 0
rx48_cache_reuse: 0
rx48_cache_full: 0
rx48_cache_empty: 2560
rx48_cache_busy: 0
rx48_cache_waive: 0
rx48_congst_umr: 0
rx48_arfs_err: 0
rx48_xdp_tx_xmit: 0
rx48_xdp_tx_full: 0
rx48_xdp_tx_err: 0
rx48_xdp_tx_cqes: 0
rx49_packets: 0
rx49_bytes: 0
rx49_csum_complete: 0
rx49_csum_unnecessary: 0
rx49_csum_unnecessary_inner: 0
rx49_csum_none: 0
rx49_xdp_drop: 0
rx49_xdp_redirect: 0
rx49_lro_packets: 0
rx49_lro_bytes: 0
rx49_ecn_mark: 0
rx49_removed_vlan_packets: 0
rx49_wqe_err: 0
rx49_mpwqe_filler_cqes: 0
rx49_mpwqe_filler_strides: 0
rx49_buff_alloc_err: 0
rx49_cqe_compress_blks: 0
rx49_cqe_compress_pkts: 0
rx49_page_reuse: 0
rx49_cache_reuse: 0
rx49_cache_full: 0
rx49_cache_empty: 2560
rx49_cache_busy: 0
rx49_cache_waive: 0
rx49_congst_umr: 0
rx49_arfs_err: 0
rx49_xdp_tx_xmit: 0
rx49_xdp_tx_full: 0
rx49_xdp_tx_err: 0
rx49_xdp_tx_cqes: 0
rx50_packets: 0
rx50_bytes: 0
rx50_csum_complete: 0
rx50_csum_unnecessary: 0
rx50_csum_unnecessary_inner: 0
rx50_csum_none: 0
rx50_xdp_drop: 0
rx50_xdp_redirect: 0
rx50_lro_packets: 0
rx50_lro_bytes: 0
rx50_ecn_mark: 0
rx50_removed_vlan_packets: 0
rx50_wqe_err: 0
rx50_mpwqe_filler_cqes: 0
rx50_mpwqe_filler_strides: 0
rx50_buff_alloc_err: 0
rx50_cqe_compress_blks: 0
rx50_cqe_compress_pkts: 0
rx50_page_reuse: 0
rx50_cache_reuse: 0
rx50_cache_full: 0
rx50_cache_empty: 2560
rx50_cache_busy: 0
rx50_cache_waive: 0
rx50_congst_umr: 0
rx50_arfs_err: 0
rx50_xdp_tx_xmit: 0
rx50_xdp_tx_full: 0
rx50_xdp_tx_err: 0
rx50_xdp_tx_cqes: 0
rx51_packets: 0
rx51_bytes: 0
rx51_csum_complete: 0
rx51_csum_unnecessary: 0
rx51_csum_unnecessary_inner: 0
rx51_csum_none: 0
rx51_xdp_drop: 0
rx51_xdp_redirect: 0
rx51_lro_packets: 0
rx51_lro_bytes: 0
rx51_ecn_mark: 0
rx51_removed_vlan_packets: 0
rx51_wqe_err: 0
rx51_mpwqe_filler_cqes: 0
rx51_mpwqe_filler_strides: 0
rx51_buff_alloc_err: 0
rx51_cqe_compress_blks: 0
rx51_cqe_compress_pkts: 0
rx51_page_reuse: 0
rx51_cache_reuse: 0
rx51_cache_full: 0
rx51_cache_empty: 2560
rx51_cache_busy: 0
rx51_cache_waive: 0
rx51_congst_umr: 0
rx51_arfs_err: 0
rx51_xdp_tx_xmit: 0
rx51_xdp_tx_full: 0
rx51_xdp_tx_err: 0
rx51_xdp_tx_cqes: 0
rx52_packets: 0
rx52_bytes: 0
rx52_csum_complete: 0
rx52_csum_unnecessary: 0
rx52_csum_unnecessary_inner: 0
rx52_csum_none: 0
rx52_xdp_drop: 0
rx52_xdp_redirect: 0
rx52_lro_packets: 0
rx52_lro_bytes: 0
rx52_ecn_mark: 0
rx52_removed_vlan_packets: 0
rx52_wqe_err: 0
rx52_mpwqe_filler_cqes: 0
rx52_mpwqe_filler_strides: 0
rx52_buff_alloc_err: 0
rx52_cqe_compress_blks: 0
rx52_cqe_compress_pkts: 0
rx52_page_reuse: 0
rx52_cache_reuse: 0
rx52_cache_full: 0
rx52_cache_empty: 2560
rx52_cache_busy: 0
rx52_cache_waive: 0
rx52_congst_umr: 0
rx52_arfs_err: 0
rx52_xdp_tx_xmit: 0
rx52_xdp_tx_full: 0
rx52_xdp_tx_err: 0
rx52_xdp_tx_cqes: 0
rx53_packets: 0
rx53_bytes: 0
rx53_csum_complete: 0
rx53_csum_unnecessary: 0
rx53_csum_unnecessary_inner: 0
rx53_csum_none: 0
rx53_xdp_drop: 0
rx53_xdp_redirect: 0
rx53_lro_packets: 0
rx53_lro_bytes: 0
rx53_ecn_mark: 0
rx53_removed_vlan_packets: 0
rx53_wqe_err: 0
rx53_mpwqe_filler_cqes: 0
rx53_mpwqe_filler_strides: 0
rx53_buff_alloc_err: 0
rx53_cqe_compress_blks: 0
rx53_cqe_compress_pkts: 0
rx53_page_reuse: 0
rx53_cache_reuse: 0
rx53_cache_full: 0
rx53_cache_empty: 2560
rx53_cache_busy: 0
rx53_cache_waive: 0
rx53_congst_umr: 0
rx53_arfs_err: 0
rx53_xdp_tx_xmit: 0
rx53_xdp_tx_full: 0
rx53_xdp_tx_err: 0
rx53_xdp_tx_cqes: 0
rx54_packets: 0
rx54_bytes: 0
rx54_csum_complete: 0
rx54_csum_unnecessary: 0
rx54_csum_unnecessary_inner: 0
rx54_csum_none: 0
rx54_xdp_drop: 0
rx54_xdp_redirect: 0
rx54_lro_packets: 0
rx54_lro_bytes: 0
rx54_ecn_mark: 0
rx54_removed_vlan_packets: 0
rx54_wqe_err: 0
rx54_mpwqe_filler_cqes: 0
rx54_mpwqe_filler_strides: 0
rx54_buff_alloc_err: 0
rx54_cqe_compress_blks: 0
rx54_cqe_compress_pkts: 0
rx54_page_reuse: 0
rx54_cache_reuse: 0
rx54_cache_full: 0
rx54_cache_empty: 2560
rx54_cache_busy: 0
rx54_cache_waive: 0
rx54_congst_umr: 0
rx54_arfs_err: 0
rx54_xdp_tx_xmit: 0
rx54_xdp_tx_full: 0
rx54_xdp_tx_err: 0
rx54_xdp_tx_cqes: 0
rx55_packets: 0
rx55_bytes: 0
rx55_csum_complete: 0
rx55_csum_unnecessary: 0
rx55_csum_unnecessary_inner: 0
rx55_csum_none: 0
rx55_xdp_drop: 0
rx55_xdp_redirect: 0
rx55_lro_packets: 0
rx55_lro_bytes: 0
rx55_ecn_mark: 0
rx55_removed_vlan_packets: 0
rx55_wqe_err: 0
rx55_mpwqe_filler_cqes: 0
rx55_mpwqe_filler_strides: 0
rx55_buff_alloc_err: 0
rx55_cqe_compress_blks: 0
rx55_cqe_compress_pkts: 0
rx55_page_reuse: 0
rx55_cache_reuse: 0
rx55_cache_full: 0
rx55_cache_empty: 2560
rx55_cache_busy: 0
rx55_cache_waive: 0
rx55_congst_umr: 0
rx55_arfs_err: 0
rx55_xdp_tx_xmit: 0
rx55_xdp_tx_full: 0
rx55_xdp_tx_err: 0
rx55_xdp_tx_cqes: 0
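One pattern worth pulling out of the rx dump above: some active queues (e.g. rx11, rx12, rx18, rx20, rx24, rx25) show `cache_reuse` in the billions with small `cache_full`, while others (e.g. rx10, rx13, rx14) are the opposite, recycling almost nothing. A minimal sketch (not from the original post; queue/counter names just follow the driver's `rxN_cache_*` convention shown above) to compute per-queue reuse ratios from `ethtool -S` text:

```python
import re
from collections import defaultdict

def summarize_rx_cache(ethtool_output: str) -> dict:
    """Return {queue_index: page-cache reuse ratio} from ethtool -S text."""
    stats = defaultdict(dict)
    for line in ethtool_output.splitlines():
        m = re.match(r"\s*rx(\d+)_(cache_\w+):\s*(\d+)", line)
        if m:
            q, name, val = int(m.group(1)), m.group(2), int(m.group(3))
            stats[q][name] = val
    summary = {}
    for q, c in stats.items():
        # reuse ratio = cache hits / (hits + "cache full" misses)
        total = c.get("cache_reuse", 0) + c.get("cache_full", 0)
        summary[q] = c.get("cache_reuse", 0) / total if total else 0.0
    return summary

# Two queues copied from the dump above: rx11 reuses its cache almost
# always, rx13 almost never.
sample = """
rx11_cache_reuse: 2617466268
rx11_cache_full: 79850284
rx13_cache_reuse: 2417696
rx13_cache_full: 2772548505
"""
ratios = summarize_rx_cache(sample)
```

Feeding the full dump through this gives a quick per-queue view of which RX rings are recycling pages and which are constantly hitting a full cache, which may be a lead when chasing the page-allocator cost visible in the perf profiles elsewhere in this thread.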
tx0_packets: 6019477917
tx0_bytes: 3445238940825
tx0_tso_packets: 311304622
tx0_tso_bytes: 1897094773213
tx0_tso_inner_packets: 0
tx0_tso_inner_bytes: 0
tx0_csum_partial: 457981794
tx0_csum_partial_inner: 0
tx0_added_vlan_packets: 4965567654
tx0_nop: 72290329
tx0_csum_none: 4507585860
tx0_stopped: 9118
tx0_dropped: 0
tx0_xmit_more: 51651593
tx0_recover: 0
tx0_cqes: 4913918402
tx0_wake: 9118
tx0_cqe_err: 0
tx1_packets: 5700413414
tx1_bytes: 3340870662350
tx1_tso_packets: 318201557
tx1_tso_bytes: 1915233462303
tx1_tso_inner_packets: 0
tx1_tso_inner_bytes: 0
tx1_csum_partial: 461736722
tx1_csum_partial_inner: 0
tx1_added_vlan_packets: 4638708749
tx1_nop: 70061796
tx1_csum_none: 4176972027
tx1_stopped: 9248
tx1_dropped: 0
tx1_xmit_more: 39531959
tx1_recover: 0
tx1_cqes: 4599179178
tx1_wake: 9248
tx1_cqe_err: 0
tx2_packets: 5795960848
tx2_bytes: 3394876820271
tx2_tso_packets: 322935065
tx2_tso_bytes: 1910825901109
tx2_tso_inner_packets: 0
tx2_tso_inner_bytes: 0
tx2_csum_partial: 460747092
tx2_csum_partial_inner: 0
tx2_added_vlan_packets: 4743705654
tx2_nop: 72722430
tx2_csum_none: 4282958562
tx2_stopped: 8938
tx2_dropped: 0
tx2_xmit_more: 44084718
tx2_recover: 0
tx2_cqes: 4699623410
tx2_wake: 8938
tx2_cqe_err: 0
tx3_packets: 5580215878
tx3_bytes: 3191677257787
tx3_tso_packets: 305771141
tx3_tso_bytes: 1823265793476
tx3_tso_inner_packets: 0
tx3_tso_inner_bytes: 0
tx3_csum_partial: 434976070
tx3_csum_partial_inner: 0
tx3_added_vlan_packets: 4569899956
tx3_nop: 68184348
tx3_csum_none: 4134923886
tx3_stopped: 8383
tx3_dropped: 0
tx3_xmit_more: 41940375
tx3_recover: 0
tx3_cqes: 4527961924
tx3_wake: 8383
tx3_cqe_err: 0
tx4_packets: 6795007068
tx4_bytes: 3963890025270
tx4_tso_packets: 358437617
tx4_tso_bytes: 2154747995355
tx4_tso_inner_packets: 0
tx4_tso_inner_bytes: 0
tx4_csum_partial: 504764524
tx4_csum_partial_inner: 0
tx4_added_vlan_packets: 5602510191
tx4_nop: 81345604
tx4_csum_none: 5097745667
tx4_stopped: 10248
tx4_dropped: 0
tx4_xmit_more: 49068571
tx4_recover: 0
tx4_cqes: 5553444276
tx4_wake: 10248
tx4_cqe_err: 0
tx5_packets: 6408089261
tx5_bytes: 3676275848279
tx5_tso_packets: 345129329
tx5_tso_bytes: 2108447877473
tx5_tso_inner_packets: 0
tx5_tso_inner_bytes: 0
tx5_csum_partial: 494705894
tx5_csum_partial_inner: 0
tx5_added_vlan_packets: 5235998343
tx5_nop: 77694627
tx5_csum_none: 4741292449
tx5_stopped: 46
tx5_dropped: 0
tx5_xmit_more: 46675831
tx5_recover: 0
tx5_cqes: 5189323550
tx5_wake: 46
tx5_cqe_err: 0
tx6_packets: 6382289663
tx6_bytes: 3670991418150
tx6_tso_packets: 342927826
tx6_tso_bytes: 2075049679904
tx6_tso_inner_packets: 0
tx6_tso_inner_bytes: 0
tx6_csum_partial: 490369221
tx6_csum_partial_inner: 0
tx6_added_vlan_packets: 5232144528
tx6_nop: 77391246
tx6_csum_none: 4741775307
tx6_stopped: 10823
tx6_dropped: 0
tx6_xmit_more: 44487607
tx6_recover: 0
tx6_cqes: 5187659877
tx6_wake: 10823
tx6_cqe_err: 0
tx7_packets: 6456378284
tx7_bytes: 3758013320518
tx7_tso_packets: 350958294
tx7_tso_bytes: 2126833408524
tx7_tso_inner_packets: 0
tx7_tso_inner_bytes: 0
tx7_csum_partial: 501804109
tx7_csum_partial_inner: 0
tx7_added_vlan_packets: 5275635204
tx7_nop: 79010883
tx7_csum_none: 4773831096
tx7_stopped: 14684
tx7_dropped: 0
tx7_xmit_more: 44447469
tx7_recover: 0
tx7_cqes: 5231191770
tx7_wake: 14684
tx7_cqe_err: 0
tx8_packets: 6401799768
tx8_bytes: 3681210808766
tx8_tso_packets: 342878228
tx8_tso_bytes: 2089688012191
tx8_tso_inner_packets: 0
tx8_tso_inner_bytes: 0
tx8_csum_partial: 494865145
tx8_csum_partial_inner: 0
tx8_added_vlan_packets: 5242288908
tx8_nop: 77250910
tx8_csum_none: 4747423763
tx8_stopped: 2
tx8_dropped: 0
tx8_xmit_more: 44191737
tx8_recover: 0
tx8_cqes: 5198098454
tx8_wake: 2
tx8_cqe_err: 0
tx9_packets: 6632882888
tx9_bytes: 3820110338309
tx9_tso_packets: 354189056
tx9_tso_bytes: 2187883597128
tx9_tso_inner_packets: 0
tx9_tso_inner_bytes: 0
tx9_csum_partial: 511108218
tx9_csum_partial_inner: 0
tx9_added_vlan_packets: 5413836353
tx9_nop: 80560668
tx9_csum_none: 4902728135
tx9_stopped: 9091
tx9_dropped: 0
tx9_xmit_more: 54501293
tx9_recover: 0
tx9_cqes: 5359337638
tx9_wake: 9091
tx9_cqe_err: 0
tx10_packets: 6421786406
tx10_bytes: 3692798413429
tx10_tso_packets: 346878943
tx10_tso_bytes: 2111921062110
tx10_tso_inner_packets: 0
tx10_tso_inner_bytes: 0
tx10_csum_partial: 494356645
tx10_csum_partial_inner: 0
tx10_added_vlan_packets: 5248274374
tx10_nop: 77922624
tx10_csum_none: 4753917730
tx10_stopped: 9617
tx10_dropped: 0
tx10_xmit_more: 44473939
tx10_recover: 0
tx10_cqes: 5203802547
tx10_wake: 9617
tx10_cqe_err: 0
tx11_packets: 6406750938
tx11_bytes: 3660343565126
tx11_tso_packets: 355917271
tx11_tso_bytes: 2130812246956
tx11_tso_inner_packets: 0
tx11_tso_inner_bytes: 0
tx11_csum_partial: 500336369
tx11_csum_partial_inner: 0
tx11_added_vlan_packets: 5228267547
tx11_nop: 78906315
tx11_csum_none: 4727931178
tx11_stopped: 9607
tx11_dropped: 0
tx11_xmit_more: 40041492
tx11_recover: 0
tx11_cqes: 5188228290
tx11_wake: 9607
tx11_cqe_err: 0
tx12_packets: 6422347846
tx12_bytes: 3718772753227
tx12_tso_packets: 355397223
tx12_tso_bytes: 2162614059758
tx12_tso_inner_packets: 0
tx12_tso_inner_bytes: 0
tx12_csum_partial: 511437844
tx12_csum_partial_inner: 0
tx12_added_vlan_packets: 5221373746
tx12_nop: 78866779
tx12_csum_none: 4709935902
tx12_stopped: 10280
tx12_dropped: 0
tx12_xmit_more: 42189399
tx12_recover: 0
tx12_cqes: 5179187154
tx12_wake: 10280
tx12_cqe_err: 0
tx13_packets: 6429383816
tx13_bytes: 3725679445046
tx13_tso_packets: 360934759
tx13_tso_bytes: 2148016411436
tx13_tso_inner_packets: 0
tx13_tso_inner_bytes: 0
tx13_csum_partial: 505245849
tx13_csum_partial_inner: 0
tx13_added_vlan_packets: 5240267441
tx13_nop: 80295637
tx13_csum_none: 4735021592
tx13_stopped: 84
tx13_dropped: 0
tx13_xmit_more: 43118045
tx13_recover: 0
tx13_cqes: 5197150348
tx13_wake: 84
tx13_cqe_err: 0
tx14_packets: 6375279148
tx14_bytes: 3624267203336
tx14_tso_packets: 344388148
tx14_tso_bytes: 2094966273548
tx14_tso_inner_packets: 0
tx14_tso_inner_bytes: 0
tx14_csum_partial: 494129407
tx14_csum_partial_inner: 0
tx14_added_vlan_packets: 5210749337
tx14_nop: 77280615
tx14_csum_none: 4716619930
tx14_stopped: 13057
tx14_dropped: 0
tx14_xmit_more: 40849682
tx14_recover: 0
tx14_cqes: 5169902694
tx14_wake: 13057
tx14_cqe_err: 0
tx15_packets: 6489306520
tx15_bytes: 3775716194795
tx15_tso_packets: 368716406
tx15_tso_bytes: 2165876423354
tx15_tso_inner_packets: 0
tx15_tso_inner_bytes: 0
tx15_csum_partial: 509887864
tx15_csum_partial_inner: 0
tx15_added_vlan_packets: 5296767390
tx15_nop: 80803468
tx15_csum_none: 4786879529
tx15_stopped: 1
tx15_dropped: 0
tx15_xmit_more: 46979676
tx15_recover: 0
tx15_cqes: 5249789328
tx15_wake: 1
tx15_cqe_err: 0
tx16_packets: 6559857761
tx16_bytes: 3724080573905
tx16_tso_packets: 350864176
tx16_tso_bytes: 2099634006033
tx16_tso_inner_packets: 0
tx16_tso_inner_bytes: 0
tx16_csum_partial: 489397232
tx16_csum_partial_inner: 0
tx16_added_vlan_packets: 5398869334
tx16_nop: 79046075
tx16_csum_none: 4909472106
tx16_stopped: 4480
tx16_dropped: 0
tx16_xmit_more: 47273286
tx16_recover: 0
tx16_cqes: 5351598315
tx16_wake: 4480
tx16_cqe_err: 0
tx17_packets: 6358711533
tx17_bytes: 3650180865573
tx17_tso_packets: 350723136
tx17_tso_bytes: 2109426587128
tx17_tso_inner_packets: 0
tx17_tso_inner_bytes: 0
tx17_csum_partial: 494719487
tx17_csum_partial_inner: 0
tx17_added_vlan_packets: 5190068796
tx17_nop: 77285612
tx17_csum_none: 4695349309
tx17_stopped: 10443
tx17_dropped: 0
tx17_xmit_more: 45582108
tx17_recover: 0
tx17_cqes: 5144489363
tx17_wake: 10443
tx17_cqe_err: 0
tx18_packets: 6655328437
tx18_bytes: 3801768461807
tx18_tso_packets: 356516373
tx18_tso_bytes: 2164829247550
tx18_tso_inner_packets: 0
tx18_tso_inner_bytes: 0
tx18_csum_partial: 500508446
tx18_csum_partial_inner: 0
tx18_added_vlan_packets: 5454166840
tx18_nop: 80423007
tx18_csum_none: 4953658394
tx18_stopped: 14760
tx18_dropped: 0
tx18_xmit_more: 50837465
tx18_recover: 0
tx18_cqes: 5403332553
tx18_wake: 14760
tx18_cqe_err: 0
tx19_packets: 6408680611
tx19_bytes: 3644119934372
tx19_tso_packets: 350727530
tx19_tso_bytes: 2089896715365
tx19_tso_inner_packets: 0
tx19_tso_inner_bytes: 0
tx19_csum_partial: 486536490
tx19_csum_partial_inner: 0
tx19_added_vlan_packets: 5255839020
tx19_nop: 78525198
tx19_csum_none: 4769302530
tx19_stopped: 8614
tx19_dropped: 0
tx19_xmit_more: 43605232
tx19_recover: 0
tx19_cqes: 5212236833
tx19_wake: 8614
tx19_cqe_err: 0
tx20_packets: 5609275141
tx20_bytes: 3187279031581
tx20_tso_packets: 298609303
tx20_tso_bytes: 1794382229379
tx20_tso_inner_packets: 0
tx20_tso_inner_bytes: 0
tx20_csum_partial: 430691178
tx20_csum_partial_inner: 0
tx20_added_vlan_packets: 4616844286
tx20_nop: 67450040
tx20_csum_none: 4186153108
tx20_stopped: 9099
tx20_dropped: 0
tx20_xmit_more: 42040991
tx20_recover: 0
tx20_cqes: 4574805846
tx20_wake: 9099
tx20_cqe_err: 0
tx21_packets: 5641621183
tx21_bytes: 3279282331124
tx21_tso_packets: 311297057
tx21_tso_bytes: 1875735401012
tx21_tso_inner_packets: 0
tx21_tso_inner_bytes: 0
tx21_csum_partial: 444333894
tx21_csum_partial_inner: 0
tx21_added_vlan_packets: 4603527701
tx21_nop: 68857983
tx21_csum_none: 4159193807
tx21_stopped: 10082
tx21_dropped: 0
tx21_xmit_more: 43988081
tx21_recover: 0
tx21_cqes: 4559542410
tx21_wake: 10082
tx21_cqe_err: 0
tx22_packets: 5822168288
tx22_bytes: 3452026726862
tx22_tso_packets: 308230791
tx22_tso_bytes: 1859686450671
tx22_tso_inner_packets: 0
tx22_tso_inner_bytes: 0
tx22_csum_partial: 442751518
tx22_csum_partial_inner: 0
tx22_added_vlan_packets: 4792100335
tx22_nop: 70631706
tx22_csum_none: 4349348817
tx22_stopped: 9355
tx22_dropped: 0
tx22_xmit_more: 45165994
tx22_recover: 0
tx22_cqes: 4746936601
tx22_wake: 9355
tx22_cqe_err: 0
tx23_packets: 5664896066
tx23_bytes: 3207724186946
tx23_tso_packets: 300418757
tx23_tso_bytes: 1794180478679
tx23_tso_inner_packets: 0
tx23_tso_inner_bytes: 0
tx23_csum_partial: 429898848
tx23_csum_partial_inner: 0
tx23_added_vlan_packets: 4674317320
tx23_nop: 67899896
tx23_csum_none: 4244418472
tx23_stopped: 11684
tx23_dropped: 0
tx23_xmit_more: 43351132
tx23_recover: 0
tx23_cqes: 4630969028
tx23_wake: 11684
tx23_cqe_err: 0
tx24_packets: 5663326601
tx24_bytes: 3250127095110
tx24_tso_packets: 301327422
tx24_tso_bytes: 1831260534157
tx24_tso_inner_packets: 0
tx24_tso_inner_bytes: 0
tx24_csum_partial: 438757312
tx24_csum_partial_inner: 0
tx24_added_vlan_packets: 4646014986
tx24_nop: 68431153
tx24_csum_none: 4207257674
tx24_stopped: 9240
tx24_dropped: 0
tx24_xmit_more: 47699542
tx24_recover: 0
tx24_cqes: 4598317913
tx24_wake: 9240
tx24_cqe_err: 0
tx25_packets: 5703883962
tx25_bytes: 3291856915695
tx25_tso_packets: 308900318
tx25_tso_bytes: 1855516128386
tx25_tso_inner_packets: 0
tx25_tso_inner_bytes: 0
tx25_csum_partial: 444753744
tx25_csum_partial_inner: 0
tx25_added_vlan_packets: 4676528924
tx25_nop: 69230967
tx25_csum_none: 4231775180
tx25_stopped: 1140
tx25_dropped: 0
tx25_xmit_more: 40819195
tx25_recover: 0
tx25_cqes: 4635710966
tx25_wake: 1140
tx25_cqe_err: 0
tx26_packets: 5803495984
tx26_bytes: 3413564272139
tx26_tso_packets: 319986230
tx26_tso_bytes: 1929042839677
tx26_tso_inner_packets: 0
tx26_tso_inner_bytes: 0
tx26_csum_partial: 464771163
tx26_csum_partial_inner: 0
tx26_added_vlan_packets: 4734767280
tx26_nop: 71345080
tx26_csum_none: 4269996117
tx26_stopped: 10972
tx26_dropped: 0
tx26_xmit_more: 43793424
tx26_recover: 0
tx26_cqes: 4690976400
tx26_wake: 10972
tx26_cqe_err: 0
tx27_packets: 5960955343
tx27_bytes: 3444156164526
tx27_tso_packets: 325099639
tx27_tso_bytes: 1928378678784
tx27_tso_inner_packets: 0
tx27_tso_inner_bytes: 0
tx27_csum_partial: 467310289
tx27_csum_partial_inner: 0
tx27_added_vlan_packets: 4888651368
tx27_nop: 73201664
tx27_csum_none: 4421341079
tx27_stopped: 9465
tx27_dropped: 0
tx27_xmit_more: 53632121
tx27_recover: 0
tx27_cqes: 4835021398
tx27_wake: 9465
tx27_cqe_err: 0
tx28_packets: 0
tx28_bytes: 0
tx28_tso_packets: 0
tx28_tso_bytes: 0
tx28_tso_inner_packets: 0
tx28_tso_inner_bytes: 0
tx28_csum_partial: 0
tx28_csum_partial_inner: 0
tx28_added_vlan_packets: 0
tx28_nop: 0
tx28_csum_none: 0
tx28_stopped: 0
tx28_dropped: 0
tx28_xmit_more: 0
tx28_recover: 0
tx28_cqes: 0
tx28_wake: 0
tx28_cqe_err: 0
tx29_packets: 3
tx29_bytes: 266
tx29_tso_packets: 0
tx29_tso_bytes: 0
tx29_tso_inner_packets: 0
tx29_tso_inner_bytes: 0
tx29_csum_partial: 0
tx29_csum_partial_inner: 0
tx29_added_vlan_packets: 0
tx29_nop: 0
tx29_csum_none: 3
tx29_stopped: 0
tx29_dropped: 0
tx29_xmit_more: 1
tx29_recover: 0
tx29_cqes: 2
tx29_wake: 0
tx29_cqe_err: 0
tx30_packets: 0
tx30_bytes: 0
tx30_tso_packets: 0
tx30_tso_bytes: 0
tx30_tso_inner_packets: 0
tx30_tso_inner_bytes: 0
tx30_csum_partial: 0
tx30_csum_partial_inner: 0
tx30_added_vlan_packets: 0
tx30_nop: 0
tx30_csum_none: 0
tx30_stopped: 0
tx30_dropped: 0
tx30_xmit_more: 0
tx30_recover: 0
tx30_cqes: 0
tx30_wake: 0
tx30_cqe_err: 0
tx31_packets: 0
tx31_bytes: 0
tx31_tso_packets: 0
tx31_tso_bytes: 0
tx31_tso_inner_packets: 0
tx31_tso_inner_bytes: 0
tx31_csum_partial: 0
tx31_csum_partial_inner: 0
tx31_added_vlan_packets: 0
tx31_nop: 0
tx31_csum_none: 0
tx31_stopped: 0
tx31_dropped: 0
tx31_xmit_more: 0
tx31_recover: 0
tx31_cqes: 0
tx31_wake: 0
tx31_cqe_err: 0
tx32_packets: 0
tx32_bytes: 0
tx32_tso_packets: 0
tx32_tso_bytes: 0
tx32_tso_inner_packets: 0
tx32_tso_inner_bytes: 0
tx32_csum_partial: 0
tx32_csum_partial_inner: 0
tx32_added_vlan_packets: 0
tx32_nop: 0
tx32_csum_none: 0
tx32_stopped: 0
tx32_dropped: 0
tx32_xmit_more: 0
tx32_recover: 0
tx32_cqes: 0
tx32_wake: 0
tx32_cqe_err: 0
tx33_packets: 0
tx33_bytes: 0
tx33_tso_packets: 0
tx33_tso_bytes: 0
tx33_tso_inner_packets: 0
tx33_tso_inner_bytes: 0
tx33_csum_partial: 0
tx33_csum_partial_inner: 0
tx33_added_vlan_packets: 0
tx33_nop: 0
tx33_csum_none: 0
tx33_stopped: 0
tx33_dropped: 0
tx33_xmit_more: 0
tx33_recover: 0
tx33_cqes: 0
tx33_wake: 0
tx33_cqe_err: 0
tx34_packets: 0
tx34_bytes: 0
tx34_tso_packets: 0
tx34_tso_bytes: 0
tx34_tso_inner_packets: 0
tx34_tso_inner_bytes: 0
tx34_csum_partial: 0
tx34_csum_partial_inner: 0
tx34_added_vlan_packets: 0
tx34_nop: 0
tx34_csum_none: 0
tx34_stopped: 0
tx34_dropped: 0
tx34_xmit_more: 0
tx34_recover: 0
tx34_cqes: 0
tx34_wake: 0
tx34_cqe_err: 0
tx35_packets: 0
tx35_bytes: 0
tx35_tso_packets: 0
tx35_tso_bytes: 0
tx35_tso_inner_packets: 0
tx35_tso_inner_bytes: 0
tx35_csum_partial: 0
tx35_csum_partial_inner: 0
tx35_added_vlan_packets: 0
tx35_nop: 0
tx35_csum_none: 0
tx35_stopped: 0
tx35_dropped: 0
tx35_xmit_more: 0
tx35_recover: 0
tx35_cqes: 0
tx35_wake: 0
tx35_cqe_err: 0
tx36_packets: 0
tx36_bytes: 0
tx36_tso_packets: 0
tx36_tso_bytes: 0
tx36_tso_inner_packets: 0
tx36_tso_inner_bytes: 0
tx36_csum_partial: 0
tx36_csum_partial_inner: 0
tx36_added_vlan_packets: 0
tx36_nop: 0
tx36_csum_none: 0
tx36_stopped: 0
tx36_dropped: 0
tx36_xmit_more: 0
tx36_recover: 0
tx36_cqes: 0
tx36_wake: 0
tx36_cqe_err: 0
tx37_packets: 0
tx37_bytes: 0
tx37_tso_packets: 0
tx37_tso_bytes: 0
tx37_tso_inner_packets: 0
tx37_tso_inner_bytes: 0
tx37_csum_partial: 0
tx37_csum_partial_inner: 0
tx37_added_vlan_packets: 0
tx37_nop: 0
tx37_csum_none: 0
tx37_stopped: 0
tx37_dropped: 0
tx37_xmit_more: 0
tx37_recover: 0
tx37_cqes: 0
tx37_wake: 0
tx37_cqe_err: 0
tx38_packets: 0
tx38_bytes: 0
tx38_tso_packets: 0
tx38_tso_bytes: 0
tx38_tso_inner_packets: 0
tx38_tso_inner_bytes: 0
tx38_csum_partial: 0
tx38_csum_partial_inner: 0
tx38_added_vlan_packets: 0
tx38_nop: 0
tx38_csum_none: 0
tx38_stopped: 0
tx38_dropped: 0
tx38_xmit_more: 0
tx38_recover: 0
tx38_cqes: 0
tx38_wake: 0
tx38_cqe_err: 0
tx39_packets: 0
tx39_bytes: 0
tx39_tso_packets: 0
tx39_tso_bytes: 0
tx39_tso_inner_packets: 0
tx39_tso_inner_bytes: 0
tx39_csum_partial: 0
tx39_csum_partial_inner: 0
tx39_added_vlan_packets: 0
tx39_nop: 0
tx39_csum_none: 0
tx39_stopped: 0
tx39_dropped: 0
tx39_xmit_more: 0
tx39_recover: 0
tx39_cqes: 0
tx39_wake: 0
tx39_cqe_err: 0
tx40_packets: 0
tx40_bytes: 0
tx40_tso_packets: 0
tx40_tso_bytes: 0
tx40_tso_inner_packets: 0
tx40_tso_inner_bytes: 0
tx40_csum_partial: 0
tx40_csum_partial_inner: 0
tx40_added_vlan_packets: 0
tx40_nop: 0
tx40_csum_none: 0
tx40_stopped: 0
tx40_dropped: 0
tx40_xmit_more: 0
tx40_recover: 0
tx40_cqes: 0
tx40_wake: 0
tx40_cqe_err: 0
tx41_packets: 0
tx41_bytes: 0
tx41_tso_packets: 0
tx41_tso_bytes: 0
tx41_tso_inner_packets: 0
tx41_tso_inner_bytes: 0
tx41_csum_partial: 0
tx41_csum_partial_inner: 0
tx41_added_vlan_packets: 0
tx41_nop: 0
tx41_csum_none: 0
tx41_stopped: 0
tx41_dropped: 0
tx41_xmit_more: 0
tx41_recover: 0
tx41_cqes: 0
tx41_wake: 0
tx41_cqe_err: 0
tx42_packets: 0
tx42_bytes: 0
tx42_tso_packets: 0
tx42_tso_bytes: 0
tx42_tso_inner_packets: 0
tx42_tso_inner_bytes: 0
tx42_csum_partial: 0
tx42_csum_partial_inner: 0
tx42_added_vlan_packets: 0
tx42_nop: 0
tx42_csum_none: 0
tx42_stopped: 0
tx42_dropped: 0
tx42_xmit_more: 0
tx42_recover: 0
tx42_cqes: 0
tx42_wake: 0
tx42_cqe_err: 0
tx43_packets: 0
tx43_bytes: 0
tx43_tso_packets: 0
tx43_tso_bytes: 0
tx43_tso_inner_packets: 0
tx43_tso_inner_bytes: 0
tx43_csum_partial: 0
tx43_csum_partial_inner: 0
tx43_added_vlan_packets: 0
tx43_nop: 0
tx43_csum_none: 0
tx43_stopped: 0
tx43_dropped: 0
tx43_xmit_more: 0
tx43_recover: 0
tx43_cqes: 0
tx43_wake: 0
tx43_cqe_err: 0
tx44_packets: 0
tx44_bytes: 0
tx44_tso_packets: 0
tx44_tso_bytes: 0
tx44_tso_inner_packets: 0
tx44_tso_inner_bytes: 0
tx44_csum_partial: 0
tx44_csum_partial_inner: 0
tx44_added_vlan_packets: 0
tx44_nop: 0
tx44_csum_none: 0
tx44_stopped: 0
tx44_dropped: 0
tx44_xmit_more: 0
tx44_recover: 0
tx44_cqes: 0
tx44_wake: 0
tx44_cqe_err: 0
tx45_packets: 0
tx45_bytes: 0
tx45_tso_packets: 0
tx45_tso_bytes: 0
tx45_tso_inner_packets: 0
tx45_tso_inner_bytes: 0
tx45_csum_partial: 0
tx45_csum_partial_inner: 0
tx45_added_vlan_packets: 0
tx45_nop: 0
tx45_csum_none: 0
tx45_stopped: 0
tx45_dropped: 0
tx45_xmit_more: 0
tx45_recover: 0
tx45_cqes: 0
tx45_wake: 0
tx45_cqe_err: 0
tx46_packets: 0
tx46_bytes: 0
tx46_tso_packets: 0
tx46_tso_bytes: 0
tx46_tso_inner_packets: 0
tx46_tso_inner_bytes: 0
tx46_csum_partial: 0
tx46_csum_partial_inner: 0
tx46_added_vlan_packets: 0
tx46_nop: 0
tx46_csum_none: 0
tx46_stopped: 0
tx46_dropped: 0
tx46_xmit_more: 0
tx46_recover: 0
tx46_cqes: 0
tx46_wake: 0
tx46_cqe_err: 0
tx47_packets: 0
tx47_bytes: 0
tx47_tso_packets: 0
tx47_tso_bytes: 0
tx47_tso_inner_packets: 0
tx47_tso_inner_bytes: 0
tx47_csum_partial: 0
tx47_csum_partial_inner: 0
tx47_added_vlan_packets: 0
tx47_nop: 0
tx47_csum_none: 0
tx47_stopped: 0
tx47_dropped: 0
tx47_xmit_more: 0
tx47_recover: 0
tx47_cqes: 0
tx47_wake: 0
tx47_cqe_err: 0
tx48_packets: 0
tx48_bytes: 0
tx48_tso_packets: 0
tx48_tso_bytes: 0
tx48_tso_inner_packets: 0
tx48_tso_inner_bytes: 0
tx48_csum_partial: 0
tx48_csum_partial_inner: 0
tx48_added_vlan_packets: 0
tx48_nop: 0
tx48_csum_none: 0
tx48_stopped: 0
tx48_dropped: 0
tx48_xmit_more: 0
tx48_recover: 0
tx48_cqes: 0
tx48_wake: 0
tx48_cqe_err: 0
tx49_packets: 0
tx49_bytes: 0
tx49_tso_packets: 0
tx49_tso_bytes: 0
tx49_tso_inner_packets: 0
tx49_tso_inner_bytes: 0
tx49_csum_partial: 0
tx49_csum_partial_inner: 0
tx49_added_vlan_packets: 0
tx49_nop: 0
tx49_csum_none: 0
tx49_stopped: 0
tx49_dropped: 0
tx49_xmit_more: 0
tx49_recover: 0
tx49_cqes: 0
tx49_wake: 0
tx49_cqe_err: 0
tx50_packets: 0
tx50_bytes: 0
tx50_tso_packets: 0
tx50_tso_bytes: 0
tx50_tso_inner_packets: 0
tx50_tso_inner_bytes: 0
tx50_csum_partial: 0
tx50_csum_partial_inner: 0
tx50_added_vlan_packets: 0
tx50_nop: 0
tx50_csum_none: 0
tx50_stopped: 0
tx50_dropped: 0
tx50_xmit_more: 0
tx50_recover: 0
tx50_cqes: 0
tx50_wake: 0
tx50_cqe_err: 0
tx51_packets: 0
tx51_bytes: 0
tx51_tso_packets: 0
tx51_tso_bytes: 0
tx51_tso_inner_packets: 0
tx51_tso_inner_bytes: 0
tx51_csum_partial: 0
tx51_csum_partial_inner: 0
tx51_added_vlan_packets: 0
tx51_nop: 0
tx51_csum_none: 0
tx51_stopped: 0
tx51_dropped: 0
tx51_xmit_more: 0
tx51_recover: 0
tx51_cqes: 0
tx51_wake: 0
tx51_cqe_err: 0
tx52_packets: 0
tx52_bytes: 0
tx52_tso_packets: 0
tx52_tso_bytes: 0
tx52_tso_inner_packets: 0
tx52_tso_inner_bytes: 0
tx52_csum_partial: 0
tx52_csum_partial_inner: 0
tx52_added_vlan_packets: 0
tx52_nop: 0
tx52_csum_none: 0
tx52_stopped: 0
tx52_dropped: 0
tx52_xmit_more: 0
tx52_recover: 0
tx52_cqes: 0
tx52_wake: 0
tx52_cqe_err: 0
tx53_packets: 0
tx53_bytes: 0
tx53_tso_packets: 0
tx53_tso_bytes: 0
tx53_tso_inner_packets: 0
tx53_tso_inner_bytes: 0
tx53_csum_partial: 0
tx53_csum_partial_inner: 0
tx53_added_vlan_packets: 0
tx53_nop: 0
tx53_csum_none: 0
tx53_stopped: 0
tx53_dropped: 0
tx53_xmit_more: 0
tx53_recover: 0
tx53_cqes: 0
tx53_wake: 0
tx53_cqe_err: 0
tx54_packets: 0
tx54_bytes: 0
tx54_tso_packets: 0
tx54_tso_bytes: 0
tx54_tso_inner_packets: 0
tx54_tso_inner_bytes: 0
tx54_csum_partial: 0
tx54_csum_partial_inner: 0
tx54_added_vlan_packets: 0
tx54_nop: 0
tx54_csum_none: 0
tx54_stopped: 0
tx54_dropped: 0
tx54_xmit_more: 0
tx54_recover: 0
tx54_cqes: 0
tx54_wake: 0
tx54_cqe_err: 0
tx55_packets: 0
tx55_bytes: 0
tx55_tso_packets: 0
tx55_tso_bytes: 0
tx55_tso_inner_packets: 0
tx55_tso_inner_bytes: 0
tx55_csum_partial: 0
tx55_csum_partial_inner: 0
tx55_added_vlan_packets: 0
tx55_nop: 0
tx55_csum_none: 0
tx55_stopped: 0
tx55_dropped: 0
tx55_xmit_more: 0
tx55_recover: 0
tx55_cqes: 0
tx55_wake: 0
tx55_cqe_err: 0
tx0_xdp_xmit: 0
tx0_xdp_full: 0
tx0_xdp_err: 0
tx0_xdp_cqes: 0
tx1_xdp_xmit: 0
tx1_xdp_full: 0
tx1_xdp_err: 0
tx1_xdp_cqes: 0
tx2_xdp_xmit: 0
tx2_xdp_full: 0
tx2_xdp_err: 0
tx2_xdp_cqes: 0
tx3_xdp_xmit: 0
tx3_xdp_full: 0
tx3_xdp_err: 0
tx3_xdp_cqes: 0
tx4_xdp_xmit: 0
tx4_xdp_full: 0
tx4_xdp_err: 0
tx4_xdp_cqes: 0
tx5_xdp_xmit: 0
tx5_xdp_full: 0
tx5_xdp_err: 0
tx5_xdp_cqes: 0
tx6_xdp_xmit: 0
tx6_xdp_full: 0
tx6_xdp_err: 0
tx6_xdp_cqes: 0
tx7_xdp_xmit: 0
tx7_xdp_full: 0
tx7_xdp_err: 0
tx7_xdp_cqes: 0
tx8_xdp_xmit: 0
tx8_xdp_full: 0
tx8_xdp_err: 0
tx8_xdp_cqes: 0
tx9_xdp_xmit: 0
tx9_xdp_full: 0
tx9_xdp_err: 0
tx9_xdp_cqes: 0
tx10_xdp_xmit: 0
tx10_xdp_full: 0
tx10_xdp_err: 0
tx10_xdp_cqes: 0
tx11_xdp_xmit: 0
tx11_xdp_full: 0
tx11_xdp_err: 0
tx11_xdp_cqes: 0
tx12_xdp_xmit: 0
tx12_xdp_full: 0
tx12_xdp_err: 0
tx12_xdp_cqes: 0
tx13_xdp_xmit: 0
tx13_xdp_full: 0
tx13_xdp_err: 0
tx13_xdp_cqes: 0
tx14_xdp_xmit: 0
tx14_xdp_full: 0
tx14_xdp_err: 0
tx14_xdp_cqes: 0
tx15_xdp_xmit: 0
tx15_xdp_full: 0
tx15_xdp_err: 0
tx15_xdp_cqes: 0
tx16_xdp_xmit: 0
tx16_xdp_full: 0
tx16_xdp_err: 0
tx16_xdp_cqes: 0
tx17_xdp_xmit: 0
tx17_xdp_full: 0
tx17_xdp_err: 0
tx17_xdp_cqes: 0
tx18_xdp_xmit: 0
tx18_xdp_full: 0
tx18_xdp_err: 0
tx18_xdp_cqes: 0
tx19_xdp_xmit: 0
tx19_xdp_full: 0
tx19_xdp_err: 0
tx19_xdp_cqes: 0
tx20_xdp_xmit: 0
tx20_xdp_full: 0
tx20_xdp_err: 0
tx20_xdp_cqes: 0
tx21_xdp_xmit: 0
tx21_xdp_full: 0
tx21_xdp_err: 0
tx21_xdp_cqes: 0
tx22_xdp_xmit: 0
tx22_xdp_full: 0
tx22_xdp_err: 0
tx22_xdp_cqes: 0
tx23_xdp_xmit: 0
tx23_xdp_full: 0
tx23_xdp_err: 0
tx23_xdp_cqes: 0
tx24_xdp_xmit: 0
tx24_xdp_full: 0
tx24_xdp_err: 0
tx24_xdp_cqes: 0
tx25_xdp_xmit: 0
tx25_xdp_full: 0
tx25_xdp_err: 0
tx25_xdp_cqes: 0
tx26_xdp_xmit: 0
tx26_xdp_full: 0
tx26_xdp_err: 0
tx26_xdp_cqes: 0
tx27_xdp_xmit: 0
tx27_xdp_full: 0
tx27_xdp_err: 0
tx27_xdp_cqes: 0
tx28_xdp_xmit: 0
tx28_xdp_full: 0
tx28_xdp_err: 0
tx28_xdp_cqes: 0
tx29_xdp_xmit: 0
tx29_xdp_full: 0
tx29_xdp_err: 0
tx29_xdp_cqes: 0
tx30_xdp_xmit: 0
tx30_xdp_full: 0
tx30_xdp_err: 0
tx30_xdp_cqes: 0
tx31_xdp_xmit: 0
tx31_xdp_full: 0
tx31_xdp_err: 0
tx31_xdp_cqes: 0
tx32_xdp_xmit: 0
tx32_xdp_full: 0
tx32_xdp_err: 0
tx32_xdp_cqes: 0
tx33_xdp_xmit: 0
tx33_xdp_full: 0
tx33_xdp_err: 0
tx33_xdp_cqes: 0
tx34_xdp_xmit: 0
tx34_xdp_full: 0
tx34_xdp_err: 0
tx34_xdp_cqes: 0
tx35_xdp_xmit: 0
tx35_xdp_full: 0
tx35_xdp_err: 0
tx35_xdp_cqes: 0
tx36_xdp_xmit: 0
tx36_xdp_full: 0
tx36_xdp_err: 0
tx36_xdp_cqes: 0
tx37_xdp_xmit: 0
tx37_xdp_full: 0
tx37_xdp_err: 0
tx37_xdp_cqes: 0
tx38_xdp_xmit: 0
tx38_xdp_full: 0
tx38_xdp_err: 0
tx38_xdp_cqes: 0
tx39_xdp_xmit: 0
tx39_xdp_full: 0
tx39_xdp_err: 0
tx39_xdp_cqes: 0
tx40_xdp_xmit: 0
tx40_xdp_full: 0
tx40_xdp_err: 0
tx40_xdp_cqes: 0
tx41_xdp_xmit: 0
tx41_xdp_full: 0
tx41_xdp_err: 0
tx41_xdp_cqes: 0
tx42_xdp_xmit: 0
tx42_xdp_full: 0
tx42_xdp_err: 0
tx42_xdp_cqes: 0
tx43_xdp_xmit: 0
tx43_xdp_full: 0
tx43_xdp_err: 0
tx43_xdp_cqes: 0
tx44_xdp_xmit: 0
tx44_xdp_full: 0
tx44_xdp_err: 0
tx44_xdp_cqes: 0
tx45_xdp_xmit: 0
tx45_xdp_full: 0
tx45_xdp_err: 0
tx45_xdp_cqes: 0
tx46_xdp_xmit: 0
tx46_xdp_full: 0
tx46_xdp_err: 0
tx46_xdp_cqes: 0
tx47_xdp_xmit: 0
tx47_xdp_full: 0
tx47_xdp_err: 0
tx47_xdp_cqes: 0
tx48_xdp_xmit: 0
tx48_xdp_full: 0
tx48_xdp_err: 0
tx48_xdp_cqes: 0
tx49_xdp_xmit: 0
tx49_xdp_full: 0
tx49_xdp_err: 0
tx49_xdp_cqes: 0
tx50_xdp_xmit: 0
tx50_xdp_full: 0
tx50_xdp_err: 0
tx50_xdp_cqes: 0
tx51_xdp_xmit: 0
tx51_xdp_full: 0
tx51_xdp_err: 0
tx51_xdp_cqes: 0
tx52_xdp_xmit: 0
tx52_xdp_full: 0
tx52_xdp_err: 0
tx52_xdp_cqes: 0
tx53_xdp_xmit: 0
tx53_xdp_full: 0
tx53_xdp_err: 0
tx53_xdp_cqes: 0
tx54_xdp_xmit: 0
tx54_xdp_full: 0
tx54_xdp_err: 0
tx54_xdp_cqes: 0
tx55_xdp_xmit: 0
tx55_xdp_full: 0
tx55_xdp_err: 0
tx55_xdp_cqes: 0
mpstat -P ALL 1 10
Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
Average:     all    0.04    0.00    6.94    0.02    0.00   32.00    0.00    0.00    0.00   61.00
Average:       0    0.00    0.00    1.20    0.00    0.00    0.00    0.00    0.00    0.00   98.80
Average:       1    0.00    0.00    2.30    0.00    0.00    0.00    0.00    0.00    0.00   97.70
Average:       2    0.10    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00   99.90
Average:       3    0.10    0.00    1.50    0.00    0.00    0.00    0.00    0.00    0.00   98.40
Average:       4    0.50    0.00    2.50    0.00    0.00    0.00    0.00    0.00    0.00   97.00
Average:       5    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:       6    0.90    0.00   10.20    0.00    0.00    0.00    0.00    0.00    0.00   88.90
Average:       7    0.00    0.00    0.00    1.40    0.00    0.00    0.00    0.00    0.00   98.60
Average:       8    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:       9    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      10    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      11    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      12    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      13    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      14    0.00    0.00   12.99    0.00    0.00   62.64    0.00    0.00    0.00   24.38
Average:      15    0.00    0.00   12.70    0.00    0.00   63.40    0.00    0.00    0.00   23.90
Average:      16    0.00    0.00   11.20    0.00    0.00   66.40    0.00    0.00    0.00   22.40
Average:      17    0.00    0.00   16.60    0.00    0.00   52.10    0.00    0.00    0.00   31.30
Average:      18    0.00    0.00   13.90    0.00    0.00   61.20    0.00    0.00    0.00   24.90
Average:      19    0.00    0.00    9.99    0.00    0.00   70.33    0.00    0.00    0.00   19.68
Average:      20    0.00    0.00    9.00    0.00    0.00   73.00    0.00    0.00    0.00   18.00
Average:      21    0.00    0.00    8.70    0.00    0.00   73.90    0.00    0.00    0.00   17.40
Average:      22    0.00    0.00   15.42    0.00    0.00   58.56    0.00    0.00    0.00   26.03
Average:      23    0.00    0.00   10.81    0.00    0.00   71.67    0.00    0.00    0.00   17.52
Average:      24    0.00    0.00   10.00    0.00    0.00   71.80    0.00    0.00    0.00   18.20
Average:      25    0.00    0.00   11.19    0.00    0.00   71.13    0.00    0.00    0.00   17.68
Average:      26    0.00    0.00   11.00    0.00    0.00   70.80    0.00    0.00    0.00   18.20
Average:      27    0.00    0.00   10.01    0.00    0.00   69.57    0.00    0.00    0.00   20.42
Average:      28    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      29    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      30    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      31    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      32    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      33    0.00    0.00    3.90    0.00    0.00    0.00    0.00    0.00    0.00   96.10
Average:      34    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      35    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      36    0.10    0.00    0.20    0.00    0.00    0.00    0.00    0.00    0.00   99.70
Average:      37    0.20    0.00    0.30    0.00    0.00    0.00    0.00    0.00    0.00   99.50
Average:      38    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      39    0.00    0.00    2.60    0.00    0.00    0.00    0.00    0.00    0.00   97.40
Average:      40    0.00    0.00    0.90    0.00    0.00    0.00    0.00    0.00    0.00   99.10
Average:      41    0.10    0.00    0.50    0.00    0.00    0.00    0.00    0.00    0.00   99.40
Average:      42    0.00    0.00    9.91    0.00    0.00   70.67    0.00    0.00    0.00   19.42
Average:      43    0.00    0.00   15.90    0.00    0.00   57.50    0.00    0.00    0.00   26.60
Average:      44    0.00    0.00   12.20    0.00    0.00   66.20    0.00    0.00    0.00   21.60
Average:      45    0.00    0.00   12.00    0.00    0.00   67.50    0.00    0.00    0.00   20.50
Average:      46    0.00    0.00   12.90    0.00    0.00   65.50    0.00    0.00    0.00   21.60
Average:      47    0.00    0.00   14.59    0.00    0.00   60.84    0.00    0.00    0.00   24.58
Average:      48    0.00    0.00   13.59    0.00    0.00   61.74    0.00    0.00    0.00   24.68
Average:      49    0.00    0.00   18.36    0.00    0.00   53.29    0.00    0.00    0.00   28.34
Average:      50    0.00    0.00   15.32    0.00    0.00   58.86    0.00    0.00    0.00   25.83
Average:      51    0.00    0.00   17.60    0.00    0.00   55.20    0.00    0.00    0.00   27.20
Average:      52    0.00    0.00   15.92    0.00    0.00   56.06    0.00    0.00    0.00   28.03
Average:      53    0.00    0.00   13.00    0.00    0.00   62.30    0.00    0.00    0.00   24.70
Average:      54    0.00    0.00   13.20    0.00    0.00   61.50    0.00    0.00    0.00   25.30
Average:      55    0.00    0.00   14.59    0.00    0.00   58.64    0.00    0.00    0.00   26.77
ethtool -k enp175s0f0
Features for enp175s0f0:
rx-checksumming: on
tx-checksumming: on
tx-checksum-ipv4: on
tx-checksum-ip-generic: off [fixed]
tx-checksum-ipv6: on
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
tx-tcp-segmentation: on
tx-tcp-ecn-segmentation: off [fixed]
tx-tcp-mangleid-segmentation: off
tx-tcp6-segmentation: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
highdma: on [fixed]
rx-vlan-filter: on
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: on
tx-gre-csum-segmentation: on
tx-ipxip4-segmentation: off [fixed]
tx-ipxip6-segmentation: off [fixed]
tx-udp_tnl-segmentation: on
tx-udp_tnl-csum-segmentation: on
tx-gso-partial: on
tx-sctp-segmentation: off [fixed]
tx-esp-segmentation: off [fixed]
tx-udp-segmentation: on
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off
rx-all: off
tx-vlan-stag-hw-insert: on
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: on [fixed]
l2-fwd-offload: off [fixed]
hw-tc-offload: off
esp-hw-offload: off [fixed]
esp-tx-csum-hw-offload: off [fixed]
rx-udp_tunnel-port-offload: on
tls-hw-tx-offload: off [fixed]
tls-hw-rx-offload: off [fixed]
rx-gro-hw: off [fixed]
tls-hw-record: off [fixed]
ethtool -c enp175s0f0
Coalesce parameters for enp175s0f0:
Adaptive RX: off TX: on
stats-block-usecs: 0
sample-interval: 0
pkt-rate-low: 0
pkt-rate-high: 0
dmac: 32703
rx-usecs: 256
rx-frames: 128
rx-usecs-irq: 0
rx-frames-irq: 0
tx-usecs: 8
tx-frames: 128
tx-usecs-irq: 0
tx-frames-irq: 0
rx-usecs-low: 0
rx-frame-low: 0
tx-usecs-low: 0
tx-frame-low: 0
rx-usecs-high: 0
rx-frame-high: 0
tx-usecs-high: 0
tx-frame-high: 0
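As an aside, adaptive RX moderation is off above while the static values are fairly aggressive (rx-usecs 256, rx-frames 128). One experiment worth trying is letting the driver adapt RX coalescing to the load — a sketch only, with the interface name from this report as a placeholder; whether it helps this workload is untested:

```shell
IF="${IF:-enp175s0f0}"   # interface name from the report; substitute your own

# Enable adaptive RX interrupt moderation instead of the static 256us/128frames.
# Falls back to a message when the device is absent or we lack privileges.
ethtool -C "$IF" adaptive-rx on 2>/dev/null \
  || echo "ethtool -C failed (no such device, or not run as root)"
```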
ethtool -g enp175s0f0
Ring parameters for enp175s0f0:
Pre-set maximums:
RX: 8192
RX Mini: 0
RX Jumbo: 0
TX: 8192
Current hardware settings:
RX: 4096
RX Mini: 0
RX Jumbo: 0
TX: 4096
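On the "x16 8GT is 126Gbit" figure from the report: PCIe gen3 runs 8 GT/s per lane with 128b/130b line encoding, so the theoretical per-direction ceiling (before TLP/protocol overhead, which costs a few percent more) works out as follows — a back-of-envelope check, not a measurement:

```shell
# PCIe gen3 x16: 8 GT/s per lane, 16 lanes, 128b/130b line encoding.
lanes=16
gt_per_s=8
mbit=$(( gt_per_s * lanes * 128 * 1000 / 130 ))   # integer Mbit/s
echo "theoretical one-way PCIe gen3 x16 bandwidth: ${mbit} Mbit/s"   # ~126 Gbit/s
```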
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-10-31 21:57 Kernel 4.19 network performance - forwarding/routing normal users traffic Paweł Staszewski
@ 2018-10-31 22:09 ` Eric Dumazet
2018-10-31 22:20 ` Paweł Staszewski
2018-11-01 3:37 ` David Ahern
2018-11-01 9:50 ` Saeed Mahameed
2 siblings, 1 reply; 77+ messages in thread
From: Eric Dumazet @ 2018-10-31 22:09 UTC (permalink / raw)
To: Paweł Staszewski, netdev
On 10/31/2018 02:57 PM, Paweł Staszewski wrote:
> Hi
>
> So maybe someone will be interested in how the Linux kernel handles normal traffic (not pktgen :) )
>
>
> Server HW configuration:
>
> CPU : Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
>
> NIC's: 2x 100G Mellanox ConnectX-4 (connected to x16 pcie 8GT)
>
>
> Server software:
>
> FRR - as routing daemon
>
> enp175s0f0 (100G) - 16 vlans from upstreams (28 RSS queues bound to the local NUMA node)
>
> enp175s0f1 (100G) - 343 vlans to clients (28 RSS queues bound to the local NUMA node)
>
>
> Maximum traffic that server can handle:
>
> Bandwidth
>
> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
> input: /proc/net/dev type: rate
> \ iface Rx Tx Total
> ==============================================================================
> enp175s0f1: 28.51 Gb/s 37.24 Gb/s 65.74 Gb/s
> enp175s0f0: 38.07 Gb/s 28.44 Gb/s 66.51 Gb/s
> ------------------------------------------------------------------------------
> total: 66.58 Gb/s 65.67 Gb/s 132.25 Gb/s
>
>
> Packets per second:
>
> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
> input: /proc/net/dev type: rate
> - iface Rx Tx Total
> ==============================================================================
> enp175s0f1: 5248589.00 P/s 3486617.75 P/s 8735207.00 P/s
> enp175s0f0: 3557944.25 P/s 5232516.00 P/s 8790460.00 P/s
> ------------------------------------------------------------------------------
> total: 8806533.00 P/s 8719134.00 P/s 17525668.00 P/s
>
>
> After reaching that limit, NICs on the upstream side (more RX traffic) start to drop packets
>
>
> I just don't understand why the server can't handle more bandwidth (~40 Gbit/s is the limit where all CPUs are at 100% utilization) - while pps on the RX side keep increasing.
>
> Was thinking that maybe I reached some PCIe x16 limit - but x16 8GT is 126 Gbit - and also when testing with pktgen I can reach more bw and pps (like 4x more compared to normal internet traffic)
>
> And wondering if there is something that can be improved here.
>
>
>
> Some more informations / counters / stats and perf top below:
>
> Perf top flame graph:
>
> https://uploadfiles.io/7zo6u
>
>
>
> System configuration(long):
>
>
> cat /sys/devices/system/node/node1/cpulist
> 14-27,42-55
> cat /sys/class/net/enp175s0f0/device/numa_node
> 1
> cat /sys/class/net/enp175s0f1/device/numa_node
> 1
>
>
>
>
>
> ip -s -d link ls dev enp175s0f0
> 6: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 8192
> link/ether 0c:c4:7a:d8:5d:1c brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64 numtxqueues 448 numrxqueues 56 gso_max_size 65536 gso_max_segs 65535
> RX: bytes packets errors dropped overrun mcast
> 184142375840858 141347715974 2 2806325 0 85050528
> TX: bytes packets errors dropped carrier collsns
> 99270697277430 172227994003 0 0 0 0
>
> ip -s -d link ls dev enp175s0f1
> 7: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 8192
> link/ether 0c:c4:7a:d8:5d:1d brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64 numtxqueues 448 numrxqueues 56 gso_max_size 65536 gso_max_segs 65535
> RX: bytes packets errors dropped overrun mcast
> 99686284170801 173507590134 61 669685 0 100304421
> TX: bytes packets errors dropped carrier collsns
> 184435107970545 142383178304 0 0 0 0
>
>
> ./softnet.sh
> cpu total dropped squeezed collision rps flow_limit
>
>
>
>
> PerfTop: 108490 irqs/sec kernel:99.6% exact: 0.0% [4000Hz cycles], (all, 56 CPUs)
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> 26.78% [kernel] [k] queued_spin_lock_slowpath
This is highly suspect.
A call graph (perf record -a -g sleep 1; perf report --stdio) would tell us what is going on.
With that many TX/RX queues, I would expect you not to use RPS/RFS and to have a 1:1 RX/TX mapping,
so I do not know what could be causing spinlock contention.
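The steps Eric suggests, plus a quick check that RPS is indeed disabled per RX queue, might look like this — a sketch; the interface name is the one from the report and will differ on other hosts:

```shell
IF="${IF:-enp175s0f0}"   # interface from the report; hypothetical elsewhere

# System-wide 1-second call-graph sample, as suggested (needs perf + root):
#   perf record -a -g sleep 1 && perf report --stdio | head -50

# Verify RPS is off: every RX queue should show an all-zero CPU mask.
for f in /sys/class/net/"$IF"/queues/rx-*/rps_cpus; do
  [ -e "$f" ] || { echo "no RX queues found for $IF"; break; }
  printf '%s: %s\n' "$f" "$(cat "$f")"
done
```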
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-10-31 22:09 ` Eric Dumazet
@ 2018-10-31 22:20 ` Paweł Staszewski
2018-10-31 22:45 ` Paweł Staszewski
2018-11-01 9:22 ` Jesper Dangaard Brouer
0 siblings, 2 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-10-31 22:20 UTC (permalink / raw)
To: Eric Dumazet, netdev
On 31.10.2018 at 23:09, Eric Dumazet wrote:
>
> On 10/31/2018 02:57 PM, Paweł Staszewski wrote:
>> Hi
>>
>> So maybee someone will be interested how linux kernel handles normal traffic (not pktgen :) )
>>
>>
>> Server HW configuration:
>>
>> CPU : Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
>>
>> NIC's: 2x 100G Mellanox ConnectX-4 (connected to x16 pcie 8GT)
>>
>>
>> Server software:
>>
>> FRR - as routing daemon
>>
>> enp175s0f0 (100G) - 16 vlans from upstreams (28 RSS binded to local numa node)
>>
>> enp175s0f1 (100G) - 343 vlans to clients (28 RSS binded to local numa node)
>>
>>
>> Maximum traffic that server can handle:
>>
>> Bandwidth
>>
>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>> input: /proc/net/dev type: rate
>> \ iface Rx Tx Total
>> ==============================================================================
>> enp175s0f1: 28.51 Gb/s 37.24 Gb/s 65.74 Gb/s
>> enp175s0f0: 38.07 Gb/s 28.44 Gb/s 66.51 Gb/s
>> ------------------------------------------------------------------------------
>> total: 66.58 Gb/s 65.67 Gb/s 132.25 Gb/s
>>
>>
>> Packets per second:
>>
>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>> input: /proc/net/dev type: rate
>> - iface Rx Tx Total
>> ==============================================================================
>> enp175s0f1: 5248589.00 P/s 3486617.75 P/s 8735207.00 P/s
>> enp175s0f0: 3557944.25 P/s 5232516.00 P/s 8790460.00 P/s
>> ------------------------------------------------------------------------------
>> total: 8806533.00 P/s 8719134.00 P/s 17525668.00 P/s
>>
>>
>> After reaching that limits nics on the upstream side (more RX traffic) start to drop packets
>>
>>
>> I just dont understand that server can't handle more bandwidth (~40Gbit/s is limit where all cpu's are 100% util) - where pps on RX side are increasing.
>>
>> Was thinking that maybee reached some pcie x16 limit - but x16 8GT is 126Gbit - and also when testing with pktgen i can reach more bw and pps (like 4x more comparing to normal internet traffic)
>>
>> And wondering if there is something that can be improved here.
>>
>>
>>
>> Some more informations / counters / stats and perf top below:
>>
>> Perf top flame graph:
>>
>> https://uploadfiles.io/7zo6u
>>
>>
>>
>> System configuration(long):
>>
>>
>> cat /sys/devices/system/node/node1/cpulist
>> 14-27,42-55
>> cat /sys/class/net/enp175s0f0/device/numa_node
>> 1
>> cat /sys/class/net/enp175s0f1/device/numa_node
>> 1
>>
>>
>>
>>
>>
>> ip -s -d link ls dev enp175s0f0
>> 6: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 8192
>> link/ether 0c:c4:7a:d8:5d:1c brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64 numtxqueues 448 numrxqueues 56 gso_max_size 65536 gso_max_segs 65535
>> RX: bytes packets errors dropped overrun mcast
>> 184142375840858 141347715974 2 2806325 0 85050528
>> TX: bytes packets errors dropped carrier collsns
>> 99270697277430 172227994003 0 0 0 0
>>
>> ip -s -d link ls dev enp175s0f1
>> 7: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 8192
>> link/ether 0c:c4:7a:d8:5d:1d brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64 numtxqueues 448 numrxqueues 56 gso_max_size 65536 gso_max_segs 65535
>> RX: bytes packets errors dropped overrun mcast
>> 99686284170801 173507590134 61 669685 0 100304421
>> TX: bytes packets errors dropped carrier collsns
>> 184435107970545 142383178304 0 0 0 0
>>
>>
>> ./softnet.sh
>> cpu total dropped squeezed collision rps flow_limit
>>
>>
>>
>>
>> PerfTop: 108490 irqs/sec kernel:99.6% exact: 0.0% [4000Hz cycles], (all, 56 CPUs)
>> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>>
>> 26.78% [kernel] [k] queued_spin_lock_slowpath
> This is highly suspect.
>
> A call graph (perf record -a -g sleep 1; perf report --stdio) would tell what is going on.
perf report:
https://ufile.io/rqp0h
>
> With that many TX/RX queues, I would expect you to not use RPS/RFS, and have a 1/1 RX/TX mapping,
> so I do not know what could request a spinlock contention.
>
>
>
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-10-31 22:20 ` Paweł Staszewski
@ 2018-10-31 22:45 ` Paweł Staszewski
2018-11-01 9:22 ` Jesper Dangaard Brouer
1 sibling, 0 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-10-31 22:45 UTC (permalink / raw)
To: Eric Dumazet, netdev
W dniu 31.10.2018 o 23:20, Paweł Staszewski pisze:
>
>
> W dniu 31.10.2018 o 23:09, Eric Dumazet pisze:
>>
>> On 10/31/2018 02:57 PM, Paweł Staszewski wrote:
>>> Hi
>>>
>>> So maybee someone will be interested how linux kernel handles normal
>>> traffic (not pktgen :) )
>>>
>>>
>>> Server HW configuration:
>>>
>>> CPU : Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
>>>
>>> NIC's: 2x 100G Mellanox ConnectX-4 (connected to x16 pcie 8GT)
>>>
>>>
>>> Server software:
>>>
>>> FRR - as routing daemon
>>>
>>> enp175s0f0 (100G) - 16 vlans from upstreams (28 RSS binded to local
>>> numa node)
>>>
>>> enp175s0f1 (100G) - 343 vlans to clients (28 RSS binded to local
>>> numa node)
>>>
>>>
>>> Maximum traffic that server can handle:
>>>
>>> Bandwidth
>>>
>>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>>> input: /proc/net/dev type: rate
>>> \ iface Rx Tx Total
>>> ==============================================================================
>>>
>>> enp175s0f1: 28.51 Gb/s 37.24
>>> Gb/s 65.74 Gb/s
>>> enp175s0f0: 38.07 Gb/s 28.44
>>> Gb/s 66.51 Gb/s
>>> ------------------------------------------------------------------------------
>>>
>>> total: 66.58 Gb/s 65.67
>>> Gb/s 132.25 Gb/s
>>>
>>>
>>> Packets per second:
>>>
>>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>>> input: /proc/net/dev type: rate
>>> - iface Rx Tx Total
>>> ==============================================================================
>>>
>>> enp175s0f1: 5248589.00 P/s 3486617.75 P/s
>>> 8735207.00 P/s
>>> enp175s0f0: 3557944.25 P/s 5232516.00 P/s
>>> 8790460.00 P/s
>>> ------------------------------------------------------------------------------
>>>
>>> total: 8806533.00 P/s 8719134.00 P/s
>>> 17525668.00 P/s
>>>
>>>
>>> After reaching that limits nics on the upstream side (more RX
>>> traffic) start to drop packets
>>>
>>>
>>> I just dont understand that server can't handle more bandwidth
>>> (~40Gbit/s is limit where all cpu's are 100% util) - where pps on RX
>>> side are increasing.
>>>
>>> Was thinking that maybee reached some pcie x16 limit - but x16 8GT
>>> is 126Gbit - and also when testing with pktgen i can reach more bw
>>> and pps (like 4x more comparing to normal internet traffic)
>>>
>>> And wondering if there is something that can be improved here.
>>>
>>>
>>>
>>> Some more informations / counters / stats and perf top below:
>>>
>>> Perf top flame graph:
>>>
>>> https://uploadfiles.io/7zo6u
>>>
>>>
>>>
>>> System configuration(long):
>>>
>>>
>>> cat /sys/devices/system/node/node1/cpulist
>>> 14-27,42-55
>>> cat /sys/class/net/enp175s0f0/device/numa_node
>>> 1
>>> cat /sys/class/net/enp175s0f1/device/numa_node
>>> 1
>>>
>>>
>>>
>>>
>>>
>>> ip -s -d link ls dev enp175s0f0
>>> 6: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq
>>> state UP mode DEFAULT group default qlen 8192
>>> link/ether 0c:c4:7a:d8:5d:1c brd ff:ff:ff:ff:ff:ff promiscuity
>>> 0 addrgenmode eui64 numtxqueues 448 numrxqueues 56 gso_max_size
>>> 65536 gso_max_segs 65535
>>> RX: bytes packets errors dropped overrun mcast
>>> 184142375840858 141347715974 2 2806325 0 85050528
>>> TX: bytes packets errors dropped carrier collsns
>>> 99270697277430 172227994003 0 0 0 0
>>>
>>> ip -s -d link ls dev enp175s0f1
>>> 7: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq
>>> state UP mode DEFAULT group default qlen 8192
>>> link/ether 0c:c4:7a:d8:5d:1d brd ff:ff:ff:ff:ff:ff promiscuity
>>> 0 addrgenmode eui64 numtxqueues 448 numrxqueues 56 gso_max_size
>>> 65536 gso_max_segs 65535
>>> RX: bytes packets errors dropped overrun mcast
>>> 99686284170801 173507590134 61 669685 0 100304421
>>> TX: bytes packets errors dropped carrier collsns
>>> 184435107970545 142383178304 0 0 0 0
>>>
>>>
>>> ./softnet.sh
>>> cpu total dropped squeezed collision rps flow_limit
>>>
>>>
>>>
>>>
>>> PerfTop: 108490 irqs/sec kernel:99.6% exact: 0.0% [4000Hz
>>> cycles], (all, 56 CPUs)
>>> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>>>
>>>
>>> 26.78% [kernel] [k] queued_spin_lock_slowpath
>> This is highly suspect.
>>
>> A call graph (perf record -a -g sleep 1; perf report --stdio) would
>> tell what is going on.
> perf report:
> https://ufile.io/rqp0h
>
>
>
>>
>> With that many TX/RX queues, I would expect you to not use RPS/RFS,
>> and have a 1/1 RX/TX mapping,
>> so I do not know what could request a spinlock contention.
>>
>>
>>
>
>
And yes, there is no RPS/RFS, just a 1:1 RX/TX mapping with affinity set to
the network controller's local NUMA node CPUs, for 28 RX+TX queues per NIC.
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-10-31 21:57 Kernel 4.19 network performance - forwarding/routing normal users traffic Paweł Staszewski
2018-10-31 22:09 ` Eric Dumazet
@ 2018-11-01 3:37 ` David Ahern
2018-11-01 10:55 ` Jesper Dangaard Brouer
2018-11-01 9:50 ` Saeed Mahameed
2 siblings, 1 reply; 77+ messages in thread
From: David Ahern @ 2018-11-01 3:37 UTC (permalink / raw)
To: Paweł Staszewski, netdev
On 10/31/18 3:57 PM, Paweł Staszewski wrote:
> Hi
>
> So maybee someone will be interested how linux kernel handles normal
> traffic (not pktgen :) )
>
>
> Server HW configuration:
>
> CPU : Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
>
> NIC's: 2x 100G Mellanox ConnectX-4 (connected to x16 pcie 8GT)
>
>
> Server software:
>
> FRR - as routing daemon
>
> enp175s0f0 (100G) - 16 vlans from upstreams (28 RSS binded to local numa
> node)
>
> enp175s0f1 (100G) - 343 vlans to clients (28 RSS binded to local numa node)
>
>
> Maximum traffic that server can handle:
>
> Bandwidth
>
> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
> input: /proc/net/dev type: rate
> \ iface Rx Tx Total
> ==============================================================================
>
> enp175s0f1: 28.51 Gb/s 37.24 Gb/s
> 65.74 Gb/s
> enp175s0f0: 38.07 Gb/s 28.44 Gb/s
> 66.51 Gb/s
> ------------------------------------------------------------------------------
>
> total: 66.58 Gb/s 65.67 Gb/s
> 132.25 Gb/s
>
>
> Packets per second:
>
> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
> input: /proc/net/dev type: rate
> - iface Rx Tx Total
> ==============================================================================
>
> enp175s0f1: 5248589.00 P/s 3486617.75 P/s 8735207.00 P/s
> enp175s0f0: 3557944.25 P/s 5232516.00 P/s 8790460.00 P/s
> ------------------------------------------------------------------------------
>
> total: 8806533.00 P/s 8719134.00 P/s 17525668.00 P/s
>
>
> After reaching that limits nics on the upstream side (more RX traffic)
> start to drop packets
>
>
> I just dont understand that server can't handle more bandwidth
> (~40Gbit/s is limit where all cpu's are 100% util) - where pps on RX
> side are increasing.
>
> Was thinking that maybee reached some pcie x16 limit - but x16 8GT is
> 126Gbit - and also when testing with pktgen i can reach more bw and pps
> (like 4x more comparing to normal internet traffic)
>
> And wondering if there is something that can be improved here.
This is mainly a forwarding use case? It seems so based on the perf report.
I suspect forwarding with XDP would show a pretty good improvement. You
would need the VLAN changes I have queued up, though.
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-10-31 22:20 ` Paweł Staszewski
2018-10-31 22:45 ` Paweł Staszewski
@ 2018-11-01 9:22 ` Jesper Dangaard Brouer
2018-11-01 10:34 ` Paweł Staszewski
2018-11-01 15:27 ` Aaron Lu
1 sibling, 2 replies; 77+ messages in thread
From: Jesper Dangaard Brouer @ 2018-11-01 9:22 UTC (permalink / raw)
To: Paweł Staszewski
Cc: brouer, Eric Dumazet, netdev, Tariq Toukan, Ilias Apalodimas,
Yoel Caspersen, Mel Gorman, Aaron Lu
On Wed, 31 Oct 2018 23:20:01 +0100
Paweł Staszewski <pstaszewski@itcare.pl> wrote:
> W dniu 31.10.2018 o 23:09, Eric Dumazet pisze:
> >
> > On 10/31/2018 02:57 PM, Paweł Staszewski wrote:
> >> Hi
> >>
> >> So maybee someone will be interested how linux kernel handles
> >> normal traffic (not pktgen :) )
Paweł, is this live production traffic?
I know Yoel (Cc'ed) is very interested in the real-life limits of Linux as
a router, especially with VLANs like you use.
> >>
> >> Server HW configuration:
> >>
> >> CPU : Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
> >>
> >> NIC's: 2x 100G Mellanox ConnectX-4 (connected to x16 pcie 8GT)
> >>
> >>
> >> Server software:
> >>
> >> FRR - as routing daemon
> >>
> >> enp175s0f0 (100G) - 16 vlans from upstreams (28 RSS binded to local numa node)
> >>
> >> enp175s0f1 (100G) - 343 vlans to clients (28 RSS binded to local numa node)
> >>
> >>
> >> Maximum traffic that server can handle:
> >>
> >> Bandwidth
> >>
> >> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
> >> input: /proc/net/dev type: rate
> >> \ iface Rx Tx Total
> >> ==============================================================================
> >> enp175s0f1: 28.51 Gb/s 37.24 Gb/s 65.74 Gb/s
> >> enp175s0f0: 38.07 Gb/s 28.44 Gb/s 66.51 Gb/s
> >> ------------------------------------------------------------------------------
> >> total: 66.58 Gb/s 65.67 Gb/s 132.25 Gb/s
> >>
Actually a rather impressive number for a Linux router.
> >>
> >> Packets per second:
> >>
> >> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
> >> input: /proc/net/dev type: rate
> >> - iface Rx Tx Total
> >> ==============================================================================
> >> enp175s0f1: 5248589.00 P/s 3486617.75 P/s 8735207.00 P/s
> >> enp175s0f0: 3557944.25 P/s 5232516.00 P/s 8790460.00 P/s
> >> ------------------------------------------------------------------------------
> >> total: 8806533.00 P/s 8719134.00 P/s 17525668.00 P/s
> >>
Average packet size:
(28.51*10^9/8)/5248589 = 678.99 bytes
(38.07*10^9/8)/3557944 = 1337.49 bytes
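The arithmetic above can be reproduced with a short sketch (rates taken from the bwm-ng output quoted earlier in the thread):

```python
# Back-of-envelope check of the average packet sizes computed above:
# (bits per second / 8) / packets per second = bytes per packet.
def avg_pkt_size(gbit_per_s, pkts_per_s):
    """Average packet size in bytes for a given bit rate and packet rate."""
    return (gbit_per_s * 1e9 / 8) / pkts_per_s

# enp175s0f1 RX: 28.51 Gb/s at 5248589 p/s -> ~679 bytes
# enp175s0f0 RX: 38.07 Gb/s at 3557944 p/s -> ~1337 bytes
print(round(avg_pkt_size(28.51, 5248589), 2))
print(round(avg_pkt_size(38.07, 3557944), 2))
```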
> >> After reaching that limits nics on the upstream side (more RX
> >> traffic) start to drop packets
> >>
> >>
> >> I just dont understand that server can't handle more bandwidth
> >> (~40Gbit/s is limit where all cpu's are 100% util) - where pps on
> >> RX side are increasing.
> >>
> >> Was thinking that maybee reached some pcie x16 limit - but x16 8GT
> >> is 126Gbit - and also when testing with pktgen i can reach more bw
> >> and pps (like 4x more comparing to normal internet traffic)
> >>
> >> And wondering if there is something that can be improved here.
> >>
> >>
> >>
> >> Some more informations / counters / stats and perf top below:
> >>
> >> Perf top flame graph:
> >>
> >> https://uploadfiles.io/7zo6u
Thanks a lot for the flame graph!
> >>
> >> System configuration(long):
> >>
> >>
> >> cat /sys/devices/system/node/node1/cpulist
> >> 14-27,42-55
> >> cat /sys/class/net/enp175s0f0/device/numa_node
> >> 1
> >> cat /sys/class/net/enp175s0f1/device/numa_node
> >> 1
> >>
Hint: grep can give you nicer output than cat:
$ grep -H . /sys/class/net/*/device/numa_node
> >>
> >>
> >>
> >>
> >> ip -s -d link ls dev enp175s0f0
> >> 6: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 8192
> >> link/ether 0c:c4:7a:d8:5d:1c brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64 numtxqueues 448 numrxqueues 56 gso_max_size 65536 gso_max_segs 65535
> >> RX: bytes packets errors dropped overrun mcast
> >> 184142375840858 141347715974 2 2806325 0 85050528
> >> TX: bytes packets errors dropped carrier collsns
> >> 99270697277430 172227994003 0 0 0 0
> >>
> >> ip -s -d link ls dev enp175s0f1
> >> 7: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 8192
> >> link/ether 0c:c4:7a:d8:5d:1d brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64 numtxqueues 448 numrxqueues 56 gso_max_size 65536 gso_max_segs 65535
> >> RX: bytes packets errors dropped overrun mcast
> >> 99686284170801 173507590134 61 669685 0 100304421
> >> TX: bytes packets errors dropped carrier collsns
> >> 184435107970545 142383178304 0 0 0 0
> >>
You have increased the qlen from the default of 1000 to 8192; why?
What default qdisc do you run? Looking through your very detailed main
email report (I do love the details you give!), I see you run
pfifo_fast_dequeue, thus this 8192 qlen is actually having an effect.
I would like to know whether, and how much, qdisc_dequeue bulking is
happening in this setup. Can you run:
perf-stat-hist -m 8192 -P2 qdisc:qdisc_dequeue packets
The perf-stat-hist is from Brendan Gregg's git-tree:
https://github.com/brendangregg/perf-tools
https://github.com/brendangregg/perf-tools/blob/master/misc/perf-stat-hist
> >> ./softnet.sh
> >> cpu total dropped squeezed collision rps flow_limit
> >>
> >>
> >>
> >>
> >> PerfTop: 108490 irqs/sec kernel:99.6% exact: 0.0% [4000Hz cycles], (all, 56 CPUs)
> >> ------------------------------------------------------------------------------------------
> >>
> >> 26.78% [kernel] [k] queued_spin_lock_slowpath
> >
> > This is highly suspect.
> >
I agree! -- 26.78% spent in queued_spin_lock_slowpath. Hint: if you see
_raw_spin_lock then it is likely not a contended lock, but if you see
queued_spin_lock_slowpath in a perf report, your workload is likely in
trouble.
> > A call graph (perf record -a -g sleep 1; perf report --stdio)
> > would tell what is going on.
>
> perf report:
> https://ufile.io/rqp0h
>
Thanks for the output (my 30" screen is just large enough to see the
full output). Together with the flame-graph, it is clear that this
lock happens in the page allocator code.
Section copied out:
mlx5e_poll_tx_cq
|
--16.34%--napi_consume_skb
|
|--12.65%--__free_pages_ok
| |
| --11.86%--free_one_page
| |
| |--10.10%--queued_spin_lock_slowpath
| |
| --0.65%--_raw_spin_lock
|
|--1.55%--page_frag_free
|
--1.44%--skb_release_data
Let me explain what (I think) happens. The mlx5 driver RX-page recycle
mechanism is not effective in this workload, and pages have to go
through the page allocator. The lock contention happens during the mlx5
DMA TX completion cycle, and the page allocator cannot keep up at
these speeds.
One solution is to extend the page allocator with a bulk free API. (This
has been on my TODO list for a long time, but I don't have a
micro-benchmark that tricks the driver page-recycle into failing.) It
should fit nicely, as I can see that kmem_cache_free_bulk() does get
activated (bulk freeing SKBs), which means that DMA TX completion does
have a bulk of packets.
We can (and should) also improve the page recycle scheme in the driver.
After LPC, I have a project with Tariq and Ilias (Cc'ed) to improve the
page_pool, and we will attempt to generalize this for both high-end
mlx5 and lower-end ARM64 boards (macchiatobin and espressobin).
The MM people are working in parallel to improve the performance of
order-0 page returns. Thus, the explicit page bulk free API might
actually become less important. I actually think Aaron (Cc'ed) has a
patchset he would like you to test, which removes the (zone->)lock
you hit in free_one_page().
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-10-31 21:57 Kernel 4.19 network performance - forwarding/routing normal users traffic Paweł Staszewski
2018-10-31 22:09 ` Eric Dumazet
2018-11-01 3:37 ` David Ahern
@ 2018-11-01 9:50 ` Saeed Mahameed
2018-11-01 11:09 ` Paweł Staszewski
2 siblings, 1 reply; 77+ messages in thread
From: Saeed Mahameed @ 2018-11-01 9:50 UTC (permalink / raw)
To: pstaszewski, netdev
On Wed, 2018-10-31 at 22:57 +0100, Paweł Staszewski wrote:
> Hi
>
> So maybee someone will be interested how linux kernel handles normal
> traffic (not pktgen :) )
>
>
> Server HW configuration:
>
> CPU : Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
>
> NIC's: 2x 100G Mellanox ConnectX-4 (connected to x16 pcie 8GT)
>
>
> Server software:
>
> FRR - as routing daemon
>
> enp175s0f0 (100G) - 16 vlans from upstreams (28 RSS binded to local
> numa
> node)
>
> enp175s0f1 (100G) - 343 vlans to clients (28 RSS binded to local numa
> node)
>
>
> Maximum traffic that server can handle:
>
> Bandwidth
>
> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
> input: /proc/net/dev type: rate
> \ iface Rx Tx Total
> =====================================================================
> =========
> enp175s0f1: 28.51 Gb/s 37.24
> Gb/s
> 65.74 Gb/s
> enp175s0f0: 38.07 Gb/s 28.44
> Gb/s
> 66.51 Gb/s
> -------------------------------------------------------------------
> -----------
> total: 66.58 Gb/s 65.67
> Gb/s
> 132.25 Gb/s
>
>
> Packets per second:
>
> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
> input: /proc/net/dev type: rate
> - iface Rx Tx Total
> =====================================================================
> =========
> enp175s0f1: 5248589.00 P/s 3486617.75 P/s
> 8735207.00 P/s
> enp175s0f0: 3557944.25 P/s 5232516.00 P/s
> 8790460.00 P/s
> -------------------------------------------------------------------
> -----------
> total: 8806533.00 P/s 8719134.00 P/s
> 17525668.00 P/s
>
>
> After reaching that limits nics on the upstream side (more RX
> traffic)
> start to drop packets
>
>
> I just dont understand that server can't handle more bandwidth
> (~40Gbit/s is limit where all cpu's are 100% util) - where pps on RX
> side are increasing.
>
Where do you see 40 Gb/s? You showed that both ports on the same NIC
(same PCIe link) are doing 66.58 Gb/s (RX) + 65.67 Gb/s (TX) = 132.25
Gb/s, which aligns with your PCIe link limit. What am I missing?
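For reference, the raw PCIe figure being discussed can be reproduced with a quick calculation (a sketch; it assumes PCIe Gen3 128b/130b line encoding and ignores TLP/protocol overhead, which lowers the usable figure further):

```python
# Raw PCIe Gen3 x16 link bandwidth: 8 GT/s per lane, 16 lanes,
# 128b/130b line encoding. TLP/protocol overhead is ignored here,
# so real usable throughput is somewhat lower than this.
GT_PER_LANE = 8
LANES = 16
ENCODING = 128 / 130

link_gbit = GT_PER_LANE * LANES * ENCODING   # per direction, ~126.0 Gbit/s

# Totals from the bwm-ng output quoted above
rx_total = 66.58  # Gb/s
tx_total = 65.67  # Gb/s

print(f"PCIe Gen3 x16 per direction: {link_gbit:.1f} Gbit/s")
print(f"Observed RX+TX sum:          {rx_total + tx_total:.2f} Gbit/s")
```

Note that PCIe is full duplex, so the ~126 Gbit/s figure applies per direction; whether the RX+TX sum or the per-direction rate is the right comparison depends on how the traffic crosses the link.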
> Was thinking that maybee reached some pcie x16 limit - but x16 8GT
> is
> 126Gbit - and also when testing with pktgen i can reach more bw and
> pps
> (like 4x more comparing to normal internet traffic)
>
Are you forwarding when using pktgen as well, or are you just testing
the RX-side pps?
> And wondering if there is something that can be improved here.
>
>
>
> Some more informations / counters / stats and perf top below:
>
> Perf top flame graph:
>
> https://uploadfiles.io/7zo6u
>
>
>
> System configuration(long):
>
>
> cat /sys/devices/system/node/node1/cpulist
> 14-27,42-55
> cat /sys/class/net/enp175s0f0/device/numa_node
> 1
> cat /sys/class/net/enp175s0f1/device/numa_node
> 1
>
>
>
>
>
> ip -s -d link ls dev enp175s0f0
> 6: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq
> state
> UP mode DEFAULT group default qlen 8192
> link/ether 0c:c4:7a:d8:5d:1c brd ff:ff:ff:ff:ff:ff promiscuity
> 0
> addrgenmode eui64 numtxqueues 448 numrxqueues 56 gso_max_size 65536
> gso_max_segs 65535
> RX: bytes packets errors dropped overrun mcast
> 184142375840858 141347715974 2 2806325 0 85050528
> TX: bytes packets errors dropped carrier collsns
> 99270697277430 172227994003 0 0 0 0
>
> ip -s -d link ls dev enp175s0f1
> 7: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq
> state
> UP mode DEFAULT group default qlen 8192
> link/ether 0c:c4:7a:d8:5d:1d brd ff:ff:ff:ff:ff:ff promiscuity
> 0
> addrgenmode eui64 numtxqueues 448 numrxqueues 56 gso_max_size 65536
> gso_max_segs 65535
> RX: bytes packets errors dropped overrun mcast
> 99686284170801 173507590134 61 669685 0 100304421
> TX: bytes packets errors dropped carrier collsns
> 184435107970545 142383178304 0 0 0 0
>
>
> ./softnet.sh
> cpu total dropped squeezed collision rps flow_limit
> 0 3961392822 0 1221478 0 0 0
> 1 3701952251 0 1258234 0 0 0
> 2 3879522030 0 1584282 0 0 0
> 3 3731349789 0 1529029 0 0 0
> 4 1323956701 0 2176371 0 0 0
> 5 420528963 0 1880146 0 0 0
> 6 348720322 0 1830142 0 0 0
> 7 372736328 0 1820891 0 0 0
> 8 567888751 0 1414763 0 0 0
> 9 476075775 0 1868150 0 0 0
> 10 468946725 0 1841428 0 0 0
> 11 676591958 0 1900160 0 0 0
> 12 346803472 0 1834600 0 0 0
> 13 457960872 0 1874529 0 0 0
> 14 1990279665 0 4699000 0 0 0
> 15 1211873601 0 4541281 0 0 0
> 16 1123871928 0 4544712 0 0 0
> 17 1014957263 0 4152355 0 0 0
> 18 2603779724 0 4593869 0 0 0
> 19 2181924054 0 4930618 0 0 0
> 20 2273502182 0 4894627 0 0 0
> 21 2232030947 0 4860048 0 0 0
> 22 2203555394 0 4603830 0 0 0
> 23 2194756800 0 4921294 0 0 0
> 24 2347158294 0 4818354 0 0 0
> 25 2291097883 0 4744469 0 0 0
> 26 2206945011 0 4836483 0 0 0
> 27 2318530217 0 4917617 0 0 0
> 28 512797543 0 1895200 0 0 0
> 29 597279474 0 1532134 0 0 0
> 30 475317503 0 1451523 0 0 0
> 31 499172796 0 1901207 0 0 0
> 32 493874745 0 1915382 0 0 0
> 33 296056288 0 1865535 0 0 0
> 34 3905097041 0 1580822 0 0 0
> 35 3905112345 0 1536105 0 0 0
> 36 3900358950 0 1166319 0 0 0
> 37 3940978093 0 1600219 0 0 0
> 38 3878632215 0 1180389 0 0 0
> 39 3814804736 0 1584925 0 0 0
> 40 4152934337 0 1663660 0 0 0
> 41 3855273904 0 1552219 0 0 0
> 42 2319538182 0 4884480 0 0 0
> 43 2448606991 0 4387456 0 0 0
> 44 1436136753 0 4485073 0 0 0
> 45 1200500141 0 4537284 0 0 0
> 46 1307799923 0 4534156 0 0 0
> 47 1586575293 0 4272997 0 0 0
> 48 3852574 0 4162653 0 0 0
> 49 391449390 0 3935202 0 0 0
> 50 791388200 0 4290738 0 0 0
> 51 127107573 0 3907750 0 0 0
> 52 115622148 0 4012843 0 0 0
> 53 71098871 0 4200625 0 0 0
> 54 305121466 0 4365614 0 0 0
> 55 10914257 0 4369426 0 0 0
>
>
>
>
> PerfTop: 108490 irqs/sec kernel:99.6% exact: 0.0% [4000Hz
> cycles], (all, 56 CPUs)
> -------------------------------------------------------------------
> -------------------------------------------------------------------
> -------------------------------------------------------------------
> ------
>
> 26.78% [kernel] [k] queued_spin_lock_slowpath
> 9.09% [kernel] [k] mlx5e_skb_from_cqe_linear
> 4.94% [kernel] [k] mlx5e_sq_xmit
> 3.63% [kernel] [k] memcpy_erms
> 3.30% [kernel] [k] fib_table_lookup
> 3.26% [kernel] [k] build_skb
> 2.41% [kernel] [k] mlx5e_poll_tx_cq
> 2.11% [kernel] [k] get_page_from_freelist
> 1.51% [kernel] [k] vlan_do_receive
> 1.51% [kernel] [k] _raw_spin_lock
> 1.43% [kernel] [k] __dev_queue_xmit
> 1.41% [kernel] [k] dev_gro_receive
> 1.34% [kernel] [k] mlx5e_poll_rx_cq
> 1.26% [kernel] [k] tcp_gro_receive
> 1.21% [kernel] [k] free_one_page
> 1.13% [kernel] [k] swiotlb_map_page
> 1.13% [kernel] [k] mlx5e_post_rx_wqes
> 1.05% [kernel] [k] pfifo_fast_dequeue
> 1.05% [kernel] [k] mlx5e_handle_rx_cqe
> 1.03% [kernel] [k] ip_finish_output2
> 1.02% [kernel] [k] ipt_do_table
> 0.96% [kernel] [k] inet_gro_receive
> 0.91% [kernel] [k] mlx5_eq_int
> 0.88% [kernel] [k] __slab_free.isra.79
> 0.86% [kernel] [k] __build_skb
> 0.84% [kernel] [k] page_frag_free
> 0.76% [kernel] [k] skb_release_data
> 0.75% [kernel] [k] __netif_receive_skb_core
> 0.75% [kernel] [k] irq_entries_start
> 0.71% [kernel] [k] ip_route_input_rcu
> 0.65% [kernel] [k] vlan_dev_hard_start_xmit
> 0.56% [kernel] [k] ip_forward
> 0.56% [kernel] [k] __memcpy
> 0.52% [kernel] [k] kmem_cache_alloc
> 0.52% [kernel] [k] kmem_cache_free_bulk
> 0.49% [kernel] [k] mlx5e_page_release
> 0.47% [kernel] [k] netif_skb_features
> 0.47% [kernel] [k] mlx5e_build_rx_skb
> 0.47% [kernel] [k] dev_hard_start_xmit
> 0.43% [kernel] [k] __page_pool_put_page
> 0.43% [kernel] [k] __netif_schedule
> 0.43% [kernel] [k] mlx5e_xmit
> 0.41% [kernel] [k] __qdisc_run
> 0.41% [kernel] [k] validate_xmit_skb.isra.142
> 0.41% [kernel] [k] swiotlb_unmap_page
> 0.40% [kernel] [k] inet_lookup_ifaddr_rcu
> 0.34% [kernel] [k] ip_rcv_core.isra.20.constprop.25
> 0.34% [kernel] [k] tcp4_gro_receive
> 0.29% [kernel] [k] _raw_spin_lock_irqsave
> 0.29% [kernel] [k] napi_consume_skb
> 0.29% [kernel] [k] skb_gro_receive
> 0.29% [kernel] [k] ___slab_alloc.isra.80
> 0.27% [kernel] [k] eth_type_trans
> 0.26% [kernel] [k] __free_pages_ok
> 0.26% [kernel] [k] __get_xps_queue_idx
> 0.24% [kernel] [k] _raw_spin_trylock
> 0.23% [kernel] [k] __local_bh_enable_ip
> 0.22% [kernel] [k] pfifo_fast_enqueue
> 0.21% [kernel] [k] tasklet_action_common.isra.21
> 0.21% [kernel] [k] sch_direct_xmit
> 0.21% [kernel] [k] skb_network_protocol
> 0.21% [kernel] [k] kmem_cache_free
> 0.20% [kernel] [k] netdev_pick_tx
> 0.18% [kernel] [k] napi_gro_complete
> 0.18% [kernel] [k] __sched_text_start
> 0.18% [kernel] [k] mlx5e_xdp_handle
> 0.17% [kernel] [k] ip_finish_output
> 0.16% [kernel] [k] napi_gro_flush
> 0.16% [kernel] [k] vlan_passthru_hard_header
> 0.16% [kernel] [k] skb_segment
> 0.15% [kernel] [k] __alloc_pages_nodemask
> 0.15% [kernel] [k] mlx5e_features_check
> 0.15% [kernel] [k] mlx5e_napi_poll
> 0.15% [kernel] [k] napi_gro_receive
> 0.14% [kernel] [k] fib_validate_source
> 0.14% [kernel] [k] _raw_spin_lock_irq
> 0.14% [kernel] [k] inet_gro_complete
> 0.14% [kernel] [k] get_partial_node.isra.78
> 0.13% [kernel] [k] napi_complete_done
> 0.13% [kernel] [k] ip_rcv_finish_core.isra.17
> 0.13% [kernel] [k] cmd_exec
>
>
>
> ethtool -S enp175s0f1
> NIC statistics:
> rx_packets: 173730800927
> rx_bytes: 99827422751332
> tx_packets: 142532009512
> tx_bytes: 184633045911222
> tx_tso_packets: 25989113891
> tx_tso_bytes: 132933363384458
> tx_tso_inner_packets: 0
> tx_tso_inner_bytes: 0
> tx_added_vlan_packets: 74630239613
> tx_nop: 2029817748
> rx_lro_packets: 0
> rx_lro_bytes: 0
> rx_ecn_mark: 0
> rx_removed_vlan_packets: 173730800927
> rx_csum_unnecessary: 0
> rx_csum_none: 434357
> rx_csum_complete: 173730366570
> rx_csum_unnecessary_inner: 0
> rx_xdp_drop: 0
> rx_xdp_redirect: 0
> rx_xdp_tx_xmit: 0
> rx_xdp_tx_full: 0
> rx_xdp_tx_err: 0
> rx_xdp_tx_cqe: 0
> tx_csum_none: 38260960853
> tx_csum_partial: 36369278774
> tx_csum_partial_inner: 0
> tx_queue_stopped: 1
> tx_queue_dropped: 0
> tx_xmit_more: 748638099
> tx_recover: 0
> tx_cqes: 73881645031
> tx_queue_wake: 1
> tx_udp_seg_rem: 0
> tx_cqe_err: 0
> tx_xdp_xmit: 0
> tx_xdp_full: 0
> tx_xdp_err: 0
> tx_xdp_cqes: 0
> rx_wqe_err: 0
> rx_mpwqe_filler_cqes: 0
> rx_mpwqe_filler_strides: 0
> rx_buff_alloc_err: 0
> rx_cqe_compress_blks: 0
> rx_cqe_compress_pkts: 0
If this is a PCIe bottleneck, it might be useful to enable CQE
compression (to reduce PCIe completion descriptor transactions).
You should see the rx_cqe_compress_pkts counter above increasing when enabled.
$ ethtool --set-priv-flags enp175s0f1 rx_cqe_compress on
$ ethtool --show-priv-flags enp175s0f1
Private flags for p6p1:
rx_cqe_moder : on
cqe_moder : off
rx_cqe_compress : on
...
Try this on both interfaces.
> rx_page_reuse: 0
> rx_cache_reuse: 14441066823
> rx_cache_full: 51126004413
> rx_cache_empty: 21297344082
> rx_cache_busy: 51127247487
> rx_cache_waive: 21298322293
> rx_congst_umr: 0
> rx_arfs_err: 0
> ch_events: 24603119858
> ch_poll: 25180949074
> ch_arm: 24480437587
> ch_aff_change: 75
> ch_eq_rearm: 0
> rx_out_of_buffer: 669685
Comparing this to rx_vport_unicast_packets, this is a very small
percentage of packets dropped due to a stalled RX CPU, so the RX CPU is
not a bottleneck, at least for the driver RX rings.
> rx_if_down_packets: 61
> rx_vport_unicast_packets: 173731641945
> rx_vport_unicast_bytes: 100522745036693
> tx_vport_unicast_packets: 142531901313
> tx_vport_unicast_bytes: 185189071776429
> rx_vport_multicast_packets: 100360886
> rx_vport_multicast_bytes: 6639236688
> tx_vport_multicast_packets: 32837
> tx_vport_multicast_bytes: 2978810
> rx_vport_broadcast_packets: 44854
> rx_vport_broadcast_bytes: 6313510
> tx_vport_broadcast_packets: 72258
> tx_vport_broadcast_bytes: 4335480
> rx_vport_rdma_unicast_packets: 0
> rx_vport_rdma_unicast_bytes: 0
> tx_vport_rdma_unicast_packets: 0
> tx_vport_rdma_unicast_bytes: 0
> rx_vport_rdma_multicast_packets: 0
> rx_vport_rdma_multicast_bytes: 0
> tx_vport_rdma_multicast_packets: 0
> tx_vport_rdma_multicast_bytes: 0
> tx_packets_phy: 142532004669
> rx_packets_phy: 173980375752
> rx_crc_errors_phy: 0
> tx_bytes_phy: 185759204762903
> rx_bytes_phy: 101326109361379
> tx_multicast_phy: 32837
> tx_broadcast_phy: 72258
> rx_multicast_phy: 100360885
> rx_broadcast_phy: 44854
> rx_in_range_len_errors_phy: 2
> rx_out_of_range_len_phy: 0
> rx_oversize_pkts_phy: 59
> rx_symbol_err_phy: 0
> tx_mac_control_phy: 0
> rx_mac_control_phy: 0
> rx_unsupported_op_phy: 0
> rx_pause_ctrl_phy: 0
> tx_pause_ctrl_phy: 0
> rx_discards_phy: 148328738
> tx_discards_phy: 0
> tx_errors_phy: 0
> rx_undersize_pkts_phy: 0
> rx_fragments_phy: 0
> rx_jabbers_phy: 0
> rx_64_bytes_phy: 36551843112
> rx_65_to_127_bytes_phy: 65102131735
> rx_128_to_255_bytes_phy: 5755731137
> rx_256_to_511_bytes_phy: 2475619839
> rx_512_to_1023_bytes_phy: 2826971156
> rx_1024_to_1518_bytes_phy: 42474023107
> rx_1519_to_2047_bytes_phy: 18794051270
> rx_2048_to_4095_bytes_phy: 0
> rx_4096_to_8191_bytes_phy: 0
> rx_8192_to_10239_bytes_phy: 0
> link_down_events_phy: 0
> rx_pcs_symbol_err_phy: 0
> rx_corrected_bits_phy: 0
> rx_pci_signal_integrity: 0
> tx_pci_signal_integrity: 48
> rx_prio0_bytes: 101316322498995
> rx_prio0_packets: 173711151686
> tx_prio0_bytes: 185759176566814
> tx_prio0_packets: 142531983704
> rx_prio1_bytes: 47062768
> rx_prio1_packets: 228932
> tx_prio1_bytes: 0
> tx_prio1_packets: 0
> rx_prio2_bytes: 12434759
> rx_prio2_packets: 83773
> tx_prio2_bytes: 0
> tx_prio2_packets: 0
> rx_prio3_bytes: 288843134
> rx_prio3_packets: 982102
> tx_prio3_bytes: 0
> tx_prio3_packets: 0
> rx_prio4_bytes: 699797236
> rx_prio4_packets: 8109231
> tx_prio4_bytes: 0
> tx_prio4_packets: 0
> rx_prio5_bytes: 1385386738
> rx_prio5_packets: 9661187
> tx_prio5_bytes: 0
> tx_prio5_packets: 0
> rx_prio6_bytes: 317092102
> rx_prio6_packets: 1951538
> tx_prio6_bytes: 0
> tx_prio6_packets: 0
> rx_prio7_bytes: 7015734695
> rx_prio7_packets: 99847456
> tx_prio7_bytes: 0
> tx_prio7_packets: 0
> module_unplug: 0
> module_bus_stuck: 0
> module_high_temp: 0
> module_bad_shorted: 0
> ch0_events: 936264703
> ch0_poll: 963766474
> ch0_arm: 930246079
> ch0_aff_change: 0
> ch0_eq_rearm: 0
> ch1_events: 869408429
> ch1_poll: 896099392
> ch1_arm: 864336861
> ch1_aff_change: 0
> ch1_eq_rearm: 0
> ch2_events: 843345698
> ch2_poll: 869749522
> ch2_arm: 838186113
> ch2_aff_change: 2
> ch2_eq_rearm: 0
> ch3_events: 850261340
> ch3_poll: 876721111
> ch3_arm: 845295235
> ch3_aff_change: 3
> ch3_eq_rearm: 0
> ch4_events: 974985780
> ch4_poll: 997781915
> ch4_arm: 969618250
> ch4_aff_change: 3
> ch4_eq_rearm: 0
> ch5_events: 888559089
> ch5_poll: 912783615
> ch5_arm: 883826078
> ch5_aff_change: 2
> ch5_eq_rearm: 0
> ch6_events: 873730730
> ch6_poll: 899635752
> ch6_arm: 868677574
> ch6_aff_change: 4
> ch6_eq_rearm: 0
> ch7_events: 873478411
> ch7_poll: 899216716
> ch7_arm: 868693645
> ch7_aff_change: 3
> ch7_eq_rearm: 0
> ch8_events: 871900967
> ch8_poll: 898575518
> ch8_arm: 866763693
> ch8_aff_change: 3
> ch8_eq_rearm: 0
> ch9_events: 880325565
> ch9_poll: 904983269
> ch9_arm: 875643922
> ch9_aff_change: 2
> ch9_eq_rearm: 0
> ch10_events: 889919775
> ch10_poll: 915335809
> ch10_arm: 885110225
> ch10_aff_change: 4
> ch10_eq_rearm: 0
> ch11_events: 962709175
> ch11_poll: 983963451
> ch11_arm: 958117526
> ch11_aff_change: 2
> ch11_eq_rearm: 0
> ch12_events: 941333837
> ch12_poll: 964625523
> ch12_arm: 936409706
> ch12_aff_change: 2
> ch12_eq_rearm: 0
> ch13_events: 914996974
> ch13_poll: 937441049
> ch13_arm: 910478393
> ch13_aff_change: 4
> ch13_eq_rearm: 0
> ch14_events: 888050001
> ch14_poll: 911818008
> ch14_arm: 883465035
> ch14_aff_change: 4
> ch14_eq_rearm: 0
> ch15_events: 947547704
> ch15_poll: 969073194
> ch15_arm: 942686515
> ch15_aff_change: 4
> ch15_eq_rearm: 0
> ch16_events: 825804904
> ch16_poll: 840630747
> ch16_arm: 822227488
> ch16_aff_change: 2
> ch16_eq_rearm: 0
> ch17_events: 861673823
> ch17_poll: 874754041
> ch17_arm: 858520448
> ch17_aff_change: 2
> ch17_eq_rearm: 0
> ch18_events: 879413440
> ch18_poll: 893962529
> ch18_arm: 875983204
> ch18_aff_change: 4
> ch18_eq_rearm: 0
> ch19_events: 896073709
> ch19_poll: 909216857
> ch19_arm: 893022121
> ch19_aff_change: 4
> ch19_eq_rearm: 0
> ch20_events: 865188535
> ch20_poll: 880692345
> ch20_arm: 861440265
> ch20_aff_change: 3
> ch20_eq_rearm: 0
> ch21_events: 862709303
> ch21_poll: 878104242
> ch21_arm: 859041767
> ch21_aff_change: 2
> ch21_eq_rearm: 0
> ch22_events: 887720551
> ch22_poll: 904122074
> ch22_arm: 883983794
> ch22_aff_change: 2
> ch22_eq_rearm: 0
> ch23_events: 813355027
> ch23_poll: 828074467
> ch23_arm: 809912398
> ch23_aff_change: 4
> ch23_eq_rearm: 0
> ch24_events: 822366675
> ch24_poll: 839917937
> ch24_arm: 818422754
> ch24_aff_change: 2
> ch24_eq_rearm: 0
> ch25_events: 826642292
> ch25_poll: 842630121
> ch25_arm: 822642618
> ch25_aff_change: 2
> ch25_eq_rearm: 0
> ch26_events: 826392584
> ch26_poll: 843406973
> ch26_arm: 822455000
> ch26_aff_change: 3
> ch26_eq_rearm: 0
> ch27_events: 828960899
> ch27_poll: 843866518
> ch27_arm: 825230937
> ch27_aff_change: 3
> ch27_eq_rearm: 0
> ch28_events: 7
> ch28_poll: 7
> ch28_arm: 7
> ch28_aff_change: 0
> ch28_eq_rearm: 0
> ch29_events: 4
> ch29_poll: 4
> ch29_arm: 4
> ch29_aff_change: 0
> ch29_eq_rearm: 0
> ch30_events: 4
> ch30_poll: 4
> ch30_arm: 4
> ch30_aff_change: 0
> ch30_eq_rearm: 0
> ch31_events: 4
> ch31_poll: 4
> ch31_arm: 4
> ch31_aff_change: 0
> ch31_eq_rearm: 0
> ch32_events: 4
> ch32_poll: 4
> ch32_arm: 4
> ch32_aff_change: 0
> ch32_eq_rearm: 0
> ch33_events: 4
> ch33_poll: 4
> ch33_arm: 4
> ch33_aff_change: 0
> ch33_eq_rearm: 0
> ch34_events: 4
> ch34_poll: 4
> ch34_arm: 4
> ch34_aff_change: 0
> ch34_eq_rearm: 0
> ch35_events: 4
> ch35_poll: 4
> ch35_arm: 4
> ch35_aff_change: 0
> ch35_eq_rearm: 0
> ch36_events: 4
> ch36_poll: 4
> ch36_arm: 4
> ch36_aff_change: 0
> ch36_eq_rearm: 0
> ch37_events: 4
> ch37_poll: 4
> ch37_arm: 4
> ch37_aff_change: 0
> ch37_eq_rearm: 0
> ch38_events: 4
> ch38_poll: 4
> ch38_arm: 4
> ch38_aff_change: 0
> ch38_eq_rearm: 0
> ch39_events: 4
> ch39_poll: 4
> ch39_arm: 4
> ch39_aff_change: 0
> ch39_eq_rearm: 0
> ch40_events: 4
> ch40_poll: 4
> ch40_arm: 4
> ch40_aff_change: 0
> ch40_eq_rearm: 0
> ch41_events: 4
> ch41_poll: 4
> ch41_arm: 4
> ch41_aff_change: 0
> ch41_eq_rearm: 0
> ch42_events: 4
> ch42_poll: 4
> ch42_arm: 4
> ch42_aff_change: 0
> ch42_eq_rearm: 0
> ch43_events: 4
> ch43_poll: 4
> ch43_arm: 4
> ch43_aff_change: 0
> ch43_eq_rearm: 0
> ch44_events: 4
> ch44_poll: 4
> ch44_arm: 4
> ch44_aff_change: 0
> ch44_eq_rearm: 0
> ch45_events: 4
> ch45_poll: 4
> ch45_arm: 4
> ch45_aff_change: 0
> ch45_eq_rearm: 0
> ch46_events: 4
> ch46_poll: 4
> ch46_arm: 4
> ch46_aff_change: 0
> ch46_eq_rearm: 0
> ch47_events: 4
> ch47_poll: 4
> ch47_arm: 4
> ch47_aff_change: 0
> ch47_eq_rearm: 0
> ch48_events: 4
> ch48_poll: 4
> ch48_arm: 4
> ch48_aff_change: 0
> ch48_eq_rearm: 0
> ch49_events: 4
> ch49_poll: 4
> ch49_arm: 4
> ch49_aff_change: 0
> ch49_eq_rearm: 0
> ch50_events: 4
> ch50_poll: 4
> ch50_arm: 4
> ch50_aff_change: 0
> ch50_eq_rearm: 0
> ch51_events: 4
> ch51_poll: 4
> ch51_arm: 4
> ch51_aff_change: 0
> ch51_eq_rearm: 0
> ch52_events: 4
> ch52_poll: 4
> ch52_arm: 4
> ch52_aff_change: 0
> ch52_eq_rearm: 0
> ch53_events: 4
> ch53_poll: 4
> ch53_arm: 4
> ch53_aff_change: 0
> ch53_eq_rearm: 0
> ch54_events: 4
> ch54_poll: 4
> ch54_arm: 4
> ch54_aff_change: 0
> ch54_eq_rearm: 0
> ch55_events: 4
> ch55_poll: 4
> ch55_arm: 4
> ch55_aff_change: 0
> ch55_eq_rearm: 0
> rx0_packets: 7284057433
> rx0_bytes: 4330611281319
> rx0_csum_complete: 7283623076
> rx0_csum_unnecessary: 0
> rx0_csum_unnecessary_inner: 0
> rx0_csum_none: 434357
> rx0_xdp_drop: 0
> rx0_xdp_redirect: 0
> rx0_lro_packets: 0
> rx0_lro_bytes: 0
> rx0_ecn_mark: 0
> rx0_removed_vlan_packets: 7284057433
> rx0_wqe_err: 0
> rx0_mpwqe_filler_cqes: 0
> rx0_mpwqe_filler_strides: 0
> rx0_buff_alloc_err: 0
> rx0_cqe_compress_blks: 0
> rx0_cqe_compress_pkts: 0
> rx0_page_reuse: 0
> rx0_cache_reuse: 1989731589
> rx0_cache_full: 28213297
> rx0_cache_empty: 1624089822
> rx0_cache_busy: 28213961
> rx0_cache_waive: 1624083610
> rx0_congst_umr: 0
> rx0_arfs_err: 0
> rx0_xdp_tx_xmit: 0
> rx0_xdp_tx_full: 0
> rx0_xdp_tx_err: 0
> rx0_xdp_tx_cqes: 0
> rx1_packets: 6691319211
> rx1_bytes: 3799580210608
> rx1_csum_complete: 6691319211
> rx1_csum_unnecessary: 0
> rx1_csum_unnecessary_inner: 0
> rx1_csum_none: 0
> rx1_xdp_drop: 0
> rx1_xdp_redirect: 0
> rx1_lro_packets: 0
> rx1_lro_bytes: 0
> rx1_ecn_mark: 0
> rx1_removed_vlan_packets: 6691319211
> rx1_wqe_err: 0
> rx1_mpwqe_filler_cqes: 0
> rx1_mpwqe_filler_strides: 0
> rx1_buff_alloc_err: 0
> rx1_cqe_compress_blks: 0
> rx1_cqe_compress_pkts: 0
> rx1_page_reuse: 0
> rx1_cache_reuse: 2270019
> rx1_cache_full: 3343389331
> rx1_cache_empty: 6656
> rx1_cache_busy: 3343389585
> rx1_cache_waive: 0
> rx1_congst_umr: 0
> rx1_arfs_err: 0
> rx1_xdp_tx_xmit: 0
> rx1_xdp_tx_full: 0
> rx1_xdp_tx_err: 0
> rx1_xdp_tx_cqes: 0
> rx2_packets: 6618370416
> rx2_bytes: 3762508364015
> rx2_csum_complete: 6618370416
> rx2_csum_unnecessary: 0
> rx2_csum_unnecessary_inner: 0
> rx2_csum_none: 0
> rx2_xdp_drop: 0
> rx2_xdp_redirect: 0
> rx2_lro_packets: 0
> rx2_lro_bytes: 0
> rx2_ecn_mark: 0
> rx2_removed_vlan_packets: 6618370416
> rx2_wqe_err: 0
> rx2_mpwqe_filler_cqes: 0
> rx2_mpwqe_filler_strides: 0
> rx2_buff_alloc_err: 0
> rx2_cqe_compress_blks: 0
> rx2_cqe_compress_pkts: 0
> rx2_page_reuse: 0
> rx2_cache_reuse: 111419328
> rx2_cache_full: 1807563903
> rx2_cache_empty: 1390208158
> rx2_cache_busy: 1807564378
> rx2_cache_waive: 1390201722
> rx2_congst_umr: 0
> rx2_arfs_err: 0
> rx2_xdp_tx_xmit: 0
> rx2_xdp_tx_full: 0
> rx2_xdp_tx_err: 0
> rx2_xdp_tx_cqes: 0
> rx3_packets: 6665308976
> rx3_bytes: 3828546206006
> rx3_csum_complete: 6665308976
> rx3_csum_unnecessary: 0
> rx3_csum_unnecessary_inner: 0
> rx3_csum_none: 0
> rx3_xdp_drop: 0
> rx3_xdp_redirect: 0
> rx3_lro_packets: 0
> rx3_lro_bytes: 0
> rx3_ecn_mark: 0
> rx3_removed_vlan_packets: 6665308976
> rx3_wqe_err: 0
> rx3_mpwqe_filler_cqes: 0
> rx3_mpwqe_filler_strides: 0
> rx3_buff_alloc_err: 0
> rx3_cqe_compress_blks: 0
> rx3_cqe_compress_pkts: 0
> rx3_page_reuse: 0
> rx3_cache_reuse: 215779091
> rx3_cache_full: 1720040649
> rx3_cache_empty: 1396840926
> rx3_cache_busy: 1720041127
> rx3_cache_waive: 1396834493
> rx3_congst_umr: 0
> rx3_arfs_err: 0
> rx3_xdp_tx_xmit: 0
> rx3_xdp_tx_full: 0
> rx3_xdp_tx_err: 0
> rx3_xdp_tx_cqes: 0
> rx4_packets: 6764448165
> rx4_bytes: 3883101339142
> rx4_csum_complete: 6764448165
> rx4_csum_unnecessary: 0
> rx4_csum_unnecessary_inner: 0
> rx4_csum_none: 0
> rx4_xdp_drop: 0
> rx4_xdp_redirect: 0
> rx4_lro_packets: 0
> rx4_lro_bytes: 0
> rx4_ecn_mark: 0
> rx4_removed_vlan_packets: 6764448165
> rx4_wqe_err: 0
> rx4_mpwqe_filler_cqes: 0
> rx4_mpwqe_filler_strides: 0
> rx4_buff_alloc_err: 0
> rx4_cqe_compress_blks: 0
> rx4_cqe_compress_pkts: 0
> rx4_page_reuse: 0
> rx4_cache_reuse: 1930710653
> rx4_cache_full: 6490815
> rx4_cache_empty: 1445028605
> rx4_cache_busy: 6491478
> rx4_cache_waive: 1445022392
> rx4_congst_umr: 0
> rx4_arfs_err: 0
> rx4_xdp_tx_xmit: 0
> rx4_xdp_tx_full: 0
> rx4_xdp_tx_err: 0
> rx4_xdp_tx_cqes: 0
> rx5_packets: 6736853264
> rx5_bytes: 3925186068552
> rx5_csum_complete: 6736853264
> rx5_csum_unnecessary: 0
> rx5_csum_unnecessary_inner: 0
> rx5_csum_none: 0
> rx5_xdp_drop: 0
> rx5_xdp_redirect: 0
> rx5_lro_packets: 0
> rx5_lro_bytes: 0
> rx5_ecn_mark: 0
> rx5_removed_vlan_packets: 6736853264
> rx5_wqe_err: 0
> rx5_mpwqe_filler_cqes: 0
> rx5_mpwqe_filler_strides: 0
> rx5_buff_alloc_err: 0
> rx5_cqe_compress_blks: 0
> rx5_cqe_compress_pkts: 0
> rx5_page_reuse: 0
> rx5_cache_reuse: 7283914
> rx5_cache_full: 3361142463
> rx5_cache_empty: 6656
> rx5_cache_busy: 3361142718
> rx5_cache_waive: 0
> rx5_congst_umr: 0
> rx5_arfs_err: 0
> rx5_xdp_tx_xmit: 0
> rx5_xdp_tx_full: 0
> rx5_xdp_tx_err: 0
> rx5_xdp_tx_cqes: 0
> rx6_packets: 6751588828
> rx6_bytes: 3860537598885
> rx6_csum_complete: 6751588828
> rx6_csum_unnecessary: 0
> rx6_csum_unnecessary_inner: 0
> rx6_csum_none: 0
> rx6_xdp_drop: 0
> rx6_xdp_redirect: 0
> rx6_lro_packets: 0
> rx6_lro_bytes: 0
> rx6_ecn_mark: 0
> rx6_removed_vlan_packets: 6751588828
> rx6_wqe_err: 0
> rx6_mpwqe_filler_cqes: 0
> rx6_mpwqe_filler_strides: 0
> rx6_buff_alloc_err: 0
> rx6_cqe_compress_blks: 0
> rx6_cqe_compress_pkts: 0
> rx6_page_reuse: 0
> rx6_cache_reuse: 96032126
> rx6_cache_full: 1857890923
> rx6_cache_empty: 1421877543
> rx6_cache_busy: 1857891399
> rx6_cache_waive: 1421871110
> rx6_congst_umr: 0
> rx6_arfs_err: 0
> rx6_xdp_tx_xmit: 0
> rx6_xdp_tx_full: 0
> rx6_xdp_tx_err: 0
> rx6_xdp_tx_cqes: 0
> rx7_packets: 6935300074
> rx7_bytes: 4004713524388
> rx7_csum_complete: 6935300074
> rx7_csum_unnecessary: 0
> rx7_csum_unnecessary_inner: 0
> rx7_csum_none: 0
> rx7_xdp_drop: 0
> rx7_xdp_redirect: 0
> rx7_lro_packets: 0
> rx7_lro_bytes: 0
> rx7_ecn_mark: 0
> rx7_removed_vlan_packets: 6935300074
> rx7_wqe_err: 0
> rx7_mpwqe_filler_cqes: 0
> rx7_mpwqe_filler_strides: 0
> rx7_buff_alloc_err: 0
> rx7_cqe_compress_blks: 0
> rx7_cqe_compress_pkts: 0
> rx7_page_reuse: 0
> rx7_cache_reuse: 17555187
> rx7_cache_full: 3450094595
> rx7_cache_empty: 6656
> rx7_cache_busy: 3450094849
> rx7_cache_waive: 0
> rx7_congst_umr: 0
> rx7_arfs_err: 0
> rx7_xdp_tx_xmit: 0
> rx7_xdp_tx_full: 0
> rx7_xdp_tx_err: 0
> rx7_xdp_tx_cqes: 0
> rx8_packets: 6678640094
> rx8_bytes: 3783722686028
> rx8_csum_complete: 6678640094
> rx8_csum_unnecessary: 0
> rx8_csum_unnecessary_inner: 0
> rx8_csum_none: 0
> rx8_xdp_drop: 0
> rx8_xdp_redirect: 0
> rx8_lro_packets: 0
> rx8_lro_bytes: 0
> rx8_ecn_mark: 0
> rx8_removed_vlan_packets: 6678640094
> rx8_wqe_err: 0
> rx8_mpwqe_filler_cqes: 0
> rx8_mpwqe_filler_strides: 0
> rx8_buff_alloc_err: 0
> rx8_cqe_compress_blks: 0
> rx8_cqe_compress_pkts: 0
> rx8_page_reuse: 0
> rx8_cache_reuse: 71006578
> rx8_cache_full: 1879380649
> rx8_cache_empty: 1388938999
> rx8_cache_busy: 1879381123
> rx8_cache_waive: 1388932565
> rx8_congst_umr: 0
> rx8_arfs_err: 0
> rx8_xdp_tx_xmit: 0
> rx8_xdp_tx_full: 0
> rx8_xdp_tx_err: 0
> rx8_xdp_tx_cqes: 0
> rx9_packets: 6709855557
> rx9_bytes: 3849522227880
> rx9_csum_complete: 6709855557
> rx9_csum_unnecessary: 0
> rx9_csum_unnecessary_inner: 0
> rx9_csum_none: 0
> rx9_xdp_drop: 0
> rx9_xdp_redirect: 0
> rx9_lro_packets: 0
> rx9_lro_bytes: 0
> rx9_ecn_mark: 0
> rx9_removed_vlan_packets: 6709855557
> rx9_wqe_err: 0
> rx9_mpwqe_filler_cqes: 0
> rx9_mpwqe_filler_strides: 0
> rx9_buff_alloc_err: 0
> rx9_cqe_compress_blks: 0
> rx9_cqe_compress_pkts: 0
> rx9_page_reuse: 0
> rx9_cache_reuse: 108980215
> rx9_cache_full: 1822730121
> rx9_cache_empty: 1423223623
> rx9_cache_busy: 1822730594
> rx9_cache_waive: 1423217187
> rx9_congst_umr: 0
> rx9_arfs_err: 0
> rx9_xdp_tx_xmit: 0
> rx9_xdp_tx_full: 0
> rx9_xdp_tx_err: 0
> rx9_xdp_tx_cqes: 0
> rx10_packets: 6761861066
> rx10_bytes: 3816266733385
> rx10_csum_complete: 6761861066
> rx10_csum_unnecessary: 0
> rx10_csum_unnecessary_inner: 0
> rx10_csum_none: 0
> rx10_xdp_drop: 0
> rx10_xdp_redirect: 0
> rx10_lro_packets: 0
> rx10_lro_bytes: 0
> rx10_ecn_mark: 0
> rx10_removed_vlan_packets: 6761861066
> rx10_wqe_err: 0
> rx10_mpwqe_filler_cqes: 0
> rx10_mpwqe_filler_strides: 0
> rx10_buff_alloc_err: 0
> rx10_cqe_compress_blks: 0
> rx10_cqe_compress_pkts: 0
> rx10_page_reuse: 0
> rx10_cache_reuse: 3489300
> rx10_cache_full: 3377440977
> rx10_cache_empty: 6656
> rx10_cache_busy: 3377441216
> rx10_cache_waive: 0
> rx10_congst_umr: 0
> rx10_arfs_err: 0
> rx10_xdp_tx_xmit: 0
> rx10_xdp_tx_full: 0
> rx10_xdp_tx_err: 0
> rx10_xdp_tx_cqes: 0
> rx11_packets: 6868113938
> rx11_bytes: 4048196300710
> rx11_csum_complete: 6868113938
> rx11_csum_unnecessary: 0
> rx11_csum_unnecessary_inner: 0
> rx11_csum_none: 0
> rx11_xdp_drop: 0
> rx11_xdp_redirect: 0
> rx11_lro_packets: 0
> rx11_lro_bytes: 0
> rx11_ecn_mark: 0
> rx11_removed_vlan_packets: 6868113938
> rx11_wqe_err: 0
> rx11_mpwqe_filler_cqes: 0
> rx11_mpwqe_filler_strides: 0
> rx11_buff_alloc_err: 0
> rx11_cqe_compress_blks: 0
> rx11_cqe_compress_pkts: 0
> rx11_page_reuse: 0
> rx11_cache_reuse: 1948516819
> rx11_cache_full: 17132157
> rx11_cache_empty: 1468413985
> rx11_cache_busy: 17132820
> rx11_cache_waive: 1468407772
> rx11_congst_umr: 0
> rx11_arfs_err: 0
> rx11_xdp_tx_xmit: 0
> rx11_xdp_tx_full: 0
> rx11_xdp_tx_err: 0
> rx11_xdp_tx_cqes: 0
> rx12_packets: 6742955386
> rx12_bytes: 3865747629271
> rx12_csum_complete: 6742955386
> rx12_csum_unnecessary: 0
> rx12_csum_unnecessary_inner: 0
> rx12_csum_none: 0
> rx12_xdp_drop: 0
> rx12_xdp_redirect: 0
> rx12_lro_packets: 0
> rx12_lro_bytes: 0
> rx12_ecn_mark: 0
> rx12_removed_vlan_packets: 6742955386
> rx12_wqe_err: 0
> rx12_mpwqe_filler_cqes: 0
> rx12_mpwqe_filler_strides: 0
> rx12_buff_alloc_err: 0
> rx12_cqe_compress_blks: 0
> rx12_cqe_compress_pkts: 0
> rx12_page_reuse: 0
> rx12_cache_reuse: 30809331
> rx12_cache_full: 3340668106
> rx12_cache_empty: 6656
> rx12_cache_busy: 3340668333
> rx12_cache_waive: 0
> rx12_congst_umr: 0
> rx12_arfs_err: 0
> rx12_xdp_tx_xmit: 0
> rx12_xdp_tx_full: 0
> rx12_xdp_tx_err: 0
> rx12_xdp_tx_cqes: 0
> rx13_packets: 6707028036
> rx13_bytes: 3813462190623
> rx13_csum_complete: 6707028036
> rx13_csum_unnecessary: 0
> rx13_csum_unnecessary_inner: 0
> rx13_csum_none: 0
> rx13_xdp_drop: 0
> rx13_xdp_redirect: 0
> rx13_lro_packets: 0
> rx13_lro_bytes: 0
> rx13_ecn_mark: 0
> rx13_removed_vlan_packets: 6707028036
> rx13_wqe_err: 0
> rx13_mpwqe_filler_cqes: 0
> rx13_mpwqe_filler_strides: 0
> rx13_buff_alloc_err: 0
> rx13_cqe_compress_blks: 0
> rx13_cqe_compress_pkts: 0
> rx13_page_reuse: 0
> rx13_cache_reuse: 14951053
> rx13_cache_full: 3338562710
> rx13_cache_empty: 6656
> rx13_cache_busy: 3338562963
> rx13_cache_waive: 0
> rx13_congst_umr: 0
> rx13_arfs_err: 0
> rx13_xdp_tx_xmit: 0
> rx13_xdp_tx_full: 0
> rx13_xdp_tx_err: 0
> rx13_xdp_tx_cqes: 0
> rx14_packets: 6737074410
> rx14_bytes: 3868905276119
> rx14_csum_complete: 6737074410
> rx14_csum_unnecessary: 0
> rx14_csum_unnecessary_inner: 0
> rx14_csum_none: 0
> rx14_xdp_drop: 0
> rx14_xdp_redirect: 0
> rx14_lro_packets: 0
> rx14_lro_bytes: 0
> rx14_ecn_mark: 0
> rx14_removed_vlan_packets: 6737074410
> rx14_wqe_err: 0
> rx14_mpwqe_filler_cqes: 0
> rx14_mpwqe_filler_strides: 0
> rx14_buff_alloc_err: 0
> rx14_cqe_compress_blks: 0
> rx14_cqe_compress_pkts: 0
> rx14_page_reuse: 0
> rx14_cache_reuse: 967799432
> rx14_cache_full: 982704312
> rx14_cache_empty: 1418039639
> rx14_cache_busy: 982704789
> rx14_cache_waive: 1418033206
> rx14_congst_umr: 0
> rx14_arfs_err: 0
> rx14_xdp_tx_xmit: 0
> rx14_xdp_tx_full: 0
> rx14_xdp_tx_err: 0
> rx14_xdp_tx_cqes: 0
> rx15_packets: 6641887441
> rx15_bytes: 3742874400402
> rx15_csum_complete: 6641887441
> rx15_csum_unnecessary: 0
> rx15_csum_unnecessary_inner: 0
> rx15_csum_none: 0
> rx15_xdp_drop: 0
> rx15_xdp_redirect: 0
> rx15_lro_packets: 0
> rx15_lro_bytes: 0
> rx15_ecn_mark: 0
> rx15_removed_vlan_packets: 6641887441
> rx15_wqe_err: 0
> rx15_mpwqe_filler_cqes: 0
> rx15_mpwqe_filler_strides: 0
> rx15_buff_alloc_err: 0
> rx15_cqe_compress_blks: 0
> rx15_cqe_compress_pkts: 0
> rx15_page_reuse: 0
> rx15_cache_reuse: 1920227538
> rx15_cache_full: 19386129
> rx15_cache_empty: 1381335137
> rx15_cache_busy: 19387693
> rx15_cache_waive: 1381329825
> rx15_congst_umr: 0
> rx15_arfs_err: 0
> rx15_xdp_tx_xmit: 0
> rx15_xdp_tx_full: 0
> rx15_xdp_tx_err: 0
> rx15_xdp_tx_cqes: 0
> rx16_packets: 5420472874
> rx16_bytes: 3079293332581
> rx16_csum_complete: 5420472874
> rx16_csum_unnecessary: 0
> rx16_csum_unnecessary_inner: 0
> rx16_csum_none: 0
> rx16_xdp_drop: 0
> rx16_xdp_redirect: 0
> rx16_lro_packets: 0
> rx16_lro_bytes: 0
> rx16_ecn_mark: 0
> rx16_removed_vlan_packets: 5420472874
> rx16_wqe_err: 0
> rx16_mpwqe_filler_cqes: 0
> rx16_mpwqe_filler_strides: 0
> rx16_buff_alloc_err: 0
> rx16_cqe_compress_blks: 0
> rx16_cqe_compress_pkts: 0
> rx16_page_reuse: 0
> rx16_cache_reuse: 2361079
> rx16_cache_full: 2707875103
> rx16_cache_empty: 6656
> rx16_cache_busy: 2707875349
> rx16_cache_waive: 0
> rx16_congst_umr: 0
> rx16_arfs_err: 0
> rx16_xdp_tx_xmit: 0
> rx16_xdp_tx_full: 0
> rx16_xdp_tx_err: 0
> rx16_xdp_tx_cqes: 0
> rx17_packets: 5428380986
> rx17_bytes: 3080981893118
> rx17_csum_complete: 5428380986
> rx17_csum_unnecessary: 0
> rx17_csum_unnecessary_inner: 0
> rx17_csum_none: 0
> rx17_xdp_drop: 0
> rx17_xdp_redirect: 0
> rx17_lro_packets: 0
> rx17_lro_bytes: 0
> rx17_ecn_mark: 0
> rx17_removed_vlan_packets: 5428380986
> rx17_wqe_err: 0
> rx17_mpwqe_filler_cqes: 0
> rx17_mpwqe_filler_strides: 0
> rx17_buff_alloc_err: 0
> rx17_cqe_compress_blks: 0
> rx17_cqe_compress_pkts: 0
> rx17_page_reuse: 0
> rx17_cache_reuse: 1552266402
> rx17_cache_full: 5947505
> rx17_cache_empty: 1155981856
> rx17_cache_busy: 5948870
> rx17_cache_waive: 1155976345
> rx17_congst_umr: 0
> rx17_arfs_err: 0
> rx17_xdp_tx_xmit: 0
> rx17_xdp_tx_full: 0
> rx17_xdp_tx_err: 0
> rx17_xdp_tx_cqes: 0
> rx18_packets: 5529118410
> rx18_bytes: 3254749573833
> rx18_csum_complete: 5529118410
> rx18_csum_unnecessary: 0
> rx18_csum_unnecessary_inner: 0
> rx18_csum_none: 0
> rx18_xdp_drop: 0
> rx18_xdp_redirect: 0
> rx18_lro_packets: 0
> rx18_lro_bytes: 0
> rx18_ecn_mark: 0
> rx18_removed_vlan_packets: 5529118410
> rx18_wqe_err: 0
> rx18_mpwqe_filler_cqes: 0
> rx18_mpwqe_filler_strides: 0
> rx18_buff_alloc_err: 0
> rx18_cqe_compress_blks: 0
> rx18_cqe_compress_pkts: 0
> rx18_page_reuse: 0
> rx18_cache_reuse: 67438840
> rx18_cache_full: 1536718472
> rx18_cache_empty: 1160408072
> rx18_cache_busy: 1536718932
> rx18_cache_waive: 1160401638
> rx18_congst_umr: 0
> rx18_arfs_err: 0
> rx18_xdp_tx_xmit: 0
> rx18_xdp_tx_full: 0
> rx18_xdp_tx_err: 0
> rx18_xdp_tx_cqes: 0
> rx19_packets: 5449932653
> rx19_bytes: 3148726579411
> rx19_csum_complete: 5449932653
> rx19_csum_unnecessary: 0
> rx19_csum_unnecessary_inner: 0
> rx19_csum_none: 0
> rx19_xdp_drop: 0
> rx19_xdp_redirect: 0
> rx19_lro_packets: 0
> rx19_lro_bytes: 0
> rx19_ecn_mark: 0
> rx19_removed_vlan_packets: 5449932653
> rx19_wqe_err: 0
> rx19_mpwqe_filler_cqes: 0
> rx19_mpwqe_filler_strides: 0
> rx19_buff_alloc_err: 0
> rx19_cqe_compress_blks: 0
> rx19_cqe_compress_pkts: 0
> rx19_page_reuse: 0
> rx19_cache_reuse: 1537841743
> rx19_cache_full: 9920960
> rx19_cache_empty: 1177208938
> rx19_cache_busy: 9922299
> rx19_cache_waive: 1177203401
> rx19_congst_umr: 0
> rx19_arfs_err: 0
> rx19_xdp_tx_xmit: 0
> rx19_xdp_tx_full: 0
> rx19_xdp_tx_err: 0
> rx19_xdp_tx_cqes: 0
> rx20_packets: 5407910071
> rx20_bytes: 3123560861922
> rx20_csum_complete: 5407910071
> rx20_csum_unnecessary: 0
> rx20_csum_unnecessary_inner: 0
> rx20_csum_none: 0
> rx20_xdp_drop: 0
> rx20_xdp_redirect: 0
> rx20_lro_packets: 0
> rx20_lro_bytes: 0
> rx20_ecn_mark: 0
> rx20_removed_vlan_packets: 5407910071
> rx20_wqe_err: 0
> rx20_mpwqe_filler_cqes: 0
> rx20_mpwqe_filler_strides: 0
> rx20_buff_alloc_err: 0
> rx20_cqe_compress_blks: 0
> rx20_cqe_compress_pkts: 0
> rx20_page_reuse: 0
> rx20_cache_reuse: 10255209
> rx20_cache_full: 2693699571
> rx20_cache_empty: 6656
> rx20_cache_busy: 2693699823
> rx20_cache_waive: 0
> rx20_congst_umr: 0
> rx20_arfs_err: 0
> rx20_xdp_tx_xmit: 0
> rx20_xdp_tx_full: 0
> rx20_xdp_tx_err: 0
> rx20_xdp_tx_cqes: 0
> rx21_packets: 5417498508
> rx21_bytes: 3131335892379
> rx21_csum_complete: 5417498508
> rx21_csum_unnecessary: 0
> rx21_csum_unnecessary_inner: 0
> rx21_csum_none: 0
> rx21_xdp_drop: 0
> rx21_xdp_redirect: 0
> rx21_lro_packets: 0
> rx21_lro_bytes: 0
> rx21_ecn_mark: 0
> rx21_removed_vlan_packets: 5417498508
> rx21_wqe_err: 0
> rx21_mpwqe_filler_cqes: 0
> rx21_mpwqe_filler_strides: 0
> rx21_buff_alloc_err: 0
> rx21_cqe_compress_blks: 0
> rx21_cqe_compress_pkts: 0
> rx21_page_reuse: 0
> rx21_cache_reuse: 192662917
> rx21_cache_full: 1374120417
> rx21_cache_empty: 1141972100
> rx21_cache_busy: 1374120891
> rx21_cache_waive: 1141965665
> rx21_congst_umr: 0
> rx21_arfs_err: 0
> rx21_xdp_tx_xmit: 0
> rx21_xdp_tx_full: 0
> rx21_xdp_tx_err: 0
> rx21_xdp_tx_cqes: 0
> rx22_packets: 5613634706
> rx22_bytes: 3240055099058
> rx22_csum_complete: 5613634706
> rx22_csum_unnecessary: 0
> rx22_csum_unnecessary_inner: 0
> rx22_csum_none: 0
> rx22_xdp_drop: 0
> rx22_xdp_redirect: 0
> rx22_lro_packets: 0
> rx22_lro_bytes: 0
> rx22_ecn_mark: 0
> rx22_removed_vlan_packets: 5613634706
> rx22_wqe_err: 0
> rx22_mpwqe_filler_cqes: 0
> rx22_mpwqe_filler_strides: 0
> rx22_buff_alloc_err: 0
> rx22_cqe_compress_blks: 0
> rx22_cqe_compress_pkts: 0
> rx22_page_reuse: 0
> rx22_cache_reuse: 12161531
> rx22_cache_full: 2794655567
> rx22_cache_empty: 6656
> rx22_cache_busy: 2794655821
> rx22_cache_waive: 0
> rx22_congst_umr: 0
> rx22_arfs_err: 0
> rx22_xdp_tx_xmit: 0
> rx22_xdp_tx_full: 0
> rx22_xdp_tx_err: 0
> rx22_xdp_tx_cqes: 0
> rx23_packets: 5389977167
> rx23_bytes: 3054270771559
> rx23_csum_complete: 5389977167
> rx23_csum_unnecessary: 0
> rx23_csum_unnecessary_inner: 0
> rx23_csum_none: 0
> rx23_xdp_drop: 0
> rx23_xdp_redirect: 0
> rx23_lro_packets: 0
> rx23_lro_bytes: 0
> rx23_ecn_mark: 0
> rx23_removed_vlan_packets: 5389977167
> rx23_wqe_err: 0
> rx23_mpwqe_filler_cqes: 0
> rx23_mpwqe_filler_strides: 0
> rx23_buff_alloc_err: 0
> rx23_cqe_compress_blks: 0
> rx23_cqe_compress_pkts: 0
> rx23_page_reuse: 0
> rx23_cache_reuse: 709328
> rx23_cache_full: 2694279000
> rx23_cache_empty: 6656
> rx23_cache_busy: 2694279252
> rx23_cache_waive: 0
> rx23_congst_umr: 0
> rx23_arfs_err: 0
> rx23_xdp_tx_xmit: 0
> rx23_xdp_tx_full: 0
> rx23_xdp_tx_err: 0
> rx23_xdp_tx_cqes: 0
> rx24_packets: 5547561932
> rx24_bytes: 3166602453443
> rx24_csum_complete: 5547561932
> rx24_csum_unnecessary: 0
> rx24_csum_unnecessary_inner: 0
> rx24_csum_none: 0
> rx24_xdp_drop: 0
> rx24_xdp_redirect: 0
> rx24_lro_packets: 0
> rx24_lro_bytes: 0
> rx24_ecn_mark: 0
> rx24_removed_vlan_packets: 5547561932
> rx24_wqe_err: 0
> rx24_mpwqe_filler_cqes: 0
> rx24_mpwqe_filler_strides: 0
> rx24_buff_alloc_err: 0
> rx24_cqe_compress_blks: 0
> rx24_cqe_compress_pkts: 0
> rx24_page_reuse: 0
> rx24_cache_reuse: 57885119
> rx24_cache_full: 1529450077
> rx24_cache_empty: 1186451948
> rx24_cache_busy: 1529450553
> rx24_cache_waive: 1186445515
> rx24_congst_umr: 0
> rx24_arfs_err: 0
> rx24_xdp_tx_xmit: 0
> rx24_xdp_tx_full: 0
> rx24_xdp_tx_err: 0
> rx24_xdp_tx_cqes: 0
> rx25_packets: 5414569326
> rx25_bytes: 3184757708091
> rx25_csum_complete: 5414569326
> rx25_csum_unnecessary: 0
> rx25_csum_unnecessary_inner: 0
> rx25_csum_none: 0
> rx25_xdp_drop: 0
> rx25_xdp_redirect: 0
> rx25_lro_packets: 0
> rx25_lro_bytes: 0
> rx25_ecn_mark: 0
> rx25_removed_vlan_packets: 5414569326
> rx25_wqe_err: 0
> rx25_mpwqe_filler_cqes: 0
> rx25_mpwqe_filler_strides: 0
> rx25_buff_alloc_err: 0
> rx25_cqe_compress_blks: 0
> rx25_cqe_compress_pkts: 0
> rx25_page_reuse: 0
> rx25_cache_reuse: 5080853
> rx25_cache_full: 2702203555
> rx25_cache_empty: 6656
> rx25_cache_busy: 2702203807
> rx25_cache_waive: 0
> rx25_congst_umr: 0
> rx25_arfs_err: 0
> rx25_xdp_tx_xmit: 0
> rx25_xdp_tx_full: 0
> rx25_xdp_tx_err: 0
> rx25_xdp_tx_cqes: 0
> rx26_packets: 5479972151
> rx26_bytes: 3110642276239
> rx26_csum_complete: 5479972151
> rx26_csum_unnecessary: 0
> rx26_csum_unnecessary_inner: 0
> rx26_csum_none: 0
> rx26_xdp_drop: 0
> rx26_xdp_redirect: 0
> rx26_lro_packets: 0
> rx26_lro_bytes: 0
> rx26_ecn_mark: 0
> rx26_removed_vlan_packets: 5479972151
> rx26_wqe_err: 0
> rx26_mpwqe_filler_cqes: 0
> rx26_mpwqe_filler_strides: 0
> rx26_buff_alloc_err: 0
> rx26_cqe_compress_blks: 0
> rx26_cqe_compress_pkts: 0
> rx26_page_reuse: 0
> rx26_cache_reuse: 26543335
> rx26_cache_full: 2713442485
> rx26_cache_empty: 6656
> rx26_cache_busy: 2713442737
> rx26_cache_waive: 0
> rx26_congst_umr: 0
> rx26_arfs_err: 0
> rx26_xdp_tx_xmit: 0
> rx26_xdp_tx_full: 0
> rx26_xdp_tx_err: 0
> rx26_xdp_tx_cqes: 0
> rx27_packets: 5337113900
> rx27_bytes: 3068966906075
> rx27_csum_complete: 5337113900
> rx27_csum_unnecessary: 0
> rx27_csum_unnecessary_inner: 0
> rx27_csum_none: 0
> rx27_xdp_drop: 0
> rx27_xdp_redirect: 0
> rx27_lro_packets: 0
> rx27_lro_bytes: 0
> rx27_ecn_mark: 0
> rx27_removed_vlan_packets: 5337113900
> rx27_wqe_err: 0
> rx27_mpwqe_filler_cqes: 0
> rx27_mpwqe_filler_strides: 0
> rx27_buff_alloc_err: 0
> rx27_cqe_compress_blks: 0
> rx27_cqe_compress_pkts: 0
> rx27_page_reuse: 0
> rx27_cache_reuse: 1539298962
> rx27_cache_full: 10861919
> rx27_cache_empty: 1117173179
> rx27_cache_busy: 12091463
> rx27_cache_waive: 1118395847
> rx27_congst_umr: 0
> rx27_arfs_err: 0
> rx27_xdp_tx_xmit: 0
> rx27_xdp_tx_full: 0
> rx27_xdp_tx_err: 0
> rx27_xdp_tx_cqes: 0
> rx28_packets: 0
> rx28_bytes: 0
> rx28_csum_complete: 0
> rx28_csum_unnecessary: 0
> rx28_csum_unnecessary_inner: 0
> rx28_csum_none: 0
> rx28_xdp_drop: 0
> rx28_xdp_redirect: 0
> rx28_lro_packets: 0
> rx28_lro_bytes: 0
> rx28_ecn_mark: 0
> rx28_removed_vlan_packets: 0
> rx28_wqe_err: 0
> rx28_mpwqe_filler_cqes: 0
> rx28_mpwqe_filler_strides: 0
> rx28_buff_alloc_err: 0
> rx28_cqe_compress_blks: 0
> rx28_cqe_compress_pkts: 0
> rx28_page_reuse: 0
> rx28_cache_reuse: 0
> rx28_cache_full: 0
> rx28_cache_empty: 2560
> rx28_cache_busy: 0
> rx28_cache_waive: 0
> rx28_congst_umr: 0
> rx28_arfs_err: 0
> rx28_xdp_tx_xmit: 0
> rx28_xdp_tx_full: 0
> rx28_xdp_tx_err: 0
> rx28_xdp_tx_cqes: 0
> rx29_packets: 0
> rx29_bytes: 0
> rx29_csum_complete: 0
> rx29_csum_unnecessary: 0
> rx29_csum_unnecessary_inner: 0
> rx29_csum_none: 0
> rx29_xdp_drop: 0
> rx29_xdp_redirect: 0
> rx29_lro_packets: 0
> rx29_lro_bytes: 0
> rx29_ecn_mark: 0
> rx29_removed_vlan_packets: 0
> rx29_wqe_err: 0
> rx29_mpwqe_filler_cqes: 0
> rx29_mpwqe_filler_strides: 0
> rx29_buff_alloc_err: 0
> rx29_cqe_compress_blks: 0
> rx29_cqe_compress_pkts: 0
> rx29_page_reuse: 0
> rx29_cache_reuse: 0
> rx29_cache_full: 0
> rx29_cache_empty: 2560
> rx29_cache_busy: 0
> rx29_cache_waive: 0
> rx29_congst_umr: 0
> rx29_arfs_err: 0
> rx29_xdp_tx_xmit: 0
> rx29_xdp_tx_full: 0
> rx29_xdp_tx_err: 0
> rx29_xdp_tx_cqes: 0
> rx30_packets: 0
> rx30_bytes: 0
> rx30_csum_complete: 0
> rx30_csum_unnecessary: 0
> rx30_csum_unnecessary_inner: 0
> rx30_csum_none: 0
> rx30_xdp_drop: 0
> rx30_xdp_redirect: 0
> rx30_lro_packets: 0
> rx30_lro_bytes: 0
> rx30_ecn_mark: 0
> rx30_removed_vlan_packets: 0
> rx30_wqe_err: 0
> rx30_mpwqe_filler_cqes: 0
> rx30_mpwqe_filler_strides: 0
> rx30_buff_alloc_err: 0
> rx30_cqe_compress_blks: 0
> rx30_cqe_compress_pkts: 0
> rx30_page_reuse: 0
> rx30_cache_reuse: 0
> rx30_cache_full: 0
> rx30_cache_empty: 2560
> rx30_cache_busy: 0
> rx30_cache_waive: 0
> rx30_congst_umr: 0
> rx30_arfs_err: 0
> rx30_xdp_tx_xmit: 0
> rx30_xdp_tx_full: 0
> rx30_xdp_tx_err: 0
> rx30_xdp_tx_cqes: 0
> rx31_packets: 0
> rx31_bytes: 0
> rx31_csum_complete: 0
> rx31_csum_unnecessary: 0
> rx31_csum_unnecessary_inner: 0
> rx31_csum_none: 0
> rx31_xdp_drop: 0
> rx31_xdp_redirect: 0
> rx31_lro_packets: 0
> rx31_lro_bytes: 0
> rx31_ecn_mark: 0
> rx31_removed_vlan_packets: 0
> rx31_wqe_err: 0
> rx31_mpwqe_filler_cqes: 0
> rx31_mpwqe_filler_strides: 0
> rx31_buff_alloc_err: 0
> rx31_cqe_compress_blks: 0
> rx31_cqe_compress_pkts: 0
> rx31_page_reuse: 0
> rx31_cache_reuse: 0
> rx31_cache_full: 0
> rx31_cache_empty: 2560
> rx31_cache_busy: 0
> rx31_cache_waive: 0
> rx31_congst_umr: 0
> rx31_arfs_err: 0
> rx31_xdp_tx_xmit: 0
> rx31_xdp_tx_full: 0
> rx31_xdp_tx_err: 0
> rx31_xdp_tx_cqes: 0
> rx32_packets: 0
> rx32_bytes: 0
> rx32_csum_complete: 0
> rx32_csum_unnecessary: 0
> rx32_csum_unnecessary_inner: 0
> rx32_csum_none: 0
> rx32_xdp_drop: 0
> rx32_xdp_redirect: 0
> rx32_lro_packets: 0
> rx32_lro_bytes: 0
> rx32_ecn_mark: 0
> rx32_removed_vlan_packets: 0
> rx32_wqe_err: 0
> rx32_mpwqe_filler_cqes: 0
> rx32_mpwqe_filler_strides: 0
> rx32_buff_alloc_err: 0
> rx32_cqe_compress_blks: 0
> rx32_cqe_compress_pkts: 0
> rx32_page_reuse: 0
> rx32_cache_reuse: 0
> rx32_cache_full: 0
> rx32_cache_empty: 2560
> rx32_cache_busy: 0
> rx32_cache_waive: 0
> rx32_congst_umr: 0
> rx32_arfs_err: 0
> rx32_xdp_tx_xmit: 0
> rx32_xdp_tx_full: 0
> rx32_xdp_tx_err: 0
> rx32_xdp_tx_cqes: 0
> rx33_packets: 0
> rx33_bytes: 0
> rx33_csum_complete: 0
> rx33_csum_unnecessary: 0
> rx33_csum_unnecessary_inner: 0
> rx33_csum_none: 0
> rx33_xdp_drop: 0
> rx33_xdp_redirect: 0
> rx33_lro_packets: 0
> rx33_lro_bytes: 0
> rx33_ecn_mark: 0
> rx33_removed_vlan_packets: 0
> rx33_wqe_err: 0
> rx33_mpwqe_filler_cqes: 0
> rx33_mpwqe_filler_strides: 0
> rx33_buff_alloc_err: 0
> rx33_cqe_compress_blks: 0
> rx33_cqe_compress_pkts: 0
> rx33_page_reuse: 0
> rx33_cache_reuse: 0
> rx33_cache_full: 0
> rx33_cache_empty: 2560
> rx33_cache_busy: 0
> rx33_cache_waive: 0
> rx33_congst_umr: 0
> rx33_arfs_err: 0
> rx33_xdp_tx_xmit: 0
> rx33_xdp_tx_full: 0
> rx33_xdp_tx_err: 0
> rx33_xdp_tx_cqes: 0
> rx34_packets: 0
> rx34_bytes: 0
> rx34_csum_complete: 0
> rx34_csum_unnecessary: 0
> rx34_csum_unnecessary_inner: 0
> rx34_csum_none: 0
> rx34_xdp_drop: 0
> rx34_xdp_redirect: 0
> rx34_lro_packets: 0
> rx34_lro_bytes: 0
> rx34_ecn_mark: 0
> rx34_removed_vlan_packets: 0
> rx34_wqe_err: 0
> rx34_mpwqe_filler_cqes: 0
> rx34_mpwqe_filler_strides: 0
> rx34_buff_alloc_err: 0
> rx34_cqe_compress_blks: 0
> rx34_cqe_compress_pkts: 0
> rx34_page_reuse: 0
> rx34_cache_reuse: 0
> rx34_cache_full: 0
> rx34_cache_empty: 2560
> rx34_cache_busy: 0
> rx34_cache_waive: 0
> rx34_congst_umr: 0
> rx34_arfs_err: 0
> rx34_xdp_tx_xmit: 0
> rx34_xdp_tx_full: 0
> rx34_xdp_tx_err: 0
> rx34_xdp_tx_cqes: 0
> rx35_packets: 0
> rx35_bytes: 0
> rx35_csum_complete: 0
> rx35_csum_unnecessary: 0
> rx35_csum_unnecessary_inner: 0
> rx35_csum_none: 0
> rx35_xdp_drop: 0
> rx35_xdp_redirect: 0
> rx35_lro_packets: 0
> rx35_lro_bytes: 0
> rx35_ecn_mark: 0
> rx35_removed_vlan_packets: 0
> rx35_wqe_err: 0
> rx35_mpwqe_filler_cqes: 0
> rx35_mpwqe_filler_strides: 0
> rx35_buff_alloc_err: 0
> rx35_cqe_compress_blks: 0
> rx35_cqe_compress_pkts: 0
> rx35_page_reuse: 0
> rx35_cache_reuse: 0
> rx35_cache_full: 0
> rx35_cache_empty: 2560
> rx35_cache_busy: 0
> rx35_cache_waive: 0
> rx35_congst_umr: 0
> rx35_arfs_err: 0
> rx35_xdp_tx_xmit: 0
> rx35_xdp_tx_full: 0
> rx35_xdp_tx_err: 0
> rx35_xdp_tx_cqes: 0
> rx36_packets: 0
> rx36_bytes: 0
> rx36_csum_complete: 0
> rx36_csum_unnecessary: 0
> rx36_csum_unnecessary_inner: 0
> rx36_csum_none: 0
> rx36_xdp_drop: 0
> rx36_xdp_redirect: 0
> rx36_lro_packets: 0
> rx36_lro_bytes: 0
> rx36_ecn_mark: 0
> rx36_removed_vlan_packets: 0
> rx36_wqe_err: 0
> rx36_mpwqe_filler_cqes: 0
> rx36_mpwqe_filler_strides: 0
> rx36_buff_alloc_err: 0
> rx36_cqe_compress_blks: 0
> rx36_cqe_compress_pkts: 0
> rx36_page_reuse: 0
> rx36_cache_reuse: 0
> rx36_cache_full: 0
> rx36_cache_empty: 2560
> rx36_cache_busy: 0
> rx36_cache_waive: 0
> rx36_congst_umr: 0
> rx36_arfs_err: 0
> rx36_xdp_tx_xmit: 0
> rx36_xdp_tx_full: 0
> rx36_xdp_tx_err: 0
> rx36_xdp_tx_cqes: 0
> rx37_packets: 0
> rx37_bytes: 0
> rx37_csum_complete: 0
> rx37_csum_unnecessary: 0
> rx37_csum_unnecessary_inner: 0
> rx37_csum_none: 0
> rx37_xdp_drop: 0
> rx37_xdp_redirect: 0
> rx37_lro_packets: 0
> rx37_lro_bytes: 0
> rx37_ecn_mark: 0
> rx37_removed_vlan_packets: 0
> rx37_wqe_err: 0
> rx37_mpwqe_filler_cqes: 0
> rx37_mpwqe_filler_strides: 0
> rx37_buff_alloc_err: 0
> rx37_cqe_compress_blks: 0
> rx37_cqe_compress_pkts: 0
> rx37_page_reuse: 0
> rx37_cache_reuse: 0
> rx37_cache_full: 0
> rx37_cache_empty: 2560
> rx37_cache_busy: 0
> rx37_cache_waive: 0
> rx37_congst_umr: 0
> rx37_arfs_err: 0
> rx37_xdp_tx_xmit: 0
> rx37_xdp_tx_full: 0
> rx37_xdp_tx_err: 0
> rx37_xdp_tx_cqes: 0
> rx38_packets: 0
> rx38_bytes: 0
> rx38_csum_complete: 0
> rx38_csum_unnecessary: 0
> rx38_csum_unnecessary_inner: 0
> rx38_csum_none: 0
> rx38_xdp_drop: 0
> rx38_xdp_redirect: 0
> rx38_lro_packets: 0
> rx38_lro_bytes: 0
> rx38_ecn_mark: 0
> rx38_removed_vlan_packets: 0
> rx38_wqe_err: 0
> rx38_mpwqe_filler_cqes: 0
> rx38_mpwqe_filler_strides: 0
> rx38_buff_alloc_err: 0
> rx38_cqe_compress_blks: 0
> rx38_cqe_compress_pkts: 0
> rx38_page_reuse: 0
> rx38_cache_reuse: 0
> rx38_cache_full: 0
> rx38_cache_empty: 2560
> rx38_cache_busy: 0
> rx38_cache_waive: 0
> rx38_congst_umr: 0
> rx38_arfs_err: 0
> rx38_xdp_tx_xmit: 0
> rx38_xdp_tx_full: 0
> rx38_xdp_tx_err: 0
> rx38_xdp_tx_cqes: 0
> rx39_packets: 0
> rx39_bytes: 0
> rx39_csum_complete: 0
> rx39_csum_unnecessary: 0
> rx39_csum_unnecessary_inner: 0
> rx39_csum_none: 0
> rx39_xdp_drop: 0
> rx39_xdp_redirect: 0
> rx39_lro_packets: 0
> rx39_lro_bytes: 0
> rx39_ecn_mark: 0
> rx39_removed_vlan_packets: 0
> rx39_wqe_err: 0
> rx39_mpwqe_filler_cqes: 0
> rx39_mpwqe_filler_strides: 0
> rx39_buff_alloc_err: 0
> rx39_cqe_compress_blks: 0
> rx39_cqe_compress_pkts: 0
> rx39_page_reuse: 0
> rx39_cache_reuse: 0
> rx39_cache_full: 0
> rx39_cache_empty: 2560
> rx39_cache_busy: 0
> rx39_cache_waive: 0
> rx39_congst_umr: 0
> rx39_arfs_err: 0
> rx39_xdp_tx_xmit: 0
> rx39_xdp_tx_full: 0
> rx39_xdp_tx_err: 0
> rx39_xdp_tx_cqes: 0
> rx40_packets: 0
> rx40_bytes: 0
> rx40_csum_complete: 0
> rx40_csum_unnecessary: 0
> rx40_csum_unnecessary_inner: 0
> rx40_csum_none: 0
> rx40_xdp_drop: 0
> rx40_xdp_redirect: 0
> rx40_lro_packets: 0
> rx40_lro_bytes: 0
> rx40_ecn_mark: 0
> rx40_removed_vlan_packets: 0
> rx40_wqe_err: 0
> rx40_mpwqe_filler_cqes: 0
> rx40_mpwqe_filler_strides: 0
> rx40_buff_alloc_err: 0
> rx40_cqe_compress_blks: 0
> rx40_cqe_compress_pkts: 0
> rx40_page_reuse: 0
> rx40_cache_reuse: 0
> rx40_cache_full: 0
> rx40_cache_empty: 2560
> rx40_cache_busy: 0
> rx40_cache_waive: 0
> rx40_congst_umr: 0
> rx40_arfs_err: 0
> rx40_xdp_tx_xmit: 0
> rx40_xdp_tx_full: 0
> rx40_xdp_tx_err: 0
> rx40_xdp_tx_cqes: 0
> rx41_packets: 0
> rx41_bytes: 0
> rx41_csum_complete: 0
> rx41_csum_unnecessary: 0
> rx41_csum_unnecessary_inner: 0
> rx41_csum_none: 0
> rx41_xdp_drop: 0
> rx41_xdp_redirect: 0
> rx41_lro_packets: 0
> rx41_lro_bytes: 0
> rx41_ecn_mark: 0
> rx41_removed_vlan_packets: 0
> rx41_wqe_err: 0
> rx41_mpwqe_filler_cqes: 0
> rx41_mpwqe_filler_strides: 0
> rx41_buff_alloc_err: 0
> rx41_cqe_compress_blks: 0
> rx41_cqe_compress_pkts: 0
> rx41_page_reuse: 0
> rx41_cache_reuse: 0
> rx41_cache_full: 0
> rx41_cache_empty: 2560
> rx41_cache_busy: 0
> rx41_cache_waive: 0
> rx41_congst_umr: 0
> rx41_arfs_err: 0
> rx41_xdp_tx_xmit: 0
> rx41_xdp_tx_full: 0
> rx41_xdp_tx_err: 0
> rx41_xdp_tx_cqes: 0
> rx42_packets: 0
> rx42_bytes: 0
> rx42_csum_complete: 0
> rx42_csum_unnecessary: 0
> rx42_csum_unnecessary_inner: 0
> rx42_csum_none: 0
> rx42_xdp_drop: 0
> rx42_xdp_redirect: 0
> rx42_lro_packets: 0
> rx42_lro_bytes: 0
> rx42_ecn_mark: 0
> rx42_removed_vlan_packets: 0
> rx42_wqe_err: 0
> rx42_mpwqe_filler_cqes: 0
> rx42_mpwqe_filler_strides: 0
> rx42_buff_alloc_err: 0
> rx42_cqe_compress_blks: 0
> rx42_cqe_compress_pkts: 0
> rx42_page_reuse: 0
> rx42_cache_reuse: 0
> rx42_cache_full: 0
> rx42_cache_empty: 2560
> rx42_cache_busy: 0
> rx42_cache_waive: 0
> rx42_congst_umr: 0
> rx42_arfs_err: 0
> rx42_xdp_tx_xmit: 0
> rx42_xdp_tx_full: 0
> rx42_xdp_tx_err: 0
> rx42_xdp_tx_cqes: 0
> rx43_packets: 0
> rx43_bytes: 0
> rx43_csum_complete: 0
> rx43_csum_unnecessary: 0
> rx43_csum_unnecessary_inner: 0
> rx43_csum_none: 0
> rx43_xdp_drop: 0
> rx43_xdp_redirect: 0
> rx43_lro_packets: 0
> rx43_lro_bytes: 0
> rx43_ecn_mark: 0
> rx43_removed_vlan_packets: 0
> rx43_wqe_err: 0
> rx43_mpwqe_filler_cqes: 0
> rx43_mpwqe_filler_strides: 0
> rx43_buff_alloc_err: 0
> rx43_cqe_compress_blks: 0
> rx43_cqe_compress_pkts: 0
> rx43_page_reuse: 0
> rx43_cache_reuse: 0
> rx43_cache_full: 0
> rx43_cache_empty: 2560
> rx43_cache_busy: 0
> rx43_cache_waive: 0
> rx43_congst_umr: 0
> rx43_arfs_err: 0
> rx43_xdp_tx_xmit: 0
> rx43_xdp_tx_full: 0
> rx43_xdp_tx_err: 0
> rx43_xdp_tx_cqes: 0
> rx44_packets: 0
> rx44_bytes: 0
> rx44_csum_complete: 0
> rx44_csum_unnecessary: 0
> rx44_csum_unnecessary_inner: 0
> rx44_csum_none: 0
> rx44_xdp_drop: 0
> rx44_xdp_redirect: 0
> rx44_lro_packets: 0
> rx44_lro_bytes: 0
> rx44_ecn_mark: 0
> rx44_removed_vlan_packets: 0
> rx44_wqe_err: 0
> rx44_mpwqe_filler_cqes: 0
> rx44_mpwqe_filler_strides: 0
> rx44_buff_alloc_err: 0
> rx44_cqe_compress_blks: 0
> rx44_cqe_compress_pkts: 0
> rx44_page_reuse: 0
> rx44_cache_reuse: 0
> rx44_cache_full: 0
> rx44_cache_empty: 2560
> rx44_cache_busy: 0
> rx44_cache_waive: 0
> rx44_congst_umr: 0
> rx44_arfs_err: 0
> rx44_xdp_tx_xmit: 0
> rx44_xdp_tx_full: 0
> rx44_xdp_tx_err: 0
> rx44_xdp_tx_cqes: 0
> rx45_packets: 0
> rx45_bytes: 0
> rx45_csum_complete: 0
> rx45_csum_unnecessary: 0
> rx45_csum_unnecessary_inner: 0
> rx45_csum_none: 0
> rx45_xdp_drop: 0
> rx45_xdp_redirect: 0
> rx45_lro_packets: 0
> rx45_lro_bytes: 0
> rx45_ecn_mark: 0
> rx45_removed_vlan_packets: 0
> rx45_wqe_err: 0
> rx45_mpwqe_filler_cqes: 0
> rx45_mpwqe_filler_strides: 0
> rx45_buff_alloc_err: 0
> rx45_cqe_compress_blks: 0
> rx45_cqe_compress_pkts: 0
> rx45_page_reuse: 0
> rx45_cache_reuse: 0
> rx45_cache_full: 0
> rx45_cache_empty: 2560
> rx45_cache_busy: 0
> rx45_cache_waive: 0
> rx45_congst_umr: 0
> rx45_arfs_err: 0
> rx45_xdp_tx_xmit: 0
> rx45_xdp_tx_full: 0
> rx45_xdp_tx_err: 0
> rx45_xdp_tx_cqes: 0
> rx46_packets: 0
> rx46_bytes: 0
> rx46_csum_complete: 0
> rx46_csum_unnecessary: 0
> rx46_csum_unnecessary_inner: 0
> rx46_csum_none: 0
> rx46_xdp_drop: 0
> rx46_xdp_redirect: 0
> rx46_lro_packets: 0
> rx46_lro_bytes: 0
> rx46_ecn_mark: 0
> rx46_removed_vlan_packets: 0
> rx46_wqe_err: 0
> rx46_mpwqe_filler_cqes: 0
> rx46_mpwqe_filler_strides: 0
> rx46_buff_alloc_err: 0
> rx46_cqe_compress_blks: 0
> rx46_cqe_compress_pkts: 0
> rx46_page_reuse: 0
> rx46_cache_reuse: 0
> rx46_cache_full: 0
> rx46_cache_empty: 2560
> rx46_cache_busy: 0
> rx46_cache_waive: 0
> rx46_congst_umr: 0
> rx46_arfs_err: 0
> rx46_xdp_tx_xmit: 0
> rx46_xdp_tx_full: 0
> rx46_xdp_tx_err: 0
> rx46_xdp_tx_cqes: 0
> rx47_packets: 0
> rx47_bytes: 0
> rx47_csum_complete: 0
> rx47_csum_unnecessary: 0
> rx47_csum_unnecessary_inner: 0
> rx47_csum_none: 0
> rx47_xdp_drop: 0
> rx47_xdp_redirect: 0
> rx47_lro_packets: 0
> rx47_lro_bytes: 0
> rx47_ecn_mark: 0
> rx47_removed_vlan_packets: 0
> rx47_wqe_err: 0
> rx47_mpwqe_filler_cqes: 0
> rx47_mpwqe_filler_strides: 0
> rx47_buff_alloc_err: 0
> rx47_cqe_compress_blks: 0
> rx47_cqe_compress_pkts: 0
> rx47_page_reuse: 0
> rx47_cache_reuse: 0
> rx47_cache_full: 0
> rx47_cache_empty: 2560
> rx47_cache_busy: 0
> rx47_cache_waive: 0
> rx47_congst_umr: 0
> rx47_arfs_err: 0
> rx47_xdp_tx_xmit: 0
> rx47_xdp_tx_full: 0
> rx47_xdp_tx_err: 0
> rx47_xdp_tx_cqes: 0
> rx48_packets: 0
> rx48_bytes: 0
> rx48_csum_complete: 0
> rx48_csum_unnecessary: 0
> rx48_csum_unnecessary_inner: 0
> rx48_csum_none: 0
> rx48_xdp_drop: 0
> rx48_xdp_redirect: 0
> rx48_lro_packets: 0
> rx48_lro_bytes: 0
> rx48_ecn_mark: 0
> rx48_removed_vlan_packets: 0
> rx48_wqe_err: 0
> rx48_mpwqe_filler_cqes: 0
> rx48_mpwqe_filler_strides: 0
> rx48_buff_alloc_err: 0
> rx48_cqe_compress_blks: 0
> rx48_cqe_compress_pkts: 0
> rx48_page_reuse: 0
> rx48_cache_reuse: 0
> rx48_cache_full: 0
> rx48_cache_empty: 2560
> rx48_cache_busy: 0
> rx48_cache_waive: 0
> rx48_congst_umr: 0
> rx48_arfs_err: 0
> rx48_xdp_tx_xmit: 0
> rx48_xdp_tx_full: 0
> rx48_xdp_tx_err: 0
> rx48_xdp_tx_cqes: 0
> rx49_packets: 0
> rx49_bytes: 0
> rx49_csum_complete: 0
> rx49_csum_unnecessary: 0
> rx49_csum_unnecessary_inner: 0
> rx49_csum_none: 0
> rx49_xdp_drop: 0
> rx49_xdp_redirect: 0
> rx49_lro_packets: 0
> rx49_lro_bytes: 0
> rx49_ecn_mark: 0
> rx49_removed_vlan_packets: 0
> rx49_wqe_err: 0
> rx49_mpwqe_filler_cqes: 0
> rx49_mpwqe_filler_strides: 0
> rx49_buff_alloc_err: 0
> rx49_cqe_compress_blks: 0
> rx49_cqe_compress_pkts: 0
> rx49_page_reuse: 0
> rx49_cache_reuse: 0
> rx49_cache_full: 0
> rx49_cache_empty: 2560
> rx49_cache_busy: 0
> rx49_cache_waive: 0
> rx49_congst_umr: 0
> rx49_arfs_err: 0
> rx49_xdp_tx_xmit: 0
> rx49_xdp_tx_full: 0
> rx49_xdp_tx_err: 0
> rx49_xdp_tx_cqes: 0
> rx50_packets: 0
> rx50_bytes: 0
> rx50_csum_complete: 0
> rx50_csum_unnecessary: 0
> rx50_csum_unnecessary_inner: 0
> rx50_csum_none: 0
> rx50_xdp_drop: 0
> rx50_xdp_redirect: 0
> rx50_lro_packets: 0
> rx50_lro_bytes: 0
> rx50_ecn_mark: 0
> rx50_removed_vlan_packets: 0
> rx50_wqe_err: 0
> rx50_mpwqe_filler_cqes: 0
> rx50_mpwqe_filler_strides: 0
> rx50_buff_alloc_err: 0
> rx50_cqe_compress_blks: 0
> rx50_cqe_compress_pkts: 0
> rx50_page_reuse: 0
> rx50_cache_reuse: 0
> rx50_cache_full: 0
> rx50_cache_empty: 2560
> rx50_cache_busy: 0
> rx50_cache_waive: 0
> rx50_congst_umr: 0
> rx50_arfs_err: 0
> rx50_xdp_tx_xmit: 0
> rx50_xdp_tx_full: 0
> rx50_xdp_tx_err: 0
> rx50_xdp_tx_cqes: 0
> rx51_packets: 0
> rx51_bytes: 0
> rx51_csum_complete: 0
> rx51_csum_unnecessary: 0
> rx51_csum_unnecessary_inner: 0
> rx51_csum_none: 0
> rx51_xdp_drop: 0
> rx51_xdp_redirect: 0
> rx51_lro_packets: 0
> rx51_lro_bytes: 0
> rx51_ecn_mark: 0
> rx51_removed_vlan_packets: 0
> rx51_wqe_err: 0
> rx51_mpwqe_filler_cqes: 0
> rx51_mpwqe_filler_strides: 0
> rx51_buff_alloc_err: 0
> rx51_cqe_compress_blks: 0
> rx51_cqe_compress_pkts: 0
> rx51_page_reuse: 0
> rx51_cache_reuse: 0
> rx51_cache_full: 0
> rx51_cache_empty: 2560
> rx51_cache_busy: 0
> rx51_cache_waive: 0
> rx51_congst_umr: 0
> rx51_arfs_err: 0
> rx51_xdp_tx_xmit: 0
> rx51_xdp_tx_full: 0
> rx51_xdp_tx_err: 0
> rx51_xdp_tx_cqes: 0
> rx52_packets: 0
> rx52_bytes: 0
> rx52_csum_complete: 0
> rx52_csum_unnecessary: 0
> rx52_csum_unnecessary_inner: 0
> rx52_csum_none: 0
> rx52_xdp_drop: 0
> rx52_xdp_redirect: 0
> rx52_lro_packets: 0
> rx52_lro_bytes: 0
> rx52_ecn_mark: 0
> rx52_removed_vlan_packets: 0
> rx52_wqe_err: 0
> rx52_mpwqe_filler_cqes: 0
> rx52_mpwqe_filler_strides: 0
> rx52_buff_alloc_err: 0
> rx52_cqe_compress_blks: 0
> rx52_cqe_compress_pkts: 0
> rx52_page_reuse: 0
> rx52_cache_reuse: 0
> rx52_cache_full: 0
> rx52_cache_empty: 2560
> rx52_cache_busy: 0
> rx52_cache_waive: 0
> rx52_congst_umr: 0
> rx52_arfs_err: 0
> rx52_xdp_tx_xmit: 0
> rx52_xdp_tx_full: 0
> rx52_xdp_tx_err: 0
> rx52_xdp_tx_cqes: 0
> rx53_packets: 0
> rx53_bytes: 0
> rx53_csum_complete: 0
> rx53_csum_unnecessary: 0
> rx53_csum_unnecessary_inner: 0
> rx53_csum_none: 0
> rx53_xdp_drop: 0
> rx53_xdp_redirect: 0
> rx53_lro_packets: 0
> rx53_lro_bytes: 0
> rx53_ecn_mark: 0
> rx53_removed_vlan_packets: 0
> rx53_wqe_err: 0
> rx53_mpwqe_filler_cqes: 0
> rx53_mpwqe_filler_strides: 0
> rx53_buff_alloc_err: 0
> rx53_cqe_compress_blks: 0
> rx53_cqe_compress_pkts: 0
> rx53_page_reuse: 0
> rx53_cache_reuse: 0
> rx53_cache_full: 0
> rx53_cache_empty: 2560
> rx53_cache_busy: 0
> rx53_cache_waive: 0
> rx53_congst_umr: 0
> rx53_arfs_err: 0
> rx53_xdp_tx_xmit: 0
> rx53_xdp_tx_full: 0
> rx53_xdp_tx_err: 0
> rx53_xdp_tx_cqes: 0
> rx54_packets: 0
> rx54_bytes: 0
> rx54_csum_complete: 0
> rx54_csum_unnecessary: 0
> rx54_csum_unnecessary_inner: 0
> rx54_csum_none: 0
> rx54_xdp_drop: 0
> rx54_xdp_redirect: 0
> rx54_lro_packets: 0
> rx54_lro_bytes: 0
> rx54_ecn_mark: 0
> rx54_removed_vlan_packets: 0
> rx54_wqe_err: 0
> rx54_mpwqe_filler_cqes: 0
> rx54_mpwqe_filler_strides: 0
> rx54_buff_alloc_err: 0
> rx54_cqe_compress_blks: 0
> rx54_cqe_compress_pkts: 0
> rx54_page_reuse: 0
> rx54_cache_reuse: 0
> rx54_cache_full: 0
> rx54_cache_empty: 2560
> rx54_cache_busy: 0
> rx54_cache_waive: 0
> rx54_congst_umr: 0
> rx54_arfs_err: 0
> rx54_xdp_tx_xmit: 0
> rx54_xdp_tx_full: 0
> rx54_xdp_tx_err: 0
> rx54_xdp_tx_cqes: 0
> rx55_packets: 0
> rx55_bytes: 0
> rx55_csum_complete: 0
> rx55_csum_unnecessary: 0
> rx55_csum_unnecessary_inner: 0
> rx55_csum_none: 0
> rx55_xdp_drop: 0
> rx55_xdp_redirect: 0
> rx55_lro_packets: 0
> rx55_lro_bytes: 0
> rx55_ecn_mark: 0
> rx55_removed_vlan_packets: 0
> rx55_wqe_err: 0
> rx55_mpwqe_filler_cqes: 0
> rx55_mpwqe_filler_strides: 0
> rx55_buff_alloc_err: 0
> rx55_cqe_compress_blks: 0
> rx55_cqe_compress_pkts: 0
> rx55_page_reuse: 0
> rx55_cache_reuse: 0
> rx55_cache_full: 0
> rx55_cache_empty: 2560
> rx55_cache_busy: 0
> rx55_cache_waive: 0
> rx55_congst_umr: 0
> rx55_arfs_err: 0
> rx55_xdp_tx_xmit: 0
> rx55_xdp_tx_full: 0
> rx55_xdp_tx_err: 0
> rx55_xdp_tx_cqes: 0
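
A side note on reading the rx block above: queues rx28 and up show zero packets (only `cache_empty: 2560` from ring allocation), consistent with the 28 RSS queues being bound to the local NUMA node. A minimal sketch for aggregating dumps like this, assuming only the `ethtool -S` key/value format shown here (the `summarize` helper itself is hypothetical, not part of any tool):

```python
import re
from collections import defaultdict

def summarize(lines):
    """Aggregate per-queue ethtool -S counters like 'rx30_packets: 0'.

    Returns (total packets across all queues, list of idle queues)."""
    queues = defaultdict(dict)
    for line in lines:
        m = re.match(r"\s*(rx|tx)(\d+)_(\w+):\s*(\d+)", line)
        if m:
            direction, qid, name, val = m.groups()
            queues[(direction, int(qid))][name] = int(val)
    # Queues whose packet counter is exactly zero never carried traffic.
    idle = sorted(q for q, c in queues.items() if c.get("packets") == 0)
    total = sum(c.get("packets", 0) for c in queues.values())
    return total, idle

total, idle = summarize([
    "rx30_packets: 0",
    "rx30_cache_empty: 2560",
    "tx0_packets: 5868971166",
])
print(total)  # 5868971166
print(idle)   # [('rx', 30)]
```

Feeding it the full dump makes per-queue imbalance (or a misconfigured RSS indirection table spreading work onto remote-node queues) visible at a glance.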
> tx0_packets: 5868971166
> tx0_bytes: 7384241881537
> tx0_tso_packets: 1005089669
> tx0_tso_bytes: 5138882499687
> tx0_tso_inner_packets: 0
> tx0_tso_inner_bytes: 0
> tx0_csum_partial: 1405330470
> tx0_csum_partial_inner: 0
> tx0_added_vlan_packets: 3247061022
> tx0_nop: 83925216
> tx0_csum_none: 1841730552
> tx0_stopped: 0
> tx0_dropped: 0
> tx0_xmit_more: 29664303
> tx0_recover: 0
> tx0_cqes: 3217398842
> tx0_wake: 0
> tx0_cqe_err: 0
> tx1_packets: 5599378674
> tx1_bytes: 7272236466962
> tx1_tso_packets: 1024612268
> tx1_tso_bytes: 5244192050917
> tx1_tso_inner_packets: 0
> tx1_tso_inner_bytes: 0
> tx1_csum_partial: 1438007932
> tx1_csum_partial_inner: 0
> tx1_added_vlan_packets: 2919765857
> tx1_nop: 79661231
> tx1_csum_none: 1481757925
> tx1_stopped: 0
> tx1_dropped: 0
> tx1_xmit_more: 29485355
> tx1_recover: 0
> tx1_cqes: 2890282176
> tx1_wake: 0
> tx1_cqe_err: 0
> tx2_packets: 5413821094
> tx2_bytes: 7033951631334
> tx2_tso_packets: 1002868589
> tx2_tso_bytes: 5089549008985
> tx2_tso_inner_packets: 0
> tx2_tso_inner_bytes: 0
> tx2_csum_partial: 1404186175
> tx2_csum_partial_inner: 0
> tx2_added_vlan_packets: 2822670460
> tx2_nop: 77115408
> tx2_csum_none: 1418484285
> tx2_stopped: 0
> tx2_dropped: 0
> tx2_xmit_more: 29321129
> tx2_recover: 0
> tx2_cqes: 2793351019
> tx2_wake: 0
> tx2_cqe_err: 0
> tx3_packets: 5479609727
> tx3_bytes: 7116904107659
> tx3_tso_packets: 1002992639
> tx3_tso_bytes: 5154225081979
> tx3_tso_inner_packets: 0
> tx3_tso_inner_bytes: 0
> tx3_csum_partial: 1415739849
> tx3_csum_partial_inner: 0
> tx3_added_vlan_packets: 2842823811
> tx3_nop: 78060813
> tx3_csum_none: 1427083971
> tx3_stopped: 0
> tx3_dropped: 0
> tx3_xmit_more: 28575040
> tx3_recover: 0
> tx3_cqes: 2814250785
> tx3_wake: 0
> tx3_cqe_err: 0
> tx4_packets: 5508297397
> tx4_bytes: 7127659369902
> tx4_tso_packets: 1007356432
> tx4_tso_bytes: 5145975736034
> tx4_tso_inner_packets: 0
> tx4_tso_inner_bytes: 0
> tx4_csum_partial: 1411271000
> tx4_csum_partial_inner: 0
> tx4_added_vlan_packets: 2882086825
> tx4_nop: 78433610
> tx4_csum_none: 1470815825
> tx4_stopped: 0
> tx4_dropped: 0
> tx4_xmit_more: 28632444
> tx4_recover: 0
> tx4_cqes: 2853456464
> tx4_wake: 0
> tx4_cqe_err: 0
> tx5_packets: 5513864156
> tx5_bytes: 7165864145517
> tx5_tso_packets: 1014046485
> tx5_tso_bytes: 5192635614477
> tx5_tso_inner_packets: 0
> tx5_tso_inner_bytes: 0
> tx5_csum_partial: 1420810473
> tx5_csum_partial_inner: 0
> tx5_added_vlan_packets: 2861370556
> tx5_nop: 78481355
> tx5_csum_none: 1440560083
> tx5_stopped: 0
> tx5_dropped: 0
> tx5_xmit_more: 28222467
> tx5_recover: 0
> tx5_cqes: 2833149758
> tx5_wake: 0
> tx5_cqe_err: 0
> tx6_packets: 5560724761
> tx6_bytes: 7210309972086
> tx6_tso_packets: 994050514
> tx6_tso_bytes: 5171393741595
> tx6_tso_inner_packets: 0
> tx6_tso_inner_bytes: 0
> tx6_csum_partial: 1414303265
> tx6_csum_partial_inner: 0
> tx6_added_vlan_packets: 2905794177
> tx6_nop: 79353318
> tx6_csum_none: 1491490912
> tx6_stopped: 0
> tx6_dropped: 0
> tx6_xmit_more: 31246664
> tx6_recover: 0
> tx6_cqes: 2874549217
> tx6_wake: 0
> tx6_cqe_err: 0
> tx7_packets: 5557594170
> tx7_bytes: 7223138778685
> tx7_tso_packets: 1013475396
> tx7_tso_bytes: 5241530065484
> tx7_tso_inner_packets: 0
> tx7_tso_inner_bytes: 0
> tx7_csum_partial: 1438604314
> tx7_csum_partial_inner: 0
> tx7_added_vlan_packets: 2873917552
> tx7_nop: 79057059
> tx7_csum_none: 1435313239
> tx7_stopped: 0
> tx7_dropped: 0
> tx7_xmit_more: 29258761
> tx7_recover: 0
> tx7_cqes: 2844660578
> tx7_wake: 0
> tx7_cqe_err: 0
> tx8_packets: 5521254733
> tx8_bytes: 7208043146297
> tx8_tso_packets: 1014670801
> tx8_tso_bytes: 5185842447246
> tx8_tso_inner_packets: 0
> tx8_tso_inner_bytes: 0
> tx8_csum_partial: 1431631562
> tx8_csum_partial_inner: 0
> tx8_added_vlan_packets: 2872641129
> tx8_nop: 78545776
> tx8_csum_none: 1441009567
> tx8_stopped: 0
> tx8_dropped: 0
> tx8_xmit_more: 29106291
> tx8_recover: 0
> tx8_cqes: 2843536748
> tx8_wake: 0
> tx8_cqe_err: 0
> tx9_packets: 5528889957
> tx9_bytes: 7191793816058
> tx9_tso_packets: 1015955476
> tx9_tso_bytes: 5207232047828
> tx9_tso_inner_packets: 0
> tx9_tso_inner_bytes: 0
> tx9_csum_partial: 1421266796
> tx9_csum_partial_inner: 0
> tx9_added_vlan_packets: 2869523921
> tx9_nop: 78586218
> tx9_csum_none: 1448257125
> tx9_stopped: 0
> tx9_dropped: 0
> tx9_xmit_more: 29483347
> tx9_recover: 0
> tx9_cqes: 2840042245
> tx9_wake: 0
> tx9_cqe_err: 0
> tx10_packets: 5556351222
> tx10_bytes: 7254798330757
> tx10_tso_packets: 1028554460
> tx10_tso_bytes: 5246179615774
> tx10_tso_inner_packets: 0
> tx10_tso_inner_bytes: 0
> tx10_csum_partial: 1430459021
> tx10_csum_partial_inner: 0
> tx10_added_vlan_packets: 2881683382
> tx10_nop: 79139584
> tx10_csum_none: 1451224361
> tx10_stopped: 0
> tx10_dropped: 0
> tx10_xmit_more: 29217190
> tx10_recover: 0
> tx10_cqes: 2852467898
> tx10_wake: 0
> tx10_cqe_err: 0
> tx11_packets: 5455631854
> tx11_bytes: 7061121713772
> tx11_tso_packets: 992133383
> tx11_tso_bytes: 5089419722682
> tx11_tso_inner_packets: 0
> tx11_tso_inner_bytes: 0
> tx11_csum_partial: 1395542033
> tx11_csum_partial_inner: 0
> tx11_added_vlan_packets: 2852589093
> tx11_nop: 77799857
> tx11_csum_none: 1457047060
> tx11_stopped: 0
> tx11_dropped: 0
> tx11_xmit_more: 29559927
> tx11_recover: 0
> tx11_cqes: 2823031110
> tx11_wake: 0
> tx11_cqe_err: 0
> tx12_packets: 5488286808
> tx12_bytes: 7137087569303
> tx12_tso_packets: 1006435537
> tx12_tso_bytes: 5163371416750
> tx12_tso_inner_packets: 0
> tx12_tso_inner_bytes: 0
> tx12_csum_partial: 1414799411
> tx12_csum_partial_inner: 0
> tx12_added_vlan_packets: 2841679543
> tx12_nop: 78387039
> tx12_csum_none: 1426880132
> tx12_stopped: 0
> tx12_dropped: 0
> tx12_xmit_more: 28607526
> tx12_recover: 0
> tx12_cqes: 2813073557
> tx12_wake: 0
> tx12_cqe_err: 0
> tx13_packets: 5594132290
> tx13_bytes: 7251106284829
> tx13_tso_packets: 1035172061
> tx13_tso_bytes: 5251200286298
> tx13_tso_inner_packets: 0
> tx13_tso_inner_bytes: 0
> tx13_csum_partial: 1443665981
> tx13_csum_partial_inner: 0
> tx13_added_vlan_packets: 2916604799
> tx13_nop: 79670465
> tx13_csum_none: 1472938818
> tx13_stopped: 0
> tx13_dropped: 0
> tx13_xmit_more: 27797067
> tx13_recover: 0
> tx13_cqes: 2888809352
> tx13_wake: 0
> tx13_cqe_err: 0
> tx14_packets: 5548790952
> tx14_bytes: 7194211868411
> tx14_tso_packets: 1021015561
> tx14_tso_bytes: 5231483708869
> tx14_tso_inner_packets: 0
> tx14_tso_inner_bytes: 0
> tx14_csum_partial: 1427711576
> tx14_csum_partial_inner: 0
> tx14_added_vlan_packets: 2875288572
> tx14_nop: 78900224
> tx14_csum_none: 1447576996
> tx14_stopped: 0
> tx14_dropped: 0
> tx14_xmit_more: 30003496
> tx14_recover: 0
> tx14_cqes: 2845286732
> tx14_wake: 0
> tx14_cqe_err: 0
> tx15_packets: 5609310963
> tx15_bytes: 7271380831798
> tx15_tso_packets: 1027830118
> tx15_tso_bytes: 5229697431506
> tx15_tso_inner_packets: 0
> tx15_tso_inner_bytes: 0
> tx15_csum_partial: 1429209941
> tx15_csum_partial_inner: 0
> tx15_added_vlan_packets: 2940315402
> tx15_nop: 79950883
> tx15_csum_none: 1511105462
> tx15_stopped: 0
> tx15_dropped: 0
> tx15_xmit_more: 28820740
> tx15_recover: 0
> tx15_cqes: 2911496633
> tx15_wake: 0
> tx15_cqe_err: 0
> tx16_packets: 4465363036
> tx16_bytes: 5769771803704
> tx16_tso_packets: 817101913
> tx16_tso_bytes: 4180172833814
> tx16_tso_inner_packets: 0
> tx16_tso_inner_bytes: 0
> tx16_csum_partial: 1136731404
> tx16_csum_partial_inner: 0
> tx16_added_vlan_packets: 2332178232
> tx16_nop: 63458573
> tx16_csum_none: 1195446828
> tx16_stopped: 0
> tx16_dropped: 0
> tx16_xmit_more: 23756254
> tx16_recover: 0
> tx16_cqes: 2308423025
> tx16_wake: 0
> tx16_cqe_err: 0
> tx17_packets: 4380386348
> tx17_bytes: 5708702994526
> tx17_tso_packets: 813638023
> tx17_tso_bytes: 4130806014947
> tx17_tso_inner_packets: 0
> tx17_tso_inner_bytes: 0
> tx17_csum_partial: 1133007164
> tx17_csum_partial_inner: 0
> tx17_added_vlan_packets: 2277314787
> tx17_nop: 62377372
> tx17_csum_none: 1144307623
> tx17_stopped: 0
> tx17_dropped: 0
> tx17_xmit_more: 23731361
> tx17_recover: 0
> tx17_cqes: 2253584638
> tx17_wake: 0
> tx17_cqe_err: 0
> tx18_packets: 4450359743
> tx18_bytes: 5758968674820
> tx18_tso_packets: 815791601
> tx18_tso_bytes: 4179942688909
> tx18_tso_inner_packets: 0
> tx18_tso_inner_bytes: 0
> tx18_csum_partial: 1137649257
> tx18_csum_partial_inner: 0
> tx18_added_vlan_packets: 2314556550
> tx18_nop: 63271085
> tx18_csum_none: 1176907293
> tx18_stopped: 0
> tx18_dropped: 0
> tx18_xmit_more: 23055770
> tx18_recover: 0
> tx18_cqes: 2291501928
> tx18_wake: 0
> tx18_cqe_err: 0
> tx19_packets: 4596064378
> tx19_bytes: 5916675706535
> tx19_tso_packets: 825788649
> tx19_tso_bytes: 4208046929921
> tx19_tso_inner_packets: 0
> tx19_tso_inner_bytes: 0
> tx19_csum_partial: 1150666569
> tx19_csum_partial_inner: 0
> tx19_added_vlan_packets: 2450567026
> tx19_nop: 65468504
> tx19_csum_none: 1299900457
> tx19_stopped: 0
> tx19_dropped: 0
> tx19_xmit_more: 23846250
> tx19_recover: 0
> tx19_cqes: 2426722127
> tx19_wake: 0
> tx19_cqe_err: 0
> tx20_packets: 4424935388
> tx20_bytes: 5757631205901
> tx20_tso_packets: 804875006
> tx20_tso_bytes: 4156262736109
> tx20_tso_inner_packets: 0
> tx20_tso_inner_bytes: 0
> tx20_csum_partial: 1134144916
> tx20_csum_partial_inner: 0
> tx20_added_vlan_packets: 2294839665
> tx20_nop: 63023986
> tx20_csum_none: 1160694749
> tx20_stopped: 0
> tx20_dropped: 0
> tx20_xmit_more: 23393201
> tx20_recover: 0
> tx20_cqes: 2271447623
> tx20_wake: 0
> tx20_cqe_err: 0
> tx21_packets: 4595062285
> tx21_bytes: 5958671993467
> tx21_tso_packets: 821936215
> tx21_tso_bytes: 4187977870684
> tx21_tso_inner_packets: 0
> tx21_tso_inner_bytes: 0
> tx21_csum_partial: 1143339787
> tx21_csum_partial_inner: 0
> tx21_added_vlan_packets: 2457167412
> tx21_nop: 65697763
> tx21_csum_none: 1313827625
> tx21_stopped: 0
> tx21_dropped: 0
> tx21_xmit_more: 23858345
> tx21_recover: 0
> tx21_cqes: 2433310348
> tx21_wake: 0
> tx21_cqe_err: 0
> tx22_packets: 4664446513
> tx22_bytes: 5931429292082
> tx22_tso_packets: 814457881
> tx22_tso_bytes: 4148607956533
> tx22_tso_inner_packets: 0
> tx22_tso_inner_bytes: 0
> tx22_csum_partial: 1127284783
> tx22_csum_partial_inner: 0
> tx22_added_vlan_packets: 2548650146
> tx22_nop: 66299909
> tx22_csum_none: 1421365363
> tx22_stopped: 0
> tx22_dropped: 0
> tx22_xmit_more: 23800911
> tx22_recover: 0
> tx22_cqes: 2524850415
> tx22_wake: 0
> tx22_cqe_err: 0
> tx23_packets: 4416221747
> tx23_bytes: 5721472587985
> tx23_tso_packets: 823538520
> tx23_tso_bytes: 4163520218617
> tx23_tso_inner_packets: 0
> tx23_tso_inner_bytes: 0
> tx23_csum_partial: 1135996006
> tx23_csum_partial_inner: 0
> tx23_added_vlan_packets: 2292404120
> tx23_nop: 62709432
> tx23_csum_none: 1156408114
> tx23_stopped: 0
> tx23_dropped: 0
> tx23_xmit_more: 22299889
> tx23_recover: 0
> tx23_cqes: 2270105487
> tx23_wake: 0
> tx23_cqe_err: 0
> tx24_packets: 4420014824
> tx24_bytes: 5740767318521
> tx24_tso_packets: 820838072
> tx24_tso_bytes: 4183722948422
> tx24_tso_inner_packets: 0
> tx24_tso_inner_bytes: 0
> tx24_csum_partial: 1138070059
> tx24_csum_partial_inner: 0
> tx24_added_vlan_packets: 2289043946
> tx24_nop: 62797341
> tx24_csum_none: 1150973887
> tx24_stopped: 0
> tx24_dropped: 0
> tx24_xmit_more: 22744690
> tx24_recover: 0
> tx24_cqes: 2266300568
> tx24_wake: 0
> tx24_cqe_err: 0
> tx25_packets: 4413225545
> tx25_bytes: 5716162617155
> tx25_tso_packets: 808274341
> tx25_tso_bytes: 4138408857714
> tx25_tso_inner_packets: 0
> tx25_tso_inner_bytes: 0
> tx25_csum_partial: 1134587898
> tx25_csum_partial_inner: 0
> tx25_added_vlan_packets: 2297149310
> tx25_nop: 62958238
> tx25_csum_none: 1162561412
> tx25_stopped: 0
> tx25_dropped: 0
> tx25_xmit_more: 24463552
> tx25_recover: 0
> tx25_cqes: 2272686971
> tx25_wake: 0
> tx25_cqe_err: 0
> tx26_packets: 4524907591
> tx26_bytes: 5865394280699
> tx26_tso_packets: 807270022
> tx26_tso_bytes: 4148754705317
> tx26_tso_inner_packets: 0
> tx26_tso_inner_bytes: 0
> tx26_csum_partial: 1130306933
> tx26_csum_partial_inner: 0
> tx26_added_vlan_packets: 2402682460
> tx26_nop: 64474322
> tx26_csum_none: 1272375527
> tx26_stopped: 1
> tx26_dropped: 0
> tx26_xmit_more: 23316186
> tx26_recover: 0
> tx26_cqes: 2379367502
> tx26_wake: 1
> tx26_cqe_err: 0
> tx27_packets: 4376114969
> tx27_bytes: 5683551238304
> tx27_tso_packets: 809344829
> tx27_tso_bytes: 4124331859270
> tx27_tso_inner_packets: 0
> tx27_tso_inner_bytes: 0
> tx27_csum_partial: 1124954937
> tx27_csum_partial_inner: 0
> tx27_added_vlan_packets: 2267871300
> tx27_nop: 62213214
> tx27_csum_none: 1142916363
> tx27_stopped: 0
> tx27_dropped: 0
> tx27_xmit_more: 23369974
> tx27_recover: 0
> tx27_cqes: 2244502686
> tx27_wake: 0
> tx27_cqe_err: 0
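
One detail worth flagging in the tx block above: tx26 is the only ring with `tx26_stopped: 1` and `tx26_wake: 1`, i.e. that ring filled once and was later restarted. A hedged helper (same assumed key/value format as the dump; the `find_anomalies` name is made up for illustration) to scan for nonzero stall/error counters rather than eyeballing thousands of lines:

```python
import re

# Counter suffixes that indicate stalls or errors when nonzero.
WATCH = ("stopped", "wake", "dropped", "cqe_err", "wqe_err", "buff_alloc_err")

def find_anomalies(lines):
    """Return (counter, value) pairs for nonzero stall/error counters."""
    hits = []
    for line in lines:
        m = re.match(r"\s*((?:rx|tx)\d+_(\w+)):\s*(\d+)", line)
        if m and m.group(2) in WATCH and int(m.group(3)) != 0:
            hits.append((m.group(1), int(m.group(3))))
    return hits

print(find_anomalies(["tx26_stopped: 1", "tx26_wake: 1", "tx27_stopped: 0"]))
# [('tx26_stopped', 1), ('tx26_wake', 1)]
```

A single stop/wake pair over this many packets is harmless; a steadily growing `stopped` count would point at TX completion processing falling behind.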
> tx28_packets: 3
> tx28_bytes: 266
> tx28_tso_packets: 0
> tx28_tso_bytes: 0
> tx28_tso_inner_packets: 0
> tx28_tso_inner_bytes: 0
> tx28_csum_partial: 0
> tx28_csum_partial_inner: 0
> tx28_added_vlan_packets: 0
> tx28_nop: 0
> tx28_csum_none: 3
> tx28_stopped: 0
> tx28_dropped: 0
> tx28_xmit_more: 0
> tx28_recover: 0
> tx28_cqes: 3
> tx28_wake: 0
> tx28_cqe_err: 0
> tx29_packets: 0
> tx29_bytes: 0
> tx29_tso_packets: 0
> tx29_tso_bytes: 0
> tx29_tso_inner_packets: 0
> tx29_tso_inner_bytes: 0
> tx29_csum_partial: 0
> tx29_csum_partial_inner: 0
> tx29_added_vlan_packets: 0
> tx29_nop: 0
> tx29_csum_none: 0
> tx29_stopped: 0
> tx29_dropped: 0
> tx29_xmit_more: 0
> tx29_recover: 0
> tx29_cqes: 0
> tx29_wake: 0
> tx29_cqe_err: 0
> tx30_packets: 0
> tx30_bytes: 0
> tx30_tso_packets: 0
> tx30_tso_bytes: 0
> tx30_tso_inner_packets: 0
> tx30_tso_inner_bytes: 0
> tx30_csum_partial: 0
> tx30_csum_partial_inner: 0
> tx30_added_vlan_packets: 0
> tx30_nop: 0
> tx30_csum_none: 0
> tx30_stopped: 0
> tx30_dropped: 0
> tx30_xmit_more: 0
> tx30_recover: 0
> tx30_cqes: 0
> tx30_wake: 0
> tx30_cqe_err: 0
> tx31_packets: 0
> tx31_bytes: 0
> tx31_tso_packets: 0
> tx31_tso_bytes: 0
> tx31_tso_inner_packets: 0
> tx31_tso_inner_bytes: 0
> tx31_csum_partial: 0
> tx31_csum_partial_inner: 0
> tx31_added_vlan_packets: 0
> tx31_nop: 0
> tx31_csum_none: 0
> tx31_stopped: 0
> tx31_dropped: 0
> tx31_xmit_more: 0
> tx31_recover: 0
> tx31_cqes: 0
> tx31_wake: 0
> tx31_cqe_err: 0
> tx32_packets: 0
> tx32_bytes: 0
> tx32_tso_packets: 0
> tx32_tso_bytes: 0
> tx32_tso_inner_packets: 0
> tx32_tso_inner_bytes: 0
> tx32_csum_partial: 0
> tx32_csum_partial_inner: 0
> tx32_added_vlan_packets: 0
> tx32_nop: 0
> tx32_csum_none: 0
> tx32_stopped: 0
> tx32_dropped: 0
> tx32_xmit_more: 0
> tx32_recover: 0
> tx32_cqes: 0
> tx32_wake: 0
> tx32_cqe_err: 0
> tx33_packets: 0
> tx33_bytes: 0
> tx33_tso_packets: 0
> tx33_tso_bytes: 0
> tx33_tso_inner_packets: 0
> tx33_tso_inner_bytes: 0
> tx33_csum_partial: 0
> tx33_csum_partial_inner: 0
> tx33_added_vlan_packets: 0
> tx33_nop: 0
> tx33_csum_none: 0
> tx33_stopped: 0
> tx33_dropped: 0
> tx33_xmit_more: 0
> tx33_recover: 0
> tx33_cqes: 0
> tx33_wake: 0
> tx33_cqe_err: 0
> tx34_packets: 0
> tx34_bytes: 0
> tx34_tso_packets: 0
> tx34_tso_bytes: 0
> tx34_tso_inner_packets: 0
> tx34_tso_inner_bytes: 0
> tx34_csum_partial: 0
> tx34_csum_partial_inner: 0
> tx34_added_vlan_packets: 0
> tx34_nop: 0
> tx34_csum_none: 0
> tx34_stopped: 0
> tx34_dropped: 0
> tx34_xmit_more: 0
> tx34_recover: 0
> tx34_cqes: 0
> tx34_wake: 0
> tx34_cqe_err: 0
> tx35_packets: 0
> tx35_bytes: 0
> tx35_tso_packets: 0
> tx35_tso_bytes: 0
> tx35_tso_inner_packets: 0
> tx35_tso_inner_bytes: 0
> tx35_csum_partial: 0
> tx35_csum_partial_inner: 0
> tx35_added_vlan_packets: 0
> tx35_nop: 0
> tx35_csum_none: 0
> tx35_stopped: 0
> tx35_dropped: 0
> tx35_xmit_more: 0
> tx35_recover: 0
> tx35_cqes: 0
> tx35_wake: 0
> tx35_cqe_err: 0
> tx36_packets: 0
> tx36_bytes: 0
> tx36_tso_packets: 0
> tx36_tso_bytes: 0
> tx36_tso_inner_packets: 0
> tx36_tso_inner_bytes: 0
> tx36_csum_partial: 0
> tx36_csum_partial_inner: 0
> tx36_added_vlan_packets: 0
> tx36_nop: 0
> tx36_csum_none: 0
> tx36_stopped: 0
> tx36_dropped: 0
> tx36_xmit_more: 0
> tx36_recover: 0
> tx36_cqes: 0
> tx36_wake: 0
> tx36_cqe_err: 0
> tx37_packets: 0
> tx37_bytes: 0
> tx37_tso_packets: 0
> tx37_tso_bytes: 0
> tx37_tso_inner_packets: 0
> tx37_tso_inner_bytes: 0
> tx37_csum_partial: 0
> tx37_csum_partial_inner: 0
> tx37_added_vlan_packets: 0
> tx37_nop: 0
> tx37_csum_none: 0
> tx37_stopped: 0
> tx37_dropped: 0
> tx37_xmit_more: 0
> tx37_recover: 0
> tx37_cqes: 0
> tx37_wake: 0
> tx37_cqe_err: 0
> tx38_packets: 0
> tx38_bytes: 0
> tx38_tso_packets: 0
> tx38_tso_bytes: 0
> tx38_tso_inner_packets: 0
> tx38_tso_inner_bytes: 0
> tx38_csum_partial: 0
> tx38_csum_partial_inner: 0
> tx38_added_vlan_packets: 0
> tx38_nop: 0
> tx38_csum_none: 0
> tx38_stopped: 0
> tx38_dropped: 0
> tx38_xmit_more: 0
> tx38_recover: 0
> tx38_cqes: 0
> tx38_wake: 0
> tx38_cqe_err: 0
> tx39_packets: 0
> tx39_bytes: 0
> tx39_tso_packets: 0
> tx39_tso_bytes: 0
> tx39_tso_inner_packets: 0
> tx39_tso_inner_bytes: 0
> tx39_csum_partial: 0
> tx39_csum_partial_inner: 0
> tx39_added_vlan_packets: 0
> tx39_nop: 0
> tx39_csum_none: 0
> tx39_stopped: 0
> tx39_dropped: 0
> tx39_xmit_more: 0
> tx39_recover: 0
> tx39_cqes: 0
> tx39_wake: 0
> tx39_cqe_err: 0
> tx40_packets: 0
> tx40_bytes: 0
> tx40_tso_packets: 0
> tx40_tso_bytes: 0
> tx40_tso_inner_packets: 0
> tx40_tso_inner_bytes: 0
> tx40_csum_partial: 0
> tx40_csum_partial_inner: 0
> tx40_added_vlan_packets: 0
> tx40_nop: 0
> tx40_csum_none: 0
> tx40_stopped: 0
> tx40_dropped: 0
> tx40_xmit_more: 0
> tx40_recover: 0
> tx40_cqes: 0
> tx40_wake: 0
> tx40_cqe_err: 0
> tx41_packets: 0
> tx41_bytes: 0
> tx41_tso_packets: 0
> tx41_tso_bytes: 0
> tx41_tso_inner_packets: 0
> tx41_tso_inner_bytes: 0
> tx41_csum_partial: 0
> tx41_csum_partial_inner: 0
> tx41_added_vlan_packets: 0
> tx41_nop: 0
> tx41_csum_none: 0
> tx41_stopped: 0
> tx41_dropped: 0
> tx41_xmit_more: 0
> tx41_recover: 0
> tx41_cqes: 0
> tx41_wake: 0
> tx41_cqe_err: 0
> tx42_packets: 0
> tx42_bytes: 0
> tx42_tso_packets: 0
> tx42_tso_bytes: 0
> tx42_tso_inner_packets: 0
> tx42_tso_inner_bytes: 0
> tx42_csum_partial: 0
> tx42_csum_partial_inner: 0
> tx42_added_vlan_packets: 0
> tx42_nop: 0
> tx42_csum_none: 0
> tx42_stopped: 0
> tx42_dropped: 0
> tx42_xmit_more: 0
> tx42_recover: 0
> tx42_cqes: 0
> tx42_wake: 0
> tx42_cqe_err: 0
> tx43_packets: 0
> tx43_bytes: 0
> tx43_tso_packets: 0
> tx43_tso_bytes: 0
> tx43_tso_inner_packets: 0
> tx43_tso_inner_bytes: 0
> tx43_csum_partial: 0
> tx43_csum_partial_inner: 0
> tx43_added_vlan_packets: 0
> tx43_nop: 0
> tx43_csum_none: 0
> tx43_stopped: 0
> tx43_dropped: 0
> tx43_xmit_more: 0
> tx43_recover: 0
> tx43_cqes: 0
> tx43_wake: 0
> tx43_cqe_err: 0
> tx44_packets: 0
> tx44_bytes: 0
> tx44_tso_packets: 0
> tx44_tso_bytes: 0
> tx44_tso_inner_packets: 0
> tx44_tso_inner_bytes: 0
> tx44_csum_partial: 0
> tx44_csum_partial_inner: 0
> tx44_added_vlan_packets: 0
> tx44_nop: 0
> tx44_csum_none: 0
> tx44_stopped: 0
> tx44_dropped: 0
> tx44_xmit_more: 0
> tx44_recover: 0
> tx44_cqes: 0
> tx44_wake: 0
> tx44_cqe_err: 0
> tx45_packets: 0
> tx45_bytes: 0
> tx45_tso_packets: 0
> tx45_tso_bytes: 0
> tx45_tso_inner_packets: 0
> tx45_tso_inner_bytes: 0
> tx45_csum_partial: 0
> tx45_csum_partial_inner: 0
> tx45_added_vlan_packets: 0
> tx45_nop: 0
> tx45_csum_none: 0
> tx45_stopped: 0
> tx45_dropped: 0
> tx45_xmit_more: 0
> tx45_recover: 0
> tx45_cqes: 0
> tx45_wake: 0
> tx45_cqe_err: 0
> tx46_packets: 0
> tx46_bytes: 0
> tx46_tso_packets: 0
> tx46_tso_bytes: 0
> tx46_tso_inner_packets: 0
> tx46_tso_inner_bytes: 0
> tx46_csum_partial: 0
> tx46_csum_partial_inner: 0
> tx46_added_vlan_packets: 0
> tx46_nop: 0
> tx46_csum_none: 0
> tx46_stopped: 0
> tx46_dropped: 0
> tx46_xmit_more: 0
> tx46_recover: 0
> tx46_cqes: 0
> tx46_wake: 0
> tx46_cqe_err: 0
> tx47_packets: 0
> tx47_bytes: 0
> tx47_tso_packets: 0
> tx47_tso_bytes: 0
> tx47_tso_inner_packets: 0
> tx47_tso_inner_bytes: 0
> tx47_csum_partial: 0
> tx47_csum_partial_inner: 0
> tx47_added_vlan_packets: 0
> tx47_nop: 0
> tx47_csum_none: 0
> tx47_stopped: 0
> tx47_dropped: 0
> tx47_xmit_more: 0
> tx47_recover: 0
> tx47_cqes: 0
> tx47_wake: 0
> tx47_cqe_err: 0
> tx48_packets: 0
> tx48_bytes: 0
> tx48_tso_packets: 0
> tx48_tso_bytes: 0
> tx48_tso_inner_packets: 0
> tx48_tso_inner_bytes: 0
> tx48_csum_partial: 0
> tx48_csum_partial_inner: 0
> tx48_added_vlan_packets: 0
> tx48_nop: 0
> tx48_csum_none: 0
> tx48_stopped: 0
> tx48_dropped: 0
> tx48_xmit_more: 0
> tx48_recover: 0
> tx48_cqes: 0
> tx48_wake: 0
> tx48_cqe_err: 0
> tx49_packets: 0
> tx49_bytes: 0
> tx49_tso_packets: 0
> tx49_tso_bytes: 0
> tx49_tso_inner_packets: 0
> tx49_tso_inner_bytes: 0
> tx49_csum_partial: 0
> tx49_csum_partial_inner: 0
> tx49_added_vlan_packets: 0
> tx49_nop: 0
> tx49_csum_none: 0
> tx49_stopped: 0
> tx49_dropped: 0
> tx49_xmit_more: 0
> tx49_recover: 0
> tx49_cqes: 0
> tx49_wake: 0
> tx49_cqe_err: 0
> tx50_packets: 0
> tx50_bytes: 0
> tx50_tso_packets: 0
> tx50_tso_bytes: 0
> tx50_tso_inner_packets: 0
> tx50_tso_inner_bytes: 0
> tx50_csum_partial: 0
> tx50_csum_partial_inner: 0
> tx50_added_vlan_packets: 0
> tx50_nop: 0
> tx50_csum_none: 0
> tx50_stopped: 0
> tx50_dropped: 0
> tx50_xmit_more: 0
> tx50_recover: 0
> tx50_cqes: 0
> tx50_wake: 0
> tx50_cqe_err: 0
> tx51_packets: 0
> tx51_bytes: 0
> tx51_tso_packets: 0
> tx51_tso_bytes: 0
> tx51_tso_inner_packets: 0
> tx51_tso_inner_bytes: 0
> tx51_csum_partial: 0
> tx51_csum_partial_inner: 0
> tx51_added_vlan_packets: 0
> tx51_nop: 0
> tx51_csum_none: 0
> tx51_stopped: 0
> tx51_dropped: 0
> tx51_xmit_more: 0
> tx51_recover: 0
> tx51_cqes: 0
> tx51_wake: 0
> tx51_cqe_err: 0
> tx52_packets: 0
> tx52_bytes: 0
> tx52_tso_packets: 0
> tx52_tso_bytes: 0
> tx52_tso_inner_packets: 0
> tx52_tso_inner_bytes: 0
> tx52_csum_partial: 0
> tx52_csum_partial_inner: 0
> tx52_added_vlan_packets: 0
> tx52_nop: 0
> tx52_csum_none: 0
> tx52_stopped: 0
> tx52_dropped: 0
> tx52_xmit_more: 0
> tx52_recover: 0
> tx52_cqes: 0
> tx52_wake: 0
> tx52_cqe_err: 0
> tx53_packets: 0
> tx53_bytes: 0
> tx53_tso_packets: 0
> tx53_tso_bytes: 0
> tx53_tso_inner_packets: 0
> tx53_tso_inner_bytes: 0
> tx53_csum_partial: 0
> tx53_csum_partial_inner: 0
> tx53_added_vlan_packets: 0
> tx53_nop: 0
> tx53_csum_none: 0
> tx53_stopped: 0
> tx53_dropped: 0
> tx53_xmit_more: 0
> tx53_recover: 0
> tx53_cqes: 0
> tx53_wake: 0
> tx53_cqe_err: 0
> tx54_packets: 0
> tx54_bytes: 0
> tx54_tso_packets: 0
> tx54_tso_bytes: 0
> tx54_tso_inner_packets: 0
> tx54_tso_inner_bytes: 0
> tx54_csum_partial: 0
> tx54_csum_partial_inner: 0
> tx54_added_vlan_packets: 0
> tx54_nop: 0
> tx54_csum_none: 0
> tx54_stopped: 0
> tx54_dropped: 0
> tx54_xmit_more: 0
> tx54_recover: 0
> tx54_cqes: 0
> tx54_wake: 0
> tx54_cqe_err: 0
> tx55_packets: 0
> tx55_bytes: 0
> tx55_tso_packets: 0
> tx55_tso_bytes: 0
> tx55_tso_inner_packets: 0
> tx55_tso_inner_bytes: 0
> tx55_csum_partial: 0
> tx55_csum_partial_inner: 0
> tx55_added_vlan_packets: 0
> tx55_nop: 0
> tx55_csum_none: 0
> tx55_stopped: 0
> tx55_dropped: 0
> tx55_xmit_more: 0
> tx55_recover: 0
> tx55_cqes: 0
> tx55_wake: 0
> tx55_cqe_err: 0
> tx0_xdp_xmit: 0
> tx0_xdp_full: 0
> tx0_xdp_err: 0
> tx0_xdp_cqes: 0
> tx1_xdp_xmit: 0
> tx1_xdp_full: 0
> tx1_xdp_err: 0
> tx1_xdp_cqes: 0
> tx2_xdp_xmit: 0
> tx2_xdp_full: 0
> tx2_xdp_err: 0
> tx2_xdp_cqes: 0
> tx3_xdp_xmit: 0
> tx3_xdp_full: 0
> tx3_xdp_err: 0
> tx3_xdp_cqes: 0
> tx4_xdp_xmit: 0
> tx4_xdp_full: 0
> tx4_xdp_err: 0
> tx4_xdp_cqes: 0
> tx5_xdp_xmit: 0
> tx5_xdp_full: 0
> tx5_xdp_err: 0
> tx5_xdp_cqes: 0
> tx6_xdp_xmit: 0
> tx6_xdp_full: 0
> tx6_xdp_err: 0
> tx6_xdp_cqes: 0
> tx7_xdp_xmit: 0
> tx7_xdp_full: 0
> tx7_xdp_err: 0
> tx7_xdp_cqes: 0
> tx8_xdp_xmit: 0
> tx8_xdp_full: 0
> tx8_xdp_err: 0
> tx8_xdp_cqes: 0
> tx9_xdp_xmit: 0
> tx9_xdp_full: 0
> tx9_xdp_err: 0
> tx9_xdp_cqes: 0
> tx10_xdp_xmit: 0
> tx10_xdp_full: 0
> tx10_xdp_err: 0
> tx10_xdp_cqes: 0
> tx11_xdp_xmit: 0
> tx11_xdp_full: 0
> tx11_xdp_err: 0
> tx11_xdp_cqes: 0
> tx12_xdp_xmit: 0
> tx12_xdp_full: 0
> tx12_xdp_err: 0
> tx12_xdp_cqes: 0
> tx13_xdp_xmit: 0
> tx13_xdp_full: 0
> tx13_xdp_err: 0
> tx13_xdp_cqes: 0
> tx14_xdp_xmit: 0
> tx14_xdp_full: 0
> tx14_xdp_err: 0
> tx14_xdp_cqes: 0
> tx15_xdp_xmit: 0
> tx15_xdp_full: 0
> tx15_xdp_err: 0
> tx15_xdp_cqes: 0
> tx16_xdp_xmit: 0
> tx16_xdp_full: 0
> tx16_xdp_err: 0
> tx16_xdp_cqes: 0
> tx17_xdp_xmit: 0
> tx17_xdp_full: 0
> tx17_xdp_err: 0
> tx17_xdp_cqes: 0
> tx18_xdp_xmit: 0
> tx18_xdp_full: 0
> tx18_xdp_err: 0
> tx18_xdp_cqes: 0
> tx19_xdp_xmit: 0
> tx19_xdp_full: 0
> tx19_xdp_err: 0
> tx19_xdp_cqes: 0
> tx20_xdp_xmit: 0
> tx20_xdp_full: 0
> tx20_xdp_err: 0
> tx20_xdp_cqes: 0
> tx21_xdp_xmit: 0
> tx21_xdp_full: 0
> tx21_xdp_err: 0
> tx21_xdp_cqes: 0
> tx22_xdp_xmit: 0
> tx22_xdp_full: 0
> tx22_xdp_err: 0
> tx22_xdp_cqes: 0
> tx23_xdp_xmit: 0
> tx23_xdp_full: 0
> tx23_xdp_err: 0
> tx23_xdp_cqes: 0
> tx24_xdp_xmit: 0
> tx24_xdp_full: 0
> tx24_xdp_err: 0
> tx24_xdp_cqes: 0
> tx25_xdp_xmit: 0
> tx25_xdp_full: 0
> tx25_xdp_err: 0
> tx25_xdp_cqes: 0
> tx26_xdp_xmit: 0
> tx26_xdp_full: 0
> tx26_xdp_err: 0
> tx26_xdp_cqes: 0
> tx27_xdp_xmit: 0
> tx27_xdp_full: 0
> tx27_xdp_err: 0
> tx27_xdp_cqes: 0
> tx28_xdp_xmit: 0
> tx28_xdp_full: 0
> tx28_xdp_err: 0
> tx28_xdp_cqes: 0
> tx29_xdp_xmit: 0
> tx29_xdp_full: 0
> tx29_xdp_err: 0
> tx29_xdp_cqes: 0
> tx30_xdp_xmit: 0
> tx30_xdp_full: 0
> tx30_xdp_err: 0
> tx30_xdp_cqes: 0
> tx31_xdp_xmit: 0
> tx31_xdp_full: 0
> tx31_xdp_err: 0
> tx31_xdp_cqes: 0
> tx32_xdp_xmit: 0
> tx32_xdp_full: 0
> tx32_xdp_err: 0
> tx32_xdp_cqes: 0
> tx33_xdp_xmit: 0
> tx33_xdp_full: 0
> tx33_xdp_err: 0
> tx33_xdp_cqes: 0
> tx34_xdp_xmit: 0
> tx34_xdp_full: 0
> tx34_xdp_err: 0
> tx34_xdp_cqes: 0
> tx35_xdp_xmit: 0
> tx35_xdp_full: 0
> tx35_xdp_err: 0
> tx35_xdp_cqes: 0
> tx36_xdp_xmit: 0
> tx36_xdp_full: 0
> tx36_xdp_err: 0
> tx36_xdp_cqes: 0
> tx37_xdp_xmit: 0
> tx37_xdp_full: 0
> tx37_xdp_err: 0
> tx37_xdp_cqes: 0
> tx38_xdp_xmit: 0
> tx38_xdp_full: 0
> tx38_xdp_err: 0
> tx38_xdp_cqes: 0
> tx39_xdp_xmit: 0
> tx39_xdp_full: 0
> tx39_xdp_err: 0
> tx39_xdp_cqes: 0
> tx40_xdp_xmit: 0
> tx40_xdp_full: 0
> tx40_xdp_err: 0
> tx40_xdp_cqes: 0
> tx41_xdp_xmit: 0
> tx41_xdp_full: 0
> tx41_xdp_err: 0
> tx41_xdp_cqes: 0
> tx42_xdp_xmit: 0
> tx42_xdp_full: 0
> tx42_xdp_err: 0
> tx42_xdp_cqes: 0
> tx43_xdp_xmit: 0
> tx43_xdp_full: 0
> tx43_xdp_err: 0
> tx43_xdp_cqes: 0
> tx44_xdp_xmit: 0
> tx44_xdp_full: 0
> tx44_xdp_err: 0
> tx44_xdp_cqes: 0
> tx45_xdp_xmit: 0
> tx45_xdp_full: 0
> tx45_xdp_err: 0
> tx45_xdp_cqes: 0
> tx46_xdp_xmit: 0
> tx46_xdp_full: 0
> tx46_xdp_err: 0
> tx46_xdp_cqes: 0
> tx47_xdp_xmit: 0
> tx47_xdp_full: 0
> tx47_xdp_err: 0
> tx47_xdp_cqes: 0
> tx48_xdp_xmit: 0
> tx48_xdp_full: 0
> tx48_xdp_err: 0
> tx48_xdp_cqes: 0
> tx49_xdp_xmit: 0
> tx49_xdp_full: 0
> tx49_xdp_err: 0
> tx49_xdp_cqes: 0
> tx50_xdp_xmit: 0
> tx50_xdp_full: 0
> tx50_xdp_err: 0
> tx50_xdp_cqes: 0
> tx51_xdp_xmit: 0
> tx51_xdp_full: 0
> tx51_xdp_err: 0
> tx51_xdp_cqes: 0
> tx52_xdp_xmit: 0
> tx52_xdp_full: 0
> tx52_xdp_err: 0
> tx52_xdp_cqes: 0
> tx53_xdp_xmit: 0
> tx53_xdp_full: 0
> tx53_xdp_err: 0
> tx53_xdp_cqes: 0
> tx54_xdp_xmit: 0
> tx54_xdp_full: 0
> tx54_xdp_err: 0
> tx54_xdp_cqes: 0
> tx55_xdp_xmit: 0
> tx55_xdp_full: 0
> tx55_xdp_err: 0
> tx55_xdp_cqes: 0
>
> ethtool -S enp175s0f0
> NIC statistics:
> rx_packets: 141574897253
> rx_bytes: 184445040406258
> tx_packets: 172569543894
> tx_bytes: 99486882076365
> tx_tso_packets: 9367664195
> tx_tso_bytes: 56435233992948
> tx_tso_inner_packets: 0
> tx_tso_inner_bytes: 0
> tx_added_vlan_packets: 141297671626
> tx_nop: 2102916272
> rx_lro_packets: 0
> rx_lro_bytes: 0
> rx_ecn_mark: 0
> rx_removed_vlan_packets: 141574897252
> rx_csum_unnecessary: 0
> rx_csum_none: 23135854
> rx_csum_complete: 141551761398
> rx_csum_unnecessary_inner: 0
> rx_xdp_drop: 0
> rx_xdp_redirect: 0
> rx_xdp_tx_xmit: 0
> rx_xdp_tx_full: 0
> rx_xdp_tx_err: 0
> rx_xdp_tx_cqe: 0
> tx_csum_none: 127934791664
It is a good idea to look into this: TX is not requesting HW TX
checksumming for a lot of packets. Maybe you are wasting a lot of CPU on
calculating checksums, or maybe this is just the RX csum complete
carried through..
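One quick way to rule out a software-checksum path (a sketch, assuming the interface name from the mail above; `ethtool -k`/`-K` are the standard offload query/set commands):

```shell
# Show the current checksum offload flags on the upstream port.
ethtool -k enp175s0f0 | grep -i checksum

# If tx-checksumming reports "off", it can usually be re-enabled with:
#   ethtool -K enp175s0f0 tx on
```

For purely forwarded traffic a high tx_csum_none is not necessarily a problem, since the router does not recompute L4 checksums on transit packets, but it is cheap to confirm the offload flags are on.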
> tx_csum_partial: 13362879974
> tx_csum_partial_inner: 0
> tx_queue_stopped: 232561
TX queues are stalling, which could be an indication of a PCIe
bottleneck.
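The "x16 8GT is 126Gbit" figure from the original mail is the raw line rate; a rough sketch of what is left after TLP framing (payload size and per-TLP overhead here are assumed typical values, real efficiency depends on the platform):

```python
# Rough PCIe throughput estimate for an x16 Gen3 (8 GT/s) slot.
LANES = 16
GT_PER_LANE = 8e9            # Gen3 signalling rate, transfers/s per lane
ENCODING = 128 / 130         # Gen3 uses 128b/130b line encoding

raw_bits = LANES * GT_PER_LANE * ENCODING
print(f"raw link bandwidth: {raw_bits / 1e9:.1f} Gbit/s")        # 126.0

# TLP framing eats more: assume a 256-byte max payload with ~24 bytes of
# header/CRC overhead per TLP (assumed, platform-dependent).
TLP_PAYLOAD = 256
TLP_OVERHEAD = 24
effective = raw_bits * TLP_PAYLOAD / (TLP_PAYLOAD + TLP_OVERHEAD)
print(f"effective payload bandwidth: {effective / 1e9:.1f} Gbit/s")  # 115.2
```

With each port pushing RX plus TX over the same x16 link, the combined ~66 Gbit/s per direction plus descriptor/doorbell traffic sits closer to that effective ceiling than the raw 126 Gbit/s number suggests.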
> tx_queue_dropped: 0
> tx_xmit_more: 1266021946
> tx_recover: 0
> tx_cqes: 140031716469
> tx_queue_wake: 232561
> tx_udp_seg_rem: 0
> tx_cqe_err: 0
> tx_xdp_xmit: 0
> tx_xdp_full: 0
> tx_xdp_err: 0
> tx_xdp_cqes: 0
> rx_wqe_err: 0
> rx_mpwqe_filler_cqes: 0
> rx_mpwqe_filler_strides: 0
> rx_buff_alloc_err: 0
> rx_cqe_compress_blks: 0
> rx_cqe_compress_pkts: 0
> rx_page_reuse: 0
> rx_cache_reuse: 16625975793
> rx_cache_full: 54161465914
> rx_cache_empty: 258048
> rx_cache_busy: 54161472735
> rx_cache_waive: 0
> rx_congst_umr: 0
> rx_arfs_err: 0
> ch_events: 40572621887
> ch_poll: 40885650979
> ch_arm: 40429276692
> ch_aff_change: 0
> ch_eq_rearm: 0
> rx_out_of_buffer: 2791690
> rx_if_down_packets: 74
> rx_vport_unicast_packets: 141843476308
> rx_vport_unicast_bytes: 185421265403318
> tx_vport_unicast_packets: 172569484005
> tx_vport_unicast_bytes: 100019940094298
> rx_vport_multicast_packets: 85122935
> rx_vport_multicast_bytes: 5761316431
> tx_vport_multicast_packets: 6452
> tx_vport_multicast_bytes: 643540
> rx_vport_broadcast_packets: 22423624
> rx_vport_broadcast_bytes: 1390127090
> tx_vport_broadcast_packets: 22024
> tx_vport_broadcast_bytes: 1321440
> rx_vport_rdma_unicast_packets: 0
> rx_vport_rdma_unicast_bytes: 0
> tx_vport_rdma_unicast_packets: 0
> tx_vport_rdma_unicast_bytes: 0
> rx_vport_rdma_multicast_packets: 0
> rx_vport_rdma_multicast_bytes: 0
> tx_vport_rdma_multicast_packets: 0
> tx_vport_rdma_multicast_bytes: 0
> tx_packets_phy: 172569501577
> rx_packets_phy: 142871314588
> rx_crc_errors_phy: 0
> tx_bytes_phy: 100710212814151
> rx_bytes_phy: 187209224289564
> tx_multicast_phy: 6452
> tx_broadcast_phy: 22024
> rx_multicast_phy: 85122933
> rx_broadcast_phy: 22423623
> rx_in_range_len_errors_phy: 2
> rx_out_of_range_len_phy: 0
> rx_oversize_pkts_phy: 0
> rx_symbol_err_phy: 0
> tx_mac_control_phy: 0
> rx_mac_control_phy: 0
> rx_unsupported_op_phy: 0
> rx_pause_ctrl_phy: 0
> tx_pause_ctrl_phy: 0
> rx_discards_phy: 920161423
OK, this port seems to be suffering more: RX is congested, maybe due to
a PCIe bottleneck.
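To see how fast the discards grow under load (a sketch, assuming the interface name from the mail above; sampling the counters is enough to correlate drops with traffic peaks):

```shell
# Sample the PHY discard and out-of-buffer counters once per second,
# five times; deltas between samples show the current drop rate.
for i in 1 2 3 4 5; do
    ethtool -S enp175s0f0 | grep -E 'rx_discards_phy|rx_out_of_buffer'
    sleep 1
done
```

rx_discards_phy incrementing while rx_out_of_buffer stays flat would point at the NIC/PCIe side rather than the host running out of RX descriptors.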
> tx_discards_phy: 0
> tx_errors_phy: 0
> rx_undersize_pkts_phy: 0
> rx_fragments_phy: 0
> rx_jabbers_phy: 0
> rx_64_bytes_phy: 412006326
> rx_65_to_127_bytes_phy: 11934371453
> rx_128_to_255_bytes_phy: 3415281165
> rx_256_to_511_bytes_phy: 2072955511
> rx_512_to_1023_bytes_phy: 2415393005
> rx_1024_to_1518_bytes_phy: 72182391608
> rx_1519_to_2047_bytes_phy: 50438902587
> rx_2048_to_4095_bytes_phy: 0
> rx_4096_to_8191_bytes_phy: 0
> rx_8192_to_10239_bytes_phy: 0
> link_down_events_phy: 0
> rx_pcs_symbol_err_phy: 0
> rx_corrected_bits_phy: 0
> rx_pci_signal_integrity: 0
> tx_pci_signal_integrity: 48
> rx_prio0_bytes: 186709842592642
> rx_prio0_packets: 141481966007
> tx_prio0_bytes: 100710171118138
> tx_prio0_packets: 172569437949
> rx_prio1_bytes: 492288152326
> rx_prio1_packets: 385996045
> tx_prio1_bytes: 0
> tx_prio1_packets: 0
> rx_prio2_bytes: 22119952
> rx_prio2_packets: 70788
> tx_prio2_bytes: 0
> tx_prio2_packets: 0
> rx_prio3_bytes: 546141102
> rx_prio3_packets: 681608
> tx_prio3_bytes: 0
> tx_prio3_packets: 0
> rx_prio4_bytes: 14665067
> rx_prio4_packets: 29486
> tx_prio4_bytes: 0
> tx_prio4_packets: 0
> rx_prio5_bytes: 158862504
> rx_prio5_packets: 965307
> tx_prio5_bytes: 0
> tx_prio5_packets: 0
> rx_prio6_bytes: 669337783
> rx_prio6_packets: 1475775
> tx_prio6_bytes: 0
> tx_prio6_packets: 0
> rx_prio7_bytes: 5623481349
> rx_prio7_packets: 79926412
> tx_prio7_bytes: 0
> tx_prio7_packets: 0
> module_unplug: 0
> module_bus_stuck: 0
> module_high_temp: 0
> module_bad_shorted: 0
> ch0_events: 1446162630
> ch0_poll: 1463312972
> ch0_arm: 1440728278
> ch0_aff_change: 0
> ch0_eq_rearm: 0
> ch1_events: 1384301405
> ch1_poll: 1399210915
> ch1_arm: 1378636486
> ch1_aff_change: 0
> ch1_eq_rearm: 0
> ch2_events: 1382788887
> ch2_poll: 1397231470
> ch2_arm: 1377058116
> ch2_aff_change: 0
> ch2_eq_rearm: 0
> ch3_events: 1461956995
> ch3_poll: 1475553146
> ch3_arm: 1456571625
> ch3_aff_change: 0
> ch3_eq_rearm: 0
> ch4_events: 1497359109
> ch4_poll: 1511021037
> ch4_arm: 1491733757
> ch4_aff_change: 0
> ch4_eq_rearm: 0
> ch5_events: 1387736262
> ch5_poll: 1400964615
> ch5_arm: 1382382834
> ch5_aff_change: 0
> ch5_eq_rearm: 0
> ch6_events: 1376772405
> ch6_poll: 1390851449
> ch6_arm: 1371551764
> ch6_aff_change: 0
> ch6_eq_rearm: 0
> ch7_events: 1431271514
> ch7_poll: 1445049729
> ch7_arm: 1425753718
> ch7_aff_change: 0
> ch7_eq_rearm: 0
> ch8_events: 1426976374
> ch8_poll: 1439938692
> ch8_arm: 1421392984
> ch8_aff_change: 0
> ch8_eq_rearm: 0
> ch9_events: 1456160031
> ch9_poll: 1468922870
> ch9_arm: 1450930446
> ch9_aff_change: 0
> ch9_eq_rearm: 0
> ch10_events: 1443640165
> ch10_poll: 1456812203
> ch10_arm: 1438425101
> ch10_aff_change: 0
> ch10_eq_rearm: 0
> ch11_events: 1381104776
> ch11_poll: 1393811057
> ch11_arm: 1376059326
> ch11_aff_change: 0
> ch11_eq_rearm: 0
> ch12_events: 1365223276
> ch12_poll: 1378406059
> ch12_arm: 1359950494
> ch12_aff_change: 0
> ch12_eq_rearm: 0
> ch13_events: 1421622259
> ch13_poll: 1434670996
> ch13_arm: 1416241801
> ch13_aff_change: 0
> ch13_eq_rearm: 0
> ch14_events: 1379084590
> ch14_poll: 1392425015
> ch14_arm: 1373675179
> ch14_aff_change: 0
> ch14_eq_rearm: 0
> ch15_events: 1531217338
> ch15_poll: 1543353833
> ch15_arm: 1526350453
> ch15_aff_change: 0
> ch15_eq_rearm: 0
> ch16_events: 1460469776
> ch16_poll: 1467995928
> ch16_arm: 1456010194
> ch16_aff_change: 0
> ch16_eq_rearm: 0
> ch17_events: 1494067670
> ch17_poll: 1500856680
> ch17_arm: 1489232674
> ch17_aff_change: 0
> ch17_eq_rearm: 0
> ch18_events: 1530126866
> ch18_poll: 1537293620
> ch18_arm: 1525476123
> ch18_aff_change: 0
> ch18_eq_rearm: 0
> ch19_events: 1499526149
> ch19_poll: 1506789309
> ch19_arm: 1495161602
> ch19_aff_change: 0
> ch19_eq_rearm: 0
> ch20_events: 1451479763
> ch20_poll: 1459767921
> ch20_arm: 1446360801
> ch20_aff_change: 0
> ch20_eq_rearm: 0
> ch21_events: 1521413613
> ch21_poll: 1529345146
> ch21_arm: 1517229314
> ch21_aff_change: 0
> ch21_eq_rearm: 0
> ch22_events: 1471950045
> ch22_poll: 1479746764
> ch22_arm: 1467681629
> ch22_aff_change: 0
> ch22_eq_rearm: 0
> ch23_events: 1502968393
> ch23_poll: 1510419909
> ch23_arm: 1498168438
> ch23_aff_change: 0
> ch23_eq_rearm: 0
> ch24_events: 1473451639
> ch24_poll: 1482606899
> ch24_arm: 1468212489
> ch24_aff_change: 0
> ch24_eq_rearm: 0
> ch25_events: 1440399182
> ch25_poll: 1448897475
> ch25_arm: 1435044786
> ch25_aff_change: 0
> ch25_eq_rearm: 0
> ch26_events: 1436831565
> ch26_poll: 1445485731
> ch26_arm: 1431827527
> ch26_aff_change: 0
> ch26_eq_rearm: 0
> ch27_events: 1516560621
> ch27_poll: 1524911010
> ch27_arm: 1511430164
> ch27_aff_change: 0
> ch27_eq_rearm: 0
> ch28_events: 4
> ch28_poll: 4
> ch28_arm: 4
> ch28_aff_change: 0
> ch28_eq_rearm: 0
> ch29_events: 6
> ch29_poll: 6
> ch29_arm: 6
> ch29_aff_change: 0
> ch29_eq_rearm: 0
> ch30_events: 4
> ch30_poll: 4
> ch30_arm: 4
> ch30_aff_change: 0
> ch30_eq_rearm: 0
> ch31_events: 4
> ch31_poll: 4
> ch31_arm: 4
> ch31_aff_change: 0
> ch31_eq_rearm: 0
> ch32_events: 4
> ch32_poll: 4
> ch32_arm: 4
> ch32_aff_change: 0
> ch32_eq_rearm: 0
> ch33_events: 4
> ch33_poll: 4
> ch33_arm: 4
> ch33_aff_change: 0
> ch33_eq_rearm: 0
> ch34_events: 4
> ch34_poll: 4
> ch34_arm: 4
> ch34_aff_change: 0
> ch34_eq_rearm: 0
> ch35_events: 4
> ch35_poll: 4
> ch35_arm: 4
> ch35_aff_change: 0
> ch35_eq_rearm: 0
> ch36_events: 4
> ch36_poll: 4
> ch36_arm: 4
> ch36_aff_change: 0
> ch36_eq_rearm: 0
> ch37_events: 4
> ch37_poll: 4
> ch37_arm: 4
> ch37_aff_change: 0
> ch37_eq_rearm: 0
> ch38_events: 4
> ch38_poll: 4
> ch38_arm: 4
> ch38_aff_change: 0
> ch38_eq_rearm: 0
> ch39_events: 4
> ch39_poll: 4
> ch39_arm: 4
> ch39_aff_change: 0
> ch39_eq_rearm: 0
> ch40_events: 4
> ch40_poll: 4
> ch40_arm: 4
> ch40_aff_change: 0
> ch40_eq_rearm: 0
> ch41_events: 4
> ch41_poll: 4
> ch41_arm: 4
> ch41_aff_change: 0
> ch41_eq_rearm: 0
> ch42_events: 4
> ch42_poll: 4
> ch42_arm: 4
> ch42_aff_change: 0
> ch42_eq_rearm: 0
> ch43_events: 4
> ch43_poll: 4
> ch43_arm: 4
> ch43_aff_change: 0
> ch43_eq_rearm: 0
> ch44_events: 4
> ch44_poll: 4
> ch44_arm: 4
> ch44_aff_change: 0
> ch44_eq_rearm: 0
> ch45_events: 4
> ch45_poll: 4
> ch45_arm: 4
> ch45_aff_change: 0
> ch45_eq_rearm: 0
> ch46_events: 4
> ch46_poll: 4
> ch46_arm: 4
> ch46_aff_change: 0
> ch46_eq_rearm: 0
> ch47_events: 4
> ch47_poll: 4
> ch47_arm: 4
> ch47_aff_change: 0
> ch47_eq_rearm: 0
> ch48_events: 4
> ch48_poll: 4
> ch48_arm: 4
> ch48_aff_change: 0
> ch48_eq_rearm: 0
> ch49_events: 4
> ch49_poll: 4
> ch49_arm: 4
> ch49_aff_change: 0
> ch49_eq_rearm: 0
> ch50_events: 4
> ch50_poll: 4
> ch50_arm: 4
> ch50_aff_change: 0
> ch50_eq_rearm: 0
> ch51_events: 4
> ch51_poll: 4
> ch51_arm: 4
> ch51_aff_change: 0
> ch51_eq_rearm: 0
> ch52_events: 4
> ch52_poll: 4
> ch52_arm: 4
> ch52_aff_change: 0
> ch52_eq_rearm: 0
> ch53_events: 4
> ch53_poll: 4
> ch53_arm: 4
> ch53_aff_change: 0
> ch53_eq_rearm: 0
> ch54_events: 4
> ch54_poll: 4
> ch54_arm: 4
> ch54_aff_change: 0
> ch54_eq_rearm: 0
> ch55_events: 4
> ch55_poll: 4
> ch55_arm: 4
> ch55_aff_change: 0
> ch55_eq_rearm: 0
> rx0_packets: 5861448653
> rx0_bytes: 7389128595728
> rx0_csum_complete: 5838312798
> rx0_csum_unnecessary: 0
> rx0_csum_unnecessary_inner: 0
> rx0_csum_none: 23135855
> rx0_xdp_drop: 0
> rx0_xdp_redirect: 0
> rx0_lro_packets: 0
> rx0_lro_bytes: 0
> rx0_ecn_mark: 0
> rx0_removed_vlan_packets: 5861448653
> rx0_wqe_err: 0
> rx0_mpwqe_filler_cqes: 0
> rx0_mpwqe_filler_strides: 0
> rx0_buff_alloc_err: 0
> rx0_cqe_compress_blks: 0
> rx0_cqe_compress_pkts: 0
> rx0_page_reuse: 0
> rx0_cache_reuse: 2559
> rx0_cache_full: 2930721512
> rx0_cache_empty: 6656
> rx0_cache_busy: 2930721765
> rx0_cache_waive: 0
> rx0_congst_umr: 0
> rx0_arfs_err: 0
> rx0_xdp_tx_xmit: 0
> rx0_xdp_tx_full: 0
> rx0_xdp_tx_err: 0
> rx0_xdp_tx_cqes: 0
> rx1_packets: 5550585106
> rx1_bytes: 7255635262803
> rx1_csum_complete: 5550585106
> rx1_csum_unnecessary: 0
> rx1_csum_unnecessary_inner: 0
> rx1_csum_none: 0
> rx1_xdp_drop: 0
> rx1_xdp_redirect: 0
> rx1_lro_packets: 0
> rx1_lro_bytes: 0
> rx1_ecn_mark: 0
> rx1_removed_vlan_packets: 5550585106
> rx1_wqe_err: 0
> rx1_mpwqe_filler_cqes: 0
> rx1_mpwqe_filler_strides: 0
> rx1_buff_alloc_err: 0
> rx1_cqe_compress_blks: 0
> rx1_cqe_compress_pkts: 0
> rx1_page_reuse: 0
> rx1_cache_reuse: 2918845
> rx1_cache_full: 2772373453
> rx1_cache_empty: 6656
> rx1_cache_busy: 2772373707
> rx1_cache_waive: 0
> rx1_congst_umr: 0
> rx1_arfs_err: 0
> rx1_xdp_tx_xmit: 0
> rx1_xdp_tx_full: 0
> rx1_xdp_tx_err: 0
> rx1_xdp_tx_cqes: 0
> rx2_packets: 5383874739
> rx2_bytes: 7031545423967
> rx2_csum_complete: 5383874739
> rx2_csum_unnecessary: 0
> rx2_csum_unnecessary_inner: 0
> rx2_csum_none: 0
> rx2_xdp_drop: 0
> rx2_xdp_redirect: 0
> rx2_lro_packets: 0
> rx2_lro_bytes: 0
> rx2_ecn_mark: 0
> rx2_removed_vlan_packets: 5383874739
> rx2_wqe_err: 0
> rx2_mpwqe_filler_cqes: 0
> rx2_mpwqe_filler_strides: 0
> rx2_buff_alloc_err: 0
> rx2_cqe_compress_blks: 0
> rx2_cqe_compress_pkts: 0
> rx2_page_reuse: 0
> rx2_cache_reuse: 2173370
> rx2_cache_full: 2689763744
> rx2_cache_empty: 6656
> rx2_cache_busy: 2689763998
> rx2_cache_waive: 0
> rx2_congst_umr: 0
> rx2_arfs_err: 0
> rx2_xdp_tx_xmit: 0
> rx2_xdp_tx_full: 0
> rx2_xdp_tx_err: 0
> rx2_xdp_tx_cqes: 0
> rx3_packets: 5456494012
> rx3_bytes: 7120241119485
> rx3_csum_complete: 5456494012
> rx3_csum_unnecessary: 0
> rx3_csum_unnecessary_inner: 0
> rx3_csum_none: 0
> rx3_xdp_drop: 0
> rx3_xdp_redirect: 0
> rx3_lro_packets: 0
> rx3_lro_bytes: 0
> rx3_ecn_mark: 0
> rx3_removed_vlan_packets: 5456494012
> rx3_wqe_err: 0
> rx3_mpwqe_filler_cqes: 0
> rx3_mpwqe_filler_strides: 0
> rx3_buff_alloc_err: 0
> rx3_cqe_compress_blks: 0
> rx3_cqe_compress_pkts: 0
> rx3_page_reuse: 0
> rx3_cache_reuse: 2120123
> rx3_cache_full: 2726126628
> rx3_cache_empty: 6656
> rx3_cache_busy: 2726126881
> rx3_cache_waive: 0
> rx3_congst_umr: 0
> rx3_arfs_err: 0
> rx3_xdp_tx_xmit: 0
> rx3_xdp_tx_full: 0
> rx3_xdp_tx_err: 0
> rx3_xdp_tx_cqes: 0
> rx4_packets: 5475216251
> rx4_bytes: 7123129170196
> rx4_csum_complete: 5475216251
> rx4_csum_unnecessary: 0
> rx4_csum_unnecessary_inner: 0
> rx4_csum_none: 0
> rx4_xdp_drop: 0
> rx4_xdp_redirect: 0
> rx4_lro_packets: 0
> rx4_lro_bytes: 0
> rx4_ecn_mark: 0
> rx4_removed_vlan_packets: 5475216251
> rx4_wqe_err: 0
> rx4_mpwqe_filler_cqes: 0
> rx4_mpwqe_filler_strides: 0
> rx4_buff_alloc_err: 0
> rx4_cqe_compress_blks: 0
> rx4_cqe_compress_pkts: 0
> rx4_page_reuse: 0
> rx4_cache_reuse: 2668296355
> rx4_cache_full: 69311549
> rx4_cache_empty: 6656
> rx4_cache_busy: 69311769
> rx4_cache_waive: 0
> rx4_congst_umr: 0
> rx4_arfs_err: 0
> rx4_xdp_tx_xmit: 0
> rx4_xdp_tx_full: 0
> rx4_xdp_tx_err: 0
> rx4_xdp_tx_cqes: 0
> rx5_packets: 5474372232
> rx5_bytes: 7159146801926
> rx5_csum_complete: 5474372232
> rx5_csum_unnecessary: 0
> rx5_csum_unnecessary_inner: 0
> rx5_csum_none: 0
> rx5_xdp_drop: 0
> rx5_xdp_redirect: 0
> rx5_lro_packets: 0
> rx5_lro_bytes: 0
> rx5_ecn_mark: 0
> rx5_removed_vlan_packets: 5474372232
> rx5_wqe_err: 0
> rx5_mpwqe_filler_cqes: 0
> rx5_mpwqe_filler_strides: 0
> rx5_buff_alloc_err: 0
> rx5_cqe_compress_blks: 0
> rx5_cqe_compress_pkts: 0
> rx5_page_reuse: 0
> rx5_cache_reuse: 626187
> rx5_cache_full: 2736559674
> rx5_cache_empty: 6656
> rx5_cache_busy: 2736559929
> rx5_cache_waive: 0
> rx5_congst_umr: 0
> rx5_arfs_err: 0
> rx5_xdp_tx_xmit: 0
> rx5_xdp_tx_full: 0
> rx5_xdp_tx_err: 0
> rx5_xdp_tx_cqes: 0
> rx6_packets: 5533622456
> rx6_bytes: 7207308809081
> rx6_csum_complete: 5533622456
> rx6_csum_unnecessary: 0
> rx6_csum_unnecessary_inner: 0
> rx6_csum_none: 0
> rx6_xdp_drop: 0
> rx6_xdp_redirect: 0
> rx6_lro_packets: 0
> rx6_lro_bytes: 0
> rx6_ecn_mark: 0
> rx6_removed_vlan_packets: 5533622456
> rx6_wqe_err: 0
> rx6_mpwqe_filler_cqes: 0
> rx6_mpwqe_filler_strides: 0
> rx6_buff_alloc_err: 0
> rx6_cqe_compress_blks: 0
> rx6_cqe_compress_pkts: 0
> rx6_page_reuse: 0
> rx6_cache_reuse: 2325217
> rx6_cache_full: 2764485756
> rx6_cache_empty: 6656
> rx6_cache_busy: 2764486011
> rx6_cache_waive: 0
> rx6_congst_umr: 0
> rx6_arfs_err: 0
> rx6_xdp_tx_xmit: 0
> rx6_xdp_tx_full: 0
> rx6_xdp_tx_err: 0
> rx6_xdp_tx_cqes: 0
> rx7_packets: 5533901822
> rx7_bytes: 7227441240536
> rx7_csum_complete: 5533901822
> rx7_csum_unnecessary: 0
> rx7_csum_unnecessary_inner: 0
> rx7_csum_none: 0
> rx7_xdp_drop: 0
> rx7_xdp_redirect: 0
> rx7_lro_packets: 0
> rx7_lro_bytes: 0
> rx7_ecn_mark: 0
> rx7_removed_vlan_packets: 5533901822
> rx7_wqe_err: 0
> rx7_mpwqe_filler_cqes: 0
> rx7_mpwqe_filler_strides: 0
> rx7_buff_alloc_err: 0
> rx7_cqe_compress_blks: 0
> rx7_cqe_compress_pkts: 0
> rx7_page_reuse: 0
> rx7_cache_reuse: 2372505
> rx7_cache_full: 2764578151
> rx7_cache_empty: 6656
> rx7_cache_busy: 2764578403
> rx7_cache_waive: 0
> rx7_congst_umr: 0
> rx7_arfs_err: 0
> rx7_xdp_tx_xmit: 0
> rx7_xdp_tx_full: 0
> rx7_xdp_tx_err: 0
> rx7_xdp_tx_cqes: 0
> rx8_packets: 5485670137
> rx8_bytes: 7203339989013
> rx8_csum_complete: 5485670137
> rx8_csum_unnecessary: 0
> rx8_csum_unnecessary_inner: 0
> rx8_csum_none: 0
> rx8_xdp_drop: 0
> rx8_xdp_redirect: 0
> rx8_lro_packets: 0
> rx8_lro_bytes: 0
> rx8_ecn_mark: 0
> rx8_removed_vlan_packets: 5485670137
> rx8_wqe_err: 0
> rx8_mpwqe_filler_cqes: 0
> rx8_mpwqe_filler_strides: 0
> rx8_buff_alloc_err: 0
> rx8_cqe_compress_blks: 0
> rx8_cqe_compress_pkts: 0
> rx8_page_reuse: 0
> rx8_cache_reuse: 7522232
> rx8_cache_full: 2735312581
> rx8_cache_empty: 6656
> rx8_cache_busy: 2735312836
> rx8_cache_waive: 0
> rx8_congst_umr: 0
> rx8_arfs_err: 0
> rx8_xdp_tx_xmit: 0
> rx8_xdp_tx_full: 0
> rx8_xdp_tx_err: 0
> rx8_xdp_tx_cqes: 0
> rx9_packets: 5482212354
> rx9_bytes: 7169663341718
> rx9_csum_complete: 5482212354
> rx9_csum_unnecessary: 0
> rx9_csum_unnecessary_inner: 0
> rx9_csum_none: 0
> rx9_xdp_drop: 0
> rx9_xdp_redirect: 0
> rx9_lro_packets: 0
> rx9_lro_bytes: 0
> rx9_ecn_mark: 0
> rx9_removed_vlan_packets: 5482212354
> rx9_wqe_err: 0
> rx9_mpwqe_filler_cqes: 0
> rx9_mpwqe_filler_strides: 0
> rx9_buff_alloc_err: 0
> rx9_cqe_compress_blks: 0
> rx9_cqe_compress_pkts: 0
> rx9_page_reuse: 0
> rx9_cache_reuse: 37279961
> rx9_cache_full: 2703825961
> rx9_cache_empty: 6656
> rx9_cache_busy: 2703826215
> rx9_cache_waive: 0
> rx9_congst_umr: 0
> rx9_arfs_err: 0
> rx9_xdp_tx_xmit: 0
> rx9_xdp_tx_full: 0
> rx9_xdp_tx_err: 0
> rx9_xdp_tx_cqes: 0
> rx10_packets: 5524679952
> rx10_bytes: 7248301275181
> rx10_csum_complete: 5524679952
> rx10_csum_unnecessary: 0
> rx10_csum_unnecessary_inner: 0
> rx10_csum_none: 0
> rx10_xdp_drop: 0
> rx10_xdp_redirect: 0
> rx10_lro_packets: 0
> rx10_lro_bytes: 0
> rx10_ecn_mark: 0
> rx10_removed_vlan_packets: 5524679952
> rx10_wqe_err: 0
> rx10_mpwqe_filler_cqes: 0
> rx10_mpwqe_filler_strides: 0
> rx10_buff_alloc_err: 0
> rx10_cqe_compress_blks: 0
> rx10_cqe_compress_pkts: 0
> rx10_page_reuse: 0
> rx10_cache_reuse: 2049666
> rx10_cache_full: 2760290055
> rx10_cache_empty: 6656
> rx10_cache_busy: 2760290310
> rx10_cache_waive: 0
> rx10_congst_umr: 0
> rx10_arfs_err: 0
> rx10_xdp_tx_xmit: 0
> rx10_xdp_tx_full: 0
> rx10_xdp_tx_err: 0
> rx10_xdp_tx_cqes: 0
> rx11_packets: 5394633545
> rx11_bytes: 7033509636092
> rx11_csum_complete: 5394633545
> rx11_csum_unnecessary: 0
> rx11_csum_unnecessary_inner: 0
> rx11_csum_none: 0
> rx11_xdp_drop: 0
> rx11_xdp_redirect: 0
> rx11_lro_packets: 0
> rx11_lro_bytes: 0
> rx11_ecn_mark: 0
> rx11_removed_vlan_packets: 5394633545
> rx11_wqe_err: 0
> rx11_mpwqe_filler_cqes: 0
> rx11_mpwqe_filler_strides: 0
> rx11_buff_alloc_err: 0
> rx11_cqe_compress_blks: 0
> rx11_cqe_compress_pkts: 0
> rx11_page_reuse: 0
> rx11_cache_reuse: 2617466268
> rx11_cache_full: 79850284
> rx11_cache_empty: 6656
> rx11_cache_busy: 79850504
> rx11_cache_waive: 0
> rx11_congst_umr: 0
> rx11_arfs_err: 0
> rx11_xdp_tx_xmit: 0
> rx11_xdp_tx_full: 0
> rx11_xdp_tx_err: 0
> rx11_xdp_tx_cqes: 0
> rx12_packets: 5458907385
> rx12_bytes: 7134867867515
> rx12_csum_complete: 5458907385
> rx12_csum_unnecessary: 0
> rx12_csum_unnecessary_inner: 0
> rx12_csum_none: 0
> rx12_xdp_drop: 0
> rx12_xdp_redirect: 0
> rx12_lro_packets: 0
> rx12_lro_bytes: 0
> rx12_ecn_mark: 0
> rx12_removed_vlan_packets: 5458907385
> rx12_wqe_err: 0
> rx12_mpwqe_filler_cqes: 0
> rx12_mpwqe_filler_strides: 0
> rx12_buff_alloc_err: 0
> rx12_cqe_compress_blks: 0
> rx12_cqe_compress_pkts: 0
> rx12_page_reuse: 0
> rx12_cache_reuse: 2650214169
> rx12_cache_full: 79239303
> rx12_cache_empty: 6656
> rx12_cache_busy: 79239523
> rx12_cache_waive: 0
> rx12_congst_umr: 0
> rx12_arfs_err: 0
> rx12_xdp_tx_xmit: 0
> rx12_xdp_tx_full: 0
> rx12_xdp_tx_err: 0
> rx12_xdp_tx_cqes: 0
> rx13_packets: 5549932912
> rx13_bytes: 7232548705586
> rx13_csum_complete: 5549932912
> rx13_csum_unnecessary: 0
> rx13_csum_unnecessary_inner: 0
> rx13_csum_none: 0
> rx13_xdp_drop: 0
> rx13_xdp_redirect: 0
> rx13_lro_packets: 0
> rx13_lro_bytes: 0
> rx13_ecn_mark: 0
> rx13_removed_vlan_packets: 5549932912
> rx13_wqe_err: 0
> rx13_mpwqe_filler_cqes: 0
> rx13_mpwqe_filler_strides: 0
> rx13_buff_alloc_err: 0
> rx13_cqe_compress_blks: 0
> rx13_cqe_compress_pkts: 0
> rx13_page_reuse: 0
> rx13_cache_reuse: 2417696
> rx13_cache_full: 2772548505
> rx13_cache_empty: 6656
> rx13_cache_busy: 2772548760
> rx13_cache_waive: 0
> rx13_congst_umr: 0
> rx13_arfs_err: 0
> rx13_xdp_tx_xmit: 0
> rx13_xdp_tx_full: 0
> rx13_xdp_tx_err: 0
> rx13_xdp_tx_cqes: 0
> rx14_packets: 5517712329
> rx14_bytes: 7192111965227
> rx14_csum_complete: 5517712329
> rx14_csum_unnecessary: 0
> rx14_csum_unnecessary_inner: 0
> rx14_csum_none: 0
> rx14_xdp_drop: 0
> rx14_xdp_redirect: 0
> rx14_lro_packets: 0
> rx14_lro_bytes: 0
> rx14_ecn_mark: 0
> rx14_removed_vlan_packets: 5517712329
> rx14_wqe_err: 0
> rx14_mpwqe_filler_cqes: 0
> rx14_mpwqe_filler_strides: 0
> rx14_buff_alloc_err: 0
> rx14_cqe_compress_blks: 0
> rx14_cqe_compress_pkts: 0
> rx14_page_reuse: 0
> rx14_cache_reuse: 1830206
> rx14_cache_full: 2757025703
> rx14_cache_empty: 6656
> rx14_cache_busy: 2757025958
> rx14_cache_waive: 0
> rx14_congst_umr: 0
> rx14_arfs_err: 0
> rx14_xdp_tx_xmit: 0
> rx14_xdp_tx_full: 0
> rx14_xdp_tx_err: 0
> rx14_xdp_tx_cqes: 0
> rx15_packets: 5578343373
> rx15_bytes: 7268484501219
> rx15_csum_complete: 5578343373
> rx15_csum_unnecessary: 0
> rx15_csum_unnecessary_inner: 0
> rx15_csum_none: 0
> rx15_xdp_drop: 0
> rx15_xdp_redirect: 0
> rx15_lro_packets: 0
> rx15_lro_bytes: 0
> rx15_ecn_mark: 0
> rx15_removed_vlan_packets: 5578343373
> rx15_wqe_err: 0
> rx15_mpwqe_filler_cqes: 0
> rx15_mpwqe_filler_strides: 0
> rx15_buff_alloc_err: 0
> rx15_cqe_compress_blks: 0
> rx15_cqe_compress_pkts: 0
> rx15_page_reuse: 0
> rx15_cache_reuse: 2317165
> rx15_cache_full: 2786854266
> rx15_cache_empty: 6656
> rx15_cache_busy: 2786854519
> rx15_cache_waive: 0
> rx15_congst_umr: 0
> rx15_arfs_err: 0
> rx15_xdp_tx_xmit: 0
> rx15_xdp_tx_full: 0
> rx15_xdp_tx_err: 0
> rx15_xdp_tx_cqes: 0
> rx16_packets: 4435773951
> rx16_bytes: 5766665272007
> rx16_csum_complete: 4435773951
> rx16_csum_unnecessary: 0
> rx16_csum_unnecessary_inner: 0
> rx16_csum_none: 0
> rx16_xdp_drop: 0
> rx16_xdp_redirect: 0
> rx16_lro_packets: 0
> rx16_lro_bytes: 0
> rx16_ecn_mark: 0
> rx16_removed_vlan_packets: 4435773951
> rx16_wqe_err: 0
> rx16_mpwqe_filler_cqes: 0
> rx16_mpwqe_filler_strides: 0
> rx16_buff_alloc_err: 0
> rx16_cqe_compress_blks: 0
> rx16_cqe_compress_pkts: 0
> rx16_page_reuse: 0
> rx16_cache_reuse: 2033793
> rx16_cache_full: 2215852927
> rx16_cache_empty: 6656
> rx16_cache_busy: 2215853179
> rx16_cache_waive: 0
> rx16_congst_umr: 0
> rx16_arfs_err: 0
> rx16_xdp_tx_xmit: 0
> rx16_xdp_tx_full: 0
> rx16_xdp_tx_err: 0
> rx16_xdp_tx_cqes: 0
> rx17_packets: 4344087587
> rx17_bytes: 5695006496323
> rx17_csum_complete: 4344087587
> rx17_csum_unnecessary: 0
> rx17_csum_unnecessary_inner: 0
> rx17_csum_none: 0
> rx17_xdp_drop: 0
> rx17_xdp_redirect: 0
> rx17_lro_packets: 0
> rx17_lro_bytes: 0
> rx17_ecn_mark: 0
> rx17_removed_vlan_packets: 4344087587
> rx17_wqe_err: 0
> rx17_mpwqe_filler_cqes: 0
> rx17_mpwqe_filler_strides: 0
> rx17_buff_alloc_err: 0
> rx17_cqe_compress_blks: 0
> rx17_cqe_compress_pkts: 0
> rx17_page_reuse: 0
> rx17_cache_reuse: 2652127
> rx17_cache_full: 2169391411
> rx17_cache_empty: 6656
> rx17_cache_busy: 2169391665
> rx17_cache_waive: 0
> rx17_congst_umr: 0
> rx17_arfs_err: 0
> rx17_xdp_tx_xmit: 0
> rx17_xdp_tx_full: 0
> rx17_xdp_tx_err: 0
> rx17_xdp_tx_cqes: 0
> rx18_packets: 4407422804
> rx18_bytes: 5741134634177
> rx18_csum_complete: 4407422804
> rx18_csum_unnecessary: 0
> rx18_csum_unnecessary_inner: 0
> rx18_csum_none: 0
> rx18_xdp_drop: 0
> rx18_xdp_redirect: 0
> rx18_lro_packets: 0
> rx18_lro_bytes: 0
> rx18_ecn_mark: 0
> rx18_removed_vlan_packets: 4407422804
> rx18_wqe_err: 0
> rx18_mpwqe_filler_cqes: 0
> rx18_mpwqe_filler_strides: 0
> rx18_buff_alloc_err: 0
> rx18_cqe_compress_blks: 0
> rx18_cqe_compress_pkts: 0
> rx18_page_reuse: 0
> rx18_cache_reuse: 2156080239
> rx18_cache_full: 47630941
> rx18_cache_empty: 6656
> rx18_cache_busy: 47631161
> rx18_cache_waive: 0
> rx18_congst_umr: 0
> rx18_arfs_err: 0
> rx18_xdp_tx_xmit: 0
> rx18_xdp_tx_full: 0
> rx18_xdp_tx_err: 0
> rx18_xdp_tx_cqes: 0
> rx19_packets: 4545554180
> rx19_bytes: 5905277503466
> rx19_csum_complete: 4545554180
> rx19_csum_unnecessary: 0
> rx19_csum_unnecessary_inner: 0
> rx19_csum_none: 0
> rx19_xdp_drop: 0
> rx19_xdp_redirect: 0
> rx19_lro_packets: 0
> rx19_lro_bytes: 0
> rx19_ecn_mark: 0
> rx19_removed_vlan_packets: 4545554180
> rx19_wqe_err: 0
> rx19_mpwqe_filler_cqes: 0
> rx19_mpwqe_filler_strides: 0
> rx19_buff_alloc_err: 0
> rx19_cqe_compress_blks: 0
> rx19_cqe_compress_pkts: 0
> rx19_page_reuse: 0
> rx19_cache_reuse: 11112455
> rx19_cache_full: 2261664379
> rx19_cache_empty: 6656
> rx19_cache_busy: 2261664601
> rx19_cache_waive: 0
> rx19_congst_umr: 0
> rx19_arfs_err: 0
> rx19_xdp_tx_xmit: 0
> rx19_xdp_tx_full: 0
> rx19_xdp_tx_err: 0
> rx19_xdp_tx_cqes: 0
> rx20_packets: 4397428553
> rx20_bytes: 5757329184301
> rx20_csum_complete: 4397428553
> rx20_csum_unnecessary: 0
> rx20_csum_unnecessary_inner: 0
> rx20_csum_none: 0
> rx20_xdp_drop: 0
> rx20_xdp_redirect: 0
> rx20_lro_packets: 0
> rx20_lro_bytes: 0
> rx20_ecn_mark: 0
> rx20_removed_vlan_packets: 4397428553
> rx20_wqe_err: 0
> rx20_mpwqe_filler_cqes: 0
> rx20_mpwqe_filler_strides: 0
> rx20_buff_alloc_err: 0
> rx20_cqe_compress_blks: 0
> rx20_cqe_compress_pkts: 0
> rx20_page_reuse: 0
> rx20_cache_reuse: 2168116995
> rx20_cache_full: 30597061
> rx20_cache_empty: 6656
> rx20_cache_busy: 30597281
> rx20_cache_waive: 0
> rx20_congst_umr: 0
> rx20_arfs_err: 0
> rx20_xdp_tx_xmit: 0
> rx20_xdp_tx_full: 0
> rx20_xdp_tx_err: 0
> rx20_xdp_tx_cqes: 0
> rx21_packets: 4552564821
> rx21_bytes: 5944840329249
> rx21_csum_complete: 4552564821
> rx21_csum_unnecessary: 0
> rx21_csum_unnecessary_inner: 0
> rx21_csum_none: 0
> rx21_xdp_drop: 0
> rx21_xdp_redirect: 0
> rx21_lro_packets: 0
> rx21_lro_bytes: 0
> rx21_ecn_mark: 0
> rx21_removed_vlan_packets: 4552564821
> rx21_wqe_err: 0
> rx21_mpwqe_filler_cqes: 0
> rx21_mpwqe_filler_strides: 0
> rx21_buff_alloc_err: 0
> rx21_cqe_compress_blks: 0
> rx21_cqe_compress_pkts: 0
> rx21_page_reuse: 0
> rx21_cache_reuse: 2295681
> rx21_cache_full: 2273986474
> rx21_cache_empty: 6656
> rx21_cache_busy: 2273986727
> rx21_cache_waive: 0
> rx21_congst_umr: 0
> rx21_arfs_err: 0
> rx21_xdp_tx_xmit: 0
> rx21_xdp_tx_full: 0
> rx21_xdp_tx_err: 0
> rx21_xdp_tx_cqes: 0
> rx22_packets: 4629499740
> rx22_bytes: 5924206566499
> rx22_csum_complete: 4629499740
> rx22_csum_unnecessary: 0
> rx22_csum_unnecessary_inner: 0
> rx22_csum_none: 0
> rx22_xdp_drop: 0
> rx22_xdp_redirect: 0
> rx22_lro_packets: 0
> rx22_lro_bytes: 0
> rx22_ecn_mark: 0
> rx22_removed_vlan_packets: 4629499740
> rx22_wqe_err: 0
> rx22_mpwqe_filler_cqes: 0
> rx22_mpwqe_filler_strides: 0
> rx22_buff_alloc_err: 0
> rx22_cqe_compress_blks: 0
> rx22_cqe_compress_pkts: 0
> rx22_page_reuse: 0
> rx22_cache_reuse: 1407527
> rx22_cache_full: 2313342088
> rx22_cache_empty: 6656
> rx22_cache_busy: 2313342341
> rx22_cache_waive: 0
> rx22_congst_umr: 0
> rx22_arfs_err: 0
> rx22_xdp_tx_xmit: 0
> rx22_xdp_tx_full: 0
> rx22_xdp_tx_err: 0
> rx22_xdp_tx_cqes: 0
> rx23_packets: 4387124505
> rx23_bytes: 5718118678470
> rx23_csum_complete: 4387124505
> rx23_csum_unnecessary: 0
> rx23_csum_unnecessary_inner: 0
> rx23_csum_none: 0
> rx23_xdp_drop: 0
> rx23_xdp_redirect: 0
> rx23_lro_packets: 0
> rx23_lro_bytes: 0
> rx23_ecn_mark: 0
> rx23_removed_vlan_packets: 4387124505
> rx23_wqe_err: 0
> rx23_mpwqe_filler_cqes: 0
> rx23_mpwqe_filler_strides: 0
> rx23_buff_alloc_err: 0
> rx23_cqe_compress_blks: 0
> rx23_cqe_compress_pkts: 0
> rx23_page_reuse: 0
> rx23_cache_reuse: 2013280
> rx23_cache_full: 2191548717
> rx23_cache_empty: 6656
> rx23_cache_busy: 2191548972
> rx23_cache_waive: 0
> rx23_congst_umr: 0
> rx23_arfs_err: 0
> rx23_xdp_tx_xmit: 0
> rx23_xdp_tx_full: 0
> rx23_xdp_tx_err: 0
> rx23_xdp_tx_cqes: 0
> rx24_packets: 4398791634
> rx24_bytes: 5744875564632
> rx24_csum_complete: 4398791634
> rx24_csum_unnecessary: 0
> rx24_csum_unnecessary_inner: 0
> rx24_csum_none: 0
> rx24_xdp_drop: 0
> rx24_xdp_redirect: 0
> rx24_lro_packets: 0
> rx24_lro_bytes: 0
> rx24_ecn_mark: 0
> rx24_removed_vlan_packets: 4398791634
> rx24_wqe_err: 0
> rx24_mpwqe_filler_cqes: 0
> rx24_mpwqe_filler_strides: 0
> rx24_buff_alloc_err: 0
> rx24_cqe_compress_blks: 0
> rx24_cqe_compress_pkts: 0
> rx24_page_reuse: 0
> rx24_cache_reuse: 2143926100
> rx24_cache_full: 55469496
> rx24_cache_empty: 6656
> rx24_cache_busy: 55469716
> rx24_cache_waive: 0
> rx24_congst_umr: 0
> rx24_arfs_err: 0
> rx24_xdp_tx_xmit: 0
> rx24_xdp_tx_full: 0
> rx24_xdp_tx_err: 0
> rx24_xdp_tx_cqes: 0
> rx25_packets: 4377204935
> rx25_bytes: 5710369124105
> rx25_csum_complete: 4377204935
> rx25_csum_unnecessary: 0
> rx25_csum_unnecessary_inner: 0
> rx25_csum_none: 0
> rx25_xdp_drop: 0
> rx25_xdp_redirect: 0
> rx25_lro_packets: 0
> rx25_lro_bytes: 0
> rx25_ecn_mark: 0
> rx25_removed_vlan_packets: 4377204935
> rx25_wqe_err: 0
> rx25_mpwqe_filler_cqes: 0
> rx25_mpwqe_filler_strides: 0
> rx25_buff_alloc_err: 0
> rx25_cqe_compress_blks: 0
> rx25_cqe_compress_pkts: 0
> rx25_page_reuse: 0
> rx25_cache_reuse: 2132658660
> rx25_cache_full: 55943584
> rx25_cache_empty: 6656
> rx25_cache_busy: 55943804
> rx25_cache_waive: 0
> rx25_congst_umr: 0
> rx25_arfs_err: 0
> rx25_xdp_tx_xmit: 0
> rx25_xdp_tx_full: 0
> rx25_xdp_tx_err: 0
> rx25_xdp_tx_cqes: 0
> rx26_packets: 4496003688
> rx26_bytes: 5862180715503
> rx26_csum_complete: 4496003688
> rx26_csum_unnecessary: 0
> rx26_csum_unnecessary_inner: 0
> rx26_csum_none: 0
> rx26_xdp_drop: 0
> rx26_xdp_redirect: 0
> rx26_lro_packets: 0
> rx26_lro_bytes: 0
> rx26_ecn_mark: 0
> rx26_removed_vlan_packets: 4496003688
> rx26_wqe_err: 0
> rx26_mpwqe_filler_cqes: 0
> rx26_mpwqe_filler_strides: 0
> rx26_buff_alloc_err: 0
> rx26_cqe_compress_blks: 0
> rx26_cqe_compress_pkts: 0
> rx26_page_reuse: 0
> rx26_cache_reuse: 8
> rx26_cache_full: 2248001581
> rx26_cache_empty: 6656
> rx26_cache_busy: 2248001836
> rx26_cache_waive: 0
> rx26_congst_umr: 0
> rx26_arfs_err: 0
> rx26_xdp_tx_xmit: 0
> rx26_xdp_tx_full: 0
> rx26_xdp_tx_err: 0
> rx26_xdp_tx_cqes: 0
> rx27_packets: 4341849333
> rx27_bytes: 5678653545018
> rx27_csum_complete: 4341849333
> rx27_csum_unnecessary: 0
> rx27_csum_unnecessary_inner: 0
> rx27_csum_none: 0
> rx27_xdp_drop: 0
> rx27_xdp_redirect: 0
> rx27_lro_packets: 0
> rx27_lro_bytes: 0
> rx27_ecn_mark: 0
> rx27_removed_vlan_packets: 4341849333
> rx27_wqe_err: 0
> rx27_mpwqe_filler_cqes: 0
> rx27_mpwqe_filler_strides: 0
> rx27_buff_alloc_err: 0
> rx27_cqe_compress_blks: 0
> rx27_cqe_compress_pkts: 0
> rx27_page_reuse: 0
> rx27_cache_reuse: 1748188
> rx27_cache_full: 2169176223
> rx27_cache_empty: 6656
> rx27_cache_busy: 2169176476
> rx27_cache_waive: 0
> rx27_congst_umr: 0
> rx27_arfs_err: 0
> rx27_xdp_tx_xmit: 0
> rx27_xdp_tx_full: 0
> rx27_xdp_tx_err: 0
> rx27_xdp_tx_cqes: 0
> rx28_packets: 0
> rx28_bytes: 0
> rx28_csum_complete: 0
> rx28_csum_unnecessary: 0
> rx28_csum_unnecessary_inner: 0
> rx28_csum_none: 0
> rx28_xdp_drop: 0
> rx28_xdp_redirect: 0
> rx28_lro_packets: 0
> rx28_lro_bytes: 0
> rx28_ecn_mark: 0
> rx28_removed_vlan_packets: 0
> rx28_wqe_err: 0
> rx28_mpwqe_filler_cqes: 0
> rx28_mpwqe_filler_strides: 0
> rx28_buff_alloc_err: 0
> rx28_cqe_compress_blks: 0
> rx28_cqe_compress_pkts: 0
> rx28_page_reuse: 0
> rx28_cache_reuse: 0
> rx28_cache_full: 0
> rx28_cache_empty: 2560
> rx28_cache_busy: 0
> rx28_cache_waive: 0
> rx28_congst_umr: 0
> rx28_arfs_err: 0
> rx28_xdp_tx_xmit: 0
> rx28_xdp_tx_full: 0
> rx28_xdp_tx_err: 0
> rx28_xdp_tx_cqes: 0
> rx29_packets: 0
> rx29_bytes: 0
> rx29_csum_complete: 0
> rx29_csum_unnecessary: 0
> rx29_csum_unnecessary_inner: 0
> rx29_csum_none: 0
> rx29_xdp_drop: 0
> rx29_xdp_redirect: 0
> rx29_lro_packets: 0
> rx29_lro_bytes: 0
> rx29_ecn_mark: 0
> rx29_removed_vlan_packets: 0
> rx29_wqe_err: 0
> rx29_mpwqe_filler_cqes: 0
> rx29_mpwqe_filler_strides: 0
> rx29_buff_alloc_err: 0
> rx29_cqe_compress_blks: 0
> rx29_cqe_compress_pkts: 0
> rx29_page_reuse: 0
> rx29_cache_reuse: 0
> rx29_cache_full: 0
> rx29_cache_empty: 2560
> rx29_cache_busy: 0
> rx29_cache_waive: 0
> rx29_congst_umr: 0
> rx29_arfs_err: 0
> rx29_xdp_tx_xmit: 0
> rx29_xdp_tx_full: 0
> rx29_xdp_tx_err: 0
> rx29_xdp_tx_cqes: 0
> rx30_packets: 0
> rx30_bytes: 0
> rx30_csum_complete: 0
> rx30_csum_unnecessary: 0
> rx30_csum_unnecessary_inner: 0
> rx30_csum_none: 0
> rx30_xdp_drop: 0
> rx30_xdp_redirect: 0
> rx30_lro_packets: 0
> rx30_lro_bytes: 0
> rx30_ecn_mark: 0
> rx30_removed_vlan_packets: 0
> rx30_wqe_err: 0
> rx30_mpwqe_filler_cqes: 0
> rx30_mpwqe_filler_strides: 0
> rx30_buff_alloc_err: 0
> rx30_cqe_compress_blks: 0
> rx30_cqe_compress_pkts: 0
> rx30_page_reuse: 0
> rx30_cache_reuse: 0
> rx30_cache_full: 0
> rx30_cache_empty: 2560
> rx30_cache_busy: 0
> rx30_cache_waive: 0
> rx30_congst_umr: 0
> rx30_arfs_err: 0
> rx30_xdp_tx_xmit: 0
> rx30_xdp_tx_full: 0
> rx30_xdp_tx_err: 0
> rx30_xdp_tx_cqes: 0
> rx31_packets: 0
> rx31_bytes: 0
> rx31_csum_complete: 0
> rx31_csum_unnecessary: 0
> rx31_csum_unnecessary_inner: 0
> rx31_csum_none: 0
> rx31_xdp_drop: 0
> rx31_xdp_redirect: 0
> rx31_lro_packets: 0
> rx31_lro_bytes: 0
> rx31_ecn_mark: 0
> rx31_removed_vlan_packets: 0
> rx31_wqe_err: 0
> rx31_mpwqe_filler_cqes: 0
> rx31_mpwqe_filler_strides: 0
> rx31_buff_alloc_err: 0
> rx31_cqe_compress_blks: 0
> rx31_cqe_compress_pkts: 0
> rx31_page_reuse: 0
> rx31_cache_reuse: 0
> rx31_cache_full: 0
> rx31_cache_empty: 2560
> rx31_cache_busy: 0
> rx31_cache_waive: 0
> rx31_congst_umr: 0
> rx31_arfs_err: 0
> rx31_xdp_tx_xmit: 0
> rx31_xdp_tx_full: 0
> rx31_xdp_tx_err: 0
> rx31_xdp_tx_cqes: 0
> rx32_packets: 0
> rx32_bytes: 0
> rx32_csum_complete: 0
> rx32_csum_unnecessary: 0
> rx32_csum_unnecessary_inner: 0
> rx32_csum_none: 0
> rx32_xdp_drop: 0
> rx32_xdp_redirect: 0
> rx32_lro_packets: 0
> rx32_lro_bytes: 0
> rx32_ecn_mark: 0
> rx32_removed_vlan_packets: 0
> rx32_wqe_err: 0
> rx32_mpwqe_filler_cqes: 0
> rx32_mpwqe_filler_strides: 0
> rx32_buff_alloc_err: 0
> rx32_cqe_compress_blks: 0
> rx32_cqe_compress_pkts: 0
> rx32_page_reuse: 0
> rx32_cache_reuse: 0
> rx32_cache_full: 0
> rx32_cache_empty: 2560
> rx32_cache_busy: 0
> rx32_cache_waive: 0
> rx32_congst_umr: 0
> rx32_arfs_err: 0
> rx32_xdp_tx_xmit: 0
> rx32_xdp_tx_full: 0
> rx32_xdp_tx_err: 0
> rx32_xdp_tx_cqes: 0
> rx33_packets: 0
> rx33_bytes: 0
> rx33_csum_complete: 0
> rx33_csum_unnecessary: 0
> rx33_csum_unnecessary_inner: 0
> rx33_csum_none: 0
> rx33_xdp_drop: 0
> rx33_xdp_redirect: 0
> rx33_lro_packets: 0
> rx33_lro_bytes: 0
> rx33_ecn_mark: 0
> rx33_removed_vlan_packets: 0
> rx33_wqe_err: 0
> rx33_mpwqe_filler_cqes: 0
> rx33_mpwqe_filler_strides: 0
> rx33_buff_alloc_err: 0
> rx33_cqe_compress_blks: 0
> rx33_cqe_compress_pkts: 0
> rx33_page_reuse: 0
> rx33_cache_reuse: 0
> rx33_cache_full: 0
> rx33_cache_empty: 2560
> rx33_cache_busy: 0
> rx33_cache_waive: 0
> rx33_congst_umr: 0
> rx33_arfs_err: 0
> rx33_xdp_tx_xmit: 0
> rx33_xdp_tx_full: 0
> rx33_xdp_tx_err: 0
> rx33_xdp_tx_cqes: 0
> rx34_packets: 0
> rx34_bytes: 0
> rx34_csum_complete: 0
> rx34_csum_unnecessary: 0
> rx34_csum_unnecessary_inner: 0
> rx34_csum_none: 0
> rx34_xdp_drop: 0
> rx34_xdp_redirect: 0
> rx34_lro_packets: 0
> rx34_lro_bytes: 0
> rx34_ecn_mark: 0
> rx34_removed_vlan_packets: 0
> rx34_wqe_err: 0
> rx34_mpwqe_filler_cqes: 0
> rx34_mpwqe_filler_strides: 0
> rx34_buff_alloc_err: 0
> rx34_cqe_compress_blks: 0
> rx34_cqe_compress_pkts: 0
> rx34_page_reuse: 0
> rx34_cache_reuse: 0
> rx34_cache_full: 0
> rx34_cache_empty: 2560
> rx34_cache_busy: 0
> rx34_cache_waive: 0
> rx34_congst_umr: 0
> rx34_arfs_err: 0
> rx34_xdp_tx_xmit: 0
> rx34_xdp_tx_full: 0
> rx34_xdp_tx_err: 0
> rx34_xdp_tx_cqes: 0
> rx35_packets: 0
> rx35_bytes: 0
> rx35_csum_complete: 0
> rx35_csum_unnecessary: 0
> rx35_csum_unnecessary_inner: 0
> rx35_csum_none: 0
> rx35_xdp_drop: 0
> rx35_xdp_redirect: 0
> rx35_lro_packets: 0
> rx35_lro_bytes: 0
> rx35_ecn_mark: 0
> rx35_removed_vlan_packets: 0
> rx35_wqe_err: 0
> rx35_mpwqe_filler_cqes: 0
> rx35_mpwqe_filler_strides: 0
> rx35_buff_alloc_err: 0
> rx35_cqe_compress_blks: 0
> rx35_cqe_compress_pkts: 0
> rx35_page_reuse: 0
> rx35_cache_reuse: 0
> rx35_cache_full: 0
> rx35_cache_empty: 2560
> rx35_cache_busy: 0
> rx35_cache_waive: 0
> rx35_congst_umr: 0
> rx35_arfs_err: 0
> rx35_xdp_tx_xmit: 0
> rx35_xdp_tx_full: 0
> rx35_xdp_tx_err: 0
> rx35_xdp_tx_cqes: 0
> rx36_packets: 0
> rx36_bytes: 0
> rx36_csum_complete: 0
> rx36_csum_unnecessary: 0
> rx36_csum_unnecessary_inner: 0
> rx36_csum_none: 0
> rx36_xdp_drop: 0
> rx36_xdp_redirect: 0
> rx36_lro_packets: 0
> rx36_lro_bytes: 0
> rx36_ecn_mark: 0
> rx36_removed_vlan_packets: 0
> rx36_wqe_err: 0
> rx36_mpwqe_filler_cqes: 0
> rx36_mpwqe_filler_strides: 0
> rx36_buff_alloc_err: 0
> rx36_cqe_compress_blks: 0
> rx36_cqe_compress_pkts: 0
> rx36_page_reuse: 0
> rx36_cache_reuse: 0
> rx36_cache_full: 0
> rx36_cache_empty: 2560
> rx36_cache_busy: 0
> rx36_cache_waive: 0
> rx36_congst_umr: 0
> rx36_arfs_err: 0
> rx36_xdp_tx_xmit: 0
> rx36_xdp_tx_full: 0
> rx36_xdp_tx_err: 0
> rx36_xdp_tx_cqes: 0
> rx37_packets: 0
> rx37_bytes: 0
> rx37_csum_complete: 0
> rx37_csum_unnecessary: 0
> rx37_csum_unnecessary_inner: 0
> rx37_csum_none: 0
> rx37_xdp_drop: 0
> rx37_xdp_redirect: 0
> rx37_lro_packets: 0
> rx37_lro_bytes: 0
> rx37_ecn_mark: 0
> rx37_removed_vlan_packets: 0
> rx37_wqe_err: 0
> rx37_mpwqe_filler_cqes: 0
> rx37_mpwqe_filler_strides: 0
> rx37_buff_alloc_err: 0
> rx37_cqe_compress_blks: 0
> rx37_cqe_compress_pkts: 0
> rx37_page_reuse: 0
> rx37_cache_reuse: 0
> rx37_cache_full: 0
> rx37_cache_empty: 2560
> rx37_cache_busy: 0
> rx37_cache_waive: 0
> rx37_congst_umr: 0
> rx37_arfs_err: 0
> rx37_xdp_tx_xmit: 0
> rx37_xdp_tx_full: 0
> rx37_xdp_tx_err: 0
> rx37_xdp_tx_cqes: 0
> rx38_packets: 0
> rx38_bytes: 0
> rx38_csum_complete: 0
> rx38_csum_unnecessary: 0
> rx38_csum_unnecessary_inner: 0
> rx38_csum_none: 0
> rx38_xdp_drop: 0
> rx38_xdp_redirect: 0
> rx38_lro_packets: 0
> rx38_lro_bytes: 0
> rx38_ecn_mark: 0
> rx38_removed_vlan_packets: 0
> rx38_wqe_err: 0
> rx38_mpwqe_filler_cqes: 0
> rx38_mpwqe_filler_strides: 0
> rx38_buff_alloc_err: 0
> rx38_cqe_compress_blks: 0
> rx38_cqe_compress_pkts: 0
> rx38_page_reuse: 0
> rx38_cache_reuse: 0
> rx38_cache_full: 0
> rx38_cache_empty: 2560
> rx38_cache_busy: 0
> rx38_cache_waive: 0
> rx38_congst_umr: 0
> rx38_arfs_err: 0
> rx38_xdp_tx_xmit: 0
> rx38_xdp_tx_full: 0
> rx38_xdp_tx_err: 0
> rx38_xdp_tx_cqes: 0
> rx39_packets: 0
> rx39_bytes: 0
> rx39_csum_complete: 0
> rx39_csum_unnecessary: 0
> rx39_csum_unnecessary_inner: 0
> rx39_csum_none: 0
> rx39_xdp_drop: 0
> rx39_xdp_redirect: 0
> rx39_lro_packets: 0
> rx39_lro_bytes: 0
> rx39_ecn_mark: 0
> rx39_removed_vlan_packets: 0
> rx39_wqe_err: 0
> rx39_mpwqe_filler_cqes: 0
> rx39_mpwqe_filler_strides: 0
> rx39_buff_alloc_err: 0
> rx39_cqe_compress_blks: 0
> rx39_cqe_compress_pkts: 0
> rx39_page_reuse: 0
> rx39_cache_reuse: 0
> rx39_cache_full: 0
> rx39_cache_empty: 2560
> rx39_cache_busy: 0
> rx39_cache_waive: 0
> rx39_congst_umr: 0
> rx39_arfs_err: 0
> rx39_xdp_tx_xmit: 0
> rx39_xdp_tx_full: 0
> rx39_xdp_tx_err: 0
> rx39_xdp_tx_cqes: 0
> rx40_packets: 0
> rx40_bytes: 0
> rx40_csum_complete: 0
> rx40_csum_unnecessary: 0
> rx40_csum_unnecessary_inner: 0
> rx40_csum_none: 0
> rx40_xdp_drop: 0
> rx40_xdp_redirect: 0
> rx40_lro_packets: 0
> rx40_lro_bytes: 0
> rx40_ecn_mark: 0
> rx40_removed_vlan_packets: 0
> rx40_wqe_err: 0
> rx40_mpwqe_filler_cqes: 0
> rx40_mpwqe_filler_strides: 0
> rx40_buff_alloc_err: 0
> rx40_cqe_compress_blks: 0
> rx40_cqe_compress_pkts: 0
> rx40_page_reuse: 0
> rx40_cache_reuse: 0
> rx40_cache_full: 0
> rx40_cache_empty: 2560
> rx40_cache_busy: 0
> rx40_cache_waive: 0
> rx40_congst_umr: 0
> rx40_arfs_err: 0
> rx40_xdp_tx_xmit: 0
> rx40_xdp_tx_full: 0
> rx40_xdp_tx_err: 0
> rx40_xdp_tx_cqes: 0
> rx41_packets: 0
> rx41_bytes: 0
> rx41_csum_complete: 0
> rx41_csum_unnecessary: 0
> rx41_csum_unnecessary_inner: 0
> rx41_csum_none: 0
> rx41_xdp_drop: 0
> rx41_xdp_redirect: 0
> rx41_lro_packets: 0
> rx41_lro_bytes: 0
> rx41_ecn_mark: 0
> rx41_removed_vlan_packets: 0
> rx41_wqe_err: 0
> rx41_mpwqe_filler_cqes: 0
> rx41_mpwqe_filler_strides: 0
> rx41_buff_alloc_err: 0
> rx41_cqe_compress_blks: 0
> rx41_cqe_compress_pkts: 0
> rx41_page_reuse: 0
> rx41_cache_reuse: 0
> rx41_cache_full: 0
> rx41_cache_empty: 2560
> rx41_cache_busy: 0
> rx41_cache_waive: 0
> rx41_congst_umr: 0
> rx41_arfs_err: 0
> rx41_xdp_tx_xmit: 0
> rx41_xdp_tx_full: 0
> rx41_xdp_tx_err: 0
> rx41_xdp_tx_cqes: 0
> rx42_packets: 0
> rx42_bytes: 0
> rx42_csum_complete: 0
> rx42_csum_unnecessary: 0
> rx42_csum_unnecessary_inner: 0
> rx42_csum_none: 0
> rx42_xdp_drop: 0
> rx42_xdp_redirect: 0
> rx42_lro_packets: 0
> rx42_lro_bytes: 0
> rx42_ecn_mark: 0
> rx42_removed_vlan_packets: 0
> rx42_wqe_err: 0
> rx42_mpwqe_filler_cqes: 0
> rx42_mpwqe_filler_strides: 0
> rx42_buff_alloc_err: 0
> rx42_cqe_compress_blks: 0
> rx42_cqe_compress_pkts: 0
> rx42_page_reuse: 0
> rx42_cache_reuse: 0
> rx42_cache_full: 0
> rx42_cache_empty: 2560
> rx42_cache_busy: 0
> rx42_cache_waive: 0
> rx42_congst_umr: 0
> rx42_arfs_err: 0
> rx42_xdp_tx_xmit: 0
> rx42_xdp_tx_full: 0
> rx42_xdp_tx_err: 0
> rx42_xdp_tx_cqes: 0
> rx43_packets: 0
> rx43_bytes: 0
> rx43_csum_complete: 0
> rx43_csum_unnecessary: 0
> rx43_csum_unnecessary_inner: 0
> rx43_csum_none: 0
> rx43_xdp_drop: 0
> rx43_xdp_redirect: 0
> rx43_lro_packets: 0
> rx43_lro_bytes: 0
> rx43_ecn_mark: 0
> rx43_removed_vlan_packets: 0
> rx43_wqe_err: 0
> rx43_mpwqe_filler_cqes: 0
> rx43_mpwqe_filler_strides: 0
> rx43_buff_alloc_err: 0
> rx43_cqe_compress_blks: 0
> rx43_cqe_compress_pkts: 0
> rx43_page_reuse: 0
> rx43_cache_reuse: 0
> rx43_cache_full: 0
> rx43_cache_empty: 2560
> rx43_cache_busy: 0
> rx43_cache_waive: 0
> rx43_congst_umr: 0
> rx43_arfs_err: 0
> rx43_xdp_tx_xmit: 0
> rx43_xdp_tx_full: 0
> rx43_xdp_tx_err: 0
> rx43_xdp_tx_cqes: 0
> rx44_packets: 0
> rx44_bytes: 0
> rx44_csum_complete: 0
> rx44_csum_unnecessary: 0
> rx44_csum_unnecessary_inner: 0
> rx44_csum_none: 0
> rx44_xdp_drop: 0
> rx44_xdp_redirect: 0
> rx44_lro_packets: 0
> rx44_lro_bytes: 0
> rx44_ecn_mark: 0
> rx44_removed_vlan_packets: 0
> rx44_wqe_err: 0
> rx44_mpwqe_filler_cqes: 0
> rx44_mpwqe_filler_strides: 0
> rx44_buff_alloc_err: 0
> rx44_cqe_compress_blks: 0
> rx44_cqe_compress_pkts: 0
> rx44_page_reuse: 0
> rx44_cache_reuse: 0
> rx44_cache_full: 0
> rx44_cache_empty: 2560
> rx44_cache_busy: 0
> rx44_cache_waive: 0
> rx44_congst_umr: 0
> rx44_arfs_err: 0
> rx44_xdp_tx_xmit: 0
> rx44_xdp_tx_full: 0
> rx44_xdp_tx_err: 0
> rx44_xdp_tx_cqes: 0
> rx45_packets: 0
> rx45_bytes: 0
> rx45_csum_complete: 0
> rx45_csum_unnecessary: 0
> rx45_csum_unnecessary_inner: 0
> rx45_csum_none: 0
> rx45_xdp_drop: 0
> rx45_xdp_redirect: 0
> rx45_lro_packets: 0
> rx45_lro_bytes: 0
> rx45_ecn_mark: 0
> rx45_removed_vlan_packets: 0
> rx45_wqe_err: 0
> rx45_mpwqe_filler_cqes: 0
> rx45_mpwqe_filler_strides: 0
> rx45_buff_alloc_err: 0
> rx45_cqe_compress_blks: 0
> rx45_cqe_compress_pkts: 0
> rx45_page_reuse: 0
> rx45_cache_reuse: 0
> rx45_cache_full: 0
> rx45_cache_empty: 2560
> rx45_cache_busy: 0
> rx45_cache_waive: 0
> rx45_congst_umr: 0
> rx45_arfs_err: 0
> rx45_xdp_tx_xmit: 0
> rx45_xdp_tx_full: 0
> rx45_xdp_tx_err: 0
> rx45_xdp_tx_cqes: 0
> rx46_packets: 0
> rx46_bytes: 0
> rx46_csum_complete: 0
> rx46_csum_unnecessary: 0
> rx46_csum_unnecessary_inner: 0
> rx46_csum_none: 0
> rx46_xdp_drop: 0
> rx46_xdp_redirect: 0
> rx46_lro_packets: 0
> rx46_lro_bytes: 0
> rx46_ecn_mark: 0
> rx46_removed_vlan_packets: 0
> rx46_wqe_err: 0
> rx46_mpwqe_filler_cqes: 0
> rx46_mpwqe_filler_strides: 0
> rx46_buff_alloc_err: 0
> rx46_cqe_compress_blks: 0
> rx46_cqe_compress_pkts: 0
> rx46_page_reuse: 0
> rx46_cache_reuse: 0
> rx46_cache_full: 0
> rx46_cache_empty: 2560
> rx46_cache_busy: 0
> rx46_cache_waive: 0
> rx46_congst_umr: 0
> rx46_arfs_err: 0
> rx46_xdp_tx_xmit: 0
> rx46_xdp_tx_full: 0
> rx46_xdp_tx_err: 0
> rx46_xdp_tx_cqes: 0
> rx47_packets: 0
> rx47_bytes: 0
> rx47_csum_complete: 0
> rx47_csum_unnecessary: 0
> rx47_csum_unnecessary_inner: 0
> rx47_csum_none: 0
> rx47_xdp_drop: 0
> rx47_xdp_redirect: 0
> rx47_lro_packets: 0
> rx47_lro_bytes: 0
> rx47_ecn_mark: 0
> rx47_removed_vlan_packets: 0
> rx47_wqe_err: 0
> rx47_mpwqe_filler_cqes: 0
> rx47_mpwqe_filler_strides: 0
> rx47_buff_alloc_err: 0
> rx47_cqe_compress_blks: 0
> rx47_cqe_compress_pkts: 0
> rx47_page_reuse: 0
> rx47_cache_reuse: 0
> rx47_cache_full: 0
> rx47_cache_empty: 2560
> rx47_cache_busy: 0
> rx47_cache_waive: 0
> rx47_congst_umr: 0
> rx47_arfs_err: 0
> rx47_xdp_tx_xmit: 0
> rx47_xdp_tx_full: 0
> rx47_xdp_tx_err: 0
> rx47_xdp_tx_cqes: 0
> rx48_packets: 0
> rx48_bytes: 0
> rx48_csum_complete: 0
> rx48_csum_unnecessary: 0
> rx48_csum_unnecessary_inner: 0
> rx48_csum_none: 0
> rx48_xdp_drop: 0
> rx48_xdp_redirect: 0
> rx48_lro_packets: 0
> rx48_lro_bytes: 0
> rx48_ecn_mark: 0
> rx48_removed_vlan_packets: 0
> rx48_wqe_err: 0
> rx48_mpwqe_filler_cqes: 0
> rx48_mpwqe_filler_strides: 0
> rx48_buff_alloc_err: 0
> rx48_cqe_compress_blks: 0
> rx48_cqe_compress_pkts: 0
> rx48_page_reuse: 0
> rx48_cache_reuse: 0
> rx48_cache_full: 0
> rx48_cache_empty: 2560
> rx48_cache_busy: 0
> rx48_cache_waive: 0
> rx48_congst_umr: 0
> rx48_arfs_err: 0
> rx48_xdp_tx_xmit: 0
> rx48_xdp_tx_full: 0
> rx48_xdp_tx_err: 0
> rx48_xdp_tx_cqes: 0
> rx49_packets: 0
> rx49_bytes: 0
> rx49_csum_complete: 0
> rx49_csum_unnecessary: 0
> rx49_csum_unnecessary_inner: 0
> rx49_csum_none: 0
> rx49_xdp_drop: 0
> rx49_xdp_redirect: 0
> rx49_lro_packets: 0
> rx49_lro_bytes: 0
> rx49_ecn_mark: 0
> rx49_removed_vlan_packets: 0
> rx49_wqe_err: 0
> rx49_mpwqe_filler_cqes: 0
> rx49_mpwqe_filler_strides: 0
> rx49_buff_alloc_err: 0
> rx49_cqe_compress_blks: 0
> rx49_cqe_compress_pkts: 0
> rx49_page_reuse: 0
> rx49_cache_reuse: 0
> rx49_cache_full: 0
> rx49_cache_empty: 2560
> rx49_cache_busy: 0
> rx49_cache_waive: 0
> rx49_congst_umr: 0
> rx49_arfs_err: 0
> rx49_xdp_tx_xmit: 0
> rx49_xdp_tx_full: 0
> rx49_xdp_tx_err: 0
> rx49_xdp_tx_cqes: 0
> rx50_packets: 0
> rx50_bytes: 0
> rx50_csum_complete: 0
> rx50_csum_unnecessary: 0
> rx50_csum_unnecessary_inner: 0
> rx50_csum_none: 0
> rx50_xdp_drop: 0
> rx50_xdp_redirect: 0
> rx50_lro_packets: 0
> rx50_lro_bytes: 0
> rx50_ecn_mark: 0
> rx50_removed_vlan_packets: 0
> rx50_wqe_err: 0
> rx50_mpwqe_filler_cqes: 0
> rx50_mpwqe_filler_strides: 0
> rx50_buff_alloc_err: 0
> rx50_cqe_compress_blks: 0
> rx50_cqe_compress_pkts: 0
> rx50_page_reuse: 0
> rx50_cache_reuse: 0
> rx50_cache_full: 0
> rx50_cache_empty: 2560
> rx50_cache_busy: 0
> rx50_cache_waive: 0
> rx50_congst_umr: 0
> rx50_arfs_err: 0
> rx50_xdp_tx_xmit: 0
> rx50_xdp_tx_full: 0
> rx50_xdp_tx_err: 0
> rx50_xdp_tx_cqes: 0
> rx51_packets: 0
> rx51_bytes: 0
> rx51_csum_complete: 0
> rx51_csum_unnecessary: 0
> rx51_csum_unnecessary_inner: 0
> rx51_csum_none: 0
> rx51_xdp_drop: 0
> rx51_xdp_redirect: 0
> rx51_lro_packets: 0
> rx51_lro_bytes: 0
> rx51_ecn_mark: 0
> rx51_removed_vlan_packets: 0
> rx51_wqe_err: 0
> rx51_mpwqe_filler_cqes: 0
> rx51_mpwqe_filler_strides: 0
> rx51_buff_alloc_err: 0
> rx51_cqe_compress_blks: 0
> rx51_cqe_compress_pkts: 0
> rx51_page_reuse: 0
> rx51_cache_reuse: 0
> rx51_cache_full: 0
> rx51_cache_empty: 2560
> rx51_cache_busy: 0
> rx51_cache_waive: 0
> rx51_congst_umr: 0
> rx51_arfs_err: 0
> rx51_xdp_tx_xmit: 0
> rx51_xdp_tx_full: 0
> rx51_xdp_tx_err: 0
> rx51_xdp_tx_cqes: 0
> rx52_packets: 0
> rx52_bytes: 0
> rx52_csum_complete: 0
> rx52_csum_unnecessary: 0
> rx52_csum_unnecessary_inner: 0
> rx52_csum_none: 0
> rx52_xdp_drop: 0
> rx52_xdp_redirect: 0
> rx52_lro_packets: 0
> rx52_lro_bytes: 0
> rx52_ecn_mark: 0
> rx52_removed_vlan_packets: 0
> rx52_wqe_err: 0
> rx52_mpwqe_filler_cqes: 0
> rx52_mpwqe_filler_strides: 0
> rx52_buff_alloc_err: 0
> rx52_cqe_compress_blks: 0
> rx52_cqe_compress_pkts: 0
> rx52_page_reuse: 0
> rx52_cache_reuse: 0
> rx52_cache_full: 0
> rx52_cache_empty: 2560
> rx52_cache_busy: 0
> rx52_cache_waive: 0
> rx52_congst_umr: 0
> rx52_arfs_err: 0
> rx52_xdp_tx_xmit: 0
> rx52_xdp_tx_full: 0
> rx52_xdp_tx_err: 0
> rx52_xdp_tx_cqes: 0
> rx53_packets: 0
> rx53_bytes: 0
> rx53_csum_complete: 0
> rx53_csum_unnecessary: 0
> rx53_csum_unnecessary_inner: 0
> rx53_csum_none: 0
> rx53_xdp_drop: 0
> rx53_xdp_redirect: 0
> rx53_lro_packets: 0
> rx53_lro_bytes: 0
> rx53_ecn_mark: 0
> rx53_removed_vlan_packets: 0
> rx53_wqe_err: 0
> rx53_mpwqe_filler_cqes: 0
> rx53_mpwqe_filler_strides: 0
> rx53_buff_alloc_err: 0
> rx53_cqe_compress_blks: 0
> rx53_cqe_compress_pkts: 0
> rx53_page_reuse: 0
> rx53_cache_reuse: 0
> rx53_cache_full: 0
> rx53_cache_empty: 2560
> rx53_cache_busy: 0
> rx53_cache_waive: 0
> rx53_congst_umr: 0
> rx53_arfs_err: 0
> rx53_xdp_tx_xmit: 0
> rx53_xdp_tx_full: 0
> rx53_xdp_tx_err: 0
> rx53_xdp_tx_cqes: 0
> rx54_packets: 0
> rx54_bytes: 0
> rx54_csum_complete: 0
> rx54_csum_unnecessary: 0
> rx54_csum_unnecessary_inner: 0
> rx54_csum_none: 0
> rx54_xdp_drop: 0
> rx54_xdp_redirect: 0
> rx54_lro_packets: 0
> rx54_lro_bytes: 0
> rx54_ecn_mark: 0
> rx54_removed_vlan_packets: 0
> rx54_wqe_err: 0
> rx54_mpwqe_filler_cqes: 0
> rx54_mpwqe_filler_strides: 0
> rx54_buff_alloc_err: 0
> rx54_cqe_compress_blks: 0
> rx54_cqe_compress_pkts: 0
> rx54_page_reuse: 0
> rx54_cache_reuse: 0
> rx54_cache_full: 0
> rx54_cache_empty: 2560
> rx54_cache_busy: 0
> rx54_cache_waive: 0
> rx54_congst_umr: 0
> rx54_arfs_err: 0
> rx54_xdp_tx_xmit: 0
> rx54_xdp_tx_full: 0
> rx54_xdp_tx_err: 0
> rx54_xdp_tx_cqes: 0
> rx55_packets: 0
> rx55_bytes: 0
> rx55_csum_complete: 0
> rx55_csum_unnecessary: 0
> rx55_csum_unnecessary_inner: 0
> rx55_csum_none: 0
> rx55_xdp_drop: 0
> rx55_xdp_redirect: 0
> rx55_lro_packets: 0
> rx55_lro_bytes: 0
> rx55_ecn_mark: 0
> rx55_removed_vlan_packets: 0
> rx55_wqe_err: 0
> rx55_mpwqe_filler_cqes: 0
> rx55_mpwqe_filler_strides: 0
> rx55_buff_alloc_err: 0
> rx55_cqe_compress_blks: 0
> rx55_cqe_compress_pkts: 0
> rx55_page_reuse: 0
> rx55_cache_reuse: 0
> rx55_cache_full: 0
> rx55_cache_empty: 2560
> rx55_cache_busy: 0
> rx55_cache_waive: 0
> rx55_congst_umr: 0
> rx55_arfs_err: 0
> rx55_xdp_tx_xmit: 0
> rx55_xdp_tx_full: 0
> rx55_xdp_tx_err: 0
> rx55_xdp_tx_cqes: 0
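
The per-queue rx counters above are easier to reason about in aggregate. A minimal sketch of a parser that sums packets per queue and computes the page-cache hit ratio (cache_reuse vs cache_full) from an `ethtool -S` dump; the quoting prefix handling and the tuple layout are my own choices, not part of the driver output format:

```python
import re
from collections import defaultdict

def summarize(lines):
    """Parse mlx5 'ethtool -S' rx-queue counters (optionally '>'-quoted,
    as in a mailing-list post) and return per-queue summaries."""
    stats = defaultdict(dict)
    for line in lines:
        # e.g. "> rx12_cache_reuse: 2650214169"
        m = re.match(r'\s*>?\s*rx(\d+)_(\w+):\s*(\d+)', line)
        if m:
            q, name, val = int(m.group(1)), m.group(2), int(m.group(3))
            stats[q][name] = val
    rows = []
    for q in sorted(stats):
        s = stats[q]
        reuse, full = s.get('cache_reuse', 0), s.get('cache_full', 0)
        total = reuse + full
        hit = reuse / total if total else 0.0  # page-cache hit ratio
        rows.append((q, s.get('packets', 0), hit))
    return rows
```

Run against the dump above, this would show what is visible by eye: queues rx28 and up carry zero packets (consistent with the 28 RSS queues bound to the local NUMA node), and on several busy queues cache_full dwarfs cache_reuse, i.e. the driver's page cache is full because pages are still held elsewhere in the stack when the queue tries to recycle them.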
> tx0_packets: 6019477917
> tx0_bytes: 3445238940825
> tx0_tso_packets: 311304622
> tx0_tso_bytes: 1897094773213
> tx0_tso_inner_packets: 0
> tx0_tso_inner_bytes: 0
> tx0_csum_partial: 457981794
> tx0_csum_partial_inner: 0
> tx0_added_vlan_packets: 4965567654
> tx0_nop: 72290329
> tx0_csum_none: 4507585860
> tx0_stopped: 9118
> tx0_dropped: 0
> tx0_xmit_more: 51651593
> tx0_recover: 0
> tx0_cqes: 4913918402
> tx0_wake: 9118
> tx0_cqe_err: 0
> tx1_packets: 5700413414
> tx1_bytes: 3340870662350
> tx1_tso_packets: 318201557
> tx1_tso_bytes: 1915233462303
> tx1_tso_inner_packets: 0
> tx1_tso_inner_bytes: 0
> tx1_csum_partial: 461736722
> tx1_csum_partial_inner: 0
> tx1_added_vlan_packets: 4638708749
> tx1_nop: 70061796
> tx1_csum_none: 4176972027
> tx1_stopped: 9248
> tx1_dropped: 0
> tx1_xmit_more: 39531959
> tx1_recover: 0
> tx1_cqes: 4599179178
> tx1_wake: 9248
> tx1_cqe_err: 0
> tx2_packets: 5795960848
> tx2_bytes: 3394876820271
> tx2_tso_packets: 322935065
> tx2_tso_bytes: 1910825901109
> tx2_tso_inner_packets: 0
> tx2_tso_inner_bytes: 0
> tx2_csum_partial: 460747092
> tx2_csum_partial_inner: 0
> tx2_added_vlan_packets: 4743705654
> tx2_nop: 72722430
> tx2_csum_none: 4282958562
> tx2_stopped: 8938
> tx2_dropped: 0
> tx2_xmit_more: 44084718
> tx2_recover: 0
> tx2_cqes: 4699623410
> tx2_wake: 8938
> tx2_cqe_err: 0
> tx3_packets: 5580215878
> tx3_bytes: 3191677257787
> tx3_tso_packets: 305771141
> tx3_tso_bytes: 1823265793476
> tx3_tso_inner_packets: 0
> tx3_tso_inner_bytes: 0
> tx3_csum_partial: 434976070
> tx3_csum_partial_inner: 0
> tx3_added_vlan_packets: 4569899956
> tx3_nop: 68184348
> tx3_csum_none: 4134923886
> tx3_stopped: 8383
> tx3_dropped: 0
> tx3_xmit_more: 41940375
> tx3_recover: 0
> tx3_cqes: 4527961924
> tx3_wake: 8383
> tx3_cqe_err: 0
> tx4_packets: 6795007068
> tx4_bytes: 3963890025270
> tx4_tso_packets: 358437617
> tx4_tso_bytes: 2154747995355
> tx4_tso_inner_packets: 0
> tx4_tso_inner_bytes: 0
> tx4_csum_partial: 504764524
> tx4_csum_partial_inner: 0
> tx4_added_vlan_packets: 5602510191
> tx4_nop: 81345604
> tx4_csum_none: 5097745667
> tx4_stopped: 10248
> tx4_dropped: 0
> tx4_xmit_more: 49068571
> tx4_recover: 0
> tx4_cqes: 5553444276
> tx4_wake: 10248
> tx4_cqe_err: 0
> tx5_packets: 6408089261
> tx5_bytes: 3676275848279
> tx5_tso_packets: 345129329
> tx5_tso_bytes: 2108447877473
> tx5_tso_inner_packets: 0
> tx5_tso_inner_bytes: 0
> tx5_csum_partial: 494705894
> tx5_csum_partial_inner: 0
> tx5_added_vlan_packets: 5235998343
> tx5_nop: 77694627
> tx5_csum_none: 4741292449
> tx5_stopped: 46
> tx5_dropped: 0
> tx5_xmit_more: 46675831
> tx5_recover: 0
> tx5_cqes: 5189323550
> tx5_wake: 46
> tx5_cqe_err: 0
> tx6_packets: 6382289663
> tx6_bytes: 3670991418150
> tx6_tso_packets: 342927826
> tx6_tso_bytes: 2075049679904
> tx6_tso_inner_packets: 0
> tx6_tso_inner_bytes: 0
> tx6_csum_partial: 490369221
> tx6_csum_partial_inner: 0
> tx6_added_vlan_packets: 5232144528
> tx6_nop: 77391246
> tx6_csum_none: 4741775307
> tx6_stopped: 10823
> tx6_dropped: 0
> tx6_xmit_more: 44487607
> tx6_recover: 0
> tx6_cqes: 5187659877
> tx6_wake: 10823
> tx6_cqe_err: 0
> tx7_packets: 6456378284
> tx7_bytes: 3758013320518
> tx7_tso_packets: 350958294
> tx7_tso_bytes: 2126833408524
> tx7_tso_inner_packets: 0
> tx7_tso_inner_bytes: 0
> tx7_csum_partial: 501804109
> tx7_csum_partial_inner: 0
> tx7_added_vlan_packets: 5275635204
> tx7_nop: 79010883
> tx7_csum_none: 4773831096
> tx7_stopped: 14684
> tx7_dropped: 0
> tx7_xmit_more: 44447469
> tx7_recover: 0
> tx7_cqes: 5231191770
> tx7_wake: 14684
> tx7_cqe_err: 0
> tx8_packets: 6401799768
> tx8_bytes: 3681210808766
> tx8_tso_packets: 342878228
> tx8_tso_bytes: 2089688012191
> tx8_tso_inner_packets: 0
> tx8_tso_inner_bytes: 0
> tx8_csum_partial: 494865145
> tx8_csum_partial_inner: 0
> tx8_added_vlan_packets: 5242288908
> tx8_nop: 77250910
> tx8_csum_none: 4747423763
> tx8_stopped: 2
> tx8_dropped: 0
> tx8_xmit_more: 44191737
> tx8_recover: 0
> tx8_cqes: 5198098454
> tx8_wake: 2
> tx8_cqe_err: 0
> tx9_packets: 6632882888
> tx9_bytes: 3820110338309
> tx9_tso_packets: 354189056
> tx9_tso_bytes: 2187883597128
> tx9_tso_inner_packets: 0
> tx9_tso_inner_bytes: 0
> tx9_csum_partial: 511108218
> tx9_csum_partial_inner: 0
> tx9_added_vlan_packets: 5413836353
> tx9_nop: 80560668
> tx9_csum_none: 4902728135
> tx9_stopped: 9091
> tx9_dropped: 0
> tx9_xmit_more: 54501293
> tx9_recover: 0
> tx9_cqes: 5359337638
> tx9_wake: 9091
> tx9_cqe_err: 0
> tx10_packets: 6421786406
> tx10_bytes: 3692798413429
> tx10_tso_packets: 346878943
> tx10_tso_bytes: 2111921062110
> tx10_tso_inner_packets: 0
> tx10_tso_inner_bytes: 0
> tx10_csum_partial: 494356645
> tx10_csum_partial_inner: 0
> tx10_added_vlan_packets: 5248274374
> tx10_nop: 77922624
> tx10_csum_none: 4753917730
> tx10_stopped: 9617
> tx10_dropped: 0
> tx10_xmit_more: 44473939
> tx10_recover: 0
> tx10_cqes: 5203802547
> tx10_wake: 9617
> tx10_cqe_err: 0
> tx11_packets: 6406750938
> tx11_bytes: 3660343565126
> tx11_tso_packets: 355917271
> tx11_tso_bytes: 2130812246956
> tx11_tso_inner_packets: 0
> tx11_tso_inner_bytes: 0
> tx11_csum_partial: 500336369
> tx11_csum_partial_inner: 0
> tx11_added_vlan_packets: 5228267547
> tx11_nop: 78906315
> tx11_csum_none: 4727931178
> tx11_stopped: 9607
> tx11_dropped: 0
> tx11_xmit_more: 40041492
> tx11_recover: 0
> tx11_cqes: 5188228290
> tx11_wake: 9607
> tx11_cqe_err: 0
> tx12_packets: 6422347846
> tx12_bytes: 3718772753227
> tx12_tso_packets: 355397223
> tx12_tso_bytes: 2162614059758
> tx12_tso_inner_packets: 0
> tx12_tso_inner_bytes: 0
> tx12_csum_partial: 511437844
> tx12_csum_partial_inner: 0
> tx12_added_vlan_packets: 5221373746
> tx12_nop: 78866779
> tx12_csum_none: 4709935902
> tx12_stopped: 10280
> tx12_dropped: 0
> tx12_xmit_more: 42189399
> tx12_recover: 0
> tx12_cqes: 5179187154
> tx12_wake: 10280
> tx12_cqe_err: 0
> tx13_packets: 6429383816
> tx13_bytes: 3725679445046
> tx13_tso_packets: 360934759
> tx13_tso_bytes: 2148016411436
> tx13_tso_inner_packets: 0
> tx13_tso_inner_bytes: 0
> tx13_csum_partial: 505245849
> tx13_csum_partial_inner: 0
> tx13_added_vlan_packets: 5240267441
> tx13_nop: 80295637
> tx13_csum_none: 4735021592
> tx13_stopped: 84
> tx13_dropped: 0
> tx13_xmit_more: 43118045
> tx13_recover: 0
> tx13_cqes: 5197150348
> tx13_wake: 84
> tx13_cqe_err: 0
> tx14_packets: 6375279148
> tx14_bytes: 3624267203336
> tx14_tso_packets: 344388148
> tx14_tso_bytes: 2094966273548
> tx14_tso_inner_packets: 0
> tx14_tso_inner_bytes: 0
> tx14_csum_partial: 494129407
> tx14_csum_partial_inner: 0
> tx14_added_vlan_packets: 5210749337
> tx14_nop: 77280615
> tx14_csum_none: 4716619930
> tx14_stopped: 13057
> tx14_dropped: 0
> tx14_xmit_more: 40849682
> tx14_recover: 0
> tx14_cqes: 5169902694
> tx14_wake: 13057
> tx14_cqe_err: 0
> tx15_packets: 6489306520
> tx15_bytes: 3775716194795
> tx15_tso_packets: 368716406
> tx15_tso_bytes: 2165876423354
> tx15_tso_inner_packets: 0
> tx15_tso_inner_bytes: 0
> tx15_csum_partial: 509887864
> tx15_csum_partial_inner: 0
> tx15_added_vlan_packets: 5296767390
> tx15_nop: 80803468
> tx15_csum_none: 4786879529
> tx15_stopped: 1
> tx15_dropped: 0
> tx15_xmit_more: 46979676
> tx15_recover: 0
> tx15_cqes: 5249789328
> tx15_wake: 1
> tx15_cqe_err: 0
> tx16_packets: 6559857761
> tx16_bytes: 3724080573905
> tx16_tso_packets: 350864176
> tx16_tso_bytes: 2099634006033
> tx16_tso_inner_packets: 0
> tx16_tso_inner_bytes: 0
> tx16_csum_partial: 489397232
> tx16_csum_partial_inner: 0
> tx16_added_vlan_packets: 5398869334
> tx16_nop: 79046075
> tx16_csum_none: 4909472106
> tx16_stopped: 4480
> tx16_dropped: 0
> tx16_xmit_more: 47273286
> tx16_recover: 0
> tx16_cqes: 5351598315
> tx16_wake: 4480
> tx16_cqe_err: 0
> tx17_packets: 6358711533
> tx17_bytes: 3650180865573
> tx17_tso_packets: 350723136
> tx17_tso_bytes: 2109426587128
> tx17_tso_inner_packets: 0
> tx17_tso_inner_bytes: 0
> tx17_csum_partial: 494719487
> tx17_csum_partial_inner: 0
> tx17_added_vlan_packets: 5190068796
> tx17_nop: 77285612
> tx17_csum_none: 4695349309
> tx17_stopped: 10443
> tx17_dropped: 0
> tx17_xmit_more: 45582108
> tx17_recover: 0
> tx17_cqes: 5144489363
> tx17_wake: 10443
> tx17_cqe_err: 0
> tx18_packets: 6655328437
> tx18_bytes: 3801768461807
> tx18_tso_packets: 356516373
> tx18_tso_bytes: 2164829247550
> tx18_tso_inner_packets: 0
> tx18_tso_inner_bytes: 0
> tx18_csum_partial: 500508446
> tx18_csum_partial_inner: 0
> tx18_added_vlan_packets: 5454166840
> tx18_nop: 80423007
> tx18_csum_none: 4953658394
> tx18_stopped: 14760
> tx18_dropped: 0
> tx18_xmit_more: 50837465
> tx18_recover: 0
> tx18_cqes: 5403332553
> tx18_wake: 14760
> tx18_cqe_err: 0
> tx19_packets: 6408680611
> tx19_bytes: 3644119934372
> tx19_tso_packets: 350727530
> tx19_tso_bytes: 2089896715365
> tx19_tso_inner_packets: 0
> tx19_tso_inner_bytes: 0
> tx19_csum_partial: 486536490
> tx19_csum_partial_inner: 0
> tx19_added_vlan_packets: 5255839020
> tx19_nop: 78525198
> tx19_csum_none: 4769302530
> tx19_stopped: 8614
> tx19_dropped: 0
> tx19_xmit_more: 43605232
> tx19_recover: 0
> tx19_cqes: 5212236833
> tx19_wake: 8614
> tx19_cqe_err: 0
> tx20_packets: 5609275141
> tx20_bytes: 3187279031581
> tx20_tso_packets: 298609303
> tx20_tso_bytes: 1794382229379
> tx20_tso_inner_packets: 0
> tx20_tso_inner_bytes: 0
> tx20_csum_partial: 430691178
> tx20_csum_partial_inner: 0
> tx20_added_vlan_packets: 4616844286
> tx20_nop: 67450040
> tx20_csum_none: 4186153108
> tx20_stopped: 9099
> tx20_dropped: 0
> tx20_xmit_more: 42040991
> tx20_recover: 0
> tx20_cqes: 4574805846
> tx20_wake: 9099
> tx20_cqe_err: 0
> tx21_packets: 5641621183
> tx21_bytes: 3279282331124
> tx21_tso_packets: 311297057
> tx21_tso_bytes: 1875735401012
> tx21_tso_inner_packets: 0
> tx21_tso_inner_bytes: 0
> tx21_csum_partial: 444333894
> tx21_csum_partial_inner: 0
> tx21_added_vlan_packets: 4603527701
> tx21_nop: 68857983
> tx21_csum_none: 4159193807
> tx21_stopped: 10082
> tx21_dropped: 0
> tx21_xmit_more: 43988081
> tx21_recover: 0
> tx21_cqes: 4559542410
> tx21_wake: 10082
> tx21_cqe_err: 0
> tx22_packets: 5822168288
> tx22_bytes: 3452026726862
> tx22_tso_packets: 308230791
> tx22_tso_bytes: 1859686450671
> tx22_tso_inner_packets: 0
> tx22_tso_inner_bytes: 0
> tx22_csum_partial: 442751518
> tx22_csum_partial_inner: 0
> tx22_added_vlan_packets: 4792100335
> tx22_nop: 70631706
> tx22_csum_none: 4349348817
> tx22_stopped: 9355
> tx22_dropped: 0
> tx22_xmit_more: 45165994
> tx22_recover: 0
> tx22_cqes: 4746936601
> tx22_wake: 9355
> tx22_cqe_err: 0
> tx23_packets: 5664896066
> tx23_bytes: 3207724186946
> tx23_tso_packets: 300418757
> tx23_tso_bytes: 1794180478679
> tx23_tso_inner_packets: 0
> tx23_tso_inner_bytes: 0
> tx23_csum_partial: 429898848
> tx23_csum_partial_inner: 0
> tx23_added_vlan_packets: 4674317320
> tx23_nop: 67899896
> tx23_csum_none: 4244418472
> tx23_stopped: 11684
> tx23_dropped: 0
> tx23_xmit_more: 43351132
> tx23_recover: 0
> tx23_cqes: 4630969028
> tx23_wake: 11684
> tx23_cqe_err: 0
> tx24_packets: 5663326601
> tx24_bytes: 3250127095110
> tx24_tso_packets: 301327422
> tx24_tso_bytes: 1831260534157
> tx24_tso_inner_packets: 0
> tx24_tso_inner_bytes: 0
> tx24_csum_partial: 438757312
> tx24_csum_partial_inner: 0
> tx24_added_vlan_packets: 4646014986
> tx24_nop: 68431153
> tx24_csum_none: 4207257674
> tx24_stopped: 9240
> tx24_dropped: 0
> tx24_xmit_more: 47699542
> tx24_recover: 0
> tx24_cqes: 4598317913
> tx24_wake: 9240
> tx24_cqe_err: 0
> tx25_packets: 5703883962
> tx25_bytes: 3291856915695
> tx25_tso_packets: 308900318
> tx25_tso_bytes: 1855516128386
> tx25_tso_inner_packets: 0
> tx25_tso_inner_bytes: 0
> tx25_csum_partial: 444753744
> tx25_csum_partial_inner: 0
> tx25_added_vlan_packets: 4676528924
> tx25_nop: 69230967
> tx25_csum_none: 4231775180
> tx25_stopped: 1140
> tx25_dropped: 0
> tx25_xmit_more: 40819195
> tx25_recover: 0
> tx25_cqes: 4635710966
> tx25_wake: 1140
> tx25_cqe_err: 0
> tx26_packets: 5803495984
> tx26_bytes: 3413564272139
> tx26_tso_packets: 319986230
> tx26_tso_bytes: 1929042839677
> tx26_tso_inner_packets: 0
> tx26_tso_inner_bytes: 0
> tx26_csum_partial: 464771163
> tx26_csum_partial_inner: 0
> tx26_added_vlan_packets: 4734767280
> tx26_nop: 71345080
> tx26_csum_none: 4269996117
> tx26_stopped: 10972
> tx26_dropped: 0
> tx26_xmit_more: 43793424
> tx26_recover: 0
> tx26_cqes: 4690976400
> tx26_wake: 10972
> tx26_cqe_err: 0
> tx27_packets: 5960955343
> tx27_bytes: 3444156164526
> tx27_tso_packets: 325099639
> tx27_tso_bytes: 1928378678784
> tx27_tso_inner_packets: 0
> tx27_tso_inner_bytes: 0
> tx27_csum_partial: 467310289
> tx27_csum_partial_inner: 0
> tx27_added_vlan_packets: 4888651368
> tx27_nop: 73201664
> tx27_csum_none: 4421341079
> tx27_stopped: 9465
> tx27_dropped: 0
> tx27_xmit_more: 53632121
> tx27_recover: 0
> tx27_cqes: 4835021398
> tx27_wake: 9465
> tx27_cqe_err: 0
> tx28_packets: 0
> tx28_bytes: 0
> tx28_tso_packets: 0
> tx28_tso_bytes: 0
> tx28_tso_inner_packets: 0
> tx28_tso_inner_bytes: 0
> tx28_csum_partial: 0
> tx28_csum_partial_inner: 0
> tx28_added_vlan_packets: 0
> tx28_nop: 0
> tx28_csum_none: 0
> tx28_stopped: 0
> tx28_dropped: 0
> tx28_xmit_more: 0
> tx28_recover: 0
> tx28_cqes: 0
> tx28_wake: 0
> tx28_cqe_err: 0
> tx29_packets: 3
> tx29_bytes: 266
> tx29_tso_packets: 0
> tx29_tso_bytes: 0
> tx29_tso_inner_packets: 0
> tx29_tso_inner_bytes: 0
> tx29_csum_partial: 0
> tx29_csum_partial_inner: 0
> tx29_added_vlan_packets: 0
> tx29_nop: 0
> tx29_csum_none: 3
> tx29_stopped: 0
> tx29_dropped: 0
> tx29_xmit_more: 1
> tx29_recover: 0
> tx29_cqes: 2
> tx29_wake: 0
> tx29_cqe_err: 0
> tx30_packets: 0
> tx30_bytes: 0
> tx30_tso_packets: 0
> tx30_tso_bytes: 0
> tx30_tso_inner_packets: 0
> tx30_tso_inner_bytes: 0
> tx30_csum_partial: 0
> tx30_csum_partial_inner: 0
> tx30_added_vlan_packets: 0
> tx30_nop: 0
> tx30_csum_none: 0
> tx30_stopped: 0
> tx30_dropped: 0
> tx30_xmit_more: 0
> tx30_recover: 0
> tx30_cqes: 0
> tx30_wake: 0
> tx30_cqe_err: 0
> tx31_packets: 0
> tx31_bytes: 0
> tx31_tso_packets: 0
> tx31_tso_bytes: 0
> tx31_tso_inner_packets: 0
> tx31_tso_inner_bytes: 0
> tx31_csum_partial: 0
> tx31_csum_partial_inner: 0
> tx31_added_vlan_packets: 0
> tx31_nop: 0
> tx31_csum_none: 0
> tx31_stopped: 0
> tx31_dropped: 0
> tx31_xmit_more: 0
> tx31_recover: 0
> tx31_cqes: 0
> tx31_wake: 0
> tx31_cqe_err: 0
> tx32_packets: 0
> tx32_bytes: 0
> tx32_tso_packets: 0
> tx32_tso_bytes: 0
> tx32_tso_inner_packets: 0
> tx32_tso_inner_bytes: 0
> tx32_csum_partial: 0
> tx32_csum_partial_inner: 0
> tx32_added_vlan_packets: 0
> tx32_nop: 0
> tx32_csum_none: 0
> tx32_stopped: 0
> tx32_dropped: 0
> tx32_xmit_more: 0
> tx32_recover: 0
> tx32_cqes: 0
> tx32_wake: 0
> tx32_cqe_err: 0
> tx33_packets: 0
> tx33_bytes: 0
> tx33_tso_packets: 0
> tx33_tso_bytes: 0
> tx33_tso_inner_packets: 0
> tx33_tso_inner_bytes: 0
> tx33_csum_partial: 0
> tx33_csum_partial_inner: 0
> tx33_added_vlan_packets: 0
> tx33_nop: 0
> tx33_csum_none: 0
> tx33_stopped: 0
> tx33_dropped: 0
> tx33_xmit_more: 0
> tx33_recover: 0
> tx33_cqes: 0
> tx33_wake: 0
> tx33_cqe_err: 0
> tx34_packets: 0
> tx34_bytes: 0
> tx34_tso_packets: 0
> tx34_tso_bytes: 0
> tx34_tso_inner_packets: 0
> tx34_tso_inner_bytes: 0
> tx34_csum_partial: 0
> tx34_csum_partial_inner: 0
> tx34_added_vlan_packets: 0
> tx34_nop: 0
> tx34_csum_none: 0
> tx34_stopped: 0
> tx34_dropped: 0
> tx34_xmit_more: 0
> tx34_recover: 0
> tx34_cqes: 0
> tx34_wake: 0
> tx34_cqe_err: 0
> tx35_packets: 0
> tx35_bytes: 0
> tx35_tso_packets: 0
> tx35_tso_bytes: 0
> tx35_tso_inner_packets: 0
> tx35_tso_inner_bytes: 0
> tx35_csum_partial: 0
> tx35_csum_partial_inner: 0
> tx35_added_vlan_packets: 0
> tx35_nop: 0
> tx35_csum_none: 0
> tx35_stopped: 0
> tx35_dropped: 0
> tx35_xmit_more: 0
> tx35_recover: 0
> tx35_cqes: 0
> tx35_wake: 0
> tx35_cqe_err: 0
> tx36_packets: 0
> tx36_bytes: 0
> tx36_tso_packets: 0
> tx36_tso_bytes: 0
> tx36_tso_inner_packets: 0
> tx36_tso_inner_bytes: 0
> tx36_csum_partial: 0
> tx36_csum_partial_inner: 0
> tx36_added_vlan_packets: 0
> tx36_nop: 0
> tx36_csum_none: 0
> tx36_stopped: 0
> tx36_dropped: 0
> tx36_xmit_more: 0
> tx36_recover: 0
> tx36_cqes: 0
> tx36_wake: 0
> tx36_cqe_err: 0
> tx37_packets: 0
> tx37_bytes: 0
> tx37_tso_packets: 0
> tx37_tso_bytes: 0
> tx37_tso_inner_packets: 0
> tx37_tso_inner_bytes: 0
> tx37_csum_partial: 0
> tx37_csum_partial_inner: 0
> tx37_added_vlan_packets: 0
> tx37_nop: 0
> tx37_csum_none: 0
> tx37_stopped: 0
> tx37_dropped: 0
> tx37_xmit_more: 0
> tx37_recover: 0
> tx37_cqes: 0
> tx37_wake: 0
> tx37_cqe_err: 0
> tx38_packets: 0
> tx38_bytes: 0
> tx38_tso_packets: 0
> tx38_tso_bytes: 0
> tx38_tso_inner_packets: 0
> tx38_tso_inner_bytes: 0
> tx38_csum_partial: 0
> tx38_csum_partial_inner: 0
> tx38_added_vlan_packets: 0
> tx38_nop: 0
> tx38_csum_none: 0
> tx38_stopped: 0
> tx38_dropped: 0
> tx38_xmit_more: 0
> tx38_recover: 0
> tx38_cqes: 0
> tx38_wake: 0
> tx38_cqe_err: 0
> tx39_packets: 0
> tx39_bytes: 0
> tx39_tso_packets: 0
> tx39_tso_bytes: 0
> tx39_tso_inner_packets: 0
> tx39_tso_inner_bytes: 0
> tx39_csum_partial: 0
> tx39_csum_partial_inner: 0
> tx39_added_vlan_packets: 0
> tx39_nop: 0
> tx39_csum_none: 0
> tx39_stopped: 0
> tx39_dropped: 0
> tx39_xmit_more: 0
> tx39_recover: 0
> tx39_cqes: 0
> tx39_wake: 0
> tx39_cqe_err: 0
> tx40_packets: 0
> tx40_bytes: 0
> tx40_tso_packets: 0
> tx40_tso_bytes: 0
> tx40_tso_inner_packets: 0
> tx40_tso_inner_bytes: 0
> tx40_csum_partial: 0
> tx40_csum_partial_inner: 0
> tx40_added_vlan_packets: 0
> tx40_nop: 0
> tx40_csum_none: 0
> tx40_stopped: 0
> tx40_dropped: 0
> tx40_xmit_more: 0
> tx40_recover: 0
> tx40_cqes: 0
> tx40_wake: 0
> tx40_cqe_err: 0
> tx41_packets: 0
> tx41_bytes: 0
> tx41_tso_packets: 0
> tx41_tso_bytes: 0
> tx41_tso_inner_packets: 0
> tx41_tso_inner_bytes: 0
> tx41_csum_partial: 0
> tx41_csum_partial_inner: 0
> tx41_added_vlan_packets: 0
> tx41_nop: 0
> tx41_csum_none: 0
> tx41_stopped: 0
> tx41_dropped: 0
> tx41_xmit_more: 0
> tx41_recover: 0
> tx41_cqes: 0
> tx41_wake: 0
> tx41_cqe_err: 0
> tx42_packets: 0
> tx42_bytes: 0
> tx42_tso_packets: 0
> tx42_tso_bytes: 0
> tx42_tso_inner_packets: 0
> tx42_tso_inner_bytes: 0
> tx42_csum_partial: 0
> tx42_csum_partial_inner: 0
> tx42_added_vlan_packets: 0
> tx42_nop: 0
> tx42_csum_none: 0
> tx42_stopped: 0
> tx42_dropped: 0
> tx42_xmit_more: 0
> tx42_recover: 0
> tx42_cqes: 0
> tx42_wake: 0
> tx42_cqe_err: 0
> tx43_packets: 0
> tx43_bytes: 0
> tx43_tso_packets: 0
> tx43_tso_bytes: 0
> tx43_tso_inner_packets: 0
> tx43_tso_inner_bytes: 0
> tx43_csum_partial: 0
> tx43_csum_partial_inner: 0
> tx43_added_vlan_packets: 0
> tx43_nop: 0
> tx43_csum_none: 0
> tx43_stopped: 0
> tx43_dropped: 0
> tx43_xmit_more: 0
> tx43_recover: 0
> tx43_cqes: 0
> tx43_wake: 0
> tx43_cqe_err: 0
> tx44_packets: 0
> tx44_bytes: 0
> tx44_tso_packets: 0
> tx44_tso_bytes: 0
> tx44_tso_inner_packets: 0
> tx44_tso_inner_bytes: 0
> tx44_csum_partial: 0
> tx44_csum_partial_inner: 0
> tx44_added_vlan_packets: 0
> tx44_nop: 0
> tx44_csum_none: 0
> tx44_stopped: 0
> tx44_dropped: 0
> tx44_xmit_more: 0
> tx44_recover: 0
> tx44_cqes: 0
> tx44_wake: 0
> tx44_cqe_err: 0
> tx45_packets: 0
> tx45_bytes: 0
> tx45_tso_packets: 0
> tx45_tso_bytes: 0
> tx45_tso_inner_packets: 0
> tx45_tso_inner_bytes: 0
> tx45_csum_partial: 0
> tx45_csum_partial_inner: 0
> tx45_added_vlan_packets: 0
> tx45_nop: 0
> tx45_csum_none: 0
> tx45_stopped: 0
> tx45_dropped: 0
> tx45_xmit_more: 0
> tx45_recover: 0
> tx45_cqes: 0
> tx45_wake: 0
> tx45_cqe_err: 0
> tx46_packets: 0
> tx46_bytes: 0
> tx46_tso_packets: 0
> tx46_tso_bytes: 0
> tx46_tso_inner_packets: 0
> tx46_tso_inner_bytes: 0
> tx46_csum_partial: 0
> tx46_csum_partial_inner: 0
> tx46_added_vlan_packets: 0
> tx46_nop: 0
> tx46_csum_none: 0
> tx46_stopped: 0
> tx46_dropped: 0
> tx46_xmit_more: 0
> tx46_recover: 0
> tx46_cqes: 0
> tx46_wake: 0
> tx46_cqe_err: 0
> tx47_packets: 0
> tx47_bytes: 0
> tx47_tso_packets: 0
> tx47_tso_bytes: 0
> tx47_tso_inner_packets: 0
> tx47_tso_inner_bytes: 0
> tx47_csum_partial: 0
> tx47_csum_partial_inner: 0
> tx47_added_vlan_packets: 0
> tx47_nop: 0
> tx47_csum_none: 0
> tx47_stopped: 0
> tx47_dropped: 0
> tx47_xmit_more: 0
> tx47_recover: 0
> tx47_cqes: 0
> tx47_wake: 0
> tx47_cqe_err: 0
> tx48_packets: 0
> tx48_bytes: 0
> tx48_tso_packets: 0
> tx48_tso_bytes: 0
> tx48_tso_inner_packets: 0
> tx48_tso_inner_bytes: 0
> tx48_csum_partial: 0
> tx48_csum_partial_inner: 0
> tx48_added_vlan_packets: 0
> tx48_nop: 0
> tx48_csum_none: 0
> tx48_stopped: 0
> tx48_dropped: 0
> tx48_xmit_more: 0
> tx48_recover: 0
> tx48_cqes: 0
> tx48_wake: 0
> tx48_cqe_err: 0
> tx49_packets: 0
> tx49_bytes: 0
> tx49_tso_packets: 0
> tx49_tso_bytes: 0
> tx49_tso_inner_packets: 0
> tx49_tso_inner_bytes: 0
> tx49_csum_partial: 0
> tx49_csum_partial_inner: 0
> tx49_added_vlan_packets: 0
> tx49_nop: 0
> tx49_csum_none: 0
> tx49_stopped: 0
> tx49_dropped: 0
> tx49_xmit_more: 0
> tx49_recover: 0
> tx49_cqes: 0
> tx49_wake: 0
> tx49_cqe_err: 0
> tx50_packets: 0
> tx50_bytes: 0
> tx50_tso_packets: 0
> tx50_tso_bytes: 0
> tx50_tso_inner_packets: 0
> tx50_tso_inner_bytes: 0
> tx50_csum_partial: 0
> tx50_csum_partial_inner: 0
> tx50_added_vlan_packets: 0
> tx50_nop: 0
> tx50_csum_none: 0
> tx50_stopped: 0
> tx50_dropped: 0
> tx50_xmit_more: 0
> tx50_recover: 0
> tx50_cqes: 0
> tx50_wake: 0
> tx50_cqe_err: 0
> tx51_packets: 0
> tx51_bytes: 0
> tx51_tso_packets: 0
> tx51_tso_bytes: 0
> tx51_tso_inner_packets: 0
> tx51_tso_inner_bytes: 0
> tx51_csum_partial: 0
> tx51_csum_partial_inner: 0
> tx51_added_vlan_packets: 0
> tx51_nop: 0
> tx51_csum_none: 0
> tx51_stopped: 0
> tx51_dropped: 0
> tx51_xmit_more: 0
> tx51_recover: 0
> tx51_cqes: 0
> tx51_wake: 0
> tx51_cqe_err: 0
> tx52_packets: 0
> tx52_bytes: 0
> tx52_tso_packets: 0
> tx52_tso_bytes: 0
> tx52_tso_inner_packets: 0
> tx52_tso_inner_bytes: 0
> tx52_csum_partial: 0
> tx52_csum_partial_inner: 0
> tx52_added_vlan_packets: 0
> tx52_nop: 0
> tx52_csum_none: 0
> tx52_stopped: 0
> tx52_dropped: 0
> tx52_xmit_more: 0
> tx52_recover: 0
> tx52_cqes: 0
> tx52_wake: 0
> tx52_cqe_err: 0
> tx53_packets: 0
> tx53_bytes: 0
> tx53_tso_packets: 0
> tx53_tso_bytes: 0
> tx53_tso_inner_packets: 0
> tx53_tso_inner_bytes: 0
> tx53_csum_partial: 0
> tx53_csum_partial_inner: 0
> tx53_added_vlan_packets: 0
> tx53_nop: 0
> tx53_csum_none: 0
> tx53_stopped: 0
> tx53_dropped: 0
> tx53_xmit_more: 0
> tx53_recover: 0
> tx53_cqes: 0
> tx53_wake: 0
> tx53_cqe_err: 0
> tx54_packets: 0
> tx54_bytes: 0
> tx54_tso_packets: 0
> tx54_tso_bytes: 0
> tx54_tso_inner_packets: 0
> tx54_tso_inner_bytes: 0
> tx54_csum_partial: 0
> tx54_csum_partial_inner: 0
> tx54_added_vlan_packets: 0
> tx54_nop: 0
> tx54_csum_none: 0
> tx54_stopped: 0
> tx54_dropped: 0
> tx54_xmit_more: 0
> tx54_recover: 0
> tx54_cqes: 0
> tx54_wake: 0
> tx54_cqe_err: 0
> tx55_packets: 0
> tx55_bytes: 0
> tx55_tso_packets: 0
> tx55_tso_bytes: 0
> tx55_tso_inner_packets: 0
> tx55_tso_inner_bytes: 0
> tx55_csum_partial: 0
> tx55_csum_partial_inner: 0
> tx55_added_vlan_packets: 0
> tx55_nop: 0
> tx55_csum_none: 0
> tx55_stopped: 0
> tx55_dropped: 0
> tx55_xmit_more: 0
> tx55_recover: 0
> tx55_cqes: 0
> tx55_wake: 0
> tx55_cqe_err: 0
> tx0_xdp_xmit: 0
> tx0_xdp_full: 0
> tx0_xdp_err: 0
> tx0_xdp_cqes: 0
> tx1_xdp_xmit: 0
> tx1_xdp_full: 0
> tx1_xdp_err: 0
> tx1_xdp_cqes: 0
> tx2_xdp_xmit: 0
> tx2_xdp_full: 0
> tx2_xdp_err: 0
> tx2_xdp_cqes: 0
> tx3_xdp_xmit: 0
> tx3_xdp_full: 0
> tx3_xdp_err: 0
> tx3_xdp_cqes: 0
> tx4_xdp_xmit: 0
> tx4_xdp_full: 0
> tx4_xdp_err: 0
> tx4_xdp_cqes: 0
> tx5_xdp_xmit: 0
> tx5_xdp_full: 0
> tx5_xdp_err: 0
> tx5_xdp_cqes: 0
> tx6_xdp_xmit: 0
> tx6_xdp_full: 0
> tx6_xdp_err: 0
> tx6_xdp_cqes: 0
> tx7_xdp_xmit: 0
> tx7_xdp_full: 0
> tx7_xdp_err: 0
> tx7_xdp_cqes: 0
> tx8_xdp_xmit: 0
> tx8_xdp_full: 0
> tx8_xdp_err: 0
> tx8_xdp_cqes: 0
> tx9_xdp_xmit: 0
> tx9_xdp_full: 0
> tx9_xdp_err: 0
> tx9_xdp_cqes: 0
> tx10_xdp_xmit: 0
> tx10_xdp_full: 0
> tx10_xdp_err: 0
> tx10_xdp_cqes: 0
> tx11_xdp_xmit: 0
> tx11_xdp_full: 0
> tx11_xdp_err: 0
> tx11_xdp_cqes: 0
> tx12_xdp_xmit: 0
> tx12_xdp_full: 0
> tx12_xdp_err: 0
> tx12_xdp_cqes: 0
> tx13_xdp_xmit: 0
> tx13_xdp_full: 0
> tx13_xdp_err: 0
> tx13_xdp_cqes: 0
> tx14_xdp_xmit: 0
> tx14_xdp_full: 0
> tx14_xdp_err: 0
> tx14_xdp_cqes: 0
> tx15_xdp_xmit: 0
> tx15_xdp_full: 0
> tx15_xdp_err: 0
> tx15_xdp_cqes: 0
> tx16_xdp_xmit: 0
> tx16_xdp_full: 0
> tx16_xdp_err: 0
> tx16_xdp_cqes: 0
> tx17_xdp_xmit: 0
> tx17_xdp_full: 0
> tx17_xdp_err: 0
> tx17_xdp_cqes: 0
> tx18_xdp_xmit: 0
> tx18_xdp_full: 0
> tx18_xdp_err: 0
> tx18_xdp_cqes: 0
> tx19_xdp_xmit: 0
> tx19_xdp_full: 0
> tx19_xdp_err: 0
> tx19_xdp_cqes: 0
> tx20_xdp_xmit: 0
> tx20_xdp_full: 0
> tx20_xdp_err: 0
> tx20_xdp_cqes: 0
> tx21_xdp_xmit: 0
> tx21_xdp_full: 0
> tx21_xdp_err: 0
> tx21_xdp_cqes: 0
> tx22_xdp_xmit: 0
> tx22_xdp_full: 0
> tx22_xdp_err: 0
> tx22_xdp_cqes: 0
> tx23_xdp_xmit: 0
> tx23_xdp_full: 0
> tx23_xdp_err: 0
> tx23_xdp_cqes: 0
> tx24_xdp_xmit: 0
> tx24_xdp_full: 0
> tx24_xdp_err: 0
> tx24_xdp_cqes: 0
> tx25_xdp_xmit: 0
> tx25_xdp_full: 0
> tx25_xdp_err: 0
> tx25_xdp_cqes: 0
> tx26_xdp_xmit: 0
> tx26_xdp_full: 0
> tx26_xdp_err: 0
> tx26_xdp_cqes: 0
> tx27_xdp_xmit: 0
> tx27_xdp_full: 0
> tx27_xdp_err: 0
> tx27_xdp_cqes: 0
> tx28_xdp_xmit: 0
> tx28_xdp_full: 0
> tx28_xdp_err: 0
> tx28_xdp_cqes: 0
> tx29_xdp_xmit: 0
> tx29_xdp_full: 0
> tx29_xdp_err: 0
> tx29_xdp_cqes: 0
> tx30_xdp_xmit: 0
> tx30_xdp_full: 0
> tx30_xdp_err: 0
> tx30_xdp_cqes: 0
> tx31_xdp_xmit: 0
> tx31_xdp_full: 0
> tx31_xdp_err: 0
> tx31_xdp_cqes: 0
> tx32_xdp_xmit: 0
> tx32_xdp_full: 0
> tx32_xdp_err: 0
> tx32_xdp_cqes: 0
> tx33_xdp_xmit: 0
> tx33_xdp_full: 0
> tx33_xdp_err: 0
> tx33_xdp_cqes: 0
> tx34_xdp_xmit: 0
> tx34_xdp_full: 0
> tx34_xdp_err: 0
> tx34_xdp_cqes: 0
> tx35_xdp_xmit: 0
> tx35_xdp_full: 0
> tx35_xdp_err: 0
> tx35_xdp_cqes: 0
> tx36_xdp_xmit: 0
> tx36_xdp_full: 0
> tx36_xdp_err: 0
> tx36_xdp_cqes: 0
> tx37_xdp_xmit: 0
> tx37_xdp_full: 0
> tx37_xdp_err: 0
> tx37_xdp_cqes: 0
> tx38_xdp_xmit: 0
> tx38_xdp_full: 0
> tx38_xdp_err: 0
> tx38_xdp_cqes: 0
> tx39_xdp_xmit: 0
> tx39_xdp_full: 0
> tx39_xdp_err: 0
> tx39_xdp_cqes: 0
> tx40_xdp_xmit: 0
> tx40_xdp_full: 0
> tx40_xdp_err: 0
> tx40_xdp_cqes: 0
> tx41_xdp_xmit: 0
> tx41_xdp_full: 0
> tx41_xdp_err: 0
> tx41_xdp_cqes: 0
> tx42_xdp_xmit: 0
> tx42_xdp_full: 0
> tx42_xdp_err: 0
> tx42_xdp_cqes: 0
> tx43_xdp_xmit: 0
> tx43_xdp_full: 0
> tx43_xdp_err: 0
> tx43_xdp_cqes: 0
> tx44_xdp_xmit: 0
> tx44_xdp_full: 0
> tx44_xdp_err: 0
> tx44_xdp_cqes: 0
> tx45_xdp_xmit: 0
> tx45_xdp_full: 0
> tx45_xdp_err: 0
> tx45_xdp_cqes: 0
> tx46_xdp_xmit: 0
> tx46_xdp_full: 0
> tx46_xdp_err: 0
> tx46_xdp_cqes: 0
> tx47_xdp_xmit: 0
> tx47_xdp_full: 0
> tx47_xdp_err: 0
> tx47_xdp_cqes: 0
> tx48_xdp_xmit: 0
> tx48_xdp_full: 0
> tx48_xdp_err: 0
> tx48_xdp_cqes: 0
> tx49_xdp_xmit: 0
> tx49_xdp_full: 0
> tx49_xdp_err: 0
> tx49_xdp_cqes: 0
> tx50_xdp_xmit: 0
> tx50_xdp_full: 0
> tx50_xdp_err: 0
> tx50_xdp_cqes: 0
> tx51_xdp_xmit: 0
> tx51_xdp_full: 0
> tx51_xdp_err: 0
> tx51_xdp_cqes: 0
> tx52_xdp_xmit: 0
> tx52_xdp_full: 0
> tx52_xdp_err: 0
> tx52_xdp_cqes: 0
> tx53_xdp_xmit: 0
> tx53_xdp_full: 0
> tx53_xdp_err: 0
> tx53_xdp_cqes: 0
> tx54_xdp_xmit: 0
> tx54_xdp_full: 0
> tx54_xdp_err: 0
> tx54_xdp_cqes: 0
> tx55_xdp_xmit: 0
> tx55_xdp_full: 0
> tx55_xdp_err: 0
> tx55_xdp_cqes: 0
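Per-queue counters like the dump above are easier to reason about when summed. A minimal self-contained sketch (the real input would come from `ethtool -S enp175s0f0`; a tiny inline sample stands in for it here):

```shell
# Sum the txN_packets counters to see total TX and how many queues carry
# traffic. The inline sample replaces real `ethtool -S` output so this runs
# anywhere; the regex deliberately skips txN_tso_packets and txN_xdp_* lines.
sample='tx0_packets: 100
tx1_packets: 300
tx2_packets: 0'
echo "$sample" | awk -F': ' '/tx[0-9]+_packets:/ { sum += $2; if ($2 > 0) busy++; n++ }
  END { printf "queues=%d busy=%d total=%d\n", n, busy, sum }'
# -> queues=3 busy=2 total=400
```

Against the stats above, the same filter would show 28 busy queues (tx0-tx27) and a long tail of idle ones, matching the 28 RSS-pinned cores.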
>
>
> mpstat -P ALL 1 10
> Average:  CPU    %usr  %nice   %sys %iowait   %irq  %soft %steal %guest %gnice  %idle
> Average:  all    0.04   0.00   6.94   0.02   0.00  32.00   0.00   0.00   0.00  61.00
> Average:    0    0.00   0.00   1.20   0.00   0.00   0.00   0.00   0.00   0.00  98.80
> Average:    1    0.00   0.00   2.30   0.00   0.00   0.00   0.00   0.00   0.00  97.70
> Average:    2    0.10   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00  99.90
> Average:    3    0.10   0.00   1.50   0.00   0.00   0.00   0.00   0.00   0.00  98.40
> Average:    4    0.50   0.00   2.50   0.00   0.00   0.00   0.00   0.00   0.00  97.00
> Average:    5    0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00 100.00
> Average:    6    0.90   0.00  10.20   0.00   0.00   0.00   0.00   0.00   0.00  88.90
> Average:    7    0.00   0.00   0.00   1.40   0.00   0.00   0.00   0.00   0.00  98.60
> Average:    8    0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00 100.00
> Average:    9    0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00 100.00
> Average:   10    0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00 100.00
> Average:   11    0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00 100.00
> Average:   12    0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00 100.00
> Average:   13    0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00 100.00
> Average:   14    0.00   0.00  12.99   0.00   0.00  62.64   0.00   0.00   0.00  24.38
> Average:   15    0.00   0.00  12.70   0.00   0.00  63.40   0.00   0.00   0.00  23.90
> Average:   16    0.00   0.00  11.20   0.00   0.00  66.40   0.00   0.00   0.00  22.40
> Average:   17    0.00   0.00  16.60   0.00   0.00  52.10   0.00   0.00   0.00  31.30
> Average:   18    0.00   0.00  13.90   0.00   0.00  61.20   0.00   0.00   0.00  24.90
> Average:   19    0.00   0.00   9.99   0.00   0.00  70.33   0.00   0.00   0.00  19.68
> Average:   20    0.00   0.00   9.00   0.00   0.00  73.00   0.00   0.00   0.00  18.00
> Average:   21    0.00   0.00   8.70   0.00   0.00  73.90   0.00   0.00   0.00  17.40
> Average:   22    0.00   0.00  15.42   0.00   0.00  58.56   0.00   0.00   0.00  26.03
> Average:   23    0.00   0.00  10.81   0.00   0.00  71.67   0.00   0.00   0.00  17.52
> Average:   24    0.00   0.00  10.00   0.00   0.00  71.80   0.00   0.00   0.00  18.20
> Average:   25    0.00   0.00  11.19   0.00   0.00  71.13   0.00   0.00   0.00  17.68
> Average:   26    0.00   0.00  11.00   0.00   0.00  70.80   0.00   0.00   0.00  18.20
> Average:   27    0.00   0.00  10.01   0.00   0.00  69.57   0.00   0.00   0.00  20.42
The NUMA-local cores are not at 100% utilization; each one still has
around 20% idle.
> Average:   28    0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00 100.00
> Average:   29    0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00 100.00
> Average:   30    0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00 100.00
> Average:   31    0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00 100.00
> Average:   32    0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00 100.00
> Average:   33    0.00   0.00   3.90   0.00   0.00   0.00   0.00   0.00   0.00  96.10
> Average:   34    0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00 100.00
> Average:   35    0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00 100.00
> Average:   36    0.10   0.00   0.20   0.00   0.00   0.00   0.00   0.00   0.00  99.70
> Average:   37    0.20   0.00   0.30   0.00   0.00   0.00   0.00   0.00   0.00  99.50
> Average:   38    0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00 100.00
> Average:   39    0.00   0.00   2.60   0.00   0.00   0.00   0.00   0.00   0.00  97.40
> Average:   40    0.00   0.00   0.90   0.00   0.00   0.00   0.00   0.00   0.00  99.10
> Average:   41    0.10   0.00   0.50   0.00   0.00   0.00   0.00   0.00   0.00  99.40
> Average:   42    0.00   0.00   9.91   0.00   0.00  70.67   0.00   0.00   0.00  19.42
> Average:   43    0.00   0.00  15.90   0.00   0.00  57.50   0.00   0.00   0.00  26.60
> Average:   44    0.00   0.00  12.20   0.00   0.00  66.20   0.00   0.00   0.00  21.60
> Average:   45    0.00   0.00  12.00   0.00   0.00  67.50   0.00   0.00   0.00  20.50
> Average:   46    0.00   0.00  12.90   0.00   0.00  65.50   0.00   0.00   0.00  21.60
> Average:   47    0.00   0.00  14.59   0.00   0.00  60.84   0.00   0.00   0.00  24.58
> Average:   48    0.00   0.00  13.59   0.00   0.00  61.74   0.00   0.00   0.00  24.68
> Average:   49    0.00   0.00  18.36   0.00   0.00  53.29   0.00   0.00   0.00  28.34
> Average:   50    0.00   0.00  15.32   0.00   0.00  58.86   0.00   0.00   0.00  25.83
> Average:   51    0.00   0.00  17.60   0.00   0.00  55.20   0.00   0.00   0.00  27.20
> Average:   52    0.00   0.00  15.92   0.00   0.00  56.06   0.00   0.00   0.00  28.03
> Average:   53    0.00   0.00  13.00   0.00   0.00  62.30   0.00   0.00   0.00  24.70
> Average:   54    0.00   0.00  13.20   0.00   0.00  61.50   0.00   0.00   0.00  25.30
> Average:   55    0.00   0.00  14.59   0.00   0.00  58.64   0.00   0.00   0.00  26.77
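The %soft column in the mpstat output above corresponds to NET_RX softirq work on the RSS-pinned cores. The raw per-CPU counters behind it can be read directly (a diagnostic sketch; each numeric column is one CPU, so the rows are very wide and are truncated here):

```shell
# Show which CPUs the network softirqs are landing on. The interesting rows
# of /proc/softirqs are NET_RX (receive/forwarding work) and NET_TX.
grep -E 'NET_RX|NET_TX' /proc/softirqs | cut -c1-100
```

On this box the large NET_RX counts should line up with CPUs 14-27 and 42-55, the cores mpstat shows at 50-70% %soft.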
>
>
> ethtool -k enp175s0f0
> Features for enp175s0f0:
> rx-checksumming: on
> tx-checksumming: on
> tx-checksum-ipv4: on
> tx-checksum-ip-generic: off [fixed]
> tx-checksum-ipv6: on
> tx-checksum-fcoe-crc: off [fixed]
> tx-checksum-sctp: off [fixed]
> scatter-gather: on
> tx-scatter-gather: on
> tx-scatter-gather-fraglist: off [fixed]
> tcp-segmentation-offload: on
> tx-tcp-segmentation: on
> tx-tcp-ecn-segmentation: off [fixed]
> tx-tcp-mangleid-segmentation: off
> tx-tcp6-segmentation: on
> udp-fragmentation-offload: off
> generic-segmentation-offload: on
> generic-receive-offload: on
> large-receive-offload: off [fixed]
> rx-vlan-offload: on
> tx-vlan-offload: on
> ntuple-filters: off
> receive-hashing: on
> highdma: on [fixed]
> rx-vlan-filter: on
> vlan-challenged: off [fixed]
> tx-lockless: off [fixed]
> netns-local: off [fixed]
> tx-gso-robust: off [fixed]
> tx-fcoe-segmentation: off [fixed]
> tx-gre-segmentation: on
> tx-gre-csum-segmentation: on
> tx-ipxip4-segmentation: off [fixed]
> tx-ipxip6-segmentation: off [fixed]
> tx-udp_tnl-segmentation: on
> tx-udp_tnl-csum-segmentation: on
> tx-gso-partial: on
> tx-sctp-segmentation: off [fixed]
> tx-esp-segmentation: off [fixed]
> tx-udp-segmentation: on
> fcoe-mtu: off [fixed]
> tx-nocache-copy: off
> loopback: off [fixed]
> rx-fcs: off
> rx-all: off
> tx-vlan-stag-hw-insert: on
> rx-vlan-stag-hw-parse: off [fixed]
> rx-vlan-stag-filter: on [fixed]
> l2-fwd-offload: off [fixed]
> hw-tc-offload: off
> esp-hw-offload: off [fixed]
> esp-tx-csum-hw-offload: off [fixed]
> rx-udp_tunnel-port-offload: on
> tls-hw-tx-offload: off [fixed]
> tls-hw-rx-offload: off [fixed]
> rx-gro-hw: off [fixed]
> tls-hw-record: off [fixed]
>
> ethtool -c enp175s0f0
> Coalesce parameters for enp175s0f0:
> Adaptive RX: off TX: on
> stats-block-usecs: 0
> sample-interval: 0
> pkt-rate-low: 0
> pkt-rate-high: 0
> dmac: 32703
>
> rx-usecs: 256
> rx-frames: 128
> rx-usecs-irq: 0
> rx-frames-irq: 0
>
> tx-usecs: 8
> tx-frames: 128
> tx-usecs-irq: 0
> tx-frames-irq: 0
>
> rx-usecs-low: 0
> rx-frame-low: 0
> tx-usecs-low: 0
> tx-frame-low: 0
>
> rx-usecs-high: 0
> rx-frame-high: 0
> tx-usecs-high: 0
> tx-frame-high: 0
>
> ethtool -g enp175s0f0
> Ring parameters for enp175s0f0:
> Pre-set maximums:
> RX: 8192
> RX Mini: 0
> RX Jumbo: 0
> TX: 8192
> Current hardware settings:
> RX: 4096
> RX Mini: 0
> RX Jumbo: 0
> TX: 4096
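Two knobs in the dumps above sit below their limits: adaptive RX coalescing is off (rx-usecs is a static 256) and the rings run at 4096 of a reported maximum of 8192. A hedged tuning sketch - these are standard ethtool options, but the right values depend on the workload, so measure drops before and after rather than assuming bigger is better:

```shell
# Grow RX/TX rings toward the reported pre-set maximums; deeper rings
# absorb bursts (fewer rx drops) at the cost of memory and cache footprint.
ethtool -G enp175s0f0 rx 8192 tx 8192

# Let the driver adapt RX interrupt coalescing to the packet rate
# instead of the fixed rx-usecs=256 shown above.
ethtool -C enp175s0f0 adaptive-rx on
```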
>
>
>
>
>
>
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-01 9:22 ` Jesper Dangaard Brouer
@ 2018-11-01 10:34 ` Paweł Staszewski
2018-11-01 15:27 ` Aaron Lu
1 sibling, 0 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-01 10:34 UTC (permalink / raw)
To: Jesper Dangaard Brouer
Cc: Eric Dumazet, netdev, Tariq Toukan, Ilias Apalodimas,
Yoel Caspersen, Mel Gorman, Aaron Lu
On 01.11.2018 at 10:22, Jesper Dangaard Brouer wrote:
> On Wed, 31 Oct 2018 23:20:01 +0100
> Paweł Staszewski <pstaszewski@itcare.pl> wrote:
>
>> On 31.10.2018 at 23:09, Eric Dumazet wrote:
>>> On 10/31/2018 02:57 PM, Paweł Staszewski wrote:
>>>> Hi
>>>>
>>>> So maybe someone will be interested in how the Linux kernel handles
>>>> normal traffic (not pktgen :) )
> Pawel, is this live production traffic?
Yes, I moved the server from the test lab to production to check (risking
a little - but this traffic can be switched to a backup router :) )
>
> I know Yoel (Cc) is very interested in the real-life limitations of
> Linux as a router, especially with VLANs like you use.
So yes, this is real-life traffic, real users - normal mixed internet
traffic being forwarded (including DDoSes :) )
>
>
>>>> Server HW configuration:
>>>>
>>>> CPU : Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
>>>>
>>>> NIC's: 2x 100G Mellanox ConnectX-4 (connected to x16 pcie 8GT)
>>>>
>>>>
>>>> Server software:
>>>>
>>>> FRR - as routing daemon
>>>>
>>>> enp175s0f0 (100G) - 16 vlans from upstreams (28 RSS binded to local numa node)
>>>>
>>>> enp175s0f1 (100G) - 343 vlans to clients (28 RSS binded to local numa node)
>>>>
>>>>
>>>> Maximum traffic that server can handle:
>>>>
>>>> Bandwidth
>>>>
>>>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>>>> input: /proc/net/dev type: rate
>>>> \ iface Rx Tx Total
>>>> ==============================================================================
>>>> enp175s0f1: 28.51 Gb/s 37.24 Gb/s 65.74 Gb/s
>>>> enp175s0f0: 38.07 Gb/s 28.44 Gb/s 66.51 Gb/s
>>>> ------------------------------------------------------------------------------
>>>> total: 66.58 Gb/s 65.67 Gb/s 132.25 Gb/s
>>>>
> Actually rather impressive number for a Linux router.
>
>>>> Packets per second:
>>>>
>>>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>>>> input: /proc/net/dev type: rate
>>>> - iface Rx Tx Total
>>>> ==============================================================================
>>>> enp175s0f1: 5248589.00 P/s 3486617.75 P/s 8735207.00 P/s
>>>> enp175s0f0: 3557944.25 P/s 5232516.00 P/s 8790460.00 P/s
>>>> ------------------------------------------------------------------------------
>>>> total: 8806533.00 P/s 8719134.00 P/s 17525668.00 P/s
>>>>
> Average packet size:
> (28.51*10^9/8)/5248589 = 678.99 bytes
> (38.07*10^9/8)/3557944 = 1337.49 bytes
>
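Jesper's average-packet-size numbers above can be reproduced with a quick back-of-envelope script (Python here purely as an illustration; the inputs are the bwm-ng readings quoted earlier):

```python
# Reproduce the average-packet-size arithmetic from the bwm-ng readings:
# average bytes per packet = (bits/s divided by 8) / packets per second.

def avg_packet_size(gbit_per_s: float, pkts_per_s: float) -> float:
    """Average packet size in bytes for a given bandwidth and packet rate."""
    return (gbit_per_s * 1e9 / 8) / pkts_per_s

rx_f1 = avg_packet_size(28.51, 5248589)   # enp175s0f1 RX, ~679 bytes
rx_f0 = avg_packet_size(38.07, 3557944)   # enp175s0f0 RX, ~1337 bytes

print(f"enp175s0f1 RX: {rx_f1:.2f} bytes/pkt")
print(f"enp175s0f0 RX: {rx_f0:.2f} bytes/pkt")
```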
>
>>>> After reaching that limits nics on the upstream side (more RX
>>>> traffic) start to drop packets
>>>>
>>>>
>>>> I just dont understand that server can't handle more bandwidth
>>>> (~40Gbit/s is limit where all cpu's are 100% util) - where pps on
>>>> RX side are increasing.
>>>>
>>>> Was thinking that maybee reached some pcie x16 limit - but x16 8GT
>>>> is 126Gbit - and also when testing with pktgen i can reach more bw
>>>> and pps (like 4x more comparing to normal internet traffic)
>>>>
>>>> And wondering if there is something that can be improved here.
>>>>
>>>>
>>>>
>>>> Some more informations / counters / stats and perf top below:
>>>>
>>>> Perf top flame graph:
>>>>
>>>> https://uploadfiles.io/7zo6u
> Thanks a lot for the flame graph!
>
>>>> System configuration(long):
>>>>
>>>>
>>>> cat /sys/devices/system/node/node1/cpulist
>>>> 14-27,42-55
>>>> cat /sys/class/net/enp175s0f0/device/numa_node
>>>> 1
>>>> cat /sys/class/net/enp175s0f1/device/numa_node
>>>> 1
>>>>
> Hint grep can give you nicer output that cat:
>
> $ grep -H . /sys/class/net/*/device/numa_node
Sure:
grep -H . /sys/class/net/*/device/numa_node
/sys/class/net/enp175s0f0/device/numa_node:1
/sys/class/net/enp175s0f1/device/numa_node:1
>
>>>>
>>>>
>>>>
>>>> ip -s -d link ls dev enp175s0f0
>>>> 6: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 8192
>>>> link/ether 0c:c4:7a:d8:5d:1c brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64 numtxqueues 448 numrxqueues 56 gso_max_size 65536 gso_max_segs 65535
>>>> RX: bytes packets errors dropped overrun mcast
>>>> 184142375840858 141347715974 2 2806325 0 85050528
>>>> TX: bytes packets errors dropped carrier collsns
>>>> 99270697277430 172227994003 0 0 0 0
>>>>
>>>> ip -s -d link ls dev enp175s0f1
>>>> 7: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 8192
>>>> link/ether 0c:c4:7a:d8:5d:1d brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64 numtxqueues 448 numrxqueues 56 gso_max_size 65536 gso_max_segs 65535
>>>> RX: bytes packets errors dropped overrun mcast
>>>> 99686284170801 173507590134 61 669685 0 100304421
>>>> TX: bytes packets errors dropped carrier collsns
>>>> 184435107970545 142383178304 0 0 0 0
>>>>
> You have increased the default (1000) qlen to 8192, why?
I was checking whether a higher txqueuelen would change anything,
but there was no change between 1000, 4096 and 8192.
And yes, I do not use any traffic shaping there like hfsc/htb etc.
- just the default qdisc mq 0: root with pfifo_fast children
tc qdisc show dev enp175s0f1
qdisc mq 0: root
qdisc pfifo_fast 0: parent :38 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: parent :37 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: parent :36 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
...
...
And vlans are noqueue
tc -s -d qdisc show dev vlan1521
qdisc noqueue 0: root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
What is weird is that the qdisc counters are not increasing, even though
there is traffic in/out on those vlans:
ip -s -d link ls dev vlan1521
87: vlan1521@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 0c:c4:7a:d8:5d:1d brd ff:ff:ff:ff:ff:ff promiscuity 0
vlan protocol 802.1Q id 1521 <REORDER_HDR> addrgenmode eui64
numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
RX: bytes packets errors dropped overrun mcast
562964218394 1639370761 0 0 0 0
TX: bytes packets errors dropped carrier collsns
1417648713052 618271312 0 0 0 0
>
> What default qdisc do you run?... looking through your very detail main
> email report (I do love the details you give!). You run
> pfifo_fast_dequeue, thus this 8192 qlen is actually having effect.
>
> I would like to know if and how much qdisc_dequeue bulking is happening
> in this setup? Can you run:
>
> perf-stat-hist -m 8192 -P2 qdisc:qdisc_dequeue packets
>
> The perf-stat-hist is from Brendan Gregg's git-tree:
> https://github.com/brendangregg/perf-tools
> https://github.com/brendangregg/perf-tools/blob/master/misc/perf-stat-hist
>
./perf-stat-hist -m 8192 -P2 qdisc:qdisc_dequeue packets
Tracing qdisc:qdisc_dequeue, power-of-2, max 8192, until Ctrl-C...
^C
Range : Count Distribution
-> -1 : 0 | |
0 -> 0 : 43768349 |######################################|
1 -> 1 : 43895249 |######################################|
2 -> 3 : 352 |# |
4 -> 7 : 228 |# |
8 -> 15 : 135 |# |
16 -> 31 : 73 |# |
32 -> 63 : 7 |# |
64 -> 127 : 0 | |
128 -> 255 : 0 | |
256 -> 511 : 0 | |
512 -> 1023 : 0 | |
1024 -> 2047 : 0 | |
2048 -> 4095 : 0 | |
4096 -> 8191 : 0 | |
8192 -> : 0 | |
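Reading the histogram above (assuming each sample counts the packets returned by one qdisc dequeue, which is what this perf-stat-hist invocation traces), dequeue bulking is essentially absent in this workload:

```python
# Fraction of non-empty qdisc dequeues that returned more than one
# packet, from the perf-stat-hist output above.
empty  = 43768349                    # bucket 0 -> 0: nothing dequeued
single = 43895249                    # bucket 1 -> 1: exactly one packet
bulk   = 352 + 228 + 135 + 73 + 7    # buckets 2..63: bulked dequeues

nonempty   = single + bulk
bulk_ratio = bulk / nonempty
print(f"bulked dequeues: {bulk_ratio:.6%}")  # well under 0.01%
```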
>>>> ./softnet.sh
>>>> cpu total dropped squeezed collision rps flow_limit
>>>>
>>>>
>>>>
>>>>
>>>> PerfTop: 108490 irqs/sec kernel:99.6% exact: 0.0% [4000Hz cycles], (all, 56 CPUs)
>>>> ------------------------------------------------------------------------------------------
>>>>
>>>> 26.78% [kernel] [k] queued_spin_lock_slowpath
>>> This is highly suspect.
>>>
> I agree! -- 26.78% spend in queued_spin_lock_slowpath. Hint if you see
> _raw_spin_lock then it is likely not a contended lock, but if you see
> queued_spin_lock_slowpath in a perf-report your workload is likely in
> trouble.
>
>
>>> A call graph (perf record -a -g sleep 1; perf report --stdio)
>>> would tell what is going on.
>> perf report:
>> https://ufile.io/rqp0h
>>
> Thanks for the output (my 30" screen is just large enough to see the
> full output). Together with the flame-graph, it is clear that this
> lock happens in the page allocator code.
>
> Section copied out:
>
> mlx5e_poll_tx_cq
> |
> --16.34%--napi_consume_skb
> |
> |--12.65%--__free_pages_ok
> | |
> | --11.86%--free_one_page
> | |
> | |--10.10%--queued_spin_lock_slowpath
> | |
> | --0.65%--_raw_spin_lock
> |
> |--1.55%--page_frag_free
> |
> --1.44%--skb_release_data
>
>
> Let me explain what (I think) happens. The mlx5 driver RX-page recycle
> mechanism is not effective in this workload, and pages have to go
> through the page allocator. The lock contention happens during mlx5
> DMA TX completion cycle. And the page allocator cannot keep up at
> these speeds.
>
> One solution is extend page allocator with a bulk free API. (This have
> been on my TODO list for a long time, but I don't have a
> micro-benchmark that trick the driver page-recycle to fail). It should
> fit nicely, as I can see that kmem_cache_free_bulk() does get
> activated (bulk freeing SKBs), which means that DMA TX completion do
> have a bulk of packets.
>
> We can (and should) also improve the page recycle scheme in the driver.
> After LPC, I have a project with Tariq and Ilias (Cc'ed) to improve the
> page_pool, and we will (attempt) to generalize this, for both high-end
> mlx5 and more low-end ARM64-boards (macchiatobin and espressobin).
>
> The MM-people is in parallel working to improve the performance of
> order-0 page returns. Thus, the explicit page bulk free API might
> actually become less important. I actually think (Cc.) Aaron have a
> patchset he would like you to test, which removes the (zone->)lock
> you hit in free_one_page().
>
Ok - Thank You Jesper
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-01 3:37 ` David Ahern
@ 2018-11-01 10:55 ` Jesper Dangaard Brouer
2018-11-01 13:52 ` Paweł Staszewski
0 siblings, 1 reply; 77+ messages in thread
From: Jesper Dangaard Brouer @ 2018-11-01 10:55 UTC (permalink / raw)
To: David Ahern; +Cc: brouer, Paweł Staszewski, netdev, Yoel Caspersen
On Wed, 31 Oct 2018 21:37:16 -0600 David Ahern <dsahern@gmail.com> wrote:
> This is mainly a forwarding use case? Seems so based on the perf report.
> I suspect forwarding with XDP would show pretty good improvement.
Yes, significant performance improvements.
Notice David's talk: "Leveraging Kernel Tables with XDP"
http://vger.kernel.org/lpc-networking2018.html#session-1
It looks like you are doing "pure" IP-routing, without any iptables
conntrack stuff (judging from your perf report data). That will
actually be a really good use-case for accelerating this with XDP.
I want you to understand the philosophy behind how David and I want
people to leverage XDP. Think of XDP as a software offload layer for
the kernel network stack: set up and use the Linux kernel network stack,
but accelerate parts of it with XDP, e.g. the route FIB lookup.
Sample code avail here:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/samples/bpf/xdp_fwd_kern.c
(I do warn that we just found a bug/crash in setup+teardown for the
mlx5 driver you are using, which we/mlx _will_ fix soon)
> You need the vlan changes I have queued up though.
I know Yoel will be very interested in those changes too! I've
convinced Yoel to write an XDP program for his Border Network Gateway
(BNG) production system[1], and he is a heavy VLAN user. The plan
is to open source this when he has something working.
[1] https://www.version2.dk/blog/software-router-del-5-linux-bng-1086060
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-01 9:50 ` Saeed Mahameed
@ 2018-11-01 11:09 ` Paweł Staszewski
2018-11-01 16:49 ` Paweł Staszewski
2018-11-01 20:37 ` Saeed Mahameed
0 siblings, 2 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-01 11:09 UTC (permalink / raw)
To: Saeed Mahameed, netdev
W dniu 01.11.2018 o 10:50, Saeed Mahameed pisze:
> On Wed, 2018-10-31 at 22:57 +0100, Paweł Staszewski wrote:
>> Hi
>>
>> So maybee someone will be interested how linux kernel handles normal
>> traffic (not pktgen :) )
>>
>>
>> Server HW configuration:
>>
>> CPU : Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
>>
>> NIC's: 2x 100G Mellanox ConnectX-4 (connected to x16 pcie 8GT)
>>
>>
>> Server software:
>>
>> FRR - as routing daemon
>>
>> enp175s0f0 (100G) - 16 vlans from upstreams (28 RSS binded to local
>> numa
>> node)
>>
>> enp175s0f1 (100G) - 343 vlans to clients (28 RSS binded to local numa
>> node)
>>
>>
>> Maximum traffic that server can handle:
>>
>> Bandwidth
>>
>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>> input: /proc/net/dev type: rate
>> \ iface Rx Tx Total
>> ==============================================================================
>> enp175s0f1: 28.51 Gb/s 37.24 Gb/s 65.74 Gb/s
>> enp175s0f0: 38.07 Gb/s 28.44 Gb/s 66.51 Gb/s
>> ------------------------------------------------------------------------------
>> total: 66.58 Gb/s 65.67 Gb/s 132.25 Gb/s
>>
>>
>> Packets per second:
>>
>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>> input: /proc/net/dev type: rate
>> - iface Rx Tx Total
>> ==============================================================================
>> enp175s0f1: 5248589.00 P/s 3486617.75 P/s 8735207.00 P/s
>> enp175s0f0: 3557944.25 P/s 5232516.00 P/s 8790460.00 P/s
>> ------------------------------------------------------------------------------
>> total: 8806533.00 P/s 8719134.00 P/s 17525668.00 P/s
>>
>>
>> After reaching that limits nics on the upstream side (more RX
>> traffic)
>> start to drop packets
>>
>>
>> I just dont understand that server can't handle more bandwidth
>> (~40Gbit/s is limit where all cpu's are 100% util) - where pps on RX
>> side are increasing.
>>
> Where do you see 40 Gb/s ? you showed that both ports on the same NIC (
> same pcie link) are doing 66.58 Gb/s (RX) + 65.67 Gb/s (TX) = 132.25
> Gb/s which aligns with your pcie link limit, what am i missing ?
hmm yes, that was my concern also - I can't find anywhere whether that
bandwidth figure is uni- or bidirectional. If 126Gbit for x16 8GT is
unidirectional, then bidirectional would be 126/2 ~63Gbit - which would fit
the total bw on both ports.
This could maybe also explain why cpu load rises rapidly from
120Gbit/s total to 132Gbit. (The bwm-ng counters come from /proc/net/dev,
so there can be some error in the readings when offloading (gro/gso/tso)
is enabled on the nics.)
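For reference on the 126 Gbit figure being debated here: PCIe 3.0 signals at 8 GT/s per lane with 128b/130b line coding, and the raw x16 number works out as below. (Note that PCIe lanes are full duplex, i.e. this raw figure is per direction, and effective throughput is further reduced by TLP/DLLP protocol overhead.)

```python
# Raw PCIe 3.0 x16 link bandwidth, before protocol overhead.
LANES    = 16
GT_PER_S = 8e9          # transfers per second per lane (PCIe 3.0)
ENCODING = 128 / 130    # 128b/130b line-coding efficiency

raw_gbit = LANES * GT_PER_S * ENCODING / 1e9
print(f"{raw_gbit:.1f} Gbit/s per direction")  # ~126.0 Gbit/s
```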
>
>> Was thinking that maybee reached some pcie x16 limit - but x16 8GT
>> is
>> 126Gbit - and also when testing with pktgen i can reach more bw and
>> pps
>> (like 4x more comparing to normal internet traffic)
>>
> Are you forwarding when using pktgen as well or you just testing the RX
> side pps ?
Yes, pktgen was tested on a single port, RX side only.
I can also check forwarding, to eliminate the pcie limits.
>
>> And wondering if there is something that can be improved here.
>>
>>
>>
>> Some more informations / counters / stats and perf top below:
>>
>> Perf top flame graph:
>>
>> https://uploadfiles.io/7zo6u
>>
>>
>>
>> System configuration(long):
>>
>>
>> cat /sys/devices/system/node/node1/cpulist
>> 14-27,42-55
>> cat /sys/class/net/enp175s0f0/device/numa_node
>> 1
>> cat /sys/class/net/enp175s0f1/device/numa_node
>> 1
>>
>>
>>
>>
>>
>> ip -s -d link ls dev enp175s0f0
>> 6: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 8192
>> link/ether 0c:c4:7a:d8:5d:1c brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64 numtxqueues 448 numrxqueues 56 gso_max_size 65536 gso_max_segs 65535
>> RX: bytes packets errors dropped overrun mcast
>> 184142375840858 141347715974 2 2806325 0 85050528
>> TX: bytes packets errors dropped carrier collsns
>> 99270697277430 172227994003 0 0 0 0
>>
>> ip -s -d link ls dev enp175s0f1
>> 7: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 8192
>> link/ether 0c:c4:7a:d8:5d:1d brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64 numtxqueues 448 numrxqueues 56 gso_max_size 65536 gso_max_segs 65535
>> RX: bytes packets errors dropped overrun mcast
>> 99686284170801 173507590134 61 669685 0 100304421
>> TX: bytes packets errors dropped carrier collsns
>> 184435107970545 142383178304 0 0 0 0
>>
>>
>> ./softnet.sh
>> cpu total dropped squeezed collision rps flow_limit
>> 0 3961392822 0 1221478 0 0 0
>> 1 3701952251 0 1258234 0 0 0
>> 2 3879522030 0 1584282 0 0 0
>> 3 3731349789 0 1529029 0 0 0
>> 4 1323956701 0 2176371 0 0 0
>> 5 420528963 0 1880146 0 0 0
>> 6 348720322 0 1830142 0 0 0
>> 7 372736328 0 1820891 0 0 0
>> 8 567888751 0 1414763 0 0 0
>> 9 476075775 0 1868150 0 0 0
>> 10 468946725 0 1841428 0 0 0
>> 11 676591958 0 1900160 0 0 0
>> 12 346803472 0 1834600 0 0 0
>> 13 457960872 0 1874529 0 0 0
>> 14 1990279665 0 4699000 0 0 0
>> 15 1211873601 0 4541281 0 0 0
>> 16 1123871928 0 4544712 0 0 0
>> 17 1014957263 0 4152355 0 0 0
>> 18 2603779724 0 4593869 0 0 0
>> 19 2181924054 0 4930618 0 0 0
>> 20 2273502182 0 4894627 0 0 0
>> 21 2232030947 0 4860048 0 0 0
>> 22 2203555394 0 4603830 0 0 0
>> 23 2194756800 0 4921294 0 0 0
>> 24 2347158294 0 4818354 0 0 0
>> 25 2291097883 0 4744469 0 0 0
>> 26 2206945011 0 4836483 0 0 0
>> 27 2318530217 0 4917617 0 0 0
>> 28 512797543 0 1895200 0 0 0
>> 29 597279474 0 1532134 0 0 0
>> 30 475317503 0 1451523 0 0 0
>> 31 499172796 0 1901207 0 0 0
>> 32 493874745 0 1915382 0 0 0
>> 33 296056288 0 1865535 0 0 0
>> 34 3905097041 0 1580822 0 0 0
>> 35 3905112345 0 1536105 0 0 0
>> 36 3900358950 0 1166319 0 0 0
>> 37 3940978093 0 1600219 0 0 0
>> 38 3878632215 0 1180389 0 0 0
>> 39 3814804736 0 1584925 0 0 0
>> 40 4152934337 0 1663660 0 0 0
>> 41 3855273904 0 1552219 0 0 0
>> 42 2319538182 0 4884480 0 0 0
>> 43 2448606991 0 4387456 0 0 0
>> 44 1436136753 0 4485073 0 0 0
>> 45 1200500141 0 4537284 0 0 0
>> 46 1307799923 0 4534156 0 0 0
>> 47 1586575293 0 4272997 0 0 0
>> 48 3852574 0 4162653 0 0 0
>> 49 391449390 0 3935202 0 0 0
>> 50 791388200 0 4290738 0 0 0
>> 51 127107573 0 3907750 0 0 0
>> 52 115622148 0 4012843 0 0 0
>> 53 71098871 0 4200625 0 0 0
>> 54 305121466 0 4365614 0 0 0
>> 55 10914257 0 4369426 0 0 0
>>
>>
>>
>>
>> PerfTop: 108490 irqs/sec kernel:99.6% exact: 0.0% [4000Hz cycles], (all, 56 CPUs)
>> ------------------------------------------------------------------------------------------
>>
>> 26.78% [kernel] [k] queued_spin_lock_slowpath
>> 9.09% [kernel] [k] mlx5e_skb_from_cqe_linear
>> 4.94% [kernel] [k] mlx5e_sq_xmit
>> 3.63% [kernel] [k] memcpy_erms
>> 3.30% [kernel] [k] fib_table_lookup
>> 3.26% [kernel] [k] build_skb
>> 2.41% [kernel] [k] mlx5e_poll_tx_cq
>> 2.11% [kernel] [k] get_page_from_freelist
>> 1.51% [kernel] [k] vlan_do_receive
>> 1.51% [kernel] [k] _raw_spin_lock
>> 1.43% [kernel] [k] __dev_queue_xmit
>> 1.41% [kernel] [k] dev_gro_receive
>> 1.34% [kernel] [k] mlx5e_poll_rx_cq
>> 1.26% [kernel] [k] tcp_gro_receive
>> 1.21% [kernel] [k] free_one_page
>> 1.13% [kernel] [k] swiotlb_map_page
>> 1.13% [kernel] [k] mlx5e_post_rx_wqes
>> 1.05% [kernel] [k] pfifo_fast_dequeue
>> 1.05% [kernel] [k] mlx5e_handle_rx_cqe
>> 1.03% [kernel] [k] ip_finish_output2
>> 1.02% [kernel] [k] ipt_do_table
>> 0.96% [kernel] [k] inet_gro_receive
>> 0.91% [kernel] [k] mlx5_eq_int
>> 0.88% [kernel] [k] __slab_free.isra.79
>> 0.86% [kernel] [k] __build_skb
>> 0.84% [kernel] [k] page_frag_free
>> 0.76% [kernel] [k] skb_release_data
>> 0.75% [kernel] [k] __netif_receive_skb_core
>> 0.75% [kernel] [k] irq_entries_start
>> 0.71% [kernel] [k] ip_route_input_rcu
>> 0.65% [kernel] [k] vlan_dev_hard_start_xmit
>> 0.56% [kernel] [k] ip_forward
>> 0.56% [kernel] [k] __memcpy
>> 0.52% [kernel] [k] kmem_cache_alloc
>> 0.52% [kernel] [k] kmem_cache_free_bulk
>> 0.49% [kernel] [k] mlx5e_page_release
>> 0.47% [kernel] [k] netif_skb_features
>> 0.47% [kernel] [k] mlx5e_build_rx_skb
>> 0.47% [kernel] [k] dev_hard_start_xmit
>> 0.43% [kernel] [k] __page_pool_put_page
>> 0.43% [kernel] [k] __netif_schedule
>> 0.43% [kernel] [k] mlx5e_xmit
>> 0.41% [kernel] [k] __qdisc_run
>> 0.41% [kernel] [k] validate_xmit_skb.isra.142
>> 0.41% [kernel] [k] swiotlb_unmap_page
>> 0.40% [kernel] [k] inet_lookup_ifaddr_rcu
>> 0.34% [kernel] [k] ip_rcv_core.isra.20.constprop.25
>> 0.34% [kernel] [k] tcp4_gro_receive
>> 0.29% [kernel] [k] _raw_spin_lock_irqsave
>> 0.29% [kernel] [k] napi_consume_skb
>> 0.29% [kernel] [k] skb_gro_receive
>> 0.29% [kernel] [k] ___slab_alloc.isra.80
>> 0.27% [kernel] [k] eth_type_trans
>> 0.26% [kernel] [k] __free_pages_ok
>> 0.26% [kernel] [k] __get_xps_queue_idx
>> 0.24% [kernel] [k] _raw_spin_trylock
>> 0.23% [kernel] [k] __local_bh_enable_ip
>> 0.22% [kernel] [k] pfifo_fast_enqueue
>> 0.21% [kernel] [k] tasklet_action_common.isra.21
>> 0.21% [kernel] [k] sch_direct_xmit
>> 0.21% [kernel] [k] skb_network_protocol
>> 0.21% [kernel] [k] kmem_cache_free
>> 0.20% [kernel] [k] netdev_pick_tx
>> 0.18% [kernel] [k] napi_gro_complete
>> 0.18% [kernel] [k] __sched_text_start
>> 0.18% [kernel] [k] mlx5e_xdp_handle
>> 0.17% [kernel] [k] ip_finish_output
>> 0.16% [kernel] [k] napi_gro_flush
>> 0.16% [kernel] [k] vlan_passthru_hard_header
>> 0.16% [kernel] [k] skb_segment
>> 0.15% [kernel] [k] __alloc_pages_nodemask
>> 0.15% [kernel] [k] mlx5e_features_check
>> 0.15% [kernel] [k] mlx5e_napi_poll
>> 0.15% [kernel] [k] napi_gro_receive
>> 0.14% [kernel] [k] fib_validate_source
>> 0.14% [kernel] [k] _raw_spin_lock_irq
>> 0.14% [kernel] [k] inet_gro_complete
>> 0.14% [kernel] [k] get_partial_node.isra.78
>> 0.13% [kernel] [k] napi_complete_done
>> 0.13% [kernel] [k] ip_rcv_finish_core.isra.17
>> 0.13% [kernel] [k] cmd_exec
>>
>>
>>
>> ethtool -S enp175s0f1
>> NIC statistics:
>> rx_packets: 173730800927
>> rx_bytes: 99827422751332
>> tx_packets: 142532009512
>> tx_bytes: 184633045911222
>> tx_tso_packets: 25989113891
>> tx_tso_bytes: 132933363384458
>> tx_tso_inner_packets: 0
>> tx_tso_inner_bytes: 0
>> tx_added_vlan_packets: 74630239613
>> tx_nop: 2029817748
>> rx_lro_packets: 0
>> rx_lro_bytes: 0
>> rx_ecn_mark: 0
>> rx_removed_vlan_packets: 173730800927
>> rx_csum_unnecessary: 0
>> rx_csum_none: 434357
>> rx_csum_complete: 173730366570
>> rx_csum_unnecessary_inner: 0
>> rx_xdp_drop: 0
>> rx_xdp_redirect: 0
>> rx_xdp_tx_xmit: 0
>> rx_xdp_tx_full: 0
>> rx_xdp_tx_err: 0
>> rx_xdp_tx_cqe: 0
>> tx_csum_none: 38260960853
>> tx_csum_partial: 36369278774
>> tx_csum_partial_inner: 0
>> tx_queue_stopped: 1
>> tx_queue_dropped: 0
>> tx_xmit_more: 748638099
>> tx_recover: 0
>> tx_cqes: 73881645031
>> tx_queue_wake: 1
>> tx_udp_seg_rem: 0
>> tx_cqe_err: 0
>> tx_xdp_xmit: 0
>> tx_xdp_full: 0
>> tx_xdp_err: 0
>> tx_xdp_cqes: 0
>> rx_wqe_err: 0
>> rx_mpwqe_filler_cqes: 0
>> rx_mpwqe_filler_strides: 0
>> rx_buff_alloc_err: 0
>> rx_cqe_compress_blks: 0
>> rx_cqe_compress_pkts: 0
> If this is a pcie bottleneck it might be useful to enable CQE
> compression (to reduce PCIe completion descriptors transactions)
> you should see the above rx_cqe_compress_pkts increasing when enabled.
>
> $ ethtool --set-priv-flags enp175s0f1 rx_cqe_compress on
> $ ethtool --show-priv-flags enp175s0f1
> Private flags for p6p1:
> rx_cqe_moder : on
> cqe_moder : off
> rx_cqe_compress : on
> ...
>
> try this on both interfaces.
Done
ethtool --show-priv-flags enp175s0f1
Private flags for enp175s0f1:
rx_cqe_moder : on
tx_cqe_moder : off
rx_cqe_compress : on
rx_striding_rq : off
rx_no_csum_complete: off
ethtool --show-priv-flags enp175s0f0
Private flags for enp175s0f0:
rx_cqe_moder : on
tx_cqe_moder : off
rx_cqe_compress : on
rx_striding_rq : off
rx_no_csum_complete: off
>
>> rx_page_reuse: 0
>> rx_cache_reuse: 14441066823
>> rx_cache_full: 51126004413
>> rx_cache_empty: 21297344082
>> rx_cache_busy: 51127247487
>> rx_cache_waive: 21298322293
>> rx_congst_umr: 0
>> rx_arfs_err: 0
>> ch_events: 24603119858
>> ch_poll: 25180949074
>> ch_arm: 24480437587
>> ch_aff_change: 75
>> ch_eq_rearm: 0
>> rx_out_of_buffer: 669685
> comparing this to rx_vport_unicast_packets, it is a very small
> percentage of dropped packets due to stalled rx cpu, so rx cpu is not a
> bottleneck, at least for the driver rx rings.
>
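Saeed's point can be quantified from the counters in this dump, comparing rx_out_of_buffer against rx_vport_unicast_packets:

```python
# Share of unicast RX packets dropped for lack of RX-ring buffers,
# from the ethtool -S counters in this dump.
out_of_buffer = 669_685            # rx_out_of_buffer
unicast_rx    = 173_731_641_945    # rx_vport_unicast_packets

drop_pct = out_of_buffer / unicast_rx * 100
print(f"{drop_pct:.6f}% of unicast RX dropped at the RX rings")
```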
>> rx_if_down_packets: 61
>> rx_vport_unicast_packets: 173731641945
>> rx_vport_unicast_bytes: 100522745036693
>> tx_vport_unicast_packets: 142531901313
>> tx_vport_unicast_bytes: 185189071776429
>> rx_vport_multicast_packets: 100360886
>> rx_vport_multicast_bytes: 6639236688
>> tx_vport_multicast_packets: 32837
>> tx_vport_multicast_bytes: 2978810
>> rx_vport_broadcast_packets: 44854
>> rx_vport_broadcast_bytes: 6313510
>> tx_vport_broadcast_packets: 72258
>> tx_vport_broadcast_bytes: 4335480
>> rx_vport_rdma_unicast_packets: 0
>> rx_vport_rdma_unicast_bytes: 0
>> tx_vport_rdma_unicast_packets: 0
>> tx_vport_rdma_unicast_bytes: 0
>> rx_vport_rdma_multicast_packets: 0
>> rx_vport_rdma_multicast_bytes: 0
>> tx_vport_rdma_multicast_packets: 0
>> tx_vport_rdma_multicast_bytes: 0
>> tx_packets_phy: 142532004669
>> rx_packets_phy: 173980375752
>> rx_crc_errors_phy: 0
>> tx_bytes_phy: 185759204762903
>> rx_bytes_phy: 101326109361379
>> tx_multicast_phy: 32837
>> tx_broadcast_phy: 72258
>> rx_multicast_phy: 100360885
>> rx_broadcast_phy: 44854
>> rx_in_range_len_errors_phy: 2
>> rx_out_of_range_len_phy: 0
>> rx_oversize_pkts_phy: 59
>> rx_symbol_err_phy: 0
>> tx_mac_control_phy: 0
>> rx_mac_control_phy: 0
>> rx_unsupported_op_phy: 0
>> rx_pause_ctrl_phy: 0
>> tx_pause_ctrl_phy: 0
>> rx_discards_phy: 148328738
>> tx_discards_phy: 0
>> tx_errors_phy: 0
>> rx_undersize_pkts_phy: 0
>> rx_fragments_phy: 0
>> rx_jabbers_phy: 0
>> rx_64_bytes_phy: 36551843112
>> rx_65_to_127_bytes_phy: 65102131735
>> rx_128_to_255_bytes_phy: 5755731137
>> rx_256_to_511_bytes_phy: 2475619839
>> rx_512_to_1023_bytes_phy: 2826971156
>> rx_1024_to_1518_bytes_phy: 42474023107
>> rx_1519_to_2047_bytes_phy: 18794051270
>> rx_2048_to_4095_bytes_phy: 0
>> rx_4096_to_8191_bytes_phy: 0
>> rx_8192_to_10239_bytes_phy: 0
>> link_down_events_phy: 0
>> rx_pcs_symbol_err_phy: 0
>> rx_corrected_bits_phy: 0
>> rx_pci_signal_integrity: 0
>> tx_pci_signal_integrity: 48
>> rx_prio0_bytes: 101316322498995
>> rx_prio0_packets: 173711151686
>> tx_prio0_bytes: 185759176566814
>> tx_prio0_packets: 142531983704
>> rx_prio1_bytes: 47062768
>> rx_prio1_packets: 228932
>> tx_prio1_bytes: 0
>> tx_prio1_packets: 0
>> rx_prio2_bytes: 12434759
>> rx_prio2_packets: 83773
>> tx_prio2_bytes: 0
>> tx_prio2_packets: 0
>> rx_prio3_bytes: 288843134
>> rx_prio3_packets: 982102
>> tx_prio3_bytes: 0
>> tx_prio3_packets: 0
>> rx_prio4_bytes: 699797236
>> rx_prio4_packets: 8109231
>> tx_prio4_bytes: 0
>> tx_prio4_packets: 0
>> rx_prio5_bytes: 1385386738
>> rx_prio5_packets: 9661187
>> tx_prio5_bytes: 0
>> tx_prio5_packets: 0
>> rx_prio6_bytes: 317092102
>> rx_prio6_packets: 1951538
>> tx_prio6_bytes: 0
>> tx_prio6_packets: 0
>> rx_prio7_bytes: 7015734695
>> rx_prio7_packets: 99847456
>> tx_prio7_bytes: 0
>> tx_prio7_packets: 0
>> module_unplug: 0
>> module_bus_stuck: 0
>> module_high_temp: 0
>> module_bad_shorted: 0
>> ch0_events: 936264703
>> ch0_poll: 963766474
>> ch0_arm: 930246079
>> ch0_aff_change: 0
>> ch0_eq_rearm: 0
>> ch1_events: 869408429
>> ch1_poll: 896099392
>> ch1_arm: 864336861
>> ch1_aff_change: 0
>> ch1_eq_rearm: 0
>> ch2_events: 843345698
>> ch2_poll: 869749522
>> ch2_arm: 838186113
>> ch2_aff_change: 2
>> ch2_eq_rearm: 0
>> ch3_events: 850261340
>> ch3_poll: 876721111
>> ch3_arm: 845295235
>> ch3_aff_change: 3
>> ch3_eq_rearm: 0
>> ch4_events: 974985780
>> ch4_poll: 997781915
>> ch4_arm: 969618250
>> ch4_aff_change: 3
>> ch4_eq_rearm: 0
>> ch5_events: 888559089
>> ch5_poll: 912783615
>> ch5_arm: 883826078
>> ch5_aff_change: 2
>> ch5_eq_rearm: 0
>> ch6_events: 873730730
>> ch6_poll: 899635752
>> ch6_arm: 868677574
>> ch6_aff_change: 4
>> ch6_eq_rearm: 0
>> ch7_events: 873478411
>> ch7_poll: 899216716
>> ch7_arm: 868693645
>> ch7_aff_change: 3
>> ch7_eq_rearm: 0
>> ch8_events: 871900967
>> ch8_poll: 898575518
>> ch8_arm: 866763693
>> ch8_aff_change: 3
>> ch8_eq_rearm: 0
>> ch9_events: 880325565
>> ch9_poll: 904983269
>> ch9_arm: 875643922
>> ch9_aff_change: 2
>> ch9_eq_rearm: 0
>> ch10_events: 889919775
>> ch10_poll: 915335809
>> ch10_arm: 885110225
>> ch10_aff_change: 4
>> ch10_eq_rearm: 0
>> ch11_events: 962709175
>> ch11_poll: 983963451
>> ch11_arm: 958117526
>> ch11_aff_change: 2
>> ch11_eq_rearm: 0
>> ch12_events: 941333837
>> ch12_poll: 964625523
>> ch12_arm: 936409706
>> ch12_aff_change: 2
>> ch12_eq_rearm: 0
>> ch13_events: 914996974
>> ch13_poll: 937441049
>> ch13_arm: 910478393
>> ch13_aff_change: 4
>> ch13_eq_rearm: 0
>> ch14_events: 888050001
>> ch14_poll: 911818008
>> ch14_arm: 883465035
>> ch14_aff_change: 4
>> ch14_eq_rearm: 0
>> ch15_events: 947547704
>> ch15_poll: 969073194
>> ch15_arm: 942686515
>> ch15_aff_change: 4
>> ch15_eq_rearm: 0
>> ch16_events: 825804904
>> ch16_poll: 840630747
>> ch16_arm: 822227488
>> ch16_aff_change: 2
>> ch16_eq_rearm: 0
>> ch17_events: 861673823
>> ch17_poll: 874754041
>> ch17_arm: 858520448
>> ch17_aff_change: 2
>> ch17_eq_rearm: 0
>> ch18_events: 879413440
>> ch18_poll: 893962529
>> ch18_arm: 875983204
>> ch18_aff_change: 4
>> ch18_eq_rearm: 0
>> ch19_events: 896073709
>> ch19_poll: 909216857
>> ch19_arm: 893022121
>> ch19_aff_change: 4
>> ch19_eq_rearm: 0
>> ch20_events: 865188535
>> ch20_poll: 880692345
>> ch20_arm: 861440265
>> ch20_aff_change: 3
>> ch20_eq_rearm: 0
>> ch21_events: 862709303
>> ch21_poll: 878104242
>> ch21_arm: 859041767
>> ch21_aff_change: 2
>> ch21_eq_rearm: 0
>> ch22_events: 887720551
>> ch22_poll: 904122074
>> ch22_arm: 883983794
>> ch22_aff_change: 2
>> ch22_eq_rearm: 0
>> ch23_events: 813355027
>> ch23_poll: 828074467
>> ch23_arm: 809912398
>> ch23_aff_change: 4
>> ch23_eq_rearm: 0
>> ch24_events: 822366675
>> ch24_poll: 839917937
>> ch24_arm: 818422754
>> ch24_aff_change: 2
>> ch24_eq_rearm: 0
>> ch25_events: 826642292
>> ch25_poll: 842630121
>> ch25_arm: 822642618
>> ch25_aff_change: 2
>> ch25_eq_rearm: 0
>> ch26_events: 826392584
>> ch26_poll: 843406973
>> ch26_arm: 822455000
>> ch26_aff_change: 3
>> ch26_eq_rearm: 0
>> ch27_events: 828960899
>> ch27_poll: 843866518
>> ch27_arm: 825230937
>> ch27_aff_change: 3
>> ch27_eq_rearm: 0
>> ch28_events: 7
>> ch28_poll: 7
>> ch28_arm: 7
>> ch28_aff_change: 0
>> ch28_eq_rearm: 0
>> ch29_events: 4
>> ch29_poll: 4
>> ch29_arm: 4
>> ch29_aff_change: 0
>> ch29_eq_rearm: 0
>> ch30_events: 4
>> ch30_poll: 4
>> ch30_arm: 4
>> ch30_aff_change: 0
>> ch30_eq_rearm: 0
>> ch31_events: 4
>> ch31_poll: 4
>> ch31_arm: 4
>> ch31_aff_change: 0
>> ch31_eq_rearm: 0
>> ch32_events: 4
>> ch32_poll: 4
>> ch32_arm: 4
>> ch32_aff_change: 0
>> ch32_eq_rearm: 0
>> ch33_events: 4
>> ch33_poll: 4
>> ch33_arm: 4
>> ch33_aff_change: 0
>> ch33_eq_rearm: 0
>> ch34_events: 4
>> ch34_poll: 4
>> ch34_arm: 4
>> ch34_aff_change: 0
>> ch34_eq_rearm: 0
>> ch35_events: 4
>> ch35_poll: 4
>> ch35_arm: 4
>> ch35_aff_change: 0
>> ch35_eq_rearm: 0
>> ch36_events: 4
>> ch36_poll: 4
>> ch36_arm: 4
>> ch36_aff_change: 0
>> ch36_eq_rearm: 0
>> ch37_events: 4
>> ch37_poll: 4
>> ch37_arm: 4
>> ch37_aff_change: 0
>> ch37_eq_rearm: 0
>> ch38_events: 4
>> ch38_poll: 4
>> ch38_arm: 4
>> ch38_aff_change: 0
>> ch38_eq_rearm: 0
>> ch39_events: 4
>> ch39_poll: 4
>> ch39_arm: 4
>> ch39_aff_change: 0
>> ch39_eq_rearm: 0
>> ch40_events: 4
>> ch40_poll: 4
>> ch40_arm: 4
>> ch40_aff_change: 0
>> ch40_eq_rearm: 0
>> ch41_events: 4
>> ch41_poll: 4
>> ch41_arm: 4
>> ch41_aff_change: 0
>> ch41_eq_rearm: 0
>> ch42_events: 4
>> ch42_poll: 4
>> ch42_arm: 4
>> ch42_aff_change: 0
>> ch42_eq_rearm: 0
>> ch43_events: 4
>> ch43_poll: 4
>> ch43_arm: 4
>> ch43_aff_change: 0
>> ch43_eq_rearm: 0
>> ch44_events: 4
>> ch44_poll: 4
>> ch44_arm: 4
>> ch44_aff_change: 0
>> ch44_eq_rearm: 0
>> ch45_events: 4
>> ch45_poll: 4
>> ch45_arm: 4
>> ch45_aff_change: 0
>> ch45_eq_rearm: 0
>> ch46_events: 4
>> ch46_poll: 4
>> ch46_arm: 4
>> ch46_aff_change: 0
>> ch46_eq_rearm: 0
>> ch47_events: 4
>> ch47_poll: 4
>> ch47_arm: 4
>> ch47_aff_change: 0
>> ch47_eq_rearm: 0
>> ch48_events: 4
>> ch48_poll: 4
>> ch48_arm: 4
>> ch48_aff_change: 0
>> ch48_eq_rearm: 0
>> ch49_events: 4
>> ch49_poll: 4
>> ch49_arm: 4
>> ch49_aff_change: 0
>> ch49_eq_rearm: 0
>> ch50_events: 4
>> ch50_poll: 4
>> ch50_arm: 4
>> ch50_aff_change: 0
>> ch50_eq_rearm: 0
>> ch51_events: 4
>> ch51_poll: 4
>> ch51_arm: 4
>> ch51_aff_change: 0
>> ch51_eq_rearm: 0
>> ch52_events: 4
>> ch52_poll: 4
>> ch52_arm: 4
>> ch52_aff_change: 0
>> ch52_eq_rearm: 0
>> ch53_events: 4
>> ch53_poll: 4
>> ch53_arm: 4
>> ch53_aff_change: 0
>> ch53_eq_rearm: 0
>> ch54_events: 4
>> ch54_poll: 4
>> ch54_arm: 4
>> ch54_aff_change: 0
>> ch54_eq_rearm: 0
>> ch55_events: 4
>> ch55_poll: 4
>> ch55_arm: 4
>> ch55_aff_change: 0
>> ch55_eq_rearm: 0
>> rx0_packets: 7284057433
>> rx0_bytes: 4330611281319
>> rx0_csum_complete: 7283623076
>> rx0_csum_unnecessary: 0
>> rx0_csum_unnecessary_inner: 0
>> rx0_csum_none: 434357
>> rx0_xdp_drop: 0
>> rx0_xdp_redirect: 0
>> rx0_lro_packets: 0
>> rx0_lro_bytes: 0
>> rx0_ecn_mark: 0
>> rx0_removed_vlan_packets: 7284057433
>> rx0_wqe_err: 0
>> rx0_mpwqe_filler_cqes: 0
>> rx0_mpwqe_filler_strides: 0
>> rx0_buff_alloc_err: 0
>> rx0_cqe_compress_blks: 0
>> rx0_cqe_compress_pkts: 0
>> rx0_page_reuse: 0
>> rx0_cache_reuse: 1989731589
>> rx0_cache_full: 28213297
>> rx0_cache_empty: 1624089822
>> rx0_cache_busy: 28213961
>> rx0_cache_waive: 1624083610
>> rx0_congst_umr: 0
>> rx0_arfs_err: 0
>> rx0_xdp_tx_xmit: 0
>> rx0_xdp_tx_full: 0
>> rx0_xdp_tx_err: 0
>> rx0_xdp_tx_cqes: 0
>> rx1_packets: 6691319211
>> rx1_bytes: 3799580210608
>> rx1_csum_complete: 6691319211
>> rx1_csum_unnecessary: 0
>> rx1_csum_unnecessary_inner: 0
>> rx1_csum_none: 0
>> rx1_xdp_drop: 0
>> rx1_xdp_redirect: 0
>> rx1_lro_packets: 0
>> rx1_lro_bytes: 0
>> rx1_ecn_mark: 0
>> rx1_removed_vlan_packets: 6691319211
>> rx1_wqe_err: 0
>> rx1_mpwqe_filler_cqes: 0
>> rx1_mpwqe_filler_strides: 0
>> rx1_buff_alloc_err: 0
>> rx1_cqe_compress_blks: 0
>> rx1_cqe_compress_pkts: 0
>> rx1_page_reuse: 0
>> rx1_cache_reuse: 2270019
>> rx1_cache_full: 3343389331
>> rx1_cache_empty: 6656
>> rx1_cache_busy: 3343389585
>> rx1_cache_waive: 0
>> rx1_congst_umr: 0
>> rx1_arfs_err: 0
>> rx1_xdp_tx_xmit: 0
>> rx1_xdp_tx_full: 0
>> rx1_xdp_tx_err: 0
>> rx1_xdp_tx_cqes: 0
>> rx2_packets: 6618370416
>> rx2_bytes: 3762508364015
>> rx2_csum_complete: 6618370416
>> rx2_csum_unnecessary: 0
>> rx2_csum_unnecessary_inner: 0
>> rx2_csum_none: 0
>> rx2_xdp_drop: 0
>> rx2_xdp_redirect: 0
>> rx2_lro_packets: 0
>> rx2_lro_bytes: 0
>> rx2_ecn_mark: 0
>> rx2_removed_vlan_packets: 6618370416
>> rx2_wqe_err: 0
>> rx2_mpwqe_filler_cqes: 0
>> rx2_mpwqe_filler_strides: 0
>> rx2_buff_alloc_err: 0
>> rx2_cqe_compress_blks: 0
>> rx2_cqe_compress_pkts: 0
>> rx2_page_reuse: 0
>> rx2_cache_reuse: 111419328
>> rx2_cache_full: 1807563903
>> rx2_cache_empty: 1390208158
>> rx2_cache_busy: 1807564378
>> rx2_cache_waive: 1390201722
>> rx2_congst_umr: 0
>> rx2_arfs_err: 0
>> rx2_xdp_tx_xmit: 0
>> rx2_xdp_tx_full: 0
>> rx2_xdp_tx_err: 0
>> rx2_xdp_tx_cqes: 0
>> rx3_packets: 6665308976
>> rx3_bytes: 3828546206006
>> rx3_csum_complete: 6665308976
>> rx3_csum_unnecessary: 0
>> rx3_csum_unnecessary_inner: 0
>> rx3_csum_none: 0
>> rx3_xdp_drop: 0
>> rx3_xdp_redirect: 0
>> rx3_lro_packets: 0
>> rx3_lro_bytes: 0
>> rx3_ecn_mark: 0
>> rx3_removed_vlan_packets: 6665308976
>> rx3_wqe_err: 0
>> rx3_mpwqe_filler_cqes: 0
>> rx3_mpwqe_filler_strides: 0
>> rx3_buff_alloc_err: 0
>> rx3_cqe_compress_blks: 0
>> rx3_cqe_compress_pkts: 0
>> rx3_page_reuse: 0
>> rx3_cache_reuse: 215779091
>> rx3_cache_full: 1720040649
>> rx3_cache_empty: 1396840926
>> rx3_cache_busy: 1720041127
>> rx3_cache_waive: 1396834493
>> rx3_congst_umr: 0
>> rx3_arfs_err: 0
>> rx3_xdp_tx_xmit: 0
>> rx3_xdp_tx_full: 0
>> rx3_xdp_tx_err: 0
>> rx3_xdp_tx_cqes: 0
>> rx4_packets: 6764448165
>> rx4_bytes: 3883101339142
>> rx4_csum_complete: 6764448165
>> rx4_csum_unnecessary: 0
>> rx4_csum_unnecessary_inner: 0
>> rx4_csum_none: 0
>> rx4_xdp_drop: 0
>> rx4_xdp_redirect: 0
>> rx4_lro_packets: 0
>> rx4_lro_bytes: 0
>> rx4_ecn_mark: 0
>> rx4_removed_vlan_packets: 6764448165
>> rx4_wqe_err: 0
>> rx4_mpwqe_filler_cqes: 0
>> rx4_mpwqe_filler_strides: 0
>> rx4_buff_alloc_err: 0
>> rx4_cqe_compress_blks: 0
>> rx4_cqe_compress_pkts: 0
>> rx4_page_reuse: 0
>> rx4_cache_reuse: 1930710653
>> rx4_cache_full: 6490815
>> rx4_cache_empty: 1445028605
>> rx4_cache_busy: 6491478
>> rx4_cache_waive: 1445022392
>> rx4_congst_umr: 0
>> rx4_arfs_err: 0
>> rx4_xdp_tx_xmit: 0
>> rx4_xdp_tx_full: 0
>> rx4_xdp_tx_err: 0
>> rx4_xdp_tx_cqes: 0
>> rx5_packets: 6736853264
>> rx5_bytes: 3925186068552
>> rx5_csum_complete: 6736853264
>> rx5_csum_unnecessary: 0
>> rx5_csum_unnecessary_inner: 0
>> rx5_csum_none: 0
>> rx5_xdp_drop: 0
>> rx5_xdp_redirect: 0
>> rx5_lro_packets: 0
>> rx5_lro_bytes: 0
>> rx5_ecn_mark: 0
>> rx5_removed_vlan_packets: 6736853264
>> rx5_wqe_err: 0
>> rx5_mpwqe_filler_cqes: 0
>> rx5_mpwqe_filler_strides: 0
>> rx5_buff_alloc_err: 0
>> rx5_cqe_compress_blks: 0
>> rx5_cqe_compress_pkts: 0
>> rx5_page_reuse: 0
>> rx5_cache_reuse: 7283914
>> rx5_cache_full: 3361142463
>> rx5_cache_empty: 6656
>> rx5_cache_busy: 3361142718
>> rx5_cache_waive: 0
>> rx5_congst_umr: 0
>> rx5_arfs_err: 0
>> rx5_xdp_tx_xmit: 0
>> rx5_xdp_tx_full: 0
>> rx5_xdp_tx_err: 0
>> rx5_xdp_tx_cqes: 0
>> rx6_packets: 6751588828
>> rx6_bytes: 3860537598885
>> rx6_csum_complete: 6751588828
>> rx6_csum_unnecessary: 0
>> rx6_csum_unnecessary_inner: 0
>> rx6_csum_none: 0
>> rx6_xdp_drop: 0
>> rx6_xdp_redirect: 0
>> rx6_lro_packets: 0
>> rx6_lro_bytes: 0
>> rx6_ecn_mark: 0
>> rx6_removed_vlan_packets: 6751588828
>> rx6_wqe_err: 0
>> rx6_mpwqe_filler_cqes: 0
>> rx6_mpwqe_filler_strides: 0
>> rx6_buff_alloc_err: 0
>> rx6_cqe_compress_blks: 0
>> rx6_cqe_compress_pkts: 0
>> rx6_page_reuse: 0
>> rx6_cache_reuse: 96032126
>> rx6_cache_full: 1857890923
>> rx6_cache_empty: 1421877543
>> rx6_cache_busy: 1857891399
>> rx6_cache_waive: 1421871110
>> rx6_congst_umr: 0
>> rx6_arfs_err: 0
>> rx6_xdp_tx_xmit: 0
>> rx6_xdp_tx_full: 0
>> rx6_xdp_tx_err: 0
>> rx6_xdp_tx_cqes: 0
>> rx7_packets: 6935300074
>> rx7_bytes: 4004713524388
>> rx7_csum_complete: 6935300074
>> rx7_csum_unnecessary: 0
>> rx7_csum_unnecessary_inner: 0
>> rx7_csum_none: 0
>> rx7_xdp_drop: 0
>> rx7_xdp_redirect: 0
>> rx7_lro_packets: 0
>> rx7_lro_bytes: 0
>> rx7_ecn_mark: 0
>> rx7_removed_vlan_packets: 6935300074
>> rx7_wqe_err: 0
>> rx7_mpwqe_filler_cqes: 0
>> rx7_mpwqe_filler_strides: 0
>> rx7_buff_alloc_err: 0
>> rx7_cqe_compress_blks: 0
>> rx7_cqe_compress_pkts: 0
>> rx7_page_reuse: 0
>> rx7_cache_reuse: 17555187
>> rx7_cache_full: 3450094595
>> rx7_cache_empty: 6656
>> rx7_cache_busy: 3450094849
>> rx7_cache_waive: 0
>> rx7_congst_umr: 0
>> rx7_arfs_err: 0
>> rx7_xdp_tx_xmit: 0
>> rx7_xdp_tx_full: 0
>> rx7_xdp_tx_err: 0
>> rx7_xdp_tx_cqes: 0
>> rx8_packets: 6678640094
>> rx8_bytes: 3783722686028
>> rx8_csum_complete: 6678640094
>> rx8_csum_unnecessary: 0
>> rx8_csum_unnecessary_inner: 0
>> rx8_csum_none: 0
>> rx8_xdp_drop: 0
>> rx8_xdp_redirect: 0
>> rx8_lro_packets: 0
>> rx8_lro_bytes: 0
>> rx8_ecn_mark: 0
>> rx8_removed_vlan_packets: 6678640094
>> rx8_wqe_err: 0
>> rx8_mpwqe_filler_cqes: 0
>> rx8_mpwqe_filler_strides: 0
>> rx8_buff_alloc_err: 0
>> rx8_cqe_compress_blks: 0
>> rx8_cqe_compress_pkts: 0
>> rx8_page_reuse: 0
>> rx8_cache_reuse: 71006578
>> rx8_cache_full: 1879380649
>> rx8_cache_empty: 1388938999
>> rx8_cache_busy: 1879381123
>> rx8_cache_waive: 1388932565
>> rx8_congst_umr: 0
>> rx8_arfs_err: 0
>> rx8_xdp_tx_xmit: 0
>> rx8_xdp_tx_full: 0
>> rx8_xdp_tx_err: 0
>> rx8_xdp_tx_cqes: 0
>> rx9_packets: 6709855557
>> rx9_bytes: 3849522227880
>> rx9_csum_complete: 6709855557
>> rx9_csum_unnecessary: 0
>> rx9_csum_unnecessary_inner: 0
>> rx9_csum_none: 0
>> rx9_xdp_drop: 0
>> rx9_xdp_redirect: 0
>> rx9_lro_packets: 0
>> rx9_lro_bytes: 0
>> rx9_ecn_mark: 0
>> rx9_removed_vlan_packets: 6709855557
>> rx9_wqe_err: 0
>> rx9_mpwqe_filler_cqes: 0
>> rx9_mpwqe_filler_strides: 0
>> rx9_buff_alloc_err: 0
>> rx9_cqe_compress_blks: 0
>> rx9_cqe_compress_pkts: 0
>> rx9_page_reuse: 0
>> rx9_cache_reuse: 108980215
>> rx9_cache_full: 1822730121
>> rx9_cache_empty: 1423223623
>> rx9_cache_busy: 1822730594
>> rx9_cache_waive: 1423217187
>> rx9_congst_umr: 0
>> rx9_arfs_err: 0
>> rx9_xdp_tx_xmit: 0
>> rx9_xdp_tx_full: 0
>> rx9_xdp_tx_err: 0
>> rx9_xdp_tx_cqes: 0
>> rx10_packets: 6761861066
>> rx10_bytes: 3816266733385
>> rx10_csum_complete: 6761861066
>> rx10_csum_unnecessary: 0
>> rx10_csum_unnecessary_inner: 0
>> rx10_csum_none: 0
>> rx10_xdp_drop: 0
>> rx10_xdp_redirect: 0
>> rx10_lro_packets: 0
>> rx10_lro_bytes: 0
>> rx10_ecn_mark: 0
>> rx10_removed_vlan_packets: 6761861066
>> rx10_wqe_err: 0
>> rx10_mpwqe_filler_cqes: 0
>> rx10_mpwqe_filler_strides: 0
>> rx10_buff_alloc_err: 0
>> rx10_cqe_compress_blks: 0
>> rx10_cqe_compress_pkts: 0
>> rx10_page_reuse: 0
>> rx10_cache_reuse: 3489300
>> rx10_cache_full: 3377440977
>> rx10_cache_empty: 6656
>> rx10_cache_busy: 3377441216
>> rx10_cache_waive: 0
>> rx10_congst_umr: 0
>> rx10_arfs_err: 0
>> rx10_xdp_tx_xmit: 0
>> rx10_xdp_tx_full: 0
>> rx10_xdp_tx_err: 0
>> rx10_xdp_tx_cqes: 0
>> rx11_packets: 6868113938
>> rx11_bytes: 4048196300710
>> rx11_csum_complete: 6868113938
>> rx11_csum_unnecessary: 0
>> rx11_csum_unnecessary_inner: 0
>> rx11_csum_none: 0
>> rx11_xdp_drop: 0
>> rx11_xdp_redirect: 0
>> rx11_lro_packets: 0
>> rx11_lro_bytes: 0
>> rx11_ecn_mark: 0
>> rx11_removed_vlan_packets: 6868113938
>> rx11_wqe_err: 0
>> rx11_mpwqe_filler_cqes: 0
>> rx11_mpwqe_filler_strides: 0
>> rx11_buff_alloc_err: 0
>> rx11_cqe_compress_blks: 0
>> rx11_cqe_compress_pkts: 0
>> rx11_page_reuse: 0
>> rx11_cache_reuse: 1948516819
>> rx11_cache_full: 17132157
>> rx11_cache_empty: 1468413985
>> rx11_cache_busy: 17132820
>> rx11_cache_waive: 1468407772
>> rx11_congst_umr: 0
>> rx11_arfs_err: 0
>> rx11_xdp_tx_xmit: 0
>> rx11_xdp_tx_full: 0
>> rx11_xdp_tx_err: 0
>> rx11_xdp_tx_cqes: 0
>> rx12_packets: 6742955386
>> rx12_bytes: 3865747629271
>> rx12_csum_complete: 6742955386
>> rx12_csum_unnecessary: 0
>> rx12_csum_unnecessary_inner: 0
>> rx12_csum_none: 0
>> rx12_xdp_drop: 0
>> rx12_xdp_redirect: 0
>> rx12_lro_packets: 0
>> rx12_lro_bytes: 0
>> rx12_ecn_mark: 0
>> rx12_removed_vlan_packets: 6742955386
>> rx12_wqe_err: 0
>> rx12_mpwqe_filler_cqes: 0
>> rx12_mpwqe_filler_strides: 0
>> rx12_buff_alloc_err: 0
>> rx12_cqe_compress_blks: 0
>> rx12_cqe_compress_pkts: 0
>> rx12_page_reuse: 0
>> rx12_cache_reuse: 30809331
>> rx12_cache_full: 3340668106
>> rx12_cache_empty: 6656
>> rx12_cache_busy: 3340668333
>> rx12_cache_waive: 0
>> rx12_congst_umr: 0
>> rx12_arfs_err: 0
>> rx12_xdp_tx_xmit: 0
>> rx12_xdp_tx_full: 0
>> rx12_xdp_tx_err: 0
>> rx12_xdp_tx_cqes: 0
>> rx13_packets: 6707028036
>> rx13_bytes: 3813462190623
>> rx13_csum_complete: 6707028036
>> rx13_csum_unnecessary: 0
>> rx13_csum_unnecessary_inner: 0
>> rx13_csum_none: 0
>> rx13_xdp_drop: 0
>> rx13_xdp_redirect: 0
>> rx13_lro_packets: 0
>> rx13_lro_bytes: 0
>> rx13_ecn_mark: 0
>> rx13_removed_vlan_packets: 6707028036
>> rx13_wqe_err: 0
>> rx13_mpwqe_filler_cqes: 0
>> rx13_mpwqe_filler_strides: 0
>> rx13_buff_alloc_err: 0
>> rx13_cqe_compress_blks: 0
>> rx13_cqe_compress_pkts: 0
>> rx13_page_reuse: 0
>> rx13_cache_reuse: 14951053
>> rx13_cache_full: 3338562710
>> rx13_cache_empty: 6656
>> rx13_cache_busy: 3338562963
>> rx13_cache_waive: 0
>> rx13_congst_umr: 0
>> rx13_arfs_err: 0
>> rx13_xdp_tx_xmit: 0
>> rx13_xdp_tx_full: 0
>> rx13_xdp_tx_err: 0
>> rx13_xdp_tx_cqes: 0
>> rx14_packets: 6737074410
>> rx14_bytes: 3868905276119
>> rx14_csum_complete: 6737074410
>> rx14_csum_unnecessary: 0
>> rx14_csum_unnecessary_inner: 0
>> rx14_csum_none: 0
>> rx14_xdp_drop: 0
>> rx14_xdp_redirect: 0
>> rx14_lro_packets: 0
>> rx14_lro_bytes: 0
>> rx14_ecn_mark: 0
>> rx14_removed_vlan_packets: 6737074410
>> rx14_wqe_err: 0
>> rx14_mpwqe_filler_cqes: 0
>> rx14_mpwqe_filler_strides: 0
>> rx14_buff_alloc_err: 0
>> rx14_cqe_compress_blks: 0
>> rx14_cqe_compress_pkts: 0
>> rx14_page_reuse: 0
>> rx14_cache_reuse: 967799432
>> rx14_cache_full: 982704312
>> rx14_cache_empty: 1418039639
>> rx14_cache_busy: 982704789
>> rx14_cache_waive: 1418033206
>> rx14_congst_umr: 0
>> rx14_arfs_err: 0
>> rx14_xdp_tx_xmit: 0
>> rx14_xdp_tx_full: 0
>> rx14_xdp_tx_err: 0
>> rx14_xdp_tx_cqes: 0
>> rx15_packets: 6641887441
>> rx15_bytes: 3742874400402
>> rx15_csum_complete: 6641887441
>> rx15_csum_unnecessary: 0
>> rx15_csum_unnecessary_inner: 0
>> rx15_csum_none: 0
>> rx15_xdp_drop: 0
>> rx15_xdp_redirect: 0
>> rx15_lro_packets: 0
>> rx15_lro_bytes: 0
>> rx15_ecn_mark: 0
>> rx15_removed_vlan_packets: 6641887441
>> rx15_wqe_err: 0
>> rx15_mpwqe_filler_cqes: 0
>> rx15_mpwqe_filler_strides: 0
>> rx15_buff_alloc_err: 0
>> rx15_cqe_compress_blks: 0
>> rx15_cqe_compress_pkts: 0
>> rx15_page_reuse: 0
>> rx15_cache_reuse: 1920227538
>> rx15_cache_full: 19386129
>> rx15_cache_empty: 1381335137
>> rx15_cache_busy: 19387693
>> rx15_cache_waive: 1381329825
>> rx15_congst_umr: 0
>> rx15_arfs_err: 0
>> rx15_xdp_tx_xmit: 0
>> rx15_xdp_tx_full: 0
>> rx15_xdp_tx_err: 0
>> rx15_xdp_tx_cqes: 0
>> rx16_packets: 5420472874
>> rx16_bytes: 3079293332581
>> rx16_csum_complete: 5420472874
>> rx16_csum_unnecessary: 0
>> rx16_csum_unnecessary_inner: 0
>> rx16_csum_none: 0
>> rx16_xdp_drop: 0
>> rx16_xdp_redirect: 0
>> rx16_lro_packets: 0
>> rx16_lro_bytes: 0
>> rx16_ecn_mark: 0
>> rx16_removed_vlan_packets: 5420472874
>> rx16_wqe_err: 0
>> rx16_mpwqe_filler_cqes: 0
>> rx16_mpwqe_filler_strides: 0
>> rx16_buff_alloc_err: 0
>> rx16_cqe_compress_blks: 0
>> rx16_cqe_compress_pkts: 0
>> rx16_page_reuse: 0
>> rx16_cache_reuse: 2361079
>> rx16_cache_full: 2707875103
>> rx16_cache_empty: 6656
>> rx16_cache_busy: 2707875349
>> rx16_cache_waive: 0
>> rx16_congst_umr: 0
>> rx16_arfs_err: 0
>> rx16_xdp_tx_xmit: 0
>> rx16_xdp_tx_full: 0
>> rx16_xdp_tx_err: 0
>> rx16_xdp_tx_cqes: 0
>> rx17_packets: 5428380986
>> rx17_bytes: 3080981893118
>> rx17_csum_complete: 5428380986
>> rx17_csum_unnecessary: 0
>> rx17_csum_unnecessary_inner: 0
>> rx17_csum_none: 0
>> rx17_xdp_drop: 0
>> rx17_xdp_redirect: 0
>> rx17_lro_packets: 0
>> rx17_lro_bytes: 0
>> rx17_ecn_mark: 0
>> rx17_removed_vlan_packets: 5428380986
>> rx17_wqe_err: 0
>> rx17_mpwqe_filler_cqes: 0
>> rx17_mpwqe_filler_strides: 0
>> rx17_buff_alloc_err: 0
>> rx17_cqe_compress_blks: 0
>> rx17_cqe_compress_pkts: 0
>> rx17_page_reuse: 0
>> rx17_cache_reuse: 1552266402
>> rx17_cache_full: 5947505
>> rx17_cache_empty: 1155981856
>> rx17_cache_busy: 5948870
>> rx17_cache_waive: 1155976345
>> rx17_congst_umr: 0
>> rx17_arfs_err: 0
>> rx17_xdp_tx_xmit: 0
>> rx17_xdp_tx_full: 0
>> rx17_xdp_tx_err: 0
>> rx17_xdp_tx_cqes: 0
>> rx18_packets: 5529118410
>> rx18_bytes: 3254749573833
>> rx18_csum_complete: 5529118410
>> rx18_csum_unnecessary: 0
>> rx18_csum_unnecessary_inner: 0
>> rx18_csum_none: 0
>> rx18_xdp_drop: 0
>> rx18_xdp_redirect: 0
>> rx18_lro_packets: 0
>> rx18_lro_bytes: 0
>> rx18_ecn_mark: 0
>> rx18_removed_vlan_packets: 5529118410
>> rx18_wqe_err: 0
>> rx18_mpwqe_filler_cqes: 0
>> rx18_mpwqe_filler_strides: 0
>> rx18_buff_alloc_err: 0
>> rx18_cqe_compress_blks: 0
>> rx18_cqe_compress_pkts: 0
>> rx18_page_reuse: 0
>> rx18_cache_reuse: 67438840
>> rx18_cache_full: 1536718472
>> rx18_cache_empty: 1160408072
>> rx18_cache_busy: 1536718932
>> rx18_cache_waive: 1160401638
>> rx18_congst_umr: 0
>> rx18_arfs_err: 0
>> rx18_xdp_tx_xmit: 0
>> rx18_xdp_tx_full: 0
>> rx18_xdp_tx_err: 0
>> rx18_xdp_tx_cqes: 0
>> rx19_packets: 5449932653
>> rx19_bytes: 3148726579411
>> rx19_csum_complete: 5449932653
>> rx19_csum_unnecessary: 0
>> rx19_csum_unnecessary_inner: 0
>> rx19_csum_none: 0
>> rx19_xdp_drop: 0
>> rx19_xdp_redirect: 0
>> rx19_lro_packets: 0
>> rx19_lro_bytes: 0
>> rx19_ecn_mark: 0
>> rx19_removed_vlan_packets: 5449932653
>> rx19_wqe_err: 0
>> rx19_mpwqe_filler_cqes: 0
>> rx19_mpwqe_filler_strides: 0
>> rx19_buff_alloc_err: 0
>> rx19_cqe_compress_blks: 0
>> rx19_cqe_compress_pkts: 0
>> rx19_page_reuse: 0
>> rx19_cache_reuse: 1537841743
>> rx19_cache_full: 9920960
>> rx19_cache_empty: 1177208938
>> rx19_cache_busy: 9922299
>> rx19_cache_waive: 1177203401
>> rx19_congst_umr: 0
>> rx19_arfs_err: 0
>> rx19_xdp_tx_xmit: 0
>> rx19_xdp_tx_full: 0
>> rx19_xdp_tx_err: 0
>> rx19_xdp_tx_cqes: 0
>> rx20_packets: 5407910071
>> rx20_bytes: 3123560861922
>> rx20_csum_complete: 5407910071
>> rx20_csum_unnecessary: 0
>> rx20_csum_unnecessary_inner: 0
>> rx20_csum_none: 0
>> rx20_xdp_drop: 0
>> rx20_xdp_redirect: 0
>> rx20_lro_packets: 0
>> rx20_lro_bytes: 0
>> rx20_ecn_mark: 0
>> rx20_removed_vlan_packets: 5407910071
>> rx20_wqe_err: 0
>> rx20_mpwqe_filler_cqes: 0
>> rx20_mpwqe_filler_strides: 0
>> rx20_buff_alloc_err: 0
>> rx20_cqe_compress_blks: 0
>> rx20_cqe_compress_pkts: 0
>> rx20_page_reuse: 0
>> rx20_cache_reuse: 10255209
>> rx20_cache_full: 2693699571
>> rx20_cache_empty: 6656
>> rx20_cache_busy: 2693699823
>> rx20_cache_waive: 0
>> rx20_congst_umr: 0
>> rx20_arfs_err: 0
>> rx20_xdp_tx_xmit: 0
>> rx20_xdp_tx_full: 0
>> rx20_xdp_tx_err: 0
>> rx20_xdp_tx_cqes: 0
>> rx21_packets: 5417498508
>> rx21_bytes: 3131335892379
>> rx21_csum_complete: 5417498508
>> rx21_csum_unnecessary: 0
>> rx21_csum_unnecessary_inner: 0
>> rx21_csum_none: 0
>> rx21_xdp_drop: 0
>> rx21_xdp_redirect: 0
>> rx21_lro_packets: 0
>> rx21_lro_bytes: 0
>> rx21_ecn_mark: 0
>> rx21_removed_vlan_packets: 5417498508
>> rx21_wqe_err: 0
>> rx21_mpwqe_filler_cqes: 0
>> rx21_mpwqe_filler_strides: 0
>> rx21_buff_alloc_err: 0
>> rx21_cqe_compress_blks: 0
>> rx21_cqe_compress_pkts: 0
>> rx21_page_reuse: 0
>> rx21_cache_reuse: 192662917
>> rx21_cache_full: 1374120417
>> rx21_cache_empty: 1141972100
>> rx21_cache_busy: 1374120891
>> rx21_cache_waive: 1141965665
>> rx21_congst_umr: 0
>> rx21_arfs_err: 0
>> rx21_xdp_tx_xmit: 0
>> rx21_xdp_tx_full: 0
>> rx21_xdp_tx_err: 0
>> rx21_xdp_tx_cqes: 0
>> rx22_packets: 5613634706
>> rx22_bytes: 3240055099058
>> rx22_csum_complete: 5613634706
>> rx22_csum_unnecessary: 0
>> rx22_csum_unnecessary_inner: 0
>> rx22_csum_none: 0
>> rx22_xdp_drop: 0
>> rx22_xdp_redirect: 0
>> rx22_lro_packets: 0
>> rx22_lro_bytes: 0
>> rx22_ecn_mark: 0
>> rx22_removed_vlan_packets: 5613634706
>> rx22_wqe_err: 0
>> rx22_mpwqe_filler_cqes: 0
>> rx22_mpwqe_filler_strides: 0
>> rx22_buff_alloc_err: 0
>> rx22_cqe_compress_blks: 0
>> rx22_cqe_compress_pkts: 0
>> rx22_page_reuse: 0
>> rx22_cache_reuse: 12161531
>> rx22_cache_full: 2794655567
>> rx22_cache_empty: 6656
>> rx22_cache_busy: 2794655821
>> rx22_cache_waive: 0
>> rx22_congst_umr: 0
>> rx22_arfs_err: 0
>> rx22_xdp_tx_xmit: 0
>> rx22_xdp_tx_full: 0
>> rx22_xdp_tx_err: 0
>> rx22_xdp_tx_cqes: 0
>> rx23_packets: 5389977167
>> rx23_bytes: 3054270771559
>> rx23_csum_complete: 5389977167
>> rx23_csum_unnecessary: 0
>> rx23_csum_unnecessary_inner: 0
>> rx23_csum_none: 0
>> rx23_xdp_drop: 0
>> rx23_xdp_redirect: 0
>> rx23_lro_packets: 0
>> rx23_lro_bytes: 0
>> rx23_ecn_mark: 0
>> rx23_removed_vlan_packets: 5389977167
>> rx23_wqe_err: 0
>> rx23_mpwqe_filler_cqes: 0
>> rx23_mpwqe_filler_strides: 0
>> rx23_buff_alloc_err: 0
>> rx23_cqe_compress_blks: 0
>> rx23_cqe_compress_pkts: 0
>> rx23_page_reuse: 0
>> rx23_cache_reuse: 709328
>> rx23_cache_full: 2694279000
>> rx23_cache_empty: 6656
>> rx23_cache_busy: 2694279252
>> rx23_cache_waive: 0
>> rx23_congst_umr: 0
>> rx23_arfs_err: 0
>> rx23_xdp_tx_xmit: 0
>> rx23_xdp_tx_full: 0
>> rx23_xdp_tx_err: 0
>> rx23_xdp_tx_cqes: 0
>> rx24_packets: 5547561932
>> rx24_bytes: 3166602453443
>> rx24_csum_complete: 5547561932
>> rx24_csum_unnecessary: 0
>> rx24_csum_unnecessary_inner: 0
>> rx24_csum_none: 0
>> rx24_xdp_drop: 0
>> rx24_xdp_redirect: 0
>> rx24_lro_packets: 0
>> rx24_lro_bytes: 0
>> rx24_ecn_mark: 0
>> rx24_removed_vlan_packets: 5547561932
>> rx24_wqe_err: 0
>> rx24_mpwqe_filler_cqes: 0
>> rx24_mpwqe_filler_strides: 0
>> rx24_buff_alloc_err: 0
>> rx24_cqe_compress_blks: 0
>> rx24_cqe_compress_pkts: 0
>> rx24_page_reuse: 0
>> rx24_cache_reuse: 57885119
>> rx24_cache_full: 1529450077
>> rx24_cache_empty: 1186451948
>> rx24_cache_busy: 1529450553
>> rx24_cache_waive: 1186445515
>> rx24_congst_umr: 0
>> rx24_arfs_err: 0
>> rx24_xdp_tx_xmit: 0
>> rx24_xdp_tx_full: 0
>> rx24_xdp_tx_err: 0
>> rx24_xdp_tx_cqes: 0
>> rx25_packets: 5414569326
>> rx25_bytes: 3184757708091
>> rx25_csum_complete: 5414569326
>> rx25_csum_unnecessary: 0
>> rx25_csum_unnecessary_inner: 0
>> rx25_csum_none: 0
>> rx25_xdp_drop: 0
>> rx25_xdp_redirect: 0
>> rx25_lro_packets: 0
>> rx25_lro_bytes: 0
>> rx25_ecn_mark: 0
>> rx25_removed_vlan_packets: 5414569326
>> rx25_wqe_err: 0
>> rx25_mpwqe_filler_cqes: 0
>> rx25_mpwqe_filler_strides: 0
>> rx25_buff_alloc_err: 0
>> rx25_cqe_compress_blks: 0
>> rx25_cqe_compress_pkts: 0
>> rx25_page_reuse: 0
>> rx25_cache_reuse: 5080853
>> rx25_cache_full: 2702203555
>> rx25_cache_empty: 6656
>> rx25_cache_busy: 2702203807
>> rx25_cache_waive: 0
>> rx25_congst_umr: 0
>> rx25_arfs_err: 0
>> rx25_xdp_tx_xmit: 0
>> rx25_xdp_tx_full: 0
>> rx25_xdp_tx_err: 0
>> rx25_xdp_tx_cqes: 0
>> rx26_packets: 5479972151
>> rx26_bytes: 3110642276239
>> rx26_csum_complete: 5479972151
>> rx26_csum_unnecessary: 0
>> rx26_csum_unnecessary_inner: 0
>> rx26_csum_none: 0
>> rx26_xdp_drop: 0
>> rx26_xdp_redirect: 0
>> rx26_lro_packets: 0
>> rx26_lro_bytes: 0
>> rx26_ecn_mark: 0
>> rx26_removed_vlan_packets: 5479972151
>> rx26_wqe_err: 0
>> rx26_mpwqe_filler_cqes: 0
>> rx26_mpwqe_filler_strides: 0
>> rx26_buff_alloc_err: 0
>> rx26_cqe_compress_blks: 0
>> rx26_cqe_compress_pkts: 0
>> rx26_page_reuse: 0
>> rx26_cache_reuse: 26543335
>> rx26_cache_full: 2713442485
>> rx26_cache_empty: 6656
>> rx26_cache_busy: 2713442737
>> rx26_cache_waive: 0
>> rx26_congst_umr: 0
>> rx26_arfs_err: 0
>> rx26_xdp_tx_xmit: 0
>> rx26_xdp_tx_full: 0
>> rx26_xdp_tx_err: 0
>> rx26_xdp_tx_cqes: 0
>> rx27_packets: 5337113900
>> rx27_bytes: 3068966906075
>> rx27_csum_complete: 5337113900
>> rx27_csum_unnecessary: 0
>> rx27_csum_unnecessary_inner: 0
>> rx27_csum_none: 0
>> rx27_xdp_drop: 0
>> rx27_xdp_redirect: 0
>> rx27_lro_packets: 0
>> rx27_lro_bytes: 0
>> rx27_ecn_mark: 0
>> rx27_removed_vlan_packets: 5337113900
>> rx27_wqe_err: 0
>> rx27_mpwqe_filler_cqes: 0
>> rx27_mpwqe_filler_strides: 0
>> rx27_buff_alloc_err: 0
>> rx27_cqe_compress_blks: 0
>> rx27_cqe_compress_pkts: 0
>> rx27_page_reuse: 0
>> rx27_cache_reuse: 1539298962
>> rx27_cache_full: 10861919
>> rx27_cache_empty: 1117173179
>> rx27_cache_busy: 12091463
>> rx27_cache_waive: 1118395847
>> rx27_congst_umr: 0
>> rx27_arfs_err: 0
>> rx27_xdp_tx_xmit: 0
>> rx27_xdp_tx_full: 0
>> rx27_xdp_tx_err: 0
>> rx27_xdp_tx_cqes: 0
>> rx28_packets: 0
>> rx28_bytes: 0
>> rx28_csum_complete: 0
>> rx28_csum_unnecessary: 0
>> rx28_csum_unnecessary_inner: 0
>> rx28_csum_none: 0
>> rx28_xdp_drop: 0
>> rx28_xdp_redirect: 0
>> rx28_lro_packets: 0
>> rx28_lro_bytes: 0
>> rx28_ecn_mark: 0
>> rx28_removed_vlan_packets: 0
>> rx28_wqe_err: 0
>> rx28_mpwqe_filler_cqes: 0
>> rx28_mpwqe_filler_strides: 0
>> rx28_buff_alloc_err: 0
>> rx28_cqe_compress_blks: 0
>> rx28_cqe_compress_pkts: 0
>> rx28_page_reuse: 0
>> rx28_cache_reuse: 0
>> rx28_cache_full: 0
>> rx28_cache_empty: 2560
>> rx28_cache_busy: 0
>> rx28_cache_waive: 0
>> rx28_congst_umr: 0
>> rx28_arfs_err: 0
>> rx28_xdp_tx_xmit: 0
>> rx28_xdp_tx_full: 0
>> rx28_xdp_tx_err: 0
>> rx28_xdp_tx_cqes: 0
>> rx29_packets: 0
>> rx29_bytes: 0
>> rx29_csum_complete: 0
>> rx29_csum_unnecessary: 0
>> rx29_csum_unnecessary_inner: 0
>> rx29_csum_none: 0
>> rx29_xdp_drop: 0
>> rx29_xdp_redirect: 0
>> rx29_lro_packets: 0
>> rx29_lro_bytes: 0
>> rx29_ecn_mark: 0
>> rx29_removed_vlan_packets: 0
>> rx29_wqe_err: 0
>> rx29_mpwqe_filler_cqes: 0
>> rx29_mpwqe_filler_strides: 0
>> rx29_buff_alloc_err: 0
>> rx29_cqe_compress_blks: 0
>> rx29_cqe_compress_pkts: 0
>> rx29_page_reuse: 0
>> rx29_cache_reuse: 0
>> rx29_cache_full: 0
>> rx29_cache_empty: 2560
>> rx29_cache_busy: 0
>> rx29_cache_waive: 0
>> rx29_congst_umr: 0
>> rx29_arfs_err: 0
>> rx29_xdp_tx_xmit: 0
>> rx29_xdp_tx_full: 0
>> rx29_xdp_tx_err: 0
>> rx29_xdp_tx_cqes: 0
>> rx30_packets: 0
>> rx30_bytes: 0
>> rx30_csum_complete: 0
>> rx30_csum_unnecessary: 0
>> rx30_csum_unnecessary_inner: 0
>> rx30_csum_none: 0
>> rx30_xdp_drop: 0
>> rx30_xdp_redirect: 0
>> rx30_lro_packets: 0
>> rx30_lro_bytes: 0
>> rx30_ecn_mark: 0
>> rx30_removed_vlan_packets: 0
>> rx30_wqe_err: 0
>> rx30_mpwqe_filler_cqes: 0
>> rx30_mpwqe_filler_strides: 0
>> rx30_buff_alloc_err: 0
>> rx30_cqe_compress_blks: 0
>> rx30_cqe_compress_pkts: 0
>> rx30_page_reuse: 0
>> rx30_cache_reuse: 0
>> rx30_cache_full: 0
>> rx30_cache_empty: 2560
>> rx30_cache_busy: 0
>> rx30_cache_waive: 0
>> rx30_congst_umr: 0
>> rx30_arfs_err: 0
>> rx30_xdp_tx_xmit: 0
>> rx30_xdp_tx_full: 0
>> rx30_xdp_tx_err: 0
>> rx30_xdp_tx_cqes: 0
>> rx31_packets: 0
>> rx31_bytes: 0
>> rx31_csum_complete: 0
>> rx31_csum_unnecessary: 0
>> rx31_csum_unnecessary_inner: 0
>> rx31_csum_none: 0
>> rx31_xdp_drop: 0
>> rx31_xdp_redirect: 0
>> rx31_lro_packets: 0
>> rx31_lro_bytes: 0
>> rx31_ecn_mark: 0
>> rx31_removed_vlan_packets: 0
>> rx31_wqe_err: 0
>> rx31_mpwqe_filler_cqes: 0
>> rx31_mpwqe_filler_strides: 0
>> rx31_buff_alloc_err: 0
>> rx31_cqe_compress_blks: 0
>> rx31_cqe_compress_pkts: 0
>> rx31_page_reuse: 0
>> rx31_cache_reuse: 0
>> rx31_cache_full: 0
>> rx31_cache_empty: 2560
>> rx31_cache_busy: 0
>> rx31_cache_waive: 0
>> rx31_congst_umr: 0
>> rx31_arfs_err: 0
>> rx31_xdp_tx_xmit: 0
>> rx31_xdp_tx_full: 0
>> rx31_xdp_tx_err: 0
>> rx31_xdp_tx_cqes: 0
>> rx32_packets: 0
>> rx32_bytes: 0
>> rx32_csum_complete: 0
>> rx32_csum_unnecessary: 0
>> rx32_csum_unnecessary_inner: 0
>> rx32_csum_none: 0
>> rx32_xdp_drop: 0
>> rx32_xdp_redirect: 0
>> rx32_lro_packets: 0
>> rx32_lro_bytes: 0
>> rx32_ecn_mark: 0
>> rx32_removed_vlan_packets: 0
>> rx32_wqe_err: 0
>> rx32_mpwqe_filler_cqes: 0
>> rx32_mpwqe_filler_strides: 0
>> rx32_buff_alloc_err: 0
>> rx32_cqe_compress_blks: 0
>> rx32_cqe_compress_pkts: 0
>> rx32_page_reuse: 0
>> rx32_cache_reuse: 0
>> rx32_cache_full: 0
>> rx32_cache_empty: 2560
>> rx32_cache_busy: 0
>> rx32_cache_waive: 0
>> rx32_congst_umr: 0
>> rx32_arfs_err: 0
>> rx32_xdp_tx_xmit: 0
>> rx32_xdp_tx_full: 0
>> rx32_xdp_tx_err: 0
>> rx32_xdp_tx_cqes: 0
>> rx33_packets: 0
>> rx33_bytes: 0
>> rx33_csum_complete: 0
>> rx33_csum_unnecessary: 0
>> rx33_csum_unnecessary_inner: 0
>> rx33_csum_none: 0
>> rx33_xdp_drop: 0
>> rx33_xdp_redirect: 0
>> rx33_lro_packets: 0
>> rx33_lro_bytes: 0
>> rx33_ecn_mark: 0
>> rx33_removed_vlan_packets: 0
>> rx33_wqe_err: 0
>> rx33_mpwqe_filler_cqes: 0
>> rx33_mpwqe_filler_strides: 0
>> rx33_buff_alloc_err: 0
>> rx33_cqe_compress_blks: 0
>> rx33_cqe_compress_pkts: 0
>> rx33_page_reuse: 0
>> rx33_cache_reuse: 0
>> rx33_cache_full: 0
>> rx33_cache_empty: 2560
>> rx33_cache_busy: 0
>> rx33_cache_waive: 0
>> rx33_congst_umr: 0
>> rx33_arfs_err: 0
>> rx33_xdp_tx_xmit: 0
>> rx33_xdp_tx_full: 0
>> rx33_xdp_tx_err: 0
>> rx33_xdp_tx_cqes: 0
>> rx34_packets: 0
>> rx34_bytes: 0
>> rx34_csum_complete: 0
>> rx34_csum_unnecessary: 0
>> rx34_csum_unnecessary_inner: 0
>> rx34_csum_none: 0
>> rx34_xdp_drop: 0
>> rx34_xdp_redirect: 0
>> rx34_lro_packets: 0
>> rx34_lro_bytes: 0
>> rx34_ecn_mark: 0
>> rx34_removed_vlan_packets: 0
>> rx34_wqe_err: 0
>> rx34_mpwqe_filler_cqes: 0
>> rx34_mpwqe_filler_strides: 0
>> rx34_buff_alloc_err: 0
>> rx34_cqe_compress_blks: 0
>> rx34_cqe_compress_pkts: 0
>> rx34_page_reuse: 0
>> rx34_cache_reuse: 0
>> rx34_cache_full: 0
>> rx34_cache_empty: 2560
>> rx34_cache_busy: 0
>> rx34_cache_waive: 0
>> rx34_congst_umr: 0
>> rx34_arfs_err: 0
>> rx34_xdp_tx_xmit: 0
>> rx34_xdp_tx_full: 0
>> rx34_xdp_tx_err: 0
>> rx34_xdp_tx_cqes: 0
>> rx35_packets: 0
>> rx35_bytes: 0
>> rx35_csum_complete: 0
>> rx35_csum_unnecessary: 0
>> rx35_csum_unnecessary_inner: 0
>> rx35_csum_none: 0
>> rx35_xdp_drop: 0
>> rx35_xdp_redirect: 0
>> rx35_lro_packets: 0
>> rx35_lro_bytes: 0
>> rx35_ecn_mark: 0
>> rx35_removed_vlan_packets: 0
>> rx35_wqe_err: 0
>> rx35_mpwqe_filler_cqes: 0
>> rx35_mpwqe_filler_strides: 0
>> rx35_buff_alloc_err: 0
>> rx35_cqe_compress_blks: 0
>> rx35_cqe_compress_pkts: 0
>> rx35_page_reuse: 0
>> rx35_cache_reuse: 0
>> rx35_cache_full: 0
>> rx35_cache_empty: 2560
>> rx35_cache_busy: 0
>> rx35_cache_waive: 0
>> rx35_congst_umr: 0
>> rx35_arfs_err: 0
>> rx35_xdp_tx_xmit: 0
>> rx35_xdp_tx_full: 0
>> rx35_xdp_tx_err: 0
>> rx35_xdp_tx_cqes: 0
>> rx36_packets: 0
>> rx36_bytes: 0
>> rx36_csum_complete: 0
>> rx36_csum_unnecessary: 0
>> rx36_csum_unnecessary_inner: 0
>> rx36_csum_none: 0
>> rx36_xdp_drop: 0
>> rx36_xdp_redirect: 0
>> rx36_lro_packets: 0
>> rx36_lro_bytes: 0
>> rx36_ecn_mark: 0
>> rx36_removed_vlan_packets: 0
>> rx36_wqe_err: 0
>> rx36_mpwqe_filler_cqes: 0
>> rx36_mpwqe_filler_strides: 0
>> rx36_buff_alloc_err: 0
>> rx36_cqe_compress_blks: 0
>> rx36_cqe_compress_pkts: 0
>> rx36_page_reuse: 0
>> rx36_cache_reuse: 0
>> rx36_cache_full: 0
>> rx36_cache_empty: 2560
>> rx36_cache_busy: 0
>> rx36_cache_waive: 0
>> rx36_congst_umr: 0
>> rx36_arfs_err: 0
>> rx36_xdp_tx_xmit: 0
>> rx36_xdp_tx_full: 0
>> rx36_xdp_tx_err: 0
>> rx36_xdp_tx_cqes: 0
>> rx37_packets: 0
>> rx37_bytes: 0
>> rx37_csum_complete: 0
>> rx37_csum_unnecessary: 0
>> rx37_csum_unnecessary_inner: 0
>> rx37_csum_none: 0
>> rx37_xdp_drop: 0
>> rx37_xdp_redirect: 0
>> rx37_lro_packets: 0
>> rx37_lro_bytes: 0
>> rx37_ecn_mark: 0
>> rx37_removed_vlan_packets: 0
>> rx37_wqe_err: 0
>> rx37_mpwqe_filler_cqes: 0
>> rx37_mpwqe_filler_strides: 0
>> rx37_buff_alloc_err: 0
>> rx37_cqe_compress_blks: 0
>> rx37_cqe_compress_pkts: 0
>> rx37_page_reuse: 0
>> rx37_cache_reuse: 0
>> rx37_cache_full: 0
>> rx37_cache_empty: 2560
>> rx37_cache_busy: 0
>> rx37_cache_waive: 0
>> rx37_congst_umr: 0
>> rx37_arfs_err: 0
>> rx37_xdp_tx_xmit: 0
>> rx37_xdp_tx_full: 0
>> rx37_xdp_tx_err: 0
>> rx37_xdp_tx_cqes: 0
>> rx38_packets: 0
>> rx38_bytes: 0
>> rx38_csum_complete: 0
>> rx38_csum_unnecessary: 0
>> rx38_csum_unnecessary_inner: 0
>> rx38_csum_none: 0
>> rx38_xdp_drop: 0
>> rx38_xdp_redirect: 0
>> rx38_lro_packets: 0
>> rx38_lro_bytes: 0
>> rx38_ecn_mark: 0
>> rx38_removed_vlan_packets: 0
>> rx38_wqe_err: 0
>> rx38_mpwqe_filler_cqes: 0
>> rx38_mpwqe_filler_strides: 0
>> rx38_buff_alloc_err: 0
>> rx38_cqe_compress_blks: 0
>> rx38_cqe_compress_pkts: 0
>> rx38_page_reuse: 0
>> rx38_cache_reuse: 0
>> rx38_cache_full: 0
>> rx38_cache_empty: 2560
>> rx38_cache_busy: 0
>> rx38_cache_waive: 0
>> rx38_congst_umr: 0
>> rx38_arfs_err: 0
>> rx38_xdp_tx_xmit: 0
>> rx38_xdp_tx_full: 0
>> rx38_xdp_tx_err: 0
>> rx38_xdp_tx_cqes: 0
>> [snip: rx39 .. rx55 are identical idle queues - every counter 0, except rxN_cache_empty: 2560 in each]
>> tx0_packets: 5868971166
>> tx0_bytes: 7384241881537
>> tx0_tso_packets: 1005089669
>> tx0_tso_bytes: 5138882499687
>> tx0_tso_inner_packets: 0
>> tx0_tso_inner_bytes: 0
>> tx0_csum_partial: 1405330470
>> tx0_csum_partial_inner: 0
>> tx0_added_vlan_packets: 3247061022
>> tx0_nop: 83925216
>> tx0_csum_none: 1841730552
>> tx0_stopped: 0
>> tx0_dropped: 0
>> tx0_xmit_more: 29664303
>> tx0_recover: 0
>> tx0_cqes: 3217398842
>> tx0_wake: 0
>> tx0_cqe_err: 0
>> tx1_packets: 5599378674
>> tx1_bytes: 7272236466962
>> tx1_tso_packets: 1024612268
>> tx1_tso_bytes: 5244192050917
>> tx1_tso_inner_packets: 0
>> tx1_tso_inner_bytes: 0
>> tx1_csum_partial: 1438007932
>> tx1_csum_partial_inner: 0
>> tx1_added_vlan_packets: 2919765857
>> tx1_nop: 79661231
>> tx1_csum_none: 1481757925
>> tx1_stopped: 0
>> tx1_dropped: 0
>> tx1_xmit_more: 29485355
>> tx1_recover: 0
>> tx1_cqes: 2890282176
>> tx1_wake: 0
>> tx1_cqe_err: 0
>> tx2_packets: 5413821094
>> tx2_bytes: 7033951631334
>> tx2_tso_packets: 1002868589
>> tx2_tso_bytes: 5089549008985
>> tx2_tso_inner_packets: 0
>> tx2_tso_inner_bytes: 0
>> tx2_csum_partial: 1404186175
>> tx2_csum_partial_inner: 0
>> tx2_added_vlan_packets: 2822670460
>> tx2_nop: 77115408
>> tx2_csum_none: 1418484285
>> tx2_stopped: 0
>> tx2_dropped: 0
>> tx2_xmit_more: 29321129
>> tx2_recover: 0
>> tx2_cqes: 2793351019
>> tx2_wake: 0
>> tx2_cqe_err: 0
>> tx3_packets: 5479609727
>> tx3_bytes: 7116904107659
>> tx3_tso_packets: 1002992639
>> tx3_tso_bytes: 5154225081979
>> tx3_tso_inner_packets: 0
>> tx3_tso_inner_bytes: 0
>> tx3_csum_partial: 1415739849
>> tx3_csum_partial_inner: 0
>> tx3_added_vlan_packets: 2842823811
>> tx3_nop: 78060813
>> tx3_csum_none: 1427083971
>> tx3_stopped: 0
>> tx3_dropped: 0
>> tx3_xmit_more: 28575040
>> tx3_recover: 0
>> tx3_cqes: 2814250785
>> tx3_wake: 0
>> tx3_cqe_err: 0
>> tx4_packets: 5508297397
>> tx4_bytes: 7127659369902
>> tx4_tso_packets: 1007356432
>> tx4_tso_bytes: 5145975736034
>> tx4_tso_inner_packets: 0
>> tx4_tso_inner_bytes: 0
>> tx4_csum_partial: 1411271000
>> tx4_csum_partial_inner: 0
>> tx4_added_vlan_packets: 2882086825
>> tx4_nop: 78433610
>> tx4_csum_none: 1470815825
>> tx4_stopped: 0
>> tx4_dropped: 0
>> tx4_xmit_more: 28632444
>> tx4_recover: 0
>> tx4_cqes: 2853456464
>> tx4_wake: 0
>> tx4_cqe_err: 0
>> tx5_packets: 5513864156
>> tx5_bytes: 7165864145517
>> tx5_tso_packets: 1014046485
>> tx5_tso_bytes: 5192635614477
>> tx5_tso_inner_packets: 0
>> tx5_tso_inner_bytes: 0
>> tx5_csum_partial: 1420810473
>> tx5_csum_partial_inner: 0
>> tx5_added_vlan_packets: 2861370556
>> tx5_nop: 78481355
>> tx5_csum_none: 1440560083
>> tx5_stopped: 0
>> tx5_dropped: 0
>> tx5_xmit_more: 28222467
>> tx5_recover: 0
>> tx5_cqes: 2833149758
>> tx5_wake: 0
>> tx5_cqe_err: 0
>> tx6_packets: 5560724761
>> tx6_bytes: 7210309972086
>> tx6_tso_packets: 994050514
>> tx6_tso_bytes: 5171393741595
>> tx6_tso_inner_packets: 0
>> tx6_tso_inner_bytes: 0
>> tx6_csum_partial: 1414303265
>> tx6_csum_partial_inner: 0
>> tx6_added_vlan_packets: 2905794177
>> tx6_nop: 79353318
>> tx6_csum_none: 1491490912
>> tx6_stopped: 0
>> tx6_dropped: 0
>> tx6_xmit_more: 31246664
>> tx6_recover: 0
>> tx6_cqes: 2874549217
>> tx6_wake: 0
>> tx6_cqe_err: 0
>> tx7_packets: 5557594170
>> tx7_bytes: 7223138778685
>> tx7_tso_packets: 1013475396
>> tx7_tso_bytes: 5241530065484
>> tx7_tso_inner_packets: 0
>> tx7_tso_inner_bytes: 0
>> tx7_csum_partial: 1438604314
>> tx7_csum_partial_inner: 0
>> tx7_added_vlan_packets: 2873917552
>> tx7_nop: 79057059
>> tx7_csum_none: 1435313239
>> tx7_stopped: 0
>> tx7_dropped: 0
>> tx7_xmit_more: 29258761
>> tx7_recover: 0
>> tx7_cqes: 2844660578
>> tx7_wake: 0
>> tx7_cqe_err: 0
>> tx8_packets: 5521254733
>> tx8_bytes: 7208043146297
>> tx8_tso_packets: 1014670801
>> tx8_tso_bytes: 5185842447246
>> tx8_tso_inner_packets: 0
>> tx8_tso_inner_bytes: 0
>> tx8_csum_partial: 1431631562
>> tx8_csum_partial_inner: 0
>> tx8_added_vlan_packets: 2872641129
>> tx8_nop: 78545776
>> tx8_csum_none: 1441009567
>> tx8_stopped: 0
>> tx8_dropped: 0
>> tx8_xmit_more: 29106291
>> tx8_recover: 0
>> tx8_cqes: 2843536748
>> tx8_wake: 0
>> tx8_cqe_err: 0
>> tx9_packets: 5528889957
>> tx9_bytes: 7191793816058
>> tx9_tso_packets: 1015955476
>> tx9_tso_bytes: 5207232047828
>> tx9_tso_inner_packets: 0
>> tx9_tso_inner_bytes: 0
>> tx9_csum_partial: 1421266796
>> tx9_csum_partial_inner: 0
>> tx9_added_vlan_packets: 2869523921
>> tx9_nop: 78586218
>> tx9_csum_none: 1448257125
>> tx9_stopped: 0
>> tx9_dropped: 0
>> tx9_xmit_more: 29483347
>> tx9_recover: 0
>> tx9_cqes: 2840042245
>> tx9_wake: 0
>> tx9_cqe_err: 0
>> tx10_packets: 5556351222
>> tx10_bytes: 7254798330757
>> tx10_tso_packets: 1028554460
>> tx10_tso_bytes: 5246179615774
>> tx10_tso_inner_packets: 0
>> tx10_tso_inner_bytes: 0
>> tx10_csum_partial: 1430459021
>> tx10_csum_partial_inner: 0
>> tx10_added_vlan_packets: 2881683382
>> tx10_nop: 79139584
>> tx10_csum_none: 1451224361
>> tx10_stopped: 0
>> tx10_dropped: 0
>> tx10_xmit_more: 29217190
>> tx10_recover: 0
>> tx10_cqes: 2852467898
>> tx10_wake: 0
>> tx10_cqe_err: 0
>> tx11_packets: 5455631854
>> tx11_bytes: 7061121713772
>> tx11_tso_packets: 992133383
>> tx11_tso_bytes: 5089419722682
>> tx11_tso_inner_packets: 0
>> tx11_tso_inner_bytes: 0
>> tx11_csum_partial: 1395542033
>> tx11_csum_partial_inner: 0
>> tx11_added_vlan_packets: 2852589093
>> tx11_nop: 77799857
>> tx11_csum_none: 1457047060
>> tx11_stopped: 0
>> tx11_dropped: 0
>> tx11_xmit_more: 29559927
>> tx11_recover: 0
>> tx11_cqes: 2823031110
>> tx11_wake: 0
>> tx11_cqe_err: 0
>> tx12_packets: 5488286808
>> tx12_bytes: 7137087569303
>> tx12_tso_packets: 1006435537
>> tx12_tso_bytes: 5163371416750
>> tx12_tso_inner_packets: 0
>> tx12_tso_inner_bytes: 0
>> tx12_csum_partial: 1414799411
>> tx12_csum_partial_inner: 0
>> tx12_added_vlan_packets: 2841679543
>> tx12_nop: 78387039
>> tx12_csum_none: 1426880132
>> tx12_stopped: 0
>> tx12_dropped: 0
>> tx12_xmit_more: 28607526
>> tx12_recover: 0
>> tx12_cqes: 2813073557
>> tx12_wake: 0
>> tx12_cqe_err: 0
>> tx13_packets: 5594132290
>> tx13_bytes: 7251106284829
>> tx13_tso_packets: 1035172061
>> tx13_tso_bytes: 5251200286298
>> tx13_tso_inner_packets: 0
>> tx13_tso_inner_bytes: 0
>> tx13_csum_partial: 1443665981
>> tx13_csum_partial_inner: 0
>> tx13_added_vlan_packets: 2916604799
>> tx13_nop: 79670465
>> tx13_csum_none: 1472938818
>> tx13_stopped: 0
>> tx13_dropped: 0
>> tx13_xmit_more: 27797067
>> tx13_recover: 0
>> tx13_cqes: 2888809352
>> tx13_wake: 0
>> tx13_cqe_err: 0
>> tx14_packets: 5548790952
>> tx14_bytes: 7194211868411
>> tx14_tso_packets: 1021015561
>> tx14_tso_bytes: 5231483708869
>> tx14_tso_inner_packets: 0
>> tx14_tso_inner_bytes: 0
>> tx14_csum_partial: 1427711576
>> tx14_csum_partial_inner: 0
>> tx14_added_vlan_packets: 2875288572
>> tx14_nop: 78900224
>> tx14_csum_none: 1447576996
>> tx14_stopped: 0
>> tx14_dropped: 0
>> tx14_xmit_more: 30003496
>> tx14_recover: 0
>> tx14_cqes: 2845286732
>> tx14_wake: 0
>> tx14_cqe_err: 0
>> tx15_packets: 5609310963
>> tx15_bytes: 7271380831798
>> tx15_tso_packets: 1027830118
>> tx15_tso_bytes: 5229697431506
>> tx15_tso_inner_packets: 0
>> tx15_tso_inner_bytes: 0
>> tx15_csum_partial: 1429209941
>> tx15_csum_partial_inner: 0
>> tx15_added_vlan_packets: 2940315402
>> tx15_nop: 79950883
>> tx15_csum_none: 1511105462
>> tx15_stopped: 0
>> tx15_dropped: 0
>> tx15_xmit_more: 28820740
>> tx15_recover: 0
>> tx15_cqes: 2911496633
>> tx15_wake: 0
>> tx15_cqe_err: 0
>> tx16_packets: 4465363036
>> tx16_bytes: 5769771803704
>> tx16_tso_packets: 817101913
>> tx16_tso_bytes: 4180172833814
>> tx16_tso_inner_packets: 0
>> tx16_tso_inner_bytes: 0
>> tx16_csum_partial: 1136731404
>> tx16_csum_partial_inner: 0
>> tx16_added_vlan_packets: 2332178232
>> tx16_nop: 63458573
>> tx16_csum_none: 1195446828
>> tx16_stopped: 0
>> tx16_dropped: 0
>> tx16_xmit_more: 23756254
>> tx16_recover: 0
>> tx16_cqes: 2308423025
>> tx16_wake: 0
>> tx16_cqe_err: 0
>> tx17_packets: 4380386348
>> tx17_bytes: 5708702994526
>> tx17_tso_packets: 813638023
>> tx17_tso_bytes: 4130806014947
>> tx17_tso_inner_packets: 0
>> tx17_tso_inner_bytes: 0
>> tx17_csum_partial: 1133007164
>> tx17_csum_partial_inner: 0
>> tx17_added_vlan_packets: 2277314787
>> tx17_nop: 62377372
>> tx17_csum_none: 1144307623
>> tx17_stopped: 0
>> tx17_dropped: 0
>> tx17_xmit_more: 23731361
>> tx17_recover: 0
>> tx17_cqes: 2253584638
>> tx17_wake: 0
>> tx17_cqe_err: 0
>> tx18_packets: 4450359743
>> tx18_bytes: 5758968674820
>> tx18_tso_packets: 815791601
>> tx18_tso_bytes: 4179942688909
>> tx18_tso_inner_packets: 0
>> tx18_tso_inner_bytes: 0
>> tx18_csum_partial: 1137649257
>> tx18_csum_partial_inner: 0
>> tx18_added_vlan_packets: 2314556550
>> tx18_nop: 63271085
>> tx18_csum_none: 1176907293
>> tx18_stopped: 0
>> tx18_dropped: 0
>> tx18_xmit_more: 23055770
>> tx18_recover: 0
>> tx18_cqes: 2291501928
>> tx18_wake: 0
>> tx18_cqe_err: 0
>> tx19_packets: 4596064378
>> tx19_bytes: 5916675706535
>> tx19_tso_packets: 825788649
>> tx19_tso_bytes: 4208046929921
>> tx19_tso_inner_packets: 0
>> tx19_tso_inner_bytes: 0
>> tx19_csum_partial: 1150666569
>> tx19_csum_partial_inner: 0
>> tx19_added_vlan_packets: 2450567026
>> tx19_nop: 65468504
>> tx19_csum_none: 1299900457
>> tx19_stopped: 0
>> tx19_dropped: 0
>> tx19_xmit_more: 23846250
>> tx19_recover: 0
>> tx19_cqes: 2426722127
>> tx19_wake: 0
>> tx19_cqe_err: 0
>> tx20_packets: 4424935388
>> tx20_bytes: 5757631205901
>> tx20_tso_packets: 804875006
>> tx20_tso_bytes: 4156262736109
>> tx20_tso_inner_packets: 0
>> tx20_tso_inner_bytes: 0
>> tx20_csum_partial: 1134144916
>> tx20_csum_partial_inner: 0
>> tx20_added_vlan_packets: 2294839665
>> tx20_nop: 63023986
>> tx20_csum_none: 1160694749
>> tx20_stopped: 0
>> tx20_dropped: 0
>> tx20_xmit_more: 23393201
>> tx20_recover: 0
>> tx20_cqes: 2271447623
>> tx20_wake: 0
>> tx20_cqe_err: 0
>> tx21_packets: 4595062285
>> tx21_bytes: 5958671993467
>> tx21_tso_packets: 821936215
>> tx21_tso_bytes: 4187977870684
>> tx21_tso_inner_packets: 0
>> tx21_tso_inner_bytes: 0
>> tx21_csum_partial: 1143339787
>> tx21_csum_partial_inner: 0
>> tx21_added_vlan_packets: 2457167412
>> tx21_nop: 65697763
>> tx21_csum_none: 1313827625
>> tx21_stopped: 0
>> tx21_dropped: 0
>> tx21_xmit_more: 23858345
>> tx21_recover: 0
>> tx21_cqes: 2433310348
>> tx21_wake: 0
>> tx21_cqe_err: 0
>> tx22_packets: 4664446513
>> tx22_bytes: 5931429292082
>> tx22_tso_packets: 814457881
>> tx22_tso_bytes: 4148607956533
>> tx22_tso_inner_packets: 0
>> tx22_tso_inner_bytes: 0
>> tx22_csum_partial: 1127284783
>> tx22_csum_partial_inner: 0
>> tx22_added_vlan_packets: 2548650146
>> tx22_nop: 66299909
>> tx22_csum_none: 1421365363
>> tx22_stopped: 0
>> tx22_dropped: 0
>> tx22_xmit_more: 23800911
>> tx22_recover: 0
>> tx22_cqes: 2524850415
>> tx22_wake: 0
>> tx22_cqe_err: 0
>> tx23_packets: 4416221747
>> tx23_bytes: 5721472587985
>> tx23_tso_packets: 823538520
>> tx23_tso_bytes: 4163520218617
>> tx23_tso_inner_packets: 0
>> tx23_tso_inner_bytes: 0
>> tx23_csum_partial: 1135996006
>> tx23_csum_partial_inner: 0
>> tx23_added_vlan_packets: 2292404120
>> tx23_nop: 62709432
>> tx23_csum_none: 1156408114
>> tx23_stopped: 0
>> tx23_dropped: 0
>> tx23_xmit_more: 22299889
>> tx23_recover: 0
>> tx23_cqes: 2270105487
>> tx23_wake: 0
>> tx23_cqe_err: 0
>> tx24_packets: 4420014824
>> tx24_bytes: 5740767318521
>> tx24_tso_packets: 820838072
>> tx24_tso_bytes: 4183722948422
>> tx24_tso_inner_packets: 0
>> tx24_tso_inner_bytes: 0
>> tx24_csum_partial: 1138070059
>> tx24_csum_partial_inner: 0
>> tx24_added_vlan_packets: 2289043946
>> tx24_nop: 62797341
>> tx24_csum_none: 1150973887
>> tx24_stopped: 0
>> tx24_dropped: 0
>> tx24_xmit_more: 22744690
>> tx24_recover: 0
>> tx24_cqes: 2266300568
>> tx24_wake: 0
>> tx24_cqe_err: 0
>> tx25_packets: 4413225545
>> tx25_bytes: 5716162617155
>> tx25_tso_packets: 808274341
>> tx25_tso_bytes: 4138408857714
>> tx25_tso_inner_packets: 0
>> tx25_tso_inner_bytes: 0
>> tx25_csum_partial: 1134587898
>> tx25_csum_partial_inner: 0
>> tx25_added_vlan_packets: 2297149310
>> tx25_nop: 62958238
>> tx25_csum_none: 1162561412
>> tx25_stopped: 0
>> tx25_dropped: 0
>> tx25_xmit_more: 24463552
>> tx25_recover: 0
>> tx25_cqes: 2272686971
>> tx25_wake: 0
>> tx25_cqe_err: 0
>> tx26_packets: 4524907591
>> tx26_bytes: 5865394280699
>> tx26_tso_packets: 807270022
>> tx26_tso_bytes: 4148754705317
>> tx26_tso_inner_packets: 0
>> tx26_tso_inner_bytes: 0
>> tx26_csum_partial: 1130306933
>> tx26_csum_partial_inner: 0
>> tx26_added_vlan_packets: 2402682460
>> tx26_nop: 64474322
>> tx26_csum_none: 1272375527
>> tx26_stopped: 1
>> tx26_dropped: 0
>> tx26_xmit_more: 23316186
>> tx26_recover: 0
>> tx26_cqes: 2379367502
>> tx26_wake: 1
>> tx26_cqe_err: 0
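
Worth noting in the quoted dump above: tx26 is the only queue with tx26_stopped: 1 and tx26_wake: 1, i.e. that ring filled up once and was restarted. Dumps this size are hard to scan by eye; as a hypothetical helper (not part of the original mail), a few lines of Python can condense an `ethtool -S <iface>` dump down to just the non-zero per-queue counters:

```python
# Sketch: condense mlx5-style `ethtool -S` output so only non-zero
# per-queue counters remain, grouped by queue. The sample input below
# is illustrative, mirroring counters quoted in this thread.
import re
from collections import defaultdict

def nonzero_counters(lines):
    """Return {queue: {counter: value}} for non-zero rxN_*/txN_* counters."""
    per_queue = defaultdict(dict)
    for line in lines:
        # Match e.g. "tx26_stopped: 1"; ignore quote prefixes and globals.
        m = re.match(r"[>\s]*((?:rx|tx)\d+)_(\w+):\s*(\d+)\s*$", line)
        if not m:
            continue
        queue, counter, value = m.group(1), m.group(2), int(m.group(3))
        if value:
            per_queue[queue][counter] = value
    return dict(per_queue)

sample = [
    ">> tx26_stopped: 1",
    ">> tx26_wake: 1",
    ">> tx27_stopped: 0",
    ">> rx39_cache_empty: 2560",
]
print(nonzero_counters(sample))
# -> {'tx26': {'stopped': 1, 'wake': 1}, 'rx39': {'cache_empty': 2560}}
```

Fed the full dump (e.g. via `ethtool -S enp175s0f1`), this surfaces outliers like the tx26 stall immediately instead of burying them in thousands of zero lines.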
>> tx27_packets: 4376114969
>> tx27_bytes: 5683551238304
>> tx27_tso_packets: 809344829
>> tx27_tso_bytes: 4124331859270
>> tx27_tso_inner_packets: 0
>> tx27_tso_inner_bytes: 0
>> tx27_csum_partial: 1124954937
>> tx27_csum_partial_inner: 0
>> tx27_added_vlan_packets: 2267871300
>> tx27_nop: 62213214
>> tx27_csum_none: 1142916363
>> tx27_stopped: 0
>> tx27_dropped: 0
>> tx27_xmit_more: 23369974
>> tx27_recover: 0
>> tx27_cqes: 2244502686
>> tx27_wake: 0
>> tx27_cqe_err: 0
>> tx28_packets: 3
>> tx28_bytes: 266
>> tx28_tso_packets: 0
>> tx28_tso_bytes: 0
>> tx28_tso_inner_packets: 0
>> tx28_tso_inner_bytes: 0
>> tx28_csum_partial: 0
>> tx28_csum_partial_inner: 0
>> tx28_added_vlan_packets: 0
>> tx28_nop: 0
>> tx28_csum_none: 3
>> tx28_stopped: 0
>> tx28_dropped: 0
>> tx28_xmit_more: 0
>> tx28_recover: 0
>> tx28_cqes: 3
>> tx28_wake: 0
>> tx28_cqe_err: 0
>> [snip: tx29 .. tx54 are identical idle queues - every counter 0]
>> tx55_packets: 0
>> tx55_bytes: 0
>> tx55_tso_packets: 0
>> tx55_tso_bytes: 0
>> tx55_tso_inner_packets: 0
>> tx55_tso_inner_bytes: 0
>> tx55_csum_partial: 0
>> tx55_csum_partial_inner: 0
>> tx55_added_vlan_packets: 0
>> tx55_nop: 0
>> tx55_csum_none: 0
>> tx55_stopped: 0
>> tx55_dropped: 0
>> tx55_xmit_more: 0
>> tx55_recover: 0
>> tx55_cqes: 0
>> tx55_wake: 0
>> tx55_cqe_err: 0
>> tx0_xdp_xmit: 0
>> tx0_xdp_full: 0
>> tx0_xdp_err: 0
>> tx0_xdp_cqes: 0
>> tx1_xdp_xmit: 0
>> tx1_xdp_full: 0
>> tx1_xdp_err: 0
>> tx1_xdp_cqes: 0
>> tx2_xdp_xmit: 0
>> tx2_xdp_full: 0
>> tx2_xdp_err: 0
>> tx2_xdp_cqes: 0
>> tx3_xdp_xmit: 0
>> tx3_xdp_full: 0
>> tx3_xdp_err: 0
>> tx3_xdp_cqes: 0
>> tx4_xdp_xmit: 0
>> tx4_xdp_full: 0
>> tx4_xdp_err: 0
>> tx4_xdp_cqes: 0
>> tx5_xdp_xmit: 0
>> tx5_xdp_full: 0
>> tx5_xdp_err: 0
>> tx5_xdp_cqes: 0
>> tx6_xdp_xmit: 0
>> tx6_xdp_full: 0
>> tx6_xdp_err: 0
>> tx6_xdp_cqes: 0
>> tx7_xdp_xmit: 0
>> tx7_xdp_full: 0
>> tx7_xdp_err: 0
>> tx7_xdp_cqes: 0
>> tx8_xdp_xmit: 0
>> tx8_xdp_full: 0
>> tx8_xdp_err: 0
>> tx8_xdp_cqes: 0
>> tx9_xdp_xmit: 0
>> tx9_xdp_full: 0
>> tx9_xdp_err: 0
>> tx9_xdp_cqes: 0
>> tx10_xdp_xmit: 0
>> tx10_xdp_full: 0
>> tx10_xdp_err: 0
>> tx10_xdp_cqes: 0
>> tx11_xdp_xmit: 0
>> tx11_xdp_full: 0
>> tx11_xdp_err: 0
>> tx11_xdp_cqes: 0
>> tx12_xdp_xmit: 0
>> tx12_xdp_full: 0
>> tx12_xdp_err: 0
>> tx12_xdp_cqes: 0
>> tx13_xdp_xmit: 0
>> tx13_xdp_full: 0
>> tx13_xdp_err: 0
>> tx13_xdp_cqes: 0
>> tx14_xdp_xmit: 0
>> tx14_xdp_full: 0
>> tx14_xdp_err: 0
>> tx14_xdp_cqes: 0
>> tx15_xdp_xmit: 0
>> tx15_xdp_full: 0
>> tx15_xdp_err: 0
>> tx15_xdp_cqes: 0
>> tx16_xdp_xmit: 0
>> tx16_xdp_full: 0
>> tx16_xdp_err: 0
>> tx16_xdp_cqes: 0
>> tx17_xdp_xmit: 0
>> tx17_xdp_full: 0
>> tx17_xdp_err: 0
>> tx17_xdp_cqes: 0
>> tx18_xdp_xmit: 0
>> tx18_xdp_full: 0
>> tx18_xdp_err: 0
>> tx18_xdp_cqes: 0
>> tx19_xdp_xmit: 0
>> tx19_xdp_full: 0
>> tx19_xdp_err: 0
>> tx19_xdp_cqes: 0
>> tx20_xdp_xmit: 0
>> tx20_xdp_full: 0
>> tx20_xdp_err: 0
>> tx20_xdp_cqes: 0
>> tx21_xdp_xmit: 0
>> tx21_xdp_full: 0
>> tx21_xdp_err: 0
>> tx21_xdp_cqes: 0
>> tx22_xdp_xmit: 0
>> tx22_xdp_full: 0
>> tx22_xdp_err: 0
>> tx22_xdp_cqes: 0
>> tx23_xdp_xmit: 0
>> tx23_xdp_full: 0
>> tx23_xdp_err: 0
>> tx23_xdp_cqes: 0
>> tx24_xdp_xmit: 0
>> tx24_xdp_full: 0
>> tx24_xdp_err: 0
>> tx24_xdp_cqes: 0
>> tx25_xdp_xmit: 0
>> tx25_xdp_full: 0
>> tx25_xdp_err: 0
>> tx25_xdp_cqes: 0
>> tx26_xdp_xmit: 0
>> tx26_xdp_full: 0
>> tx26_xdp_err: 0
>> tx26_xdp_cqes: 0
>> tx27_xdp_xmit: 0
>> tx27_xdp_full: 0
>> tx27_xdp_err: 0
>> tx27_xdp_cqes: 0
>> tx28_xdp_xmit: 0
>> tx28_xdp_full: 0
>> tx28_xdp_err: 0
>> tx28_xdp_cqes: 0
>> tx29_xdp_xmit: 0
>> tx29_xdp_full: 0
>> tx29_xdp_err: 0
>> tx29_xdp_cqes: 0
>> tx30_xdp_xmit: 0
>> tx30_xdp_full: 0
>> tx30_xdp_err: 0
>> tx30_xdp_cqes: 0
>> tx31_xdp_xmit: 0
>> tx31_xdp_full: 0
>> tx31_xdp_err: 0
>> tx31_xdp_cqes: 0
>> tx32_xdp_xmit: 0
>> tx32_xdp_full: 0
>> tx32_xdp_err: 0
>> tx32_xdp_cqes: 0
>> tx33_xdp_xmit: 0
>> tx33_xdp_full: 0
>> tx33_xdp_err: 0
>> tx33_xdp_cqes: 0
>> tx34_xdp_xmit: 0
>> tx34_xdp_full: 0
>> tx34_xdp_err: 0
>> tx34_xdp_cqes: 0
>> tx35_xdp_xmit: 0
>> tx35_xdp_full: 0
>> tx35_xdp_err: 0
>> tx35_xdp_cqes: 0
>> tx36_xdp_xmit: 0
>> tx36_xdp_full: 0
>> tx36_xdp_err: 0
>> tx36_xdp_cqes: 0
>> tx37_xdp_xmit: 0
>> tx37_xdp_full: 0
>> tx37_xdp_err: 0
>> tx37_xdp_cqes: 0
>> tx38_xdp_xmit: 0
>> tx38_xdp_full: 0
>> tx38_xdp_err: 0
>> tx38_xdp_cqes: 0
>> tx39_xdp_xmit: 0
>> tx39_xdp_full: 0
>> tx39_xdp_err: 0
>> tx39_xdp_cqes: 0
>> tx40_xdp_xmit: 0
>> tx40_xdp_full: 0
>> tx40_xdp_err: 0
>> tx40_xdp_cqes: 0
>> tx41_xdp_xmit: 0
>> tx41_xdp_full: 0
>> tx41_xdp_err: 0
>> tx41_xdp_cqes: 0
>> tx42_xdp_xmit: 0
>> tx42_xdp_full: 0
>> tx42_xdp_err: 0
>> tx42_xdp_cqes: 0
>> tx43_xdp_xmit: 0
>> tx43_xdp_full: 0
>> tx43_xdp_err: 0
>> tx43_xdp_cqes: 0
>> tx44_xdp_xmit: 0
>> tx44_xdp_full: 0
>> tx44_xdp_err: 0
>> tx44_xdp_cqes: 0
>> tx45_xdp_xmit: 0
>> tx45_xdp_full: 0
>> tx45_xdp_err: 0
>> tx45_xdp_cqes: 0
>> tx46_xdp_xmit: 0
>> tx46_xdp_full: 0
>> tx46_xdp_err: 0
>> tx46_xdp_cqes: 0
>> tx47_xdp_xmit: 0
>> tx47_xdp_full: 0
>> tx47_xdp_err: 0
>> tx47_xdp_cqes: 0
>> tx48_xdp_xmit: 0
>> tx48_xdp_full: 0
>> tx48_xdp_err: 0
>> tx48_xdp_cqes: 0
>> tx49_xdp_xmit: 0
>> tx49_xdp_full: 0
>> tx49_xdp_err: 0
>> tx49_xdp_cqes: 0
>> tx50_xdp_xmit: 0
>> tx50_xdp_full: 0
>> tx50_xdp_err: 0
>> tx50_xdp_cqes: 0
>> tx51_xdp_xmit: 0
>> tx51_xdp_full: 0
>> tx51_xdp_err: 0
>> tx51_xdp_cqes: 0
>> tx52_xdp_xmit: 0
>> tx52_xdp_full: 0
>> tx52_xdp_err: 0
>> tx52_xdp_cqes: 0
>> tx53_xdp_xmit: 0
>> tx53_xdp_full: 0
>> tx53_xdp_err: 0
>> tx53_xdp_cqes: 0
>> tx54_xdp_xmit: 0
>> tx54_xdp_full: 0
>> tx54_xdp_err: 0
>> tx54_xdp_cqes: 0
>> tx55_xdp_xmit: 0
>> tx55_xdp_full: 0
>> tx55_xdp_err: 0
>> tx55_xdp_cqes: 0
>>
>> ethtool -S enp175s0f0
>> NIC statistics:
>> rx_packets: 141574897253
>> rx_bytes: 184445040406258
>> tx_packets: 172569543894
>> tx_bytes: 99486882076365
>> tx_tso_packets: 9367664195
>> tx_tso_bytes: 56435233992948
>> tx_tso_inner_packets: 0
>> tx_tso_inner_bytes: 0
>> tx_added_vlan_packets: 141297671626
>> tx_nop: 2102916272
>> rx_lro_packets: 0
>> rx_lro_bytes: 0
>> rx_ecn_mark: 0
>> rx_removed_vlan_packets: 141574897252
>> rx_csum_unnecessary: 0
>> rx_csum_none: 23135854
>> rx_csum_complete: 141551761398
>> rx_csum_unnecessary_inner: 0
>> rx_xdp_drop: 0
>> rx_xdp_redirect: 0
>> rx_xdp_tx_xmit: 0
>> rx_xdp_tx_full: 0
>> rx_xdp_tx_err: 0
>> rx_xdp_tx_cqe: 0
>> tx_csum_none: 127934791664
> It is a good idea to look into this: TX is not requesting hw TX
> csumming for a lot of packets. Maybe you are wasting a lot of CPU on
> calculating csums, or maybe this is just the rx csum complete..
>
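To put a number on that observation, here is a minimal sketch that computes the share of TX packets sent without a hardware checksum request, using only the two counters quoted below (the values are copied from this ethtool dump; the awk invocation is an illustration, not part of the original thread):

```shell
# Share of TX packets that did not request hw csum offload,
# from the tx_csum_none / tx_csum_partial counters quoted in this mail.
awk -F': *' '
  /tx_csum_none:/    { none = $2 }
  /tx_csum_partial:/ { part = $2 }
  END { printf "%.1f%% of TX packets had no hw csum request\n",
        100 * none / (none + part) }
' <<'EOF'
tx_csum_none: 127934791664
tx_csum_partial: 13362879974
EOF
```

For pure forwarding that ratio is expected to be high, since routed packets normally need no TX checksum recomputation.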
>> tx_csum_partial: 13362879974
>> tx_csum_partial_inner: 0
>> tx_queue_stopped: 232561
> TX queues are stalling, which could be an indication of the pcie
> bottleneck.
>
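A back-of-envelope check of the x16 8GT figure mentioned earlier in the thread; the 256-byte max payload and ~24-byte TLP overhead below are assumptions (the real values depend on the platform and can be read from `lspci -vv` LnkSta/DevCtl), so treat the result as an estimate only:

```shell
# Estimated usable PCIe x16 Gen3 bandwidth per direction:
# 8 GT/s per lane, 16 lanes, 128b/130b encoding, minus TLP header
# overhead assuming a 256-byte max payload and ~24 bytes per TLP.
awk 'BEGIN {
  raw = 8 * 16 * 128 / 130            # Gbit/s after line encoding
  eff = raw * 256 / (256 + 24)        # minus assumed TLP overhead
  printf "raw %.1f Gbit/s, ~%.1f Gbit/s usable per direction\n", raw, eff
}'
```

With small forwarded packets the per-TLP and DMA descriptor overhead grows, so the practically reachable number can sit well below the raw 126 Gbit/s.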
>> tx_queue_dropped: 0
>> tx_xmit_more: 1266021946
>> tx_recover: 0
>> tx_cqes: 140031716469
>> tx_queue_wake: 232561
>> tx_udp_seg_rem: 0
>> tx_cqe_err: 0
>> tx_xdp_xmit: 0
>> tx_xdp_full: 0
>> tx_xdp_err: 0
>> tx_xdp_cqes: 0
>> rx_wqe_err: 0
>> rx_mpwqe_filler_cqes: 0
>> rx_mpwqe_filler_strides: 0
>> rx_buff_alloc_err: 0
>> rx_cqe_compress_blks: 0
>> rx_cqe_compress_pkts: 0
>> rx_page_reuse: 0
>> rx_cache_reuse: 16625975793
>> rx_cache_full: 54161465914
>> rx_cache_empty: 258048
>> rx_cache_busy: 54161472735
>> rx_cache_waive: 0
>> rx_congst_umr: 0
>> rx_arfs_err: 0
>> ch_events: 40572621887
>> ch_poll: 40885650979
>> ch_arm: 40429276692
>> ch_aff_change: 0
>> ch_eq_rearm: 0
>> rx_out_of_buffer: 2791690
>> rx_if_down_packets: 74
>> rx_vport_unicast_packets: 141843476308
>> rx_vport_unicast_bytes: 185421265403318
>> tx_vport_unicast_packets: 172569484005
>> tx_vport_unicast_bytes: 100019940094298
>> rx_vport_multicast_packets: 85122935
>> rx_vport_multicast_bytes: 5761316431
>> tx_vport_multicast_packets: 6452
>> tx_vport_multicast_bytes: 643540
>> rx_vport_broadcast_packets: 22423624
>> rx_vport_broadcast_bytes: 1390127090
>> tx_vport_broadcast_packets: 22024
>> tx_vport_broadcast_bytes: 1321440
>> rx_vport_rdma_unicast_packets: 0
>> rx_vport_rdma_unicast_bytes: 0
>> tx_vport_rdma_unicast_packets: 0
>> tx_vport_rdma_unicast_bytes: 0
>> rx_vport_rdma_multicast_packets: 0
>> rx_vport_rdma_multicast_bytes: 0
>> tx_vport_rdma_multicast_packets: 0
>> tx_vport_rdma_multicast_bytes: 0
>> tx_packets_phy: 172569501577
>> rx_packets_phy: 142871314588
>> rx_crc_errors_phy: 0
>> tx_bytes_phy: 100710212814151
>> rx_bytes_phy: 187209224289564
>> tx_multicast_phy: 6452
>> tx_broadcast_phy: 22024
>> rx_multicast_phy: 85122933
>> rx_broadcast_phy: 22423623
>> rx_in_range_len_errors_phy: 2
>> rx_out_of_range_len_phy: 0
>> rx_oversize_pkts_phy: 0
>> rx_symbol_err_phy: 0
>> tx_mac_control_phy: 0
>> rx_mac_control_phy: 0
>> rx_unsupported_op_phy: 0
>> rx_pause_ctrl_phy: 0
>> tx_pause_ctrl_phy: 0
>> rx_discards_phy: 920161423
> Ok, this port seems to be suffering more; RX is congested, maybe due to
> the pcie bottleneck.
Yes, this side is receiving more traffic - the second port has about 10G more TX
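For scale, the damage so far can be estimated from the two PHY counters quoted in this dump (rx_discards_phy vs rx_packets_phy); the one-liner itself is an illustration added here, not from the original mail:

```shell
# Share of packets that arrived at the PHY but were discarded before
# reaching the host, from rx_discards_phy / rx_packets_phy above.
awk 'BEGIN { printf "%.2f%% of PHY RX packets discarded\n",
             100 * 920161423 / 142871314588 }'
```

A fraction below one percent as a lifetime average still matters here, since the drops are concentrated in the peak-traffic periods where the CPUs hit 100%.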
>> tx_discards_phy: 0
>> tx_errors_phy: 0
>> rx_undersize_pkts_phy: 0
>> rx_fragments_phy: 0
>> rx_jabbers_phy: 0
>> rx_64_bytes_phy: 412006326
>> rx_65_to_127_bytes_phy: 11934371453
>> rx_128_to_255_bytes_phy: 3415281165
>> rx_256_to_511_bytes_phy: 2072955511
>> rx_512_to_1023_bytes_phy: 2415393005
>> rx_1024_to_1518_bytes_phy: 72182391608
>> rx_1519_to_2047_bytes_phy: 50438902587
>> rx_2048_to_4095_bytes_phy: 0
>> rx_4096_to_8191_bytes_phy: 0
>> rx_8192_to_10239_bytes_phy: 0
>> link_down_events_phy: 0
>> rx_pcs_symbol_err_phy: 0
>> rx_corrected_bits_phy: 0
>> rx_pci_signal_integrity: 0
>> tx_pci_signal_integrity: 48
>> rx_prio0_bytes: 186709842592642
>> rx_prio0_packets: 141481966007
>> tx_prio0_bytes: 100710171118138
>> tx_prio0_packets: 172569437949
>> rx_prio1_bytes: 492288152326
>> rx_prio1_packets: 385996045
>> tx_prio1_bytes: 0
>> tx_prio1_packets: 0
>> rx_prio2_bytes: 22119952
>> rx_prio2_packets: 70788
>> tx_prio2_bytes: 0
>> tx_prio2_packets: 0
>> rx_prio3_bytes: 546141102
>> rx_prio3_packets: 681608
>> tx_prio3_bytes: 0
>> tx_prio3_packets: 0
>> rx_prio4_bytes: 14665067
>> rx_prio4_packets: 29486
>> tx_prio4_bytes: 0
>> tx_prio4_packets: 0
>> rx_prio5_bytes: 158862504
>> rx_prio5_packets: 965307
>> tx_prio5_bytes: 0
>> tx_prio5_packets: 0
>> rx_prio6_bytes: 669337783
>> rx_prio6_packets: 1475775
>> tx_prio6_bytes: 0
>> tx_prio6_packets: 0
>> rx_prio7_bytes: 5623481349
>> rx_prio7_packets: 79926412
>> tx_prio7_bytes: 0
>> tx_prio7_packets: 0
>> module_unplug: 0
>> module_bus_stuck: 0
>> module_high_temp: 0
>> module_bad_shorted: 0
>> ch0_events: 1446162630
>> ch0_poll: 1463312972
>> ch0_arm: 1440728278
>> ch0_aff_change: 0
>> ch0_eq_rearm: 0
>> ch1_events: 1384301405
>> ch1_poll: 1399210915
>> ch1_arm: 1378636486
>> ch1_aff_change: 0
>> ch1_eq_rearm: 0
>> ch2_events: 1382788887
>> ch2_poll: 1397231470
>> ch2_arm: 1377058116
>> ch2_aff_change: 0
>> ch2_eq_rearm: 0
>> ch3_events: 1461956995
>> ch3_poll: 1475553146
>> ch3_arm: 1456571625
>> ch3_aff_change: 0
>> ch3_eq_rearm: 0
>> ch4_events: 1497359109
>> ch4_poll: 1511021037
>> ch4_arm: 1491733757
>> ch4_aff_change: 0
>> ch4_eq_rearm: 0
>> ch5_events: 1387736262
>> ch5_poll: 1400964615
>> ch5_arm: 1382382834
>> ch5_aff_change: 0
>> ch5_eq_rearm: 0
>> ch6_events: 1376772405
>> ch6_poll: 1390851449
>> ch6_arm: 1371551764
>> ch6_aff_change: 0
>> ch6_eq_rearm: 0
>> ch7_events: 1431271514
>> ch7_poll: 1445049729
>> ch7_arm: 1425753718
>> ch7_aff_change: 0
>> ch7_eq_rearm: 0
>> ch8_events: 1426976374
>> ch8_poll: 1439938692
>> ch8_arm: 1421392984
>> ch8_aff_change: 0
>> ch8_eq_rearm: 0
>> ch9_events: 1456160031
>> ch9_poll: 1468922870
>> ch9_arm: 1450930446
>> ch9_aff_change: 0
>> ch9_eq_rearm: 0
>> ch10_events: 1443640165
>> ch10_poll: 1456812203
>> ch10_arm: 1438425101
>> ch10_aff_change: 0
>> ch10_eq_rearm: 0
>> ch11_events: 1381104776
>> ch11_poll: 1393811057
>> ch11_arm: 1376059326
>> ch11_aff_change: 0
>> ch11_eq_rearm: 0
>> ch12_events: 1365223276
>> ch12_poll: 1378406059
>> ch12_arm: 1359950494
>> ch12_aff_change: 0
>> ch12_eq_rearm: 0
>> ch13_events: 1421622259
>> ch13_poll: 1434670996
>> ch13_arm: 1416241801
>> ch13_aff_change: 0
>> ch13_eq_rearm: 0
>> ch14_events: 1379084590
>> ch14_poll: 1392425015
>> ch14_arm: 1373675179
>> ch14_aff_change: 0
>> ch14_eq_rearm: 0
>> ch15_events: 1531217338
>> ch15_poll: 1543353833
>> ch15_arm: 1526350453
>> ch15_aff_change: 0
>> ch15_eq_rearm: 0
>> ch16_events: 1460469776
>> ch16_poll: 1467995928
>> ch16_arm: 1456010194
>> ch16_aff_change: 0
>> ch16_eq_rearm: 0
>> ch17_events: 1494067670
>> ch17_poll: 1500856680
>> ch17_arm: 1489232674
>> ch17_aff_change: 0
>> ch17_eq_rearm: 0
>> ch18_events: 1530126866
>> ch18_poll: 1537293620
>> ch18_arm: 1525476123
>> ch18_aff_change: 0
>> ch18_eq_rearm: 0
>> ch19_events: 1499526149
>> ch19_poll: 1506789309
>> ch19_arm: 1495161602
>> ch19_aff_change: 0
>> ch19_eq_rearm: 0
>> ch20_events: 1451479763
>> ch20_poll: 1459767921
>> ch20_arm: 1446360801
>> ch20_aff_change: 0
>> ch20_eq_rearm: 0
>> ch21_events: 1521413613
>> ch21_poll: 1529345146
>> ch21_arm: 1517229314
>> ch21_aff_change: 0
>> ch21_eq_rearm: 0
>> ch22_events: 1471950045
>> ch22_poll: 1479746764
>> ch22_arm: 1467681629
>> ch22_aff_change: 0
>> ch22_eq_rearm: 0
>> ch23_events: 1502968393
>> ch23_poll: 1510419909
>> ch23_arm: 1498168438
>> ch23_aff_change: 0
>> ch23_eq_rearm: 0
>> ch24_events: 1473451639
>> ch24_poll: 1482606899
>> ch24_arm: 1468212489
>> ch24_aff_change: 0
>> ch24_eq_rearm: 0
>> ch25_events: 1440399182
>> ch25_poll: 1448897475
>> ch25_arm: 1435044786
>> ch25_aff_change: 0
>> ch25_eq_rearm: 0
>> ch26_events: 1436831565
>> ch26_poll: 1445485731
>> ch26_arm: 1431827527
>> ch26_aff_change: 0
>> ch26_eq_rearm: 0
>> ch27_events: 1516560621
>> ch27_poll: 1524911010
>> ch27_arm: 1511430164
>> ch27_aff_change: 0
>> ch27_eq_rearm: 0
>> ch28_events: 4
>> ch28_poll: 4
>> ch28_arm: 4
>> ch28_aff_change: 0
>> ch28_eq_rearm: 0
>> ch29_events: 6
>> ch29_poll: 6
>> ch29_arm: 6
>> ch29_aff_change: 0
>> ch29_eq_rearm: 0
>> ch30_events: 4
>> ch30_poll: 4
>> ch30_arm: 4
>> ch30_aff_change: 0
>> ch30_eq_rearm: 0
>> ch31_events: 4
>> ch31_poll: 4
>> ch31_arm: 4
>> ch31_aff_change: 0
>> ch31_eq_rearm: 0
>> ch32_events: 4
>> ch32_poll: 4
>> ch32_arm: 4
>> ch32_aff_change: 0
>> ch32_eq_rearm: 0
>> ch33_events: 4
>> ch33_poll: 4
>> ch33_arm: 4
>> ch33_aff_change: 0
>> ch33_eq_rearm: 0
>> ch34_events: 4
>> ch34_poll: 4
>> ch34_arm: 4
>> ch34_aff_change: 0
>> ch34_eq_rearm: 0
>> ch35_events: 4
>> ch35_poll: 4
>> ch35_arm: 4
>> ch35_aff_change: 0
>> ch35_eq_rearm: 0
>> ch36_events: 4
>> ch36_poll: 4
>> ch36_arm: 4
>> ch36_aff_change: 0
>> ch36_eq_rearm: 0
>> ch37_events: 4
>> ch37_poll: 4
>> ch37_arm: 4
>> ch37_aff_change: 0
>> ch37_eq_rearm: 0
>> ch38_events: 4
>> ch38_poll: 4
>> ch38_arm: 4
>> ch38_aff_change: 0
>> ch38_eq_rearm: 0
>> ch39_events: 4
>> ch39_poll: 4
>> ch39_arm: 4
>> ch39_aff_change: 0
>> ch39_eq_rearm: 0
>> ch40_events: 4
>> ch40_poll: 4
>> ch40_arm: 4
>> ch40_aff_change: 0
>> ch40_eq_rearm: 0
>> ch41_events: 4
>> ch41_poll: 4
>> ch41_arm: 4
>> ch41_aff_change: 0
>> ch41_eq_rearm: 0
>> ch42_events: 4
>> ch42_poll: 4
>> ch42_arm: 4
>> ch42_aff_change: 0
>> ch42_eq_rearm: 0
>> ch43_events: 4
>> ch43_poll: 4
>> ch43_arm: 4
>> ch43_aff_change: 0
>> ch43_eq_rearm: 0
>> ch44_events: 4
>> ch44_poll: 4
>> ch44_arm: 4
>> ch44_aff_change: 0
>> ch44_eq_rearm: 0
>> ch45_events: 4
>> ch45_poll: 4
>> ch45_arm: 4
>> ch45_aff_change: 0
>> ch45_eq_rearm: 0
>> ch46_events: 4
>> ch46_poll: 4
>> ch46_arm: 4
>> ch46_aff_change: 0
>> ch46_eq_rearm: 0
>> ch47_events: 4
>> ch47_poll: 4
>> ch47_arm: 4
>> ch47_aff_change: 0
>> ch47_eq_rearm: 0
>> ch48_events: 4
>> ch48_poll: 4
>> ch48_arm: 4
>> ch48_aff_change: 0
>> ch48_eq_rearm: 0
>> ch49_events: 4
>> ch49_poll: 4
>> ch49_arm: 4
>> ch49_aff_change: 0
>> ch49_eq_rearm: 0
>> ch50_events: 4
>> ch50_poll: 4
>> ch50_arm: 4
>> ch50_aff_change: 0
>> ch50_eq_rearm: 0
>> ch51_events: 4
>> ch51_poll: 4
>> ch51_arm: 4
>> ch51_aff_change: 0
>> ch51_eq_rearm: 0
>> ch52_events: 4
>> ch52_poll: 4
>> ch52_arm: 4
>> ch52_aff_change: 0
>> ch52_eq_rearm: 0
>> ch53_events: 4
>> ch53_poll: 4
>> ch53_arm: 4
>> ch53_aff_change: 0
>> ch53_eq_rearm: 0
>> ch54_events: 4
>> ch54_poll: 4
>> ch54_arm: 4
>> ch54_aff_change: 0
>> ch54_eq_rearm: 0
>> ch55_events: 4
>> ch55_poll: 4
>> ch55_arm: 4
>> ch55_aff_change: 0
>> ch55_eq_rearm: 0
>> rx0_packets: 5861448653
>> rx0_bytes: 7389128595728
>> rx0_csum_complete: 5838312798
>> rx0_csum_unnecessary: 0
>> rx0_csum_unnecessary_inner: 0
>> rx0_csum_none: 23135855
>> rx0_xdp_drop: 0
>> rx0_xdp_redirect: 0
>> rx0_lro_packets: 0
>> rx0_lro_bytes: 0
>> rx0_ecn_mark: 0
>> rx0_removed_vlan_packets: 5861448653
>> rx0_wqe_err: 0
>> rx0_mpwqe_filler_cqes: 0
>> rx0_mpwqe_filler_strides: 0
>> rx0_buff_alloc_err: 0
>> rx0_cqe_compress_blks: 0
>> rx0_cqe_compress_pkts: 0
>> rx0_page_reuse: 0
>> rx0_cache_reuse: 2559
>> rx0_cache_full: 2930721512
>> rx0_cache_empty: 6656
>> rx0_cache_busy: 2930721765
>> rx0_cache_waive: 0
>> rx0_congst_umr: 0
>> rx0_arfs_err: 0
>> rx0_xdp_tx_xmit: 0
>> rx0_xdp_tx_full: 0
>> rx0_xdp_tx_err: 0
>> rx0_xdp_tx_cqes: 0
>> rx1_packets: 5550585106
>> rx1_bytes: 7255635262803
>> rx1_csum_complete: 5550585106
>> rx1_csum_unnecessary: 0
>> rx1_csum_unnecessary_inner: 0
>> rx1_csum_none: 0
>> rx1_xdp_drop: 0
>> rx1_xdp_redirect: 0
>> rx1_lro_packets: 0
>> rx1_lro_bytes: 0
>> rx1_ecn_mark: 0
>> rx1_removed_vlan_packets: 5550585106
>> rx1_wqe_err: 0
>> rx1_mpwqe_filler_cqes: 0
>> rx1_mpwqe_filler_strides: 0
>> rx1_buff_alloc_err: 0
>> rx1_cqe_compress_blks: 0
>> rx1_cqe_compress_pkts: 0
>> rx1_page_reuse: 0
>> rx1_cache_reuse: 2918845
>> rx1_cache_full: 2772373453
>> rx1_cache_empty: 6656
>> rx1_cache_busy: 2772373707
>> rx1_cache_waive: 0
>> rx1_congst_umr: 0
>> rx1_arfs_err: 0
>> rx1_xdp_tx_xmit: 0
>> rx1_xdp_tx_full: 0
>> rx1_xdp_tx_err: 0
>> rx1_xdp_tx_cqes: 0
>> rx2_packets: 5383874739
>> rx2_bytes: 7031545423967
>> rx2_csum_complete: 5383874739
>> rx2_csum_unnecessary: 0
>> rx2_csum_unnecessary_inner: 0
>> rx2_csum_none: 0
>> rx2_xdp_drop: 0
>> rx2_xdp_redirect: 0
>> rx2_lro_packets: 0
>> rx2_lro_bytes: 0
>> rx2_ecn_mark: 0
>> rx2_removed_vlan_packets: 5383874739
>> rx2_wqe_err: 0
>> rx2_mpwqe_filler_cqes: 0
>> rx2_mpwqe_filler_strides: 0
>> rx2_buff_alloc_err: 0
>> rx2_cqe_compress_blks: 0
>> rx2_cqe_compress_pkts: 0
>> rx2_page_reuse: 0
>> rx2_cache_reuse: 2173370
>> rx2_cache_full: 2689763744
>> rx2_cache_empty: 6656
>> rx2_cache_busy: 2689763998
>> rx2_cache_waive: 0
>> rx2_congst_umr: 0
>> rx2_arfs_err: 0
>> rx2_xdp_tx_xmit: 0
>> rx2_xdp_tx_full: 0
>> rx2_xdp_tx_err: 0
>> rx2_xdp_tx_cqes: 0
>> rx3_packets: 5456494012
>> rx3_bytes: 7120241119485
>> rx3_csum_complete: 5456494012
>> rx3_csum_unnecessary: 0
>> rx3_csum_unnecessary_inner: 0
>> rx3_csum_none: 0
>> rx3_xdp_drop: 0
>> rx3_xdp_redirect: 0
>> rx3_lro_packets: 0
>> rx3_lro_bytes: 0
>> rx3_ecn_mark: 0
>> rx3_removed_vlan_packets: 5456494012
>> rx3_wqe_err: 0
>> rx3_mpwqe_filler_cqes: 0
>> rx3_mpwqe_filler_strides: 0
>> rx3_buff_alloc_err: 0
>> rx3_cqe_compress_blks: 0
>> rx3_cqe_compress_pkts: 0
>> rx3_page_reuse: 0
>> rx3_cache_reuse: 2120123
>> rx3_cache_full: 2726126628
>> rx3_cache_empty: 6656
>> rx3_cache_busy: 2726126881
>> rx3_cache_waive: 0
>> rx3_congst_umr: 0
>> rx3_arfs_err: 0
>> rx3_xdp_tx_xmit: 0
>> rx3_xdp_tx_full: 0
>> rx3_xdp_tx_err: 0
>> rx3_xdp_tx_cqes: 0
>> rx4_packets: 5475216251
>> rx4_bytes: 7123129170196
>> rx4_csum_complete: 5475216251
>> rx4_csum_unnecessary: 0
>> rx4_csum_unnecessary_inner: 0
>> rx4_csum_none: 0
>> rx4_xdp_drop: 0
>> rx4_xdp_redirect: 0
>> rx4_lro_packets: 0
>> rx4_lro_bytes: 0
>> rx4_ecn_mark: 0
>> rx4_removed_vlan_packets: 5475216251
>> rx4_wqe_err: 0
>> rx4_mpwqe_filler_cqes: 0
>> rx4_mpwqe_filler_strides: 0
>> rx4_buff_alloc_err: 0
>> rx4_cqe_compress_blks: 0
>> rx4_cqe_compress_pkts: 0
>> rx4_page_reuse: 0
>> rx4_cache_reuse: 2668296355
>> rx4_cache_full: 69311549
>> rx4_cache_empty: 6656
>> rx4_cache_busy: 69311769
>> rx4_cache_waive: 0
>> rx4_congst_umr: 0
>> rx4_arfs_err: 0
>> rx4_xdp_tx_xmit: 0
>> rx4_xdp_tx_full: 0
>> rx4_xdp_tx_err: 0
>> rx4_xdp_tx_cqes: 0
>> rx5_packets: 5474372232
>> rx5_bytes: 7159146801926
>> rx5_csum_complete: 5474372232
>> rx5_csum_unnecessary: 0
>> rx5_csum_unnecessary_inner: 0
>> rx5_csum_none: 0
>> rx5_xdp_drop: 0
>> rx5_xdp_redirect: 0
>> rx5_lro_packets: 0
>> rx5_lro_bytes: 0
>> rx5_ecn_mark: 0
>> rx5_removed_vlan_packets: 5474372232
>> rx5_wqe_err: 0
>> rx5_mpwqe_filler_cqes: 0
>> rx5_mpwqe_filler_strides: 0
>> rx5_buff_alloc_err: 0
>> rx5_cqe_compress_blks: 0
>> rx5_cqe_compress_pkts: 0
>> rx5_page_reuse: 0
>> rx5_cache_reuse: 626187
>> rx5_cache_full: 2736559674
>> rx5_cache_empty: 6656
>> rx5_cache_busy: 2736559929
>> rx5_cache_waive: 0
>> rx5_congst_umr: 0
>> rx5_arfs_err: 0
>> rx5_xdp_tx_xmit: 0
>> rx5_xdp_tx_full: 0
>> rx5_xdp_tx_err: 0
>> rx5_xdp_tx_cqes: 0
>> rx6_packets: 5533622456
>> rx6_bytes: 7207308809081
>> rx6_csum_complete: 5533622456
>> rx6_csum_unnecessary: 0
>> rx6_csum_unnecessary_inner: 0
>> rx6_csum_none: 0
>> rx6_xdp_drop: 0
>> rx6_xdp_redirect: 0
>> rx6_lro_packets: 0
>> rx6_lro_bytes: 0
>> rx6_ecn_mark: 0
>> rx6_removed_vlan_packets: 5533622456
>> rx6_wqe_err: 0
>> rx6_mpwqe_filler_cqes: 0
>> rx6_mpwqe_filler_strides: 0
>> rx6_buff_alloc_err: 0
>> rx6_cqe_compress_blks: 0
>> rx6_cqe_compress_pkts: 0
>> rx6_page_reuse: 0
>> rx6_cache_reuse: 2325217
>> rx6_cache_full: 2764485756
>> rx6_cache_empty: 6656
>> rx6_cache_busy: 2764486011
>> rx6_cache_waive: 0
>> rx6_congst_umr: 0
>> rx6_arfs_err: 0
>> rx6_xdp_tx_xmit: 0
>> rx6_xdp_tx_full: 0
>> rx6_xdp_tx_err: 0
>> rx6_xdp_tx_cqes: 0
>> rx7_packets: 5533901822
>> rx7_bytes: 7227441240536
>> rx7_csum_complete: 5533901822
>> rx7_csum_unnecessary: 0
>> rx7_csum_unnecessary_inner: 0
>> rx7_csum_none: 0
>> rx7_xdp_drop: 0
>> rx7_xdp_redirect: 0
>> rx7_lro_packets: 0
>> rx7_lro_bytes: 0
>> rx7_ecn_mark: 0
>> rx7_removed_vlan_packets: 5533901822
>> rx7_wqe_err: 0
>> rx7_mpwqe_filler_cqes: 0
>> rx7_mpwqe_filler_strides: 0
>> rx7_buff_alloc_err: 0
>> rx7_cqe_compress_blks: 0
>> rx7_cqe_compress_pkts: 0
>> rx7_page_reuse: 0
>> rx7_cache_reuse: 2372505
>> rx7_cache_full: 2764578151
>> rx7_cache_empty: 6656
>> rx7_cache_busy: 2764578403
>> rx7_cache_waive: 0
>> rx7_congst_umr: 0
>> rx7_arfs_err: 0
>> rx7_xdp_tx_xmit: 0
>> rx7_xdp_tx_full: 0
>> rx7_xdp_tx_err: 0
>> rx7_xdp_tx_cqes: 0
>> rx8_packets: 5485670137
>> rx8_bytes: 7203339989013
>> rx8_csum_complete: 5485670137
>> rx8_csum_unnecessary: 0
>> rx8_csum_unnecessary_inner: 0
>> rx8_csum_none: 0
>> rx8_xdp_drop: 0
>> rx8_xdp_redirect: 0
>> rx8_lro_packets: 0
>> rx8_lro_bytes: 0
>> rx8_ecn_mark: 0
>> rx8_removed_vlan_packets: 5485670137
>> rx8_wqe_err: 0
>> rx8_mpwqe_filler_cqes: 0
>> rx8_mpwqe_filler_strides: 0
>> rx8_buff_alloc_err: 0
>> rx8_cqe_compress_blks: 0
>> rx8_cqe_compress_pkts: 0
>> rx8_page_reuse: 0
>> rx8_cache_reuse: 7522232
>> rx8_cache_full: 2735312581
>> rx8_cache_empty: 6656
>> rx8_cache_busy: 2735312836
>> rx8_cache_waive: 0
>> rx8_congst_umr: 0
>> rx8_arfs_err: 0
>> rx8_xdp_tx_xmit: 0
>> rx8_xdp_tx_full: 0
>> rx8_xdp_tx_err: 0
>> rx8_xdp_tx_cqes: 0
>> rx9_packets: 5482212354
>> rx9_bytes: 7169663341718
>> rx9_csum_complete: 5482212354
>> rx9_csum_unnecessary: 0
>> rx9_csum_unnecessary_inner: 0
>> rx9_csum_none: 0
>> rx9_xdp_drop: 0
>> rx9_xdp_redirect: 0
>> rx9_lro_packets: 0
>> rx9_lro_bytes: 0
>> rx9_ecn_mark: 0
>> rx9_removed_vlan_packets: 5482212354
>> rx9_wqe_err: 0
>> rx9_mpwqe_filler_cqes: 0
>> rx9_mpwqe_filler_strides: 0
>> rx9_buff_alloc_err: 0
>> rx9_cqe_compress_blks: 0
>> rx9_cqe_compress_pkts: 0
>> rx9_page_reuse: 0
>> rx9_cache_reuse: 37279961
>> rx9_cache_full: 2703825961
>> rx9_cache_empty: 6656
>> rx9_cache_busy: 2703826215
>> rx9_cache_waive: 0
>> rx9_congst_umr: 0
>> rx9_arfs_err: 0
>> rx9_xdp_tx_xmit: 0
>> rx9_xdp_tx_full: 0
>> rx9_xdp_tx_err: 0
>> rx9_xdp_tx_cqes: 0
>> rx10_packets: 5524679952
>> rx10_bytes: 7248301275181
>> rx10_csum_complete: 5524679952
>> rx10_csum_unnecessary: 0
>> rx10_csum_unnecessary_inner: 0
>> rx10_csum_none: 0
>> rx10_xdp_drop: 0
>> rx10_xdp_redirect: 0
>> rx10_lro_packets: 0
>> rx10_lro_bytes: 0
>> rx10_ecn_mark: 0
>> rx10_removed_vlan_packets: 5524679952
>> rx10_wqe_err: 0
>> rx10_mpwqe_filler_cqes: 0
>> rx10_mpwqe_filler_strides: 0
>> rx10_buff_alloc_err: 0
>> rx10_cqe_compress_blks: 0
>> rx10_cqe_compress_pkts: 0
>> rx10_page_reuse: 0
>> rx10_cache_reuse: 2049666
>> rx10_cache_full: 2760290055
>> rx10_cache_empty: 6656
>> rx10_cache_busy: 2760290310
>> rx10_cache_waive: 0
>> rx10_congst_umr: 0
>> rx10_arfs_err: 0
>> rx10_xdp_tx_xmit: 0
>> rx10_xdp_tx_full: 0
>> rx10_xdp_tx_err: 0
>> rx10_xdp_tx_cqes: 0
>> rx11_packets: 5394633545
>> rx11_bytes: 7033509636092
>> rx11_csum_complete: 5394633545
>> rx11_csum_unnecessary: 0
>> rx11_csum_unnecessary_inner: 0
>> rx11_csum_none: 0
>> rx11_xdp_drop: 0
>> rx11_xdp_redirect: 0
>> rx11_lro_packets: 0
>> rx11_lro_bytes: 0
>> rx11_ecn_mark: 0
>> rx11_removed_vlan_packets: 5394633545
>> rx11_wqe_err: 0
>> rx11_mpwqe_filler_cqes: 0
>> rx11_mpwqe_filler_strides: 0
>> rx11_buff_alloc_err: 0
>> rx11_cqe_compress_blks: 0
>> rx11_cqe_compress_pkts: 0
>> rx11_page_reuse: 0
>> rx11_cache_reuse: 2617466268
>> rx11_cache_full: 79850284
>> rx11_cache_empty: 6656
>> rx11_cache_busy: 79850504
>> rx11_cache_waive: 0
>> rx11_congst_umr: 0
>> rx11_arfs_err: 0
>> rx11_xdp_tx_xmit: 0
>> rx11_xdp_tx_full: 0
>> rx11_xdp_tx_err: 0
>> rx11_xdp_tx_cqes: 0
>> rx12_packets: 5458907385
>> rx12_bytes: 7134867867515
>> rx12_csum_complete: 5458907385
>> rx12_csum_unnecessary: 0
>> rx12_csum_unnecessary_inner: 0
>> rx12_csum_none: 0
>> rx12_xdp_drop: 0
>> rx12_xdp_redirect: 0
>> rx12_lro_packets: 0
>> rx12_lro_bytes: 0
>> rx12_ecn_mark: 0
>> rx12_removed_vlan_packets: 5458907385
>> rx12_wqe_err: 0
>> rx12_mpwqe_filler_cqes: 0
>> rx12_mpwqe_filler_strides: 0
>> rx12_buff_alloc_err: 0
>> rx12_cqe_compress_blks: 0
>> rx12_cqe_compress_pkts: 0
>> rx12_page_reuse: 0
>> rx12_cache_reuse: 2650214169
>> rx12_cache_full: 79239303
>> rx12_cache_empty: 6656
>> rx12_cache_busy: 79239523
>> rx12_cache_waive: 0
>> rx12_congst_umr: 0
>> rx12_arfs_err: 0
>> rx12_xdp_tx_xmit: 0
>> rx12_xdp_tx_full: 0
>> rx12_xdp_tx_err: 0
>> rx12_xdp_tx_cqes: 0
>> rx13_packets: 5549932912
>> rx13_bytes: 7232548705586
>> rx13_csum_complete: 5549932912
>> rx13_csum_unnecessary: 0
>> rx13_csum_unnecessary_inner: 0
>> rx13_csum_none: 0
>> rx13_xdp_drop: 0
>> rx13_xdp_redirect: 0
>> rx13_lro_packets: 0
>> rx13_lro_bytes: 0
>> rx13_ecn_mark: 0
>> rx13_removed_vlan_packets: 5549932912
>> rx13_wqe_err: 0
>> rx13_mpwqe_filler_cqes: 0
>> rx13_mpwqe_filler_strides: 0
>> rx13_buff_alloc_err: 0
>> rx13_cqe_compress_blks: 0
>> rx13_cqe_compress_pkts: 0
>> rx13_page_reuse: 0
>> rx13_cache_reuse: 2417696
>> rx13_cache_full: 2772548505
>> rx13_cache_empty: 6656
>> rx13_cache_busy: 2772548760
>> rx13_cache_waive: 0
>> rx13_congst_umr: 0
>> rx13_arfs_err: 0
>> rx13_xdp_tx_xmit: 0
>> rx13_xdp_tx_full: 0
>> rx13_xdp_tx_err: 0
>> rx13_xdp_tx_cqes: 0
>> rx14_packets: 5517712329
>> rx14_bytes: 7192111965227
>> rx14_csum_complete: 5517712329
>> rx14_csum_unnecessary: 0
>> rx14_csum_unnecessary_inner: 0
>> rx14_csum_none: 0
>> rx14_xdp_drop: 0
>> rx14_xdp_redirect: 0
>> rx14_lro_packets: 0
>> rx14_lro_bytes: 0
>> rx14_ecn_mark: 0
>> rx14_removed_vlan_packets: 5517712329
>> rx14_wqe_err: 0
>> rx14_mpwqe_filler_cqes: 0
>> rx14_mpwqe_filler_strides: 0
>> rx14_buff_alloc_err: 0
>> rx14_cqe_compress_blks: 0
>> rx14_cqe_compress_pkts: 0
>> rx14_page_reuse: 0
>> rx14_cache_reuse: 1830206
>> rx14_cache_full: 2757025703
>> rx14_cache_empty: 6656
>> rx14_cache_busy: 2757025958
>> rx14_cache_waive: 0
>> rx14_congst_umr: 0
>> rx14_arfs_err: 0
>> rx14_xdp_tx_xmit: 0
>> rx14_xdp_tx_full: 0
>> rx14_xdp_tx_err: 0
>> rx14_xdp_tx_cqes: 0
>> rx15_packets: 5578343373
>> rx15_bytes: 7268484501219
>> rx15_csum_complete: 5578343373
>> rx15_csum_unnecessary: 0
>> rx15_csum_unnecessary_inner: 0
>> rx15_csum_none: 0
>> rx15_xdp_drop: 0
>> rx15_xdp_redirect: 0
>> rx15_lro_packets: 0
>> rx15_lro_bytes: 0
>> rx15_ecn_mark: 0
>> rx15_removed_vlan_packets: 5578343373
>> rx15_wqe_err: 0
>> rx15_mpwqe_filler_cqes: 0
>> rx15_mpwqe_filler_strides: 0
>> rx15_buff_alloc_err: 0
>> rx15_cqe_compress_blks: 0
>> rx15_cqe_compress_pkts: 0
>> rx15_page_reuse: 0
>> rx15_cache_reuse: 2317165
>> rx15_cache_full: 2786854266
>> rx15_cache_empty: 6656
>> rx15_cache_busy: 2786854519
>> rx15_cache_waive: 0
>> rx15_congst_umr: 0
>> rx15_arfs_err: 0
>> rx15_xdp_tx_xmit: 0
>> rx15_xdp_tx_full: 0
>> rx15_xdp_tx_err: 0
>> rx15_xdp_tx_cqes: 0
>> rx16_packets: 4435773951
>> rx16_bytes: 5766665272007
>> rx16_csum_complete: 4435773951
>> rx16_csum_unnecessary: 0
>> rx16_csum_unnecessary_inner: 0
>> rx16_csum_none: 0
>> rx16_xdp_drop: 0
>> rx16_xdp_redirect: 0
>> rx16_lro_packets: 0
>> rx16_lro_bytes: 0
>> rx16_ecn_mark: 0
>> rx16_removed_vlan_packets: 4435773951
>> rx16_wqe_err: 0
>> rx16_mpwqe_filler_cqes: 0
>> rx16_mpwqe_filler_strides: 0
>> rx16_buff_alloc_err: 0
>> rx16_cqe_compress_blks: 0
>> rx16_cqe_compress_pkts: 0
>> rx16_page_reuse: 0
>> rx16_cache_reuse: 2033793
>> rx16_cache_full: 2215852927
>> rx16_cache_empty: 6656
>> rx16_cache_busy: 2215853179
>> rx16_cache_waive: 0
>> rx16_congst_umr: 0
>> rx16_arfs_err: 0
>> rx16_xdp_tx_xmit: 0
>> rx16_xdp_tx_full: 0
>> rx16_xdp_tx_err: 0
>> rx16_xdp_tx_cqes: 0
>> rx17_packets: 4344087587
>> rx17_bytes: 5695006496323
>> rx17_csum_complete: 4344087587
>> rx17_csum_unnecessary: 0
>> rx17_csum_unnecessary_inner: 0
>> rx17_csum_none: 0
>> rx17_xdp_drop: 0
>> rx17_xdp_redirect: 0
>> rx17_lro_packets: 0
>> rx17_lro_bytes: 0
>> rx17_ecn_mark: 0
>> rx17_removed_vlan_packets: 4344087587
>> rx17_wqe_err: 0
>> rx17_mpwqe_filler_cqes: 0
>> rx17_mpwqe_filler_strides: 0
>> rx17_buff_alloc_err: 0
>> rx17_cqe_compress_blks: 0
>> rx17_cqe_compress_pkts: 0
>> rx17_page_reuse: 0
>> rx17_cache_reuse: 2652127
>> rx17_cache_full: 2169391411
>> rx17_cache_empty: 6656
>> rx17_cache_busy: 2169391665
>> rx17_cache_waive: 0
>> rx17_congst_umr: 0
>> rx17_arfs_err: 0
>> rx17_xdp_tx_xmit: 0
>> rx17_xdp_tx_full: 0
>> rx17_xdp_tx_err: 0
>> rx17_xdp_tx_cqes: 0
>> rx18_packets: 4407422804
>> rx18_bytes: 5741134634177
>> rx18_csum_complete: 4407422804
>> rx18_csum_unnecessary: 0
>> rx18_csum_unnecessary_inner: 0
>> rx18_csum_none: 0
>> rx18_xdp_drop: 0
>> rx18_xdp_redirect: 0
>> rx18_lro_packets: 0
>> rx18_lro_bytes: 0
>> rx18_ecn_mark: 0
>> rx18_removed_vlan_packets: 4407422804
>> rx18_wqe_err: 0
>> rx18_mpwqe_filler_cqes: 0
>> rx18_mpwqe_filler_strides: 0
>> rx18_buff_alloc_err: 0
>> rx18_cqe_compress_blks: 0
>> rx18_cqe_compress_pkts: 0
>> rx18_page_reuse: 0
>> rx18_cache_reuse: 2156080239
>> rx18_cache_full: 47630941
>> rx18_cache_empty: 6656
>> rx18_cache_busy: 47631161
>> rx18_cache_waive: 0
>> rx18_congst_umr: 0
>> rx18_arfs_err: 0
>> rx18_xdp_tx_xmit: 0
>> rx18_xdp_tx_full: 0
>> rx18_xdp_tx_err: 0
>> rx18_xdp_tx_cqes: 0
>> rx19_packets: 4545554180
>> rx19_bytes: 5905277503466
>> rx19_csum_complete: 4545554180
>> rx19_csum_unnecessary: 0
>> rx19_csum_unnecessary_inner: 0
>> rx19_csum_none: 0
>> rx19_xdp_drop: 0
>> rx19_xdp_redirect: 0
>> rx19_lro_packets: 0
>> rx19_lro_bytes: 0
>> rx19_ecn_mark: 0
>> rx19_removed_vlan_packets: 4545554180
>> rx19_wqe_err: 0
>> rx19_mpwqe_filler_cqes: 0
>> rx19_mpwqe_filler_strides: 0
>> rx19_buff_alloc_err: 0
>> rx19_cqe_compress_blks: 0
>> rx19_cqe_compress_pkts: 0
>> rx19_page_reuse: 0
>> rx19_cache_reuse: 11112455
>> rx19_cache_full: 2261664379
>> rx19_cache_empty: 6656
>> rx19_cache_busy: 2261664601
>> rx19_cache_waive: 0
>> rx19_congst_umr: 0
>> rx19_arfs_err: 0
>> rx19_xdp_tx_xmit: 0
>> rx19_xdp_tx_full: 0
>> rx19_xdp_tx_err: 0
>> rx19_xdp_tx_cqes: 0
>> rx20_packets: 4397428553
>> rx20_bytes: 5757329184301
>> rx20_csum_complete: 4397428553
>> rx20_csum_unnecessary: 0
>> rx20_csum_unnecessary_inner: 0
>> rx20_csum_none: 0
>> rx20_xdp_drop: 0
>> rx20_xdp_redirect: 0
>> rx20_lro_packets: 0
>> rx20_lro_bytes: 0
>> rx20_ecn_mark: 0
>> rx20_removed_vlan_packets: 4397428553
>> rx20_wqe_err: 0
>> rx20_mpwqe_filler_cqes: 0
>> rx20_mpwqe_filler_strides: 0
>> rx20_buff_alloc_err: 0
>> rx20_cqe_compress_blks: 0
>> rx20_cqe_compress_pkts: 0
>> rx20_page_reuse: 0
>> rx20_cache_reuse: 2168116995
>> rx20_cache_full: 30597061
>> rx20_cache_empty: 6656
>> rx20_cache_busy: 30597281
>> rx20_cache_waive: 0
>> rx20_congst_umr: 0
>> rx20_arfs_err: 0
>> rx20_xdp_tx_xmit: 0
>> rx20_xdp_tx_full: 0
>> rx20_xdp_tx_err: 0
>> rx20_xdp_tx_cqes: 0
>> rx21_packets: 4552564821
>> rx21_bytes: 5944840329249
>> rx21_csum_complete: 4552564821
>> rx21_csum_unnecessary: 0
>> rx21_csum_unnecessary_inner: 0
>> rx21_csum_none: 0
>> rx21_xdp_drop: 0
>> rx21_xdp_redirect: 0
>> rx21_lro_packets: 0
>> rx21_lro_bytes: 0
>> rx21_ecn_mark: 0
>> rx21_removed_vlan_packets: 4552564821
>> rx21_wqe_err: 0
>> rx21_mpwqe_filler_cqes: 0
>> rx21_mpwqe_filler_strides: 0
>> rx21_buff_alloc_err: 0
>> rx21_cqe_compress_blks: 0
>> rx21_cqe_compress_pkts: 0
>> rx21_page_reuse: 0
>> rx21_cache_reuse: 2295681
>> rx21_cache_full: 2273986474
>> rx21_cache_empty: 6656
>> rx21_cache_busy: 2273986727
>> rx21_cache_waive: 0
>> rx21_congst_umr: 0
>> rx21_arfs_err: 0
>> rx21_xdp_tx_xmit: 0
>> rx21_xdp_tx_full: 0
>> rx21_xdp_tx_err: 0
>> rx21_xdp_tx_cqes: 0
>> rx22_packets: 4629499740
>> rx22_bytes: 5924206566499
>> rx22_csum_complete: 4629499740
>> rx22_csum_unnecessary: 0
>> rx22_csum_unnecessary_inner: 0
>> rx22_csum_none: 0
>> rx22_xdp_drop: 0
>> rx22_xdp_redirect: 0
>> rx22_lro_packets: 0
>> rx22_lro_bytes: 0
>> rx22_ecn_mark: 0
>> rx22_removed_vlan_packets: 4629499740
>> rx22_wqe_err: 0
>> rx22_mpwqe_filler_cqes: 0
>> rx22_mpwqe_filler_strides: 0
>> rx22_buff_alloc_err: 0
>> rx22_cqe_compress_blks: 0
>> rx22_cqe_compress_pkts: 0
>> rx22_page_reuse: 0
>> rx22_cache_reuse: 1407527
>> rx22_cache_full: 2313342088
>> rx22_cache_empty: 6656
>> rx22_cache_busy: 2313342341
>> rx22_cache_waive: 0
>> rx22_congst_umr: 0
>> rx22_arfs_err: 0
>> rx22_xdp_tx_xmit: 0
>> rx22_xdp_tx_full: 0
>> rx22_xdp_tx_err: 0
>> rx22_xdp_tx_cqes: 0
>> rx23_packets: 4387124505
>> rx23_bytes: 5718118678470
>> rx23_csum_complete: 4387124505
>> rx23_csum_unnecessary: 0
>> rx23_csum_unnecessary_inner: 0
>> rx23_csum_none: 0
>> rx23_xdp_drop: 0
>> rx23_xdp_redirect: 0
>> rx23_lro_packets: 0
>> rx23_lro_bytes: 0
>> rx23_ecn_mark: 0
>> rx23_removed_vlan_packets: 4387124505
>> rx23_wqe_err: 0
>> rx23_mpwqe_filler_cqes: 0
>> rx23_mpwqe_filler_strides: 0
>> rx23_buff_alloc_err: 0
>> rx23_cqe_compress_blks: 0
>> rx23_cqe_compress_pkts: 0
>> rx23_page_reuse: 0
>> rx23_cache_reuse: 2013280
>> rx23_cache_full: 2191548717
>> rx23_cache_empty: 6656
>> rx23_cache_busy: 2191548972
>> rx23_cache_waive: 0
>> rx23_congst_umr: 0
>> rx23_arfs_err: 0
>> rx23_xdp_tx_xmit: 0
>> rx23_xdp_tx_full: 0
>> rx23_xdp_tx_err: 0
>> rx23_xdp_tx_cqes: 0
>> rx24_packets: 4398791634
>> rx24_bytes: 5744875564632
>> rx24_csum_complete: 4398791634
>> rx24_csum_unnecessary: 0
>> rx24_csum_unnecessary_inner: 0
>> rx24_csum_none: 0
>> rx24_xdp_drop: 0
>> rx24_xdp_redirect: 0
>> rx24_lro_packets: 0
>> rx24_lro_bytes: 0
>> rx24_ecn_mark: 0
>> rx24_removed_vlan_packets: 4398791634
>> rx24_wqe_err: 0
>> rx24_mpwqe_filler_cqes: 0
>> rx24_mpwqe_filler_strides: 0
>> rx24_buff_alloc_err: 0
>> rx24_cqe_compress_blks: 0
>> rx24_cqe_compress_pkts: 0
>> rx24_page_reuse: 0
>> rx24_cache_reuse: 2143926100
>> rx24_cache_full: 55469496
>> rx24_cache_empty: 6656
>> rx24_cache_busy: 55469716
>> rx24_cache_waive: 0
>> rx24_congst_umr: 0
>> rx24_arfs_err: 0
>> rx24_xdp_tx_xmit: 0
>> rx24_xdp_tx_full: 0
>> rx24_xdp_tx_err: 0
>> rx24_xdp_tx_cqes: 0
>> rx25_packets: 4377204935
>> rx25_bytes: 5710369124105
>> rx25_csum_complete: 4377204935
>> rx25_csum_unnecessary: 0
>> rx25_csum_unnecessary_inner: 0
>> rx25_csum_none: 0
>> rx25_xdp_drop: 0
>> rx25_xdp_redirect: 0
>> rx25_lro_packets: 0
>> rx25_lro_bytes: 0
>> rx25_ecn_mark: 0
>> rx25_removed_vlan_packets: 4377204935
>> rx25_wqe_err: 0
>> rx25_mpwqe_filler_cqes: 0
>> rx25_mpwqe_filler_strides: 0
>> rx25_buff_alloc_err: 0
>> rx25_cqe_compress_blks: 0
>> rx25_cqe_compress_pkts: 0
>> rx25_page_reuse: 0
>> rx25_cache_reuse: 2132658660
>> rx25_cache_full: 55943584
>> rx25_cache_empty: 6656
>> rx25_cache_busy: 55943804
>> rx25_cache_waive: 0
>> rx25_congst_umr: 0
>> rx25_arfs_err: 0
>> rx25_xdp_tx_xmit: 0
>> rx25_xdp_tx_full: 0
>> rx25_xdp_tx_err: 0
>> rx25_xdp_tx_cqes: 0
>> rx26_packets: 4496003688
>> rx26_bytes: 5862180715503
>> rx26_csum_complete: 4496003688
>> rx26_csum_unnecessary: 0
>> rx26_csum_unnecessary_inner: 0
>> rx26_csum_none: 0
>> rx26_xdp_drop: 0
>> rx26_xdp_redirect: 0
>> rx26_lro_packets: 0
>> rx26_lro_bytes: 0
>> rx26_ecn_mark: 0
>> rx26_removed_vlan_packets: 4496003688
>> rx26_wqe_err: 0
>> rx26_mpwqe_filler_cqes: 0
>> rx26_mpwqe_filler_strides: 0
>> rx26_buff_alloc_err: 0
>> rx26_cqe_compress_blks: 0
>> rx26_cqe_compress_pkts: 0
>> rx26_page_reuse: 0
>> rx26_cache_reuse: 8
>> rx26_cache_full: 2248001581
>> rx26_cache_empty: 6656
>> rx26_cache_busy: 2248001836
>> rx26_cache_waive: 0
>> rx26_congst_umr: 0
>> rx26_arfs_err: 0
>> rx26_xdp_tx_xmit: 0
>> rx26_xdp_tx_full: 0
>> rx26_xdp_tx_err: 0
>> rx26_xdp_tx_cqes: 0
>> rx27_packets: 4341849333
>> rx27_bytes: 5678653545018
>> rx27_csum_complete: 4341849333
>> rx27_csum_unnecessary: 0
>> rx27_csum_unnecessary_inner: 0
>> rx27_csum_none: 0
>> rx27_xdp_drop: 0
>> rx27_xdp_redirect: 0
>> rx27_lro_packets: 0
>> rx27_lro_bytes: 0
>> rx27_ecn_mark: 0
>> rx27_removed_vlan_packets: 4341849333
>> rx27_wqe_err: 0
>> rx27_mpwqe_filler_cqes: 0
>> rx27_mpwqe_filler_strides: 0
>> rx27_buff_alloc_err: 0
>> rx27_cqe_compress_blks: 0
>> rx27_cqe_compress_pkts: 0
>> rx27_page_reuse: 0
>> rx27_cache_reuse: 1748188
>> rx27_cache_full: 2169176223
>> rx27_cache_empty: 6656
>> rx27_cache_busy: 2169176476
>> rx27_cache_waive: 0
>> rx27_congst_umr: 0
>> rx27_arfs_err: 0
>> rx27_xdp_tx_xmit: 0
>> rx27_xdp_tx_full: 0
>> rx27_xdp_tx_err: 0
>> rx27_xdp_tx_cqes: 0
>> rx28_packets: 0
>> rx28_bytes: 0
>> rx28_csum_complete: 0
>> rx28_csum_unnecessary: 0
>> rx28_csum_unnecessary_inner: 0
>> rx28_csum_none: 0
>> rx28_xdp_drop: 0
>> rx28_xdp_redirect: 0
>> rx28_lro_packets: 0
>> rx28_lro_bytes: 0
>> rx28_ecn_mark: 0
>> rx28_removed_vlan_packets: 0
>> rx28_wqe_err: 0
>> rx28_mpwqe_filler_cqes: 0
>> rx28_mpwqe_filler_strides: 0
>> rx28_buff_alloc_err: 0
>> rx28_cqe_compress_blks: 0
>> rx28_cqe_compress_pkts: 0
>> rx28_page_reuse: 0
>> rx28_cache_reuse: 0
>> rx28_cache_full: 0
>> rx28_cache_empty: 2560
>> rx28_cache_busy: 0
>> rx28_cache_waive: 0
>> rx28_congst_umr: 0
>> rx28_arfs_err: 0
>> rx28_xdp_tx_xmit: 0
>> rx28_xdp_tx_full: 0
>> rx28_xdp_tx_err: 0
>> rx28_xdp_tx_cqes: 0
>> rx29_packets: 0
>> rx29_bytes: 0
>> rx29_csum_complete: 0
>> rx29_csum_unnecessary: 0
>> rx29_csum_unnecessary_inner: 0
>> rx29_csum_none: 0
>> rx29_xdp_drop: 0
>> rx29_xdp_redirect: 0
>> rx29_lro_packets: 0
>> rx29_lro_bytes: 0
>> rx29_ecn_mark: 0
>> rx29_removed_vlan_packets: 0
>> rx29_wqe_err: 0
>> rx29_mpwqe_filler_cqes: 0
>> rx29_mpwqe_filler_strides: 0
>> rx29_buff_alloc_err: 0
>> rx29_cqe_compress_blks: 0
>> rx29_cqe_compress_pkts: 0
>> rx29_page_reuse: 0
>> rx29_cache_reuse: 0
>> rx29_cache_full: 0
>> rx29_cache_empty: 2560
>> rx29_cache_busy: 0
>> rx29_cache_waive: 0
>> rx29_congst_umr: 0
>> rx29_arfs_err: 0
>> rx29_xdp_tx_xmit: 0
>> rx29_xdp_tx_full: 0
>> rx29_xdp_tx_err: 0
>> rx29_xdp_tx_cqes: 0
>> rx30_packets: 0
>> rx30_bytes: 0
>> rx30_csum_complete: 0
>> rx30_csum_unnecessary: 0
>> rx30_csum_unnecessary_inner: 0
>> rx30_csum_none: 0
>> rx30_xdp_drop: 0
>> rx30_xdp_redirect: 0
>> rx30_lro_packets: 0
>> rx30_lro_bytes: 0
>> rx30_ecn_mark: 0
>> rx30_removed_vlan_packets: 0
>> rx30_wqe_err: 0
>> rx30_mpwqe_filler_cqes: 0
>> rx30_mpwqe_filler_strides: 0
>> rx30_buff_alloc_err: 0
>> rx30_cqe_compress_blks: 0
>> rx30_cqe_compress_pkts: 0
>> rx30_page_reuse: 0
>> rx30_cache_reuse: 0
>> rx30_cache_full: 0
>> rx30_cache_empty: 2560
>> rx30_cache_busy: 0
>> rx30_cache_waive: 0
>> rx30_congst_umr: 0
>> rx30_arfs_err: 0
>> rx30_xdp_tx_xmit: 0
>> rx30_xdp_tx_full: 0
>> rx30_xdp_tx_err: 0
>> rx30_xdp_tx_cqes: 0
>> rx31_packets: 0
>> rx31_bytes: 0
>> rx31_csum_complete: 0
>> rx31_csum_unnecessary: 0
>> rx31_csum_unnecessary_inner: 0
>> rx31_csum_none: 0
>> rx31_xdp_drop: 0
>> rx31_xdp_redirect: 0
>> rx31_lro_packets: 0
>> rx31_lro_bytes: 0
>> rx31_ecn_mark: 0
>> rx31_removed_vlan_packets: 0
>> rx31_wqe_err: 0
>> rx31_mpwqe_filler_cqes: 0
>> rx31_mpwqe_filler_strides: 0
>> rx31_buff_alloc_err: 0
>> rx31_cqe_compress_blks: 0
>> rx31_cqe_compress_pkts: 0
>> rx31_page_reuse: 0
>> rx31_cache_reuse: 0
>> rx31_cache_full: 0
>> rx31_cache_empty: 2560
>> rx31_cache_busy: 0
>> rx31_cache_waive: 0
>> rx31_congst_umr: 0
>> rx31_arfs_err: 0
>> rx31_xdp_tx_xmit: 0
>> rx31_xdp_tx_full: 0
>> rx31_xdp_tx_err: 0
>> rx31_xdp_tx_cqes: 0
>> rx32_packets: 0
>> rx32_bytes: 0
>> rx32_csum_complete: 0
>> rx32_csum_unnecessary: 0
>> rx32_csum_unnecessary_inner: 0
>> rx32_csum_none: 0
>> rx32_xdp_drop: 0
>> rx32_xdp_redirect: 0
>> rx32_lro_packets: 0
>> rx32_lro_bytes: 0
>> rx32_ecn_mark: 0
>> rx32_removed_vlan_packets: 0
>> rx32_wqe_err: 0
>> rx32_mpwqe_filler_cqes: 0
>> rx32_mpwqe_filler_strides: 0
>> rx32_buff_alloc_err: 0
>> rx32_cqe_compress_blks: 0
>> rx32_cqe_compress_pkts: 0
>> rx32_page_reuse: 0
>> rx32_cache_reuse: 0
>> rx32_cache_full: 0
>> rx32_cache_empty: 2560
>> rx32_cache_busy: 0
>> rx32_cache_waive: 0
>> rx32_congst_umr: 0
>> rx32_arfs_err: 0
>> rx32_xdp_tx_xmit: 0
>> rx32_xdp_tx_full: 0
>> rx32_xdp_tx_err: 0
>> rx32_xdp_tx_cqes: 0
>> rx33_packets: 0
>> rx33_bytes: 0
>> rx33_csum_complete: 0
>> rx33_csum_unnecessary: 0
>> rx33_csum_unnecessary_inner: 0
>> rx33_csum_none: 0
>> rx33_xdp_drop: 0
>> rx33_xdp_redirect: 0
>> rx33_lro_packets: 0
>> rx33_lro_bytes: 0
>> rx33_ecn_mark: 0
>> rx33_removed_vlan_packets: 0
>> rx33_wqe_err: 0
>> rx33_mpwqe_filler_cqes: 0
>> rx33_mpwqe_filler_strides: 0
>> rx33_buff_alloc_err: 0
>> rx33_cqe_compress_blks: 0
>> rx33_cqe_compress_pkts: 0
>> rx33_page_reuse: 0
>> rx33_cache_reuse: 0
>> rx33_cache_full: 0
>> rx33_cache_empty: 2560
>> rx33_cache_busy: 0
>> rx33_cache_waive: 0
>> rx33_congst_umr: 0
>> rx33_arfs_err: 0
>> rx33_xdp_tx_xmit: 0
>> rx33_xdp_tx_full: 0
>> rx33_xdp_tx_err: 0
>> rx33_xdp_tx_cqes: 0
>> rx34_packets: 0
>> rx34_bytes: 0
>> rx34_csum_complete: 0
>> rx34_csum_unnecessary: 0
>> rx34_csum_unnecessary_inner: 0
>> rx34_csum_none: 0
>> rx34_xdp_drop: 0
>> rx34_xdp_redirect: 0
>> rx34_lro_packets: 0
>> rx34_lro_bytes: 0
>> rx34_ecn_mark: 0
>> rx34_removed_vlan_packets: 0
>> rx34_wqe_err: 0
>> rx34_mpwqe_filler_cqes: 0
>> rx34_mpwqe_filler_strides: 0
>> rx34_buff_alloc_err: 0
>> rx34_cqe_compress_blks: 0
>> rx34_cqe_compress_pkts: 0
>> rx34_page_reuse: 0
>> rx34_cache_reuse: 0
>> rx34_cache_full: 0
>> rx34_cache_empty: 2560
>> rx34_cache_busy: 0
>> rx34_cache_waive: 0
>> rx34_congst_umr: 0
>> rx34_arfs_err: 0
>> rx34_xdp_tx_xmit: 0
>> rx34_xdp_tx_full: 0
>> rx34_xdp_tx_err: 0
>> rx34_xdp_tx_cqes: 0
>> rx35_packets: 0
>> rx35_bytes: 0
>> rx35_csum_complete: 0
>> rx35_csum_unnecessary: 0
>> rx35_csum_unnecessary_inner: 0
>> rx35_csum_none: 0
>> rx35_xdp_drop: 0
>> rx35_xdp_redirect: 0
>> rx35_lro_packets: 0
>> rx35_lro_bytes: 0
>> rx35_ecn_mark: 0
>> rx35_removed_vlan_packets: 0
>> rx35_wqe_err: 0
>> rx35_mpwqe_filler_cqes: 0
>> rx35_mpwqe_filler_strides: 0
>> rx35_buff_alloc_err: 0
>> rx35_cqe_compress_blks: 0
>> rx35_cqe_compress_pkts: 0
>> rx35_page_reuse: 0
>> rx35_cache_reuse: 0
>> rx35_cache_full: 0
>> rx35_cache_empty: 2560
>> rx35_cache_busy: 0
>> rx35_cache_waive: 0
>> rx35_congst_umr: 0
>> rx35_arfs_err: 0
>> rx35_xdp_tx_xmit: 0
>> rx35_xdp_tx_full: 0
>> rx35_xdp_tx_err: 0
>> rx35_xdp_tx_cqes: 0
>> rx36_packets: 0
>> rx36_bytes: 0
>> rx36_csum_complete: 0
>> rx36_csum_unnecessary: 0
>> rx36_csum_unnecessary_inner: 0
>> rx36_csum_none: 0
>> rx36_xdp_drop: 0
>> rx36_xdp_redirect: 0
>> rx36_lro_packets: 0
>> rx36_lro_bytes: 0
>> rx36_ecn_mark: 0
>> rx36_removed_vlan_packets: 0
>> rx36_wqe_err: 0
>> rx36_mpwqe_filler_cqes: 0
>> rx36_mpwqe_filler_strides: 0
>> rx36_buff_alloc_err: 0
>> rx36_cqe_compress_blks: 0
>> rx36_cqe_compress_pkts: 0
>> rx36_page_reuse: 0
>> rx36_cache_reuse: 0
>> rx36_cache_full: 0
>> rx36_cache_empty: 2560
>> rx36_cache_busy: 0
>> rx36_cache_waive: 0
>> rx36_congst_umr: 0
>> rx36_arfs_err: 0
>> rx36_xdp_tx_xmit: 0
>> rx36_xdp_tx_full: 0
>> rx36_xdp_tx_err: 0
>> rx36_xdp_tx_cqes: 0
>> rx37_packets: 0
>> rx37_bytes: 0
>> rx37_csum_complete: 0
>> rx37_csum_unnecessary: 0
>> rx37_csum_unnecessary_inner: 0
>> rx37_csum_none: 0
>> rx37_xdp_drop: 0
>> rx37_xdp_redirect: 0
>> rx37_lro_packets: 0
>> rx37_lro_bytes: 0
>> rx37_ecn_mark: 0
>> rx37_removed_vlan_packets: 0
>> rx37_wqe_err: 0
>> rx37_mpwqe_filler_cqes: 0
>> rx37_mpwqe_filler_strides: 0
>> rx37_buff_alloc_err: 0
>> rx37_cqe_compress_blks: 0
>> rx37_cqe_compress_pkts: 0
>> rx37_page_reuse: 0
>> rx37_cache_reuse: 0
>> rx37_cache_full: 0
>> rx37_cache_empty: 2560
>> rx37_cache_busy: 0
>> rx37_cache_waive: 0
>> rx37_congst_umr: 0
>> rx37_arfs_err: 0
>> rx37_xdp_tx_xmit: 0
>> rx37_xdp_tx_full: 0
>> rx37_xdp_tx_err: 0
>> rx37_xdp_tx_cqes: 0
>> rx38_packets: 0
>> rx38_bytes: 0
>> rx38_csum_complete: 0
>> rx38_csum_unnecessary: 0
>> rx38_csum_unnecessary_inner: 0
>> rx38_csum_none: 0
>> rx38_xdp_drop: 0
>> rx38_xdp_redirect: 0
>> rx38_lro_packets: 0
>> rx38_lro_bytes: 0
>> rx38_ecn_mark: 0
>> rx38_removed_vlan_packets: 0
>> rx38_wqe_err: 0
>> rx38_mpwqe_filler_cqes: 0
>> rx38_mpwqe_filler_strides: 0
>> rx38_buff_alloc_err: 0
>> rx38_cqe_compress_blks: 0
>> rx38_cqe_compress_pkts: 0
>> rx38_page_reuse: 0
>> rx38_cache_reuse: 0
>> rx38_cache_full: 0
>> rx38_cache_empty: 2560
>> rx38_cache_busy: 0
>> rx38_cache_waive: 0
>> rx38_congst_umr: 0
>> rx38_arfs_err: 0
>> rx38_xdp_tx_xmit: 0
>> rx38_xdp_tx_full: 0
>> rx38_xdp_tx_err: 0
>> rx38_xdp_tx_cqes: 0
>> rx39_packets: 0
>> rx39_bytes: 0
>> rx39_csum_complete: 0
>> rx39_csum_unnecessary: 0
>> rx39_csum_unnecessary_inner: 0
>> rx39_csum_none: 0
>> rx39_xdp_drop: 0
>> rx39_xdp_redirect: 0
>> rx39_lro_packets: 0
>> rx39_lro_bytes: 0
>> rx39_ecn_mark: 0
>> rx39_removed_vlan_packets: 0
>> rx39_wqe_err: 0
>> rx39_mpwqe_filler_cqes: 0
>> rx39_mpwqe_filler_strides: 0
>> rx39_buff_alloc_err: 0
>> rx39_cqe_compress_blks: 0
>> rx39_cqe_compress_pkts: 0
>> rx39_page_reuse: 0
>> rx39_cache_reuse: 0
>> rx39_cache_full: 0
>> rx39_cache_empty: 2560
>> rx39_cache_busy: 0
>> rx39_cache_waive: 0
>> rx39_congst_umr: 0
>> rx39_arfs_err: 0
>> rx39_xdp_tx_xmit: 0
>> rx39_xdp_tx_full: 0
>> rx39_xdp_tx_err: 0
>> rx39_xdp_tx_cqes: 0
>> rx40_packets: 0
>> rx40_bytes: 0
>> rx40_csum_complete: 0
>> rx40_csum_unnecessary: 0
>> rx40_csum_unnecessary_inner: 0
>> rx40_csum_none: 0
>> rx40_xdp_drop: 0
>> rx40_xdp_redirect: 0
>> rx40_lro_packets: 0
>> rx40_lro_bytes: 0
>> rx40_ecn_mark: 0
>> rx40_removed_vlan_packets: 0
>> rx40_wqe_err: 0
>> rx40_mpwqe_filler_cqes: 0
>> rx40_mpwqe_filler_strides: 0
>> rx40_buff_alloc_err: 0
>> rx40_cqe_compress_blks: 0
>> rx40_cqe_compress_pkts: 0
>> rx40_page_reuse: 0
>> rx40_cache_reuse: 0
>> rx40_cache_full: 0
>> rx40_cache_empty: 2560
>> rx40_cache_busy: 0
>> rx40_cache_waive: 0
>> rx40_congst_umr: 0
>> rx40_arfs_err: 0
>> rx40_xdp_tx_xmit: 0
>> rx40_xdp_tx_full: 0
>> rx40_xdp_tx_err: 0
>> rx40_xdp_tx_cqes: 0
>> rx41_packets: 0
>> rx41_bytes: 0
>> rx41_csum_complete: 0
>> rx41_csum_unnecessary: 0
>> rx41_csum_unnecessary_inner: 0
>> rx41_csum_none: 0
>> rx41_xdp_drop: 0
>> rx41_xdp_redirect: 0
>> rx41_lro_packets: 0
>> rx41_lro_bytes: 0
>> rx41_ecn_mark: 0
>> rx41_removed_vlan_packets: 0
>> rx41_wqe_err: 0
>> rx41_mpwqe_filler_cqes: 0
>> rx41_mpwqe_filler_strides: 0
>> rx41_buff_alloc_err: 0
>> rx41_cqe_compress_blks: 0
>> rx41_cqe_compress_pkts: 0
>> rx41_page_reuse: 0
>> rx41_cache_reuse: 0
>> rx41_cache_full: 0
>> rx41_cache_empty: 2560
>> rx41_cache_busy: 0
>> rx41_cache_waive: 0
>> rx41_congst_umr: 0
>> rx41_arfs_err: 0
>> rx41_xdp_tx_xmit: 0
>> rx41_xdp_tx_full: 0
>> rx41_xdp_tx_err: 0
>> rx41_xdp_tx_cqes: 0
>> rx42_packets: 0
>> rx42_bytes: 0
>> rx42_csum_complete: 0
>> rx42_csum_unnecessary: 0
>> rx42_csum_unnecessary_inner: 0
>> rx42_csum_none: 0
>> rx42_xdp_drop: 0
>> rx42_xdp_redirect: 0
>> rx42_lro_packets: 0
>> rx42_lro_bytes: 0
>> rx42_ecn_mark: 0
>> rx42_removed_vlan_packets: 0
>> rx42_wqe_err: 0
>> rx42_mpwqe_filler_cqes: 0
>> rx42_mpwqe_filler_strides: 0
>> rx42_buff_alloc_err: 0
>> rx42_cqe_compress_blks: 0
>> rx42_cqe_compress_pkts: 0
>> rx42_page_reuse: 0
>> rx42_cache_reuse: 0
>> rx42_cache_full: 0
>> rx42_cache_empty: 2560
>> rx42_cache_busy: 0
>> rx42_cache_waive: 0
>> rx42_congst_umr: 0
>> rx42_arfs_err: 0
>> rx42_xdp_tx_xmit: 0
>> rx42_xdp_tx_full: 0
>> rx42_xdp_tx_err: 0
>> rx42_xdp_tx_cqes: 0
>> rx43_packets: 0
>> rx43_bytes: 0
>> rx43_csum_complete: 0
>> rx43_csum_unnecessary: 0
>> rx43_csum_unnecessary_inner: 0
>> rx43_csum_none: 0
>> rx43_xdp_drop: 0
>> rx43_xdp_redirect: 0
>> rx43_lro_packets: 0
>> rx43_lro_bytes: 0
>> rx43_ecn_mark: 0
>> rx43_removed_vlan_packets: 0
>> rx43_wqe_err: 0
>> rx43_mpwqe_filler_cqes: 0
>> rx43_mpwqe_filler_strides: 0
>> rx43_buff_alloc_err: 0
>> rx43_cqe_compress_blks: 0
>> rx43_cqe_compress_pkts: 0
>> rx43_page_reuse: 0
>> rx43_cache_reuse: 0
>> rx43_cache_full: 0
>> rx43_cache_empty: 2560
>> rx43_cache_busy: 0
>> rx43_cache_waive: 0
>> rx43_congst_umr: 0
>> rx43_arfs_err: 0
>> rx43_xdp_tx_xmit: 0
>> rx43_xdp_tx_full: 0
>> rx43_xdp_tx_err: 0
>> rx43_xdp_tx_cqes: 0
>> rx44_packets: 0
>> rx44_bytes: 0
>> rx44_csum_complete: 0
>> rx44_csum_unnecessary: 0
>> rx44_csum_unnecessary_inner: 0
>> rx44_csum_none: 0
>> rx44_xdp_drop: 0
>> rx44_xdp_redirect: 0
>> rx44_lro_packets: 0
>> rx44_lro_bytes: 0
>> rx44_ecn_mark: 0
>> rx44_removed_vlan_packets: 0
>> rx44_wqe_err: 0
>> rx44_mpwqe_filler_cqes: 0
>> rx44_mpwqe_filler_strides: 0
>> rx44_buff_alloc_err: 0
>> rx44_cqe_compress_blks: 0
>> rx44_cqe_compress_pkts: 0
>> rx44_page_reuse: 0
>> rx44_cache_reuse: 0
>> rx44_cache_full: 0
>> rx44_cache_empty: 2560
>> rx44_cache_busy: 0
>> rx44_cache_waive: 0
>> rx44_congst_umr: 0
>> rx44_arfs_err: 0
>> rx44_xdp_tx_xmit: 0
>> rx44_xdp_tx_full: 0
>> rx44_xdp_tx_err: 0
>> rx44_xdp_tx_cqes: 0
>> rx45_packets: 0
>> rx45_bytes: 0
>> rx45_csum_complete: 0
>> rx45_csum_unnecessary: 0
>> rx45_csum_unnecessary_inner: 0
>> rx45_csum_none: 0
>> rx45_xdp_drop: 0
>> rx45_xdp_redirect: 0
>> rx45_lro_packets: 0
>> rx45_lro_bytes: 0
>> rx45_ecn_mark: 0
>> rx45_removed_vlan_packets: 0
>> rx45_wqe_err: 0
>> rx45_mpwqe_filler_cqes: 0
>> rx45_mpwqe_filler_strides: 0
>> rx45_buff_alloc_err: 0
>> rx45_cqe_compress_blks: 0
>> rx45_cqe_compress_pkts: 0
>> rx45_page_reuse: 0
>> rx45_cache_reuse: 0
>> rx45_cache_full: 0
>> rx45_cache_empty: 2560
>> rx45_cache_busy: 0
>> rx45_cache_waive: 0
>> rx45_congst_umr: 0
>> rx45_arfs_err: 0
>> rx45_xdp_tx_xmit: 0
>> rx45_xdp_tx_full: 0
>> rx45_xdp_tx_err: 0
>> rx45_xdp_tx_cqes: 0
>> rx46_packets: 0
>> rx46_bytes: 0
>> rx46_csum_complete: 0
>> rx46_csum_unnecessary: 0
>> rx46_csum_unnecessary_inner: 0
>> rx46_csum_none: 0
>> rx46_xdp_drop: 0
>> rx46_xdp_redirect: 0
>> rx46_lro_packets: 0
>> rx46_lro_bytes: 0
>> rx46_ecn_mark: 0
>> rx46_removed_vlan_packets: 0
>> rx46_wqe_err: 0
>> rx46_mpwqe_filler_cqes: 0
>> rx46_mpwqe_filler_strides: 0
>> rx46_buff_alloc_err: 0
>> rx46_cqe_compress_blks: 0
>> rx46_cqe_compress_pkts: 0
>> rx46_page_reuse: 0
>> rx46_cache_reuse: 0
>> rx46_cache_full: 0
>> rx46_cache_empty: 2560
>> rx46_cache_busy: 0
>> rx46_cache_waive: 0
>> rx46_congst_umr: 0
>> rx46_arfs_err: 0
>> rx46_xdp_tx_xmit: 0
>> rx46_xdp_tx_full: 0
>> rx46_xdp_tx_err: 0
>> rx46_xdp_tx_cqes: 0
>> rx47_packets: 0
>> rx47_bytes: 0
>> rx47_csum_complete: 0
>> rx47_csum_unnecessary: 0
>> rx47_csum_unnecessary_inner: 0
>> rx47_csum_none: 0
>> rx47_xdp_drop: 0
>> rx47_xdp_redirect: 0
>> rx47_lro_packets: 0
>> rx47_lro_bytes: 0
>> rx47_ecn_mark: 0
>> rx47_removed_vlan_packets: 0
>> rx47_wqe_err: 0
>> rx47_mpwqe_filler_cqes: 0
>> rx47_mpwqe_filler_strides: 0
>> rx47_buff_alloc_err: 0
>> rx47_cqe_compress_blks: 0
>> rx47_cqe_compress_pkts: 0
>> rx47_page_reuse: 0
>> rx47_cache_reuse: 0
>> rx47_cache_full: 0
>> rx47_cache_empty: 2560
>> rx47_cache_busy: 0
>> rx47_cache_waive: 0
>> rx47_congst_umr: 0
>> rx47_arfs_err: 0
>> rx47_xdp_tx_xmit: 0
>> rx47_xdp_tx_full: 0
>> rx47_xdp_tx_err: 0
>> rx47_xdp_tx_cqes: 0
>> rx48_packets: 0
>> rx48_bytes: 0
>> rx48_csum_complete: 0
>> rx48_csum_unnecessary: 0
>> rx48_csum_unnecessary_inner: 0
>> rx48_csum_none: 0
>> rx48_xdp_drop: 0
>> rx48_xdp_redirect: 0
>> rx48_lro_packets: 0
>> rx48_lro_bytes: 0
>> rx48_ecn_mark: 0
>> rx48_removed_vlan_packets: 0
>> rx48_wqe_err: 0
>> rx48_mpwqe_filler_cqes: 0
>> rx48_mpwqe_filler_strides: 0
>> rx48_buff_alloc_err: 0
>> rx48_cqe_compress_blks: 0
>> rx48_cqe_compress_pkts: 0
>> rx48_page_reuse: 0
>> rx48_cache_reuse: 0
>> rx48_cache_full: 0
>> rx48_cache_empty: 2560
>> rx48_cache_busy: 0
>> rx48_cache_waive: 0
>> rx48_congst_umr: 0
>> rx48_arfs_err: 0
>> rx48_xdp_tx_xmit: 0
>> rx48_xdp_tx_full: 0
>> rx48_xdp_tx_err: 0
>> rx48_xdp_tx_cqes: 0
>> rx49_packets: 0
>> rx49_bytes: 0
>> rx49_csum_complete: 0
>> rx49_csum_unnecessary: 0
>> rx49_csum_unnecessary_inner: 0
>> rx49_csum_none: 0
>> rx49_xdp_drop: 0
>> rx49_xdp_redirect: 0
>> rx49_lro_packets: 0
>> rx49_lro_bytes: 0
>> rx49_ecn_mark: 0
>> rx49_removed_vlan_packets: 0
>> rx49_wqe_err: 0
>> rx49_mpwqe_filler_cqes: 0
>> rx49_mpwqe_filler_strides: 0
>> rx49_buff_alloc_err: 0
>> rx49_cqe_compress_blks: 0
>> rx49_cqe_compress_pkts: 0
>> rx49_page_reuse: 0
>> rx49_cache_reuse: 0
>> rx49_cache_full: 0
>> rx49_cache_empty: 2560
>> rx49_cache_busy: 0
>> rx49_cache_waive: 0
>> rx49_congst_umr: 0
>> rx49_arfs_err: 0
>> rx49_xdp_tx_xmit: 0
>> rx49_xdp_tx_full: 0
>> rx49_xdp_tx_err: 0
>> rx49_xdp_tx_cqes: 0
>> rx50_packets: 0
>> rx50_bytes: 0
>> rx50_csum_complete: 0
>> rx50_csum_unnecessary: 0
>> rx50_csum_unnecessary_inner: 0
>> rx50_csum_none: 0
>> rx50_xdp_drop: 0
>> rx50_xdp_redirect: 0
>> rx50_lro_packets: 0
>> rx50_lro_bytes: 0
>> rx50_ecn_mark: 0
>> rx50_removed_vlan_packets: 0
>> rx50_wqe_err: 0
>> rx50_mpwqe_filler_cqes: 0
>> rx50_mpwqe_filler_strides: 0
>> rx50_buff_alloc_err: 0
>> rx50_cqe_compress_blks: 0
>> rx50_cqe_compress_pkts: 0
>> rx50_page_reuse: 0
>> rx50_cache_reuse: 0
>> rx50_cache_full: 0
>> rx50_cache_empty: 2560
>> rx50_cache_busy: 0
>> rx50_cache_waive: 0
>> rx50_congst_umr: 0
>> rx50_arfs_err: 0
>> rx50_xdp_tx_xmit: 0
>> rx50_xdp_tx_full: 0
>> rx50_xdp_tx_err: 0
>> rx50_xdp_tx_cqes: 0
>> rx51_packets: 0
>> rx51_bytes: 0
>> rx51_csum_complete: 0
>> rx51_csum_unnecessary: 0
>> rx51_csum_unnecessary_inner: 0
>> rx51_csum_none: 0
>> rx51_xdp_drop: 0
>> rx51_xdp_redirect: 0
>> rx51_lro_packets: 0
>> rx51_lro_bytes: 0
>> rx51_ecn_mark: 0
>> rx51_removed_vlan_packets: 0
>> rx51_wqe_err: 0
>> rx51_mpwqe_filler_cqes: 0
>> rx51_mpwqe_filler_strides: 0
>> rx51_buff_alloc_err: 0
>> rx51_cqe_compress_blks: 0
>> rx51_cqe_compress_pkts: 0
>> rx51_page_reuse: 0
>> rx51_cache_reuse: 0
>> rx51_cache_full: 0
>> rx51_cache_empty: 2560
>> rx51_cache_busy: 0
>> rx51_cache_waive: 0
>> rx51_congst_umr: 0
>> rx51_arfs_err: 0
>> rx51_xdp_tx_xmit: 0
>> rx51_xdp_tx_full: 0
>> rx51_xdp_tx_err: 0
>> rx51_xdp_tx_cqes: 0
>> rx52_packets: 0
>> rx52_bytes: 0
>> rx52_csum_complete: 0
>> rx52_csum_unnecessary: 0
>> rx52_csum_unnecessary_inner: 0
>> rx52_csum_none: 0
>> rx52_xdp_drop: 0
>> rx52_xdp_redirect: 0
>> rx52_lro_packets: 0
>> rx52_lro_bytes: 0
>> rx52_ecn_mark: 0
>> rx52_removed_vlan_packets: 0
>> rx52_wqe_err: 0
>> rx52_mpwqe_filler_cqes: 0
>> rx52_mpwqe_filler_strides: 0
>> rx52_buff_alloc_err: 0
>> rx52_cqe_compress_blks: 0
>> rx52_cqe_compress_pkts: 0
>> rx52_page_reuse: 0
>> rx52_cache_reuse: 0
>> rx52_cache_full: 0
>> rx52_cache_empty: 2560
>> rx52_cache_busy: 0
>> rx52_cache_waive: 0
>> rx52_congst_umr: 0
>> rx52_arfs_err: 0
>> rx52_xdp_tx_xmit: 0
>> rx52_xdp_tx_full: 0
>> rx52_xdp_tx_err: 0
>> rx52_xdp_tx_cqes: 0
>> rx53_packets: 0
>> rx53_bytes: 0
>> rx53_csum_complete: 0
>> rx53_csum_unnecessary: 0
>> rx53_csum_unnecessary_inner: 0
>> rx53_csum_none: 0
>> rx53_xdp_drop: 0
>> rx53_xdp_redirect: 0
>> rx53_lro_packets: 0
>> rx53_lro_bytes: 0
>> rx53_ecn_mark: 0
>> rx53_removed_vlan_packets: 0
>> rx53_wqe_err: 0
>> rx53_mpwqe_filler_cqes: 0
>> rx53_mpwqe_filler_strides: 0
>> rx53_buff_alloc_err: 0
>> rx53_cqe_compress_blks: 0
>> rx53_cqe_compress_pkts: 0
>> rx53_page_reuse: 0
>> rx53_cache_reuse: 0
>> rx53_cache_full: 0
>> rx53_cache_empty: 2560
>> rx53_cache_busy: 0
>> rx53_cache_waive: 0
>> rx53_congst_umr: 0
>> rx53_arfs_err: 0
>> rx53_xdp_tx_xmit: 0
>> rx53_xdp_tx_full: 0
>> rx53_xdp_tx_err: 0
>> rx53_xdp_tx_cqes: 0
>> rx54_packets: 0
>> rx54_bytes: 0
>> rx54_csum_complete: 0
>> rx54_csum_unnecessary: 0
>> rx54_csum_unnecessary_inner: 0
>> rx54_csum_none: 0
>> rx54_xdp_drop: 0
>> rx54_xdp_redirect: 0
>> rx54_lro_packets: 0
>> rx54_lro_bytes: 0
>> rx54_ecn_mark: 0
>> rx54_removed_vlan_packets: 0
>> rx54_wqe_err: 0
>> rx54_mpwqe_filler_cqes: 0
>> rx54_mpwqe_filler_strides: 0
>> rx54_buff_alloc_err: 0
>> rx54_cqe_compress_blks: 0
>> rx54_cqe_compress_pkts: 0
>> rx54_page_reuse: 0
>> rx54_cache_reuse: 0
>> rx54_cache_full: 0
>> rx54_cache_empty: 2560
>> rx54_cache_busy: 0
>> rx54_cache_waive: 0
>> rx54_congst_umr: 0
>> rx54_arfs_err: 0
>> rx54_xdp_tx_xmit: 0
>> rx54_xdp_tx_full: 0
>> rx54_xdp_tx_err: 0
>> rx54_xdp_tx_cqes: 0
>> rx55_packets: 0
>> rx55_bytes: 0
>> rx55_csum_complete: 0
>> rx55_csum_unnecessary: 0
>> rx55_csum_unnecessary_inner: 0
>> rx55_csum_none: 0
>> rx55_xdp_drop: 0
>> rx55_xdp_redirect: 0
>> rx55_lro_packets: 0
>> rx55_lro_bytes: 0
>> rx55_ecn_mark: 0
>> rx55_removed_vlan_packets: 0
>> rx55_wqe_err: 0
>> rx55_mpwqe_filler_cqes: 0
>> rx55_mpwqe_filler_strides: 0
>> rx55_buff_alloc_err: 0
>> rx55_cqe_compress_blks: 0
>> rx55_cqe_compress_pkts: 0
>> rx55_page_reuse: 0
>> rx55_cache_reuse: 0
>> rx55_cache_full: 0
>> rx55_cache_empty: 2560
>> rx55_cache_busy: 0
>> rx55_cache_waive: 0
>> rx55_congst_umr: 0
>> rx55_arfs_err: 0
>> rx55_xdp_tx_xmit: 0
>> rx55_xdp_tx_full: 0
>> rx55_xdp_tx_err: 0
>> rx55_xdp_tx_cqes: 0
>> tx0_packets: 6019477917
>> tx0_bytes: 3445238940825
>> tx0_tso_packets: 311304622
>> tx0_tso_bytes: 1897094773213
>> tx0_tso_inner_packets: 0
>> tx0_tso_inner_bytes: 0
>> tx0_csum_partial: 457981794
>> tx0_csum_partial_inner: 0
>> tx0_added_vlan_packets: 4965567654
>> tx0_nop: 72290329
>> tx0_csum_none: 4507585860
>> tx0_stopped: 9118
>> tx0_dropped: 0
>> tx0_xmit_more: 51651593
>> tx0_recover: 0
>> tx0_cqes: 4913918402
>> tx0_wake: 9118
>> tx0_cqe_err: 0
>> tx1_packets: 5700413414
>> tx1_bytes: 3340870662350
>> tx1_tso_packets: 318201557
>> tx1_tso_bytes: 1915233462303
>> tx1_tso_inner_packets: 0
>> tx1_tso_inner_bytes: 0
>> tx1_csum_partial: 461736722
>> tx1_csum_partial_inner: 0
>> tx1_added_vlan_packets: 4638708749
>> tx1_nop: 70061796
>> tx1_csum_none: 4176972027
>> tx1_stopped: 9248
>> tx1_dropped: 0
>> tx1_xmit_more: 39531959
>> tx1_recover: 0
>> tx1_cqes: 4599179178
>> tx1_wake: 9248
>> tx1_cqe_err: 0
>> tx2_packets: 5795960848
>> tx2_bytes: 3394876820271
>> tx2_tso_packets: 322935065
>> tx2_tso_bytes: 1910825901109
>> tx2_tso_inner_packets: 0
>> tx2_tso_inner_bytes: 0
>> tx2_csum_partial: 460747092
>> tx2_csum_partial_inner: 0
>> tx2_added_vlan_packets: 4743705654
>> tx2_nop: 72722430
>> tx2_csum_none: 4282958562
>> tx2_stopped: 8938
>> tx2_dropped: 0
>> tx2_xmit_more: 44084718
>> tx2_recover: 0
>> tx2_cqes: 4699623410
>> tx2_wake: 8938
>> tx2_cqe_err: 0
>> tx3_packets: 5580215878
>> tx3_bytes: 3191677257787
>> tx3_tso_packets: 305771141
>> tx3_tso_bytes: 1823265793476
>> tx3_tso_inner_packets: 0
>> tx3_tso_inner_bytes: 0
>> tx3_csum_partial: 434976070
>> tx3_csum_partial_inner: 0
>> tx3_added_vlan_packets: 4569899956
>> tx3_nop: 68184348
>> tx3_csum_none: 4134923886
>> tx3_stopped: 8383
>> tx3_dropped: 0
>> tx3_xmit_more: 41940375
>> tx3_recover: 0
>> tx3_cqes: 4527961924
>> tx3_wake: 8383
>> tx3_cqe_err: 0
>> tx4_packets: 6795007068
>> tx4_bytes: 3963890025270
>> tx4_tso_packets: 358437617
>> tx4_tso_bytes: 2154747995355
>> tx4_tso_inner_packets: 0
>> tx4_tso_inner_bytes: 0
>> tx4_csum_partial: 504764524
>> tx4_csum_partial_inner: 0
>> tx4_added_vlan_packets: 5602510191
>> tx4_nop: 81345604
>> tx4_csum_none: 5097745667
>> tx4_stopped: 10248
>> tx4_dropped: 0
>> tx4_xmit_more: 49068571
>> tx4_recover: 0
>> tx4_cqes: 5553444276
>> tx4_wake: 10248
>> tx4_cqe_err: 0
>> tx5_packets: 6408089261
>> tx5_bytes: 3676275848279
>> tx5_tso_packets: 345129329
>> tx5_tso_bytes: 2108447877473
>> tx5_tso_inner_packets: 0
>> tx5_tso_inner_bytes: 0
>> tx5_csum_partial: 494705894
>> tx5_csum_partial_inner: 0
>> tx5_added_vlan_packets: 5235998343
>> tx5_nop: 77694627
>> tx5_csum_none: 4741292449
>> tx5_stopped: 46
>> tx5_dropped: 0
>> tx5_xmit_more: 46675831
>> tx5_recover: 0
>> tx5_cqes: 5189323550
>> tx5_wake: 46
>> tx5_cqe_err: 0
>> tx6_packets: 6382289663
>> tx6_bytes: 3670991418150
>> tx6_tso_packets: 342927826
>> tx6_tso_bytes: 2075049679904
>> tx6_tso_inner_packets: 0
>> tx6_tso_inner_bytes: 0
>> tx6_csum_partial: 490369221
>> tx6_csum_partial_inner: 0
>> tx6_added_vlan_packets: 5232144528
>> tx6_nop: 77391246
>> tx6_csum_none: 4741775307
>> tx6_stopped: 10823
>> tx6_dropped: 0
>> tx6_xmit_more: 44487607
>> tx6_recover: 0
>> tx6_cqes: 5187659877
>> tx6_wake: 10823
>> tx6_cqe_err: 0
>> tx7_packets: 6456378284
>> tx7_bytes: 3758013320518
>> tx7_tso_packets: 350958294
>> tx7_tso_bytes: 2126833408524
>> tx7_tso_inner_packets: 0
>> tx7_tso_inner_bytes: 0
>> tx7_csum_partial: 501804109
>> tx7_csum_partial_inner: 0
>> tx7_added_vlan_packets: 5275635204
>> tx7_nop: 79010883
>> tx7_csum_none: 4773831096
>> tx7_stopped: 14684
>> tx7_dropped: 0
>> tx7_xmit_more: 44447469
>> tx7_recover: 0
>> tx7_cqes: 5231191770
>> tx7_wake: 14684
>> tx7_cqe_err: 0
>> tx8_packets: 6401799768
>> tx8_bytes: 3681210808766
>> tx8_tso_packets: 342878228
>> tx8_tso_bytes: 2089688012191
>> tx8_tso_inner_packets: 0
>> tx8_tso_inner_bytes: 0
>> tx8_csum_partial: 494865145
>> tx8_csum_partial_inner: 0
>> tx8_added_vlan_packets: 5242288908
>> tx8_nop: 77250910
>> tx8_csum_none: 4747423763
>> tx8_stopped: 2
>> tx8_dropped: 0
>> tx8_xmit_more: 44191737
>> tx8_recover: 0
>> tx8_cqes: 5198098454
>> tx8_wake: 2
>> tx8_cqe_err: 0
>> tx9_packets: 6632882888
>> tx9_bytes: 3820110338309
>> tx9_tso_packets: 354189056
>> tx9_tso_bytes: 2187883597128
>> tx9_tso_inner_packets: 0
>> tx9_tso_inner_bytes: 0
>> tx9_csum_partial: 511108218
>> tx9_csum_partial_inner: 0
>> tx9_added_vlan_packets: 5413836353
>> tx9_nop: 80560668
>> tx9_csum_none: 4902728135
>> tx9_stopped: 9091
>> tx9_dropped: 0
>> tx9_xmit_more: 54501293
>> tx9_recover: 0
>> tx9_cqes: 5359337638
>> tx9_wake: 9091
>> tx9_cqe_err: 0
>> tx10_packets: 6421786406
>> tx10_bytes: 3692798413429
>> tx10_tso_packets: 346878943
>> tx10_tso_bytes: 2111921062110
>> tx10_tso_inner_packets: 0
>> tx10_tso_inner_bytes: 0
>> tx10_csum_partial: 494356645
>> tx10_csum_partial_inner: 0
>> tx10_added_vlan_packets: 5248274374
>> tx10_nop: 77922624
>> tx10_csum_none: 4753917730
>> tx10_stopped: 9617
>> tx10_dropped: 0
>> tx10_xmit_more: 44473939
>> tx10_recover: 0
>> tx10_cqes: 5203802547
>> tx10_wake: 9617
>> tx10_cqe_err: 0
>> tx11_packets: 6406750938
>> tx11_bytes: 3660343565126
>> tx11_tso_packets: 355917271
>> tx11_tso_bytes: 2130812246956
>> tx11_tso_inner_packets: 0
>> tx11_tso_inner_bytes: 0
>> tx11_csum_partial: 500336369
>> tx11_csum_partial_inner: 0
>> tx11_added_vlan_packets: 5228267547
>> tx11_nop: 78906315
>> tx11_csum_none: 4727931178
>> tx11_stopped: 9607
>> tx11_dropped: 0
>> tx11_xmit_more: 40041492
>> tx11_recover: 0
>> tx11_cqes: 5188228290
>> tx11_wake: 9607
>> tx11_cqe_err: 0
>> tx12_packets: 6422347846
>> tx12_bytes: 3718772753227
>> tx12_tso_packets: 355397223
>> tx12_tso_bytes: 2162614059758
>> tx12_tso_inner_packets: 0
>> tx12_tso_inner_bytes: 0
>> tx12_csum_partial: 511437844
>> tx12_csum_partial_inner: 0
>> tx12_added_vlan_packets: 5221373746
>> tx12_nop: 78866779
>> tx12_csum_none: 4709935902
>> tx12_stopped: 10280
>> tx12_dropped: 0
>> tx12_xmit_more: 42189399
>> tx12_recover: 0
>> tx12_cqes: 5179187154
>> tx12_wake: 10280
>> tx12_cqe_err: 0
>> tx13_packets: 6429383816
>> tx13_bytes: 3725679445046
>> tx13_tso_packets: 360934759
>> tx13_tso_bytes: 2148016411436
>> tx13_tso_inner_packets: 0
>> tx13_tso_inner_bytes: 0
>> tx13_csum_partial: 505245849
>> tx13_csum_partial_inner: 0
>> tx13_added_vlan_packets: 5240267441
>> tx13_nop: 80295637
>> tx13_csum_none: 4735021592
>> tx13_stopped: 84
>> tx13_dropped: 0
>> tx13_xmit_more: 43118045
>> tx13_recover: 0
>> tx13_cqes: 5197150348
>> tx13_wake: 84
>> tx13_cqe_err: 0
>> tx14_packets: 6375279148
>> tx14_bytes: 3624267203336
>> tx14_tso_packets: 344388148
>> tx14_tso_bytes: 2094966273548
>> tx14_tso_inner_packets: 0
>> tx14_tso_inner_bytes: 0
>> tx14_csum_partial: 494129407
>> tx14_csum_partial_inner: 0
>> tx14_added_vlan_packets: 5210749337
>> tx14_nop: 77280615
>> tx14_csum_none: 4716619930
>> tx14_stopped: 13057
>> tx14_dropped: 0
>> tx14_xmit_more: 40849682
>> tx14_recover: 0
>> tx14_cqes: 5169902694
>> tx14_wake: 13057
>> tx14_cqe_err: 0
>> tx15_packets: 6489306520
>> tx15_bytes: 3775716194795
>> tx15_tso_packets: 368716406
>> tx15_tso_bytes: 2165876423354
>> tx15_tso_inner_packets: 0
>> tx15_tso_inner_bytes: 0
>> tx15_csum_partial: 509887864
>> tx15_csum_partial_inner: 0
>> tx15_added_vlan_packets: 5296767390
>> tx15_nop: 80803468
>> tx15_csum_none: 4786879529
>> tx15_stopped: 1
>> tx15_dropped: 0
>> tx15_xmit_more: 46979676
>> tx15_recover: 0
>> tx15_cqes: 5249789328
>> tx15_wake: 1
>> tx15_cqe_err: 0
>> tx16_packets: 6559857761
>> tx16_bytes: 3724080573905
>> tx16_tso_packets: 350864176
>> tx16_tso_bytes: 2099634006033
>> tx16_tso_inner_packets: 0
>> tx16_tso_inner_bytes: 0
>> tx16_csum_partial: 489397232
>> tx16_csum_partial_inner: 0
>> tx16_added_vlan_packets: 5398869334
>> tx16_nop: 79046075
>> tx16_csum_none: 4909472106
>> tx16_stopped: 4480
>> tx16_dropped: 0
>> tx16_xmit_more: 47273286
>> tx16_recover: 0
>> tx16_cqes: 5351598315
>> tx16_wake: 4480
>> tx16_cqe_err: 0
>> tx17_packets: 6358711533
>> tx17_bytes: 3650180865573
>> tx17_tso_packets: 350723136
>> tx17_tso_bytes: 2109426587128
>> tx17_tso_inner_packets: 0
>> tx17_tso_inner_bytes: 0
>> tx17_csum_partial: 494719487
>> tx17_csum_partial_inner: 0
>> tx17_added_vlan_packets: 5190068796
>> tx17_nop: 77285612
>> tx17_csum_none: 4695349309
>> tx17_stopped: 10443
>> tx17_dropped: 0
>> tx17_xmit_more: 45582108
>> tx17_recover: 0
>> tx17_cqes: 5144489363
>> tx17_wake: 10443
>> tx17_cqe_err: 0
>> tx18_packets: 6655328437
>> tx18_bytes: 3801768461807
>> tx18_tso_packets: 356516373
>> tx18_tso_bytes: 2164829247550
>> tx18_tso_inner_packets: 0
>> tx18_tso_inner_bytes: 0
>> tx18_csum_partial: 500508446
>> tx18_csum_partial_inner: 0
>> tx18_added_vlan_packets: 5454166840
>> tx18_nop: 80423007
>> tx18_csum_none: 4953658394
>> tx18_stopped: 14760
>> tx18_dropped: 0
>> tx18_xmit_more: 50837465
>> tx18_recover: 0
>> tx18_cqes: 5403332553
>> tx18_wake: 14760
>> tx18_cqe_err: 0
>> tx19_packets: 6408680611
>> tx19_bytes: 3644119934372
>> tx19_tso_packets: 350727530
>> tx19_tso_bytes: 2089896715365
>> tx19_tso_inner_packets: 0
>> tx19_tso_inner_bytes: 0
>> tx19_csum_partial: 486536490
>> tx19_csum_partial_inner: 0
>> tx19_added_vlan_packets: 5255839020
>> tx19_nop: 78525198
>> tx19_csum_none: 4769302530
>> tx19_stopped: 8614
>> tx19_dropped: 0
>> tx19_xmit_more: 43605232
>> tx19_recover: 0
>> tx19_cqes: 5212236833
>> tx19_wake: 8614
>> tx19_cqe_err: 0
>> tx20_packets: 5609275141
>> tx20_bytes: 3187279031581
>> tx20_tso_packets: 298609303
>> tx20_tso_bytes: 1794382229379
>> tx20_tso_inner_packets: 0
>> tx20_tso_inner_bytes: 0
>> tx20_csum_partial: 430691178
>> tx20_csum_partial_inner: 0
>> tx20_added_vlan_packets: 4616844286
>> tx20_nop: 67450040
>> tx20_csum_none: 4186153108
>> tx20_stopped: 9099
>> tx20_dropped: 0
>> tx20_xmit_more: 42040991
>> tx20_recover: 0
>> tx20_cqes: 4574805846
>> tx20_wake: 9099
>> tx20_cqe_err: 0
>> tx21_packets: 5641621183
>> tx21_bytes: 3279282331124
>> tx21_tso_packets: 311297057
>> tx21_tso_bytes: 1875735401012
>> tx21_tso_inner_packets: 0
>> tx21_tso_inner_bytes: 0
>> tx21_csum_partial: 444333894
>> tx21_csum_partial_inner: 0
>> tx21_added_vlan_packets: 4603527701
>> tx21_nop: 68857983
>> tx21_csum_none: 4159193807
>> tx21_stopped: 10082
>> tx21_dropped: 0
>> tx21_xmit_more: 43988081
>> tx21_recover: 0
>> tx21_cqes: 4559542410
>> tx21_wake: 10082
>> tx21_cqe_err: 0
>> tx22_packets: 5822168288
>> tx22_bytes: 3452026726862
>> tx22_tso_packets: 308230791
>> tx22_tso_bytes: 1859686450671
>> tx22_tso_inner_packets: 0
>> tx22_tso_inner_bytes: 0
>> tx22_csum_partial: 442751518
>> tx22_csum_partial_inner: 0
>> tx22_added_vlan_packets: 4792100335
>> tx22_nop: 70631706
>> tx22_csum_none: 4349348817
>> tx22_stopped: 9355
>> tx22_dropped: 0
>> tx22_xmit_more: 45165994
>> tx22_recover: 0
>> tx22_cqes: 4746936601
>> tx22_wake: 9355
>> tx22_cqe_err: 0
>> tx23_packets: 5664896066
>> tx23_bytes: 3207724186946
>> tx23_tso_packets: 300418757
>> tx23_tso_bytes: 1794180478679
>> tx23_tso_inner_packets: 0
>> tx23_tso_inner_bytes: 0
>> tx23_csum_partial: 429898848
>> tx23_csum_partial_inner: 0
>> tx23_added_vlan_packets: 4674317320
>> tx23_nop: 67899896
>> tx23_csum_none: 4244418472
>> tx23_stopped: 11684
>> tx23_dropped: 0
>> tx23_xmit_more: 43351132
>> tx23_recover: 0
>> tx23_cqes: 4630969028
>> tx23_wake: 11684
>> tx23_cqe_err: 0
>> tx24_packets: 5663326601
>> tx24_bytes: 3250127095110
>> tx24_tso_packets: 301327422
>> tx24_tso_bytes: 1831260534157
>> tx24_tso_inner_packets: 0
>> tx24_tso_inner_bytes: 0
>> tx24_csum_partial: 438757312
>> tx24_csum_partial_inner: 0
>> tx24_added_vlan_packets: 4646014986
>> tx24_nop: 68431153
>> tx24_csum_none: 4207257674
>> tx24_stopped: 9240
>> tx24_dropped: 0
>> tx24_xmit_more: 47699542
>> tx24_recover: 0
>> tx24_cqes: 4598317913
>> tx24_wake: 9240
>> tx24_cqe_err: 0
>> tx25_packets: 5703883962
>> tx25_bytes: 3291856915695
>> tx25_tso_packets: 308900318
>> tx25_tso_bytes: 1855516128386
>> tx25_tso_inner_packets: 0
>> tx25_tso_inner_bytes: 0
>> tx25_csum_partial: 444753744
>> tx25_csum_partial_inner: 0
>> tx25_added_vlan_packets: 4676528924
>> tx25_nop: 69230967
>> tx25_csum_none: 4231775180
>> tx25_stopped: 1140
>> tx25_dropped: 0
>> tx25_xmit_more: 40819195
>> tx25_recover: 0
>> tx25_cqes: 4635710966
>> tx25_wake: 1140
>> tx25_cqe_err: 0
>> tx26_packets: 5803495984
>> tx26_bytes: 3413564272139
>> tx26_tso_packets: 319986230
>> tx26_tso_bytes: 1929042839677
>> tx26_tso_inner_packets: 0
>> tx26_tso_inner_bytes: 0
>> tx26_csum_partial: 464771163
>> tx26_csum_partial_inner: 0
>> tx26_added_vlan_packets: 4734767280
>> tx26_nop: 71345080
>> tx26_csum_none: 4269996117
>> tx26_stopped: 10972
>> tx26_dropped: 0
>> tx26_xmit_more: 43793424
>> tx26_recover: 0
>> tx26_cqes: 4690976400
>> tx26_wake: 10972
>> tx26_cqe_err: 0
>> tx27_packets: 5960955343
>> tx27_bytes: 3444156164526
>> tx27_tso_packets: 325099639
>> tx27_tso_bytes: 1928378678784
>> tx27_tso_inner_packets: 0
>> tx27_tso_inner_bytes: 0
>> tx27_csum_partial: 467310289
>> tx27_csum_partial_inner: 0
>> tx27_added_vlan_packets: 4888651368
>> tx27_nop: 73201664
>> tx27_csum_none: 4421341079
>> tx27_stopped: 9465
>> tx27_dropped: 0
>> tx27_xmit_more: 53632121
>> tx27_recover: 0
>> tx27_cqes: 4835021398
>> tx27_wake: 9465
>> tx27_cqe_err: 0
>> tx28_packets: 0
>> tx28_bytes: 0
>> tx28_tso_packets: 0
>> tx28_tso_bytes: 0
>> tx28_tso_inner_packets: 0
>> tx28_tso_inner_bytes: 0
>> tx28_csum_partial: 0
>> tx28_csum_partial_inner: 0
>> tx28_added_vlan_packets: 0
>> tx28_nop: 0
>> tx28_csum_none: 0
>> tx28_stopped: 0
>> tx28_dropped: 0
>> tx28_xmit_more: 0
>> tx28_recover: 0
>> tx28_cqes: 0
>> tx28_wake: 0
>> tx28_cqe_err: 0
>> tx29_packets: 3
>> tx29_bytes: 266
>> tx29_tso_packets: 0
>> tx29_tso_bytes: 0
>> tx29_tso_inner_packets: 0
>> tx29_tso_inner_bytes: 0
>> tx29_csum_partial: 0
>> tx29_csum_partial_inner: 0
>> tx29_added_vlan_packets: 0
>> tx29_nop: 0
>> tx29_csum_none: 3
>> tx29_stopped: 0
>> tx29_dropped: 0
>> tx29_xmit_more: 1
>> tx29_recover: 0
>> tx29_cqes: 2
>> tx29_wake: 0
>> tx29_cqe_err: 0
>> tx30_packets: 0
>> tx30_bytes: 0
>> tx30_tso_packets: 0
>> tx30_tso_bytes: 0
>> tx30_tso_inner_packets: 0
>> tx30_tso_inner_bytes: 0
>> tx30_csum_partial: 0
>> tx30_csum_partial_inner: 0
>> tx30_added_vlan_packets: 0
>> tx30_nop: 0
>> tx30_csum_none: 0
>> tx30_stopped: 0
>> tx30_dropped: 0
>> tx30_xmit_more: 0
>> tx30_recover: 0
>> tx30_cqes: 0
>> tx30_wake: 0
>> tx30_cqe_err: 0
>> tx31_packets: 0
>> tx31_bytes: 0
>> tx31_tso_packets: 0
>> tx31_tso_bytes: 0
>> tx31_tso_inner_packets: 0
>> tx31_tso_inner_bytes: 0
>> tx31_csum_partial: 0
>> tx31_csum_partial_inner: 0
>> tx31_added_vlan_packets: 0
>> tx31_nop: 0
>> tx31_csum_none: 0
>> tx31_stopped: 0
>> tx31_dropped: 0
>> tx31_xmit_more: 0
>> tx31_recover: 0
>> tx31_cqes: 0
>> tx31_wake: 0
>> tx31_cqe_err: 0
>> tx32_packets: 0
>> tx32_bytes: 0
>> tx32_tso_packets: 0
>> tx32_tso_bytes: 0
>> tx32_tso_inner_packets: 0
>> tx32_tso_inner_bytes: 0
>> tx32_csum_partial: 0
>> tx32_csum_partial_inner: 0
>> tx32_added_vlan_packets: 0
>> tx32_nop: 0
>> tx32_csum_none: 0
>> tx32_stopped: 0
>> tx32_dropped: 0
>> tx32_xmit_more: 0
>> tx32_recover: 0
>> tx32_cqes: 0
>> tx32_wake: 0
>> tx32_cqe_err: 0
>> tx33_packets: 0
>> tx33_bytes: 0
>> tx33_tso_packets: 0
>> tx33_tso_bytes: 0
>> tx33_tso_inner_packets: 0
>> tx33_tso_inner_bytes: 0
>> tx33_csum_partial: 0
>> tx33_csum_partial_inner: 0
>> tx33_added_vlan_packets: 0
>> tx33_nop: 0
>> tx33_csum_none: 0
>> tx33_stopped: 0
>> tx33_dropped: 0
>> tx33_xmit_more: 0
>> tx33_recover: 0
>> tx33_cqes: 0
>> tx33_wake: 0
>> tx33_cqe_err: 0
>> tx34_packets: 0
>> tx34_bytes: 0
>> tx34_tso_packets: 0
>> tx34_tso_bytes: 0
>> tx34_tso_inner_packets: 0
>> tx34_tso_inner_bytes: 0
>> tx34_csum_partial: 0
>> tx34_csum_partial_inner: 0
>> tx34_added_vlan_packets: 0
>> tx34_nop: 0
>> tx34_csum_none: 0
>> tx34_stopped: 0
>> tx34_dropped: 0
>> tx34_xmit_more: 0
>> tx34_recover: 0
>> tx34_cqes: 0
>> tx34_wake: 0
>> tx34_cqe_err: 0
>> tx35_packets: 0
>> tx35_bytes: 0
>> tx35_tso_packets: 0
>> tx35_tso_bytes: 0
>> tx35_tso_inner_packets: 0
>> tx35_tso_inner_bytes: 0
>> tx35_csum_partial: 0
>> tx35_csum_partial_inner: 0
>> tx35_added_vlan_packets: 0
>> tx35_nop: 0
>> tx35_csum_none: 0
>> tx35_stopped: 0
>> tx35_dropped: 0
>> tx35_xmit_more: 0
>> tx35_recover: 0
>> tx35_cqes: 0
>> tx35_wake: 0
>> tx35_cqe_err: 0
>> tx36_packets: 0
>> tx36_bytes: 0
>> tx36_tso_packets: 0
>> tx36_tso_bytes: 0
>> tx36_tso_inner_packets: 0
>> tx36_tso_inner_bytes: 0
>> tx36_csum_partial: 0
>> tx36_csum_partial_inner: 0
>> tx36_added_vlan_packets: 0
>> tx36_nop: 0
>> tx36_csum_none: 0
>> tx36_stopped: 0
>> tx36_dropped: 0
>> tx36_xmit_more: 0
>> tx36_recover: 0
>> tx36_cqes: 0
>> tx36_wake: 0
>> tx36_cqe_err: 0
>> tx37_packets: 0
>> tx37_bytes: 0
>> tx37_tso_packets: 0
>> tx37_tso_bytes: 0
>> tx37_tso_inner_packets: 0
>> tx37_tso_inner_bytes: 0
>> tx37_csum_partial: 0
>> tx37_csum_partial_inner: 0
>> tx37_added_vlan_packets: 0
>> tx37_nop: 0
>> tx37_csum_none: 0
>> tx37_stopped: 0
>> tx37_dropped: 0
>> tx37_xmit_more: 0
>> tx37_recover: 0
>> tx37_cqes: 0
>> tx37_wake: 0
>> tx37_cqe_err: 0
>> tx38_packets: 0
>> tx38_bytes: 0
>> tx38_tso_packets: 0
>> tx38_tso_bytes: 0
>> tx38_tso_inner_packets: 0
>> tx38_tso_inner_bytes: 0
>> tx38_csum_partial: 0
>> tx38_csum_partial_inner: 0
>> tx38_added_vlan_packets: 0
>> tx38_nop: 0
>> tx38_csum_none: 0
>> tx38_stopped: 0
>> tx38_dropped: 0
>> tx38_xmit_more: 0
>> tx38_recover: 0
>> tx38_cqes: 0
>> tx38_wake: 0
>> tx38_cqe_err: 0
>> tx39_packets: 0
>> tx39_bytes: 0
>> tx39_tso_packets: 0
>> tx39_tso_bytes: 0
>> tx39_tso_inner_packets: 0
>> tx39_tso_inner_bytes: 0
>> tx39_csum_partial: 0
>> tx39_csum_partial_inner: 0
>> tx39_added_vlan_packets: 0
>> tx39_nop: 0
>> tx39_csum_none: 0
>> tx39_stopped: 0
>> tx39_dropped: 0
>> tx39_xmit_more: 0
>> tx39_recover: 0
>> tx39_cqes: 0
>> tx39_wake: 0
>> tx39_cqe_err: 0
>> tx40_packets: 0
>> tx40_bytes: 0
>> tx40_tso_packets: 0
>> tx40_tso_bytes: 0
>> tx40_tso_inner_packets: 0
>> tx40_tso_inner_bytes: 0
>> tx40_csum_partial: 0
>> tx40_csum_partial_inner: 0
>> tx40_added_vlan_packets: 0
>> tx40_nop: 0
>> tx40_csum_none: 0
>> tx40_stopped: 0
>> tx40_dropped: 0
>> tx40_xmit_more: 0
>> tx40_recover: 0
>> tx40_cqes: 0
>> tx40_wake: 0
>> tx40_cqe_err: 0
>> tx41_packets: 0
>> tx41_bytes: 0
>> tx41_tso_packets: 0
>> tx41_tso_bytes: 0
>> tx41_tso_inner_packets: 0
>> tx41_tso_inner_bytes: 0
>> tx41_csum_partial: 0
>> tx41_csum_partial_inner: 0
>> tx41_added_vlan_packets: 0
>> tx41_nop: 0
>> tx41_csum_none: 0
>> tx41_stopped: 0
>> tx41_dropped: 0
>> tx41_xmit_more: 0
>> tx41_recover: 0
>> tx41_cqes: 0
>> tx41_wake: 0
>> tx41_cqe_err: 0
>> tx42_packets: 0
>> tx42_bytes: 0
>> tx42_tso_packets: 0
>> tx42_tso_bytes: 0
>> tx42_tso_inner_packets: 0
>> tx42_tso_inner_bytes: 0
>> tx42_csum_partial: 0
>> tx42_csum_partial_inner: 0
>> tx42_added_vlan_packets: 0
>> tx42_nop: 0
>> tx42_csum_none: 0
>> tx42_stopped: 0
>> tx42_dropped: 0
>> tx42_xmit_more: 0
>> tx42_recover: 0
>> tx42_cqes: 0
>> tx42_wake: 0
>> tx42_cqe_err: 0
>> tx43_packets: 0
>> tx43_bytes: 0
>> tx43_tso_packets: 0
>> tx43_tso_bytes: 0
>> tx43_tso_inner_packets: 0
>> tx43_tso_inner_bytes: 0
>> tx43_csum_partial: 0
>> tx43_csum_partial_inner: 0
>> tx43_added_vlan_packets: 0
>> tx43_nop: 0
>> tx43_csum_none: 0
>> tx43_stopped: 0
>> tx43_dropped: 0
>> tx43_xmit_more: 0
>> tx43_recover: 0
>> tx43_cqes: 0
>> tx43_wake: 0
>> tx43_cqe_err: 0
>> tx44_packets: 0
>> tx44_bytes: 0
>> tx44_tso_packets: 0
>> tx44_tso_bytes: 0
>> tx44_tso_inner_packets: 0
>> tx44_tso_inner_bytes: 0
>> tx44_csum_partial: 0
>> tx44_csum_partial_inner: 0
>> tx44_added_vlan_packets: 0
>> tx44_nop: 0
>> tx44_csum_none: 0
>> tx44_stopped: 0
>> tx44_dropped: 0
>> tx44_xmit_more: 0
>> tx44_recover: 0
>> tx44_cqes: 0
>> tx44_wake: 0
>> tx44_cqe_err: 0
>> tx45_packets: 0
>> tx45_bytes: 0
>> tx45_tso_packets: 0
>> tx45_tso_bytes: 0
>> tx45_tso_inner_packets: 0
>> tx45_tso_inner_bytes: 0
>> tx45_csum_partial: 0
>> tx45_csum_partial_inner: 0
>> tx45_added_vlan_packets: 0
>> tx45_nop: 0
>> tx45_csum_none: 0
>> tx45_stopped: 0
>> tx45_dropped: 0
>> tx45_xmit_more: 0
>> tx45_recover: 0
>> tx45_cqes: 0
>> tx45_wake: 0
>> tx45_cqe_err: 0
>> tx46_packets: 0
>> tx46_bytes: 0
>> tx46_tso_packets: 0
>> tx46_tso_bytes: 0
>> tx46_tso_inner_packets: 0
>> tx46_tso_inner_bytes: 0
>> tx46_csum_partial: 0
>> tx46_csum_partial_inner: 0
>> tx46_added_vlan_packets: 0
>> tx46_nop: 0
>> tx46_csum_none: 0
>> tx46_stopped: 0
>> tx46_dropped: 0
>> tx46_xmit_more: 0
>> tx46_recover: 0
>> tx46_cqes: 0
>> tx46_wake: 0
>> tx46_cqe_err: 0
>> tx47_packets: 0
>> tx47_bytes: 0
>> tx47_tso_packets: 0
>> tx47_tso_bytes: 0
>> tx47_tso_inner_packets: 0
>> tx47_tso_inner_bytes: 0
>> tx47_csum_partial: 0
>> tx47_csum_partial_inner: 0
>> tx47_added_vlan_packets: 0
>> tx47_nop: 0
>> tx47_csum_none: 0
>> tx47_stopped: 0
>> tx47_dropped: 0
>> tx47_xmit_more: 0
>> tx47_recover: 0
>> tx47_cqes: 0
>> tx47_wake: 0
>> tx47_cqe_err: 0
>> tx48_packets: 0
>> tx48_bytes: 0
>> tx48_tso_packets: 0
>> tx48_tso_bytes: 0
>> tx48_tso_inner_packets: 0
>> tx48_tso_inner_bytes: 0
>> tx48_csum_partial: 0
>> tx48_csum_partial_inner: 0
>> tx48_added_vlan_packets: 0
>> tx48_nop: 0
>> tx48_csum_none: 0
>> tx48_stopped: 0
>> tx48_dropped: 0
>> tx48_xmit_more: 0
>> tx48_recover: 0
>> tx48_cqes: 0
>> tx48_wake: 0
>> tx48_cqe_err: 0
>> tx49_packets: 0
>> tx49_bytes: 0
>> tx49_tso_packets: 0
>> tx49_tso_bytes: 0
>> tx49_tso_inner_packets: 0
>> tx49_tso_inner_bytes: 0
>> tx49_csum_partial: 0
>> tx49_csum_partial_inner: 0
>> tx49_added_vlan_packets: 0
>> tx49_nop: 0
>> tx49_csum_none: 0
>> tx49_stopped: 0
>> tx49_dropped: 0
>> tx49_xmit_more: 0
>> tx49_recover: 0
>> tx49_cqes: 0
>> tx49_wake: 0
>> tx49_cqe_err: 0
>> tx50_packets: 0
>> tx50_bytes: 0
>> tx50_tso_packets: 0
>> tx50_tso_bytes: 0
>> tx50_tso_inner_packets: 0
>> tx50_tso_inner_bytes: 0
>> tx50_csum_partial: 0
>> tx50_csum_partial_inner: 0
>> tx50_added_vlan_packets: 0
>> tx50_nop: 0
>> tx50_csum_none: 0
>> tx50_stopped: 0
>> tx50_dropped: 0
>> tx50_xmit_more: 0
>> tx50_recover: 0
>> tx50_cqes: 0
>> tx50_wake: 0
>> tx50_cqe_err: 0
>> tx51_packets: 0
>> tx51_bytes: 0
>> tx51_tso_packets: 0
>> tx51_tso_bytes: 0
>> tx51_tso_inner_packets: 0
>> tx51_tso_inner_bytes: 0
>> tx51_csum_partial: 0
>> tx51_csum_partial_inner: 0
>> tx51_added_vlan_packets: 0
>> tx51_nop: 0
>> tx51_csum_none: 0
>> tx51_stopped: 0
>> tx51_dropped: 0
>> tx51_xmit_more: 0
>> tx51_recover: 0
>> tx51_cqes: 0
>> tx51_wake: 0
>> tx51_cqe_err: 0
>> tx52_packets: 0
>> tx52_bytes: 0
>> tx52_tso_packets: 0
>> tx52_tso_bytes: 0
>> tx52_tso_inner_packets: 0
>> tx52_tso_inner_bytes: 0
>> tx52_csum_partial: 0
>> tx52_csum_partial_inner: 0
>> tx52_added_vlan_packets: 0
>> tx52_nop: 0
>> tx52_csum_none: 0
>> tx52_stopped: 0
>> tx52_dropped: 0
>> tx52_xmit_more: 0
>> tx52_recover: 0
>> tx52_cqes: 0
>> tx52_wake: 0
>> tx52_cqe_err: 0
>> tx53_packets: 0
>> tx53_bytes: 0
>> tx53_tso_packets: 0
>> tx53_tso_bytes: 0
>> tx53_tso_inner_packets: 0
>> tx53_tso_inner_bytes: 0
>> tx53_csum_partial: 0
>> tx53_csum_partial_inner: 0
>> tx53_added_vlan_packets: 0
>> tx53_nop: 0
>> tx53_csum_none: 0
>> tx53_stopped: 0
>> tx53_dropped: 0
>> tx53_xmit_more: 0
>> tx53_recover: 0
>> tx53_cqes: 0
>> tx53_wake: 0
>> tx53_cqe_err: 0
>> tx54_packets: 0
>> tx54_bytes: 0
>> tx54_tso_packets: 0
>> tx54_tso_bytes: 0
>> tx54_tso_inner_packets: 0
>> tx54_tso_inner_bytes: 0
>> tx54_csum_partial: 0
>> tx54_csum_partial_inner: 0
>> tx54_added_vlan_packets: 0
>> tx54_nop: 0
>> tx54_csum_none: 0
>> tx54_stopped: 0
>> tx54_dropped: 0
>> tx54_xmit_more: 0
>> tx54_recover: 0
>> tx54_cqes: 0
>> tx54_wake: 0
>> tx54_cqe_err: 0
>> tx55_packets: 0
>> tx55_bytes: 0
>> tx55_tso_packets: 0
>> tx55_tso_bytes: 0
>> tx55_tso_inner_packets: 0
>> tx55_tso_inner_bytes: 0
>> tx55_csum_partial: 0
>> tx55_csum_partial_inner: 0
>> tx55_added_vlan_packets: 0
>> tx55_nop: 0
>> tx55_csum_none: 0
>> tx55_stopped: 0
>> tx55_dropped: 0
>> tx55_xmit_more: 0
>> tx55_recover: 0
>> tx55_cqes: 0
>> tx55_wake: 0
>> tx55_cqe_err: 0
>> tx0_xdp_xmit: 0
>> tx0_xdp_full: 0
>> tx0_xdp_err: 0
>> tx0_xdp_cqes: 0
>> tx1_xdp_xmit: 0
>> tx1_xdp_full: 0
>> tx1_xdp_err: 0
>> tx1_xdp_cqes: 0
>> tx2_xdp_xmit: 0
>> tx2_xdp_full: 0
>> tx2_xdp_err: 0
>> tx2_xdp_cqes: 0
>> tx3_xdp_xmit: 0
>> tx3_xdp_full: 0
>> tx3_xdp_err: 0
>> tx3_xdp_cqes: 0
>> tx4_xdp_xmit: 0
>> tx4_xdp_full: 0
>> tx4_xdp_err: 0
>> tx4_xdp_cqes: 0
>> tx5_xdp_xmit: 0
>> tx5_xdp_full: 0
>> tx5_xdp_err: 0
>> tx5_xdp_cqes: 0
>> tx6_xdp_xmit: 0
>> tx6_xdp_full: 0
>> tx6_xdp_err: 0
>> tx6_xdp_cqes: 0
>> tx7_xdp_xmit: 0
>> tx7_xdp_full: 0
>> tx7_xdp_err: 0
>> tx7_xdp_cqes: 0
>> tx8_xdp_xmit: 0
>> tx8_xdp_full: 0
>> tx8_xdp_err: 0
>> tx8_xdp_cqes: 0
>> tx9_xdp_xmit: 0
>> tx9_xdp_full: 0
>> tx9_xdp_err: 0
>> tx9_xdp_cqes: 0
>> tx10_xdp_xmit: 0
>> tx10_xdp_full: 0
>> tx10_xdp_err: 0
>> tx10_xdp_cqes: 0
>> tx11_xdp_xmit: 0
>> tx11_xdp_full: 0
>> tx11_xdp_err: 0
>> tx11_xdp_cqes: 0
>> tx12_xdp_xmit: 0
>> tx12_xdp_full: 0
>> tx12_xdp_err: 0
>> tx12_xdp_cqes: 0
>> tx13_xdp_xmit: 0
>> tx13_xdp_full: 0
>> tx13_xdp_err: 0
>> tx13_xdp_cqes: 0
>> tx14_xdp_xmit: 0
>> tx14_xdp_full: 0
>> tx14_xdp_err: 0
>> tx14_xdp_cqes: 0
>> tx15_xdp_xmit: 0
>> tx15_xdp_full: 0
>> tx15_xdp_err: 0
>> tx15_xdp_cqes: 0
>> tx16_xdp_xmit: 0
>> tx16_xdp_full: 0
>> tx16_xdp_err: 0
>> tx16_xdp_cqes: 0
>> tx17_xdp_xmit: 0
>> tx17_xdp_full: 0
>> tx17_xdp_err: 0
>> tx17_xdp_cqes: 0
>> tx18_xdp_xmit: 0
>> tx18_xdp_full: 0
>> tx18_xdp_err: 0
>> tx18_xdp_cqes: 0
>> tx19_xdp_xmit: 0
>> tx19_xdp_full: 0
>> tx19_xdp_err: 0
>> tx19_xdp_cqes: 0
>> tx20_xdp_xmit: 0
>> tx20_xdp_full: 0
>> tx20_xdp_err: 0
>> tx20_xdp_cqes: 0
>> tx21_xdp_xmit: 0
>> tx21_xdp_full: 0
>> tx21_xdp_err: 0
>> tx21_xdp_cqes: 0
>> tx22_xdp_xmit: 0
>> tx22_xdp_full: 0
>> tx22_xdp_err: 0
>> tx22_xdp_cqes: 0
>> tx23_xdp_xmit: 0
>> tx23_xdp_full: 0
>> tx23_xdp_err: 0
>> tx23_xdp_cqes: 0
>> tx24_xdp_xmit: 0
>> tx24_xdp_full: 0
>> tx24_xdp_err: 0
>> tx24_xdp_cqes: 0
>> tx25_xdp_xmit: 0
>> tx25_xdp_full: 0
>> tx25_xdp_err: 0
>> tx25_xdp_cqes: 0
>> tx26_xdp_xmit: 0
>> tx26_xdp_full: 0
>> tx26_xdp_err: 0
>> tx26_xdp_cqes: 0
>> tx27_xdp_xmit: 0
>> tx27_xdp_full: 0
>> tx27_xdp_err: 0
>> tx27_xdp_cqes: 0
>> tx28_xdp_xmit: 0
>> tx28_xdp_full: 0
>> tx28_xdp_err: 0
>> tx28_xdp_cqes: 0
>> tx29_xdp_xmit: 0
>> tx29_xdp_full: 0
>> tx29_xdp_err: 0
>> tx29_xdp_cqes: 0
>> tx30_xdp_xmit: 0
>> tx30_xdp_full: 0
>> tx30_xdp_err: 0
>> tx30_xdp_cqes: 0
>> tx31_xdp_xmit: 0
>> tx31_xdp_full: 0
>> tx31_xdp_err: 0
>> tx31_xdp_cqes: 0
>> tx32_xdp_xmit: 0
>> tx32_xdp_full: 0
>> tx32_xdp_err: 0
>> tx32_xdp_cqes: 0
>> tx33_xdp_xmit: 0
>> tx33_xdp_full: 0
>> tx33_xdp_err: 0
>> tx33_xdp_cqes: 0
>> tx34_xdp_xmit: 0
>> tx34_xdp_full: 0
>> tx34_xdp_err: 0
>> tx34_xdp_cqes: 0
>> tx35_xdp_xmit: 0
>> tx35_xdp_full: 0
>> tx35_xdp_err: 0
>> tx35_xdp_cqes: 0
>> tx36_xdp_xmit: 0
>> tx36_xdp_full: 0
>> tx36_xdp_err: 0
>> tx36_xdp_cqes: 0
>> tx37_xdp_xmit: 0
>> tx37_xdp_full: 0
>> tx37_xdp_err: 0
>> tx37_xdp_cqes: 0
>> tx38_xdp_xmit: 0
>> tx38_xdp_full: 0
>> tx38_xdp_err: 0
>> tx38_xdp_cqes: 0
>> tx39_xdp_xmit: 0
>> tx39_xdp_full: 0
>> tx39_xdp_err: 0
>> tx39_xdp_cqes: 0
>> tx40_xdp_xmit: 0
>> tx40_xdp_full: 0
>> tx40_xdp_err: 0
>> tx40_xdp_cqes: 0
>> tx41_xdp_xmit: 0
>> tx41_xdp_full: 0
>> tx41_xdp_err: 0
>> tx41_xdp_cqes: 0
>> tx42_xdp_xmit: 0
>> tx42_xdp_full: 0
>> tx42_xdp_err: 0
>> tx42_xdp_cqes: 0
>> tx43_xdp_xmit: 0
>> tx43_xdp_full: 0
>> tx43_xdp_err: 0
>> tx43_xdp_cqes: 0
>> tx44_xdp_xmit: 0
>> tx44_xdp_full: 0
>> tx44_xdp_err: 0
>> tx44_xdp_cqes: 0
>> tx45_xdp_xmit: 0
>> tx45_xdp_full: 0
>> tx45_xdp_err: 0
>> tx45_xdp_cqes: 0
>> tx46_xdp_xmit: 0
>> tx46_xdp_full: 0
>> tx46_xdp_err: 0
>> tx46_xdp_cqes: 0
>> tx47_xdp_xmit: 0
>> tx47_xdp_full: 0
>> tx47_xdp_err: 0
>> tx47_xdp_cqes: 0
>> tx48_xdp_xmit: 0
>> tx48_xdp_full: 0
>> tx48_xdp_err: 0
>> tx48_xdp_cqes: 0
>> tx49_xdp_xmit: 0
>> tx49_xdp_full: 0
>> tx49_xdp_err: 0
>> tx49_xdp_cqes: 0
>> tx50_xdp_xmit: 0
>> tx50_xdp_full: 0
>> tx50_xdp_err: 0
>> tx50_xdp_cqes: 0
>> tx51_xdp_xmit: 0
>> tx51_xdp_full: 0
>> tx51_xdp_err: 0
>> tx51_xdp_cqes: 0
>> tx52_xdp_xmit: 0
>> tx52_xdp_full: 0
>> tx52_xdp_err: 0
>> tx52_xdp_cqes: 0
>> tx53_xdp_xmit: 0
>> tx53_xdp_full: 0
>> tx53_xdp_err: 0
>> tx53_xdp_cqes: 0
>> tx54_xdp_xmit: 0
>> tx54_xdp_full: 0
>> tx54_xdp_err: 0
>> tx54_xdp_cqes: 0
>> tx55_xdp_xmit: 0
>> tx55_xdp_full: 0
>> tx55_xdp_err: 0
>> tx55_xdp_cqes: 0
>>
>>
>> mpstat -P ALL 1 10
>> Average:  CPU   %usr  %nice   %sys %iowait  %irq  %soft %steal %guest %gnice  %idle
>> Average:  all   0.04   0.00   6.94    0.02  0.00  32.00   0.00   0.00   0.00  61.00
>> Average:    0   0.00   0.00   1.20    0.00  0.00   0.00   0.00   0.00   0.00  98.80
>> Average:    1   0.00   0.00   2.30    0.00  0.00   0.00   0.00   0.00   0.00  97.70
>> Average:    2   0.10   0.00   0.00    0.00  0.00   0.00   0.00   0.00   0.00  99.90
>> Average:    3   0.10   0.00   1.50    0.00  0.00   0.00   0.00   0.00   0.00  98.40
>> Average:    4   0.50   0.00   2.50    0.00  0.00   0.00   0.00   0.00   0.00  97.00
>> Average:    5   0.00   0.00   0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
>> Average:    6   0.90   0.00  10.20    0.00  0.00   0.00   0.00   0.00   0.00  88.90
>> Average:    7   0.00   0.00   0.00    1.40  0.00   0.00   0.00   0.00   0.00  98.60
>> Average:    8   0.00   0.00   0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
>> Average:    9   0.00   0.00   0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
>> Average:   10   0.00   0.00   0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
>> Average:   11   0.00   0.00   0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
>> Average:   12   0.00   0.00   0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
>> Average:   13   0.00   0.00   0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
>> Average:   14   0.00   0.00  12.99    0.00  0.00  62.64   0.00   0.00   0.00  24.38
>> Average:   15   0.00   0.00  12.70    0.00  0.00  63.40   0.00   0.00   0.00  23.90
>> Average:   16   0.00   0.00  11.20    0.00  0.00  66.40   0.00   0.00   0.00  22.40
>> Average:   17   0.00   0.00  16.60    0.00  0.00  52.10   0.00   0.00   0.00  31.30
>> Average:   18   0.00   0.00  13.90    0.00  0.00  61.20   0.00   0.00   0.00  24.90
>> Average:   19   0.00   0.00   9.99    0.00  0.00  70.33   0.00   0.00   0.00  19.68
>> Average:   20   0.00   0.00   9.00    0.00  0.00  73.00   0.00   0.00   0.00  18.00
>> Average:   21   0.00   0.00   8.70    0.00  0.00  73.90   0.00   0.00   0.00  17.40
>> Average:   22   0.00   0.00  15.42    0.00  0.00  58.56   0.00   0.00   0.00  26.03
>> Average:   23   0.00   0.00  10.81    0.00  0.00  71.67   0.00   0.00   0.00  17.52
>> Average:   24   0.00   0.00  10.00    0.00  0.00  71.80   0.00   0.00   0.00  18.20
>> Average:   25   0.00   0.00  11.19    0.00  0.00  71.13   0.00   0.00   0.00  17.68
>> Average:   26   0.00   0.00  11.00    0.00  0.00  70.80   0.00   0.00   0.00  18.20
>> Average:   27   0.00   0.00  10.01    0.00  0.00  69.57   0.00   0.00   0.00  20.42
> The numa cores are not at 100% util, you have around 20% of idle on
> each one.
Yes - the CPUs are not at 100% - but the difference between 80% and 100%
is only worth an additional 1-2 Gbit/s of forwarded traffic.
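As a sanity check of what that remaining headroom is worth, here is a back-of-the-envelope cycles-per-packet estimate. This is a hedged sketch: the pps, core count, and utilization figures are taken from this thread, and using the 2.6 GHz base clock (ignoring turbo) is an assumption.

```shell
# Approximate CPU cycle budget per forwarded packet on this box.
# Inputs from the thread: ~8.8 Mpps RX spread over 28 local-NUMA cores
# running at ~80% (sys+softirq) utilization on a 2.6 GHz Xeon Gold 6132.
awk 'BEGIN {
    pps = 8800000; cores = 28; util = 0.80; hz = 2600000000
    per_core = pps / cores
    printf "%.0f pps/core, ~%.0f cycles/packet budget\n", per_core, hz * util / per_core
}'
```

At roughly 6-7k cycles per packet there is not much room for per-packet stack overhead, which is consistent with softirq dominating the busy cores in the mpstat output above.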
>
>> Average:   28   0.00   0.00   0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
>> Average:   29   0.00   0.00   0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
>> Average:   30   0.00   0.00   0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
>> Average:   31   0.00   0.00   0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
>> Average:   32   0.00   0.00   0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
>> Average:   33   0.00   0.00   3.90    0.00  0.00   0.00   0.00   0.00   0.00  96.10
>> Average:   34   0.00   0.00   0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
>> Average:   35   0.00   0.00   0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
>> Average:   36   0.10   0.00   0.20    0.00  0.00   0.00   0.00   0.00   0.00  99.70
>> Average:   37   0.20   0.00   0.30    0.00  0.00   0.00   0.00   0.00   0.00  99.50
>> Average:   38   0.00   0.00   0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
>> Average:   39   0.00   0.00   2.60    0.00  0.00   0.00   0.00   0.00   0.00  97.40
>> Average:   40   0.00   0.00   0.90    0.00  0.00   0.00   0.00   0.00   0.00  99.10
>> Average:   41   0.10   0.00   0.50    0.00  0.00   0.00   0.00   0.00   0.00  99.40
>> Average:   42   0.00   0.00   9.91    0.00  0.00  70.67   0.00   0.00   0.00  19.42
>> Average:   43   0.00   0.00  15.90    0.00  0.00  57.50   0.00   0.00   0.00  26.60
>> Average:   44   0.00   0.00  12.20    0.00  0.00  66.20   0.00   0.00   0.00  21.60
>> Average:   45   0.00   0.00  12.00    0.00  0.00  67.50   0.00   0.00   0.00  20.50
>> Average:   46   0.00   0.00  12.90    0.00  0.00  65.50   0.00   0.00   0.00  21.60
>> Average:   47   0.00   0.00  14.59    0.00  0.00  60.84   0.00   0.00   0.00  24.58
>> Average:   48   0.00   0.00  13.59    0.00  0.00  61.74   0.00   0.00   0.00  24.68
>> Average:   49   0.00   0.00  18.36    0.00  0.00  53.29   0.00   0.00   0.00  28.34
>> Average:   50   0.00   0.00  15.32    0.00  0.00  58.86   0.00   0.00   0.00  25.83
>> Average:   51   0.00   0.00  17.60    0.00  0.00  55.20   0.00   0.00   0.00  27.20
>> Average:   52   0.00   0.00  15.92    0.00  0.00  56.06   0.00   0.00   0.00  28.03
>> Average:   53   0.00   0.00  13.00    0.00  0.00  62.30   0.00   0.00   0.00  24.70
>> Average:   54   0.00   0.00  13.20    0.00  0.00  61.50   0.00   0.00   0.00  25.30
>> Average:   55   0.00   0.00  14.59    0.00  0.00  58.64   0.00   0.00   0.00  26.77
>>
>>
>> ethtool -k enp175s0f0
>> Features for enp175s0f0:
>> rx-checksumming: on
>> tx-checksumming: on
>> tx-checksum-ipv4: on
>> tx-checksum-ip-generic: off [fixed]
>> tx-checksum-ipv6: on
>> tx-checksum-fcoe-crc: off [fixed]
>> tx-checksum-sctp: off [fixed]
>> scatter-gather: on
>> tx-scatter-gather: on
>> tx-scatter-gather-fraglist: off [fixed]
>> tcp-segmentation-offload: on
>> tx-tcp-segmentation: on
>> tx-tcp-ecn-segmentation: off [fixed]
>> tx-tcp-mangleid-segmentation: off
>> tx-tcp6-segmentation: on
>> udp-fragmentation-offload: off
>> generic-segmentation-offload: on
>> generic-receive-offload: on
>> large-receive-offload: off [fixed]
>> rx-vlan-offload: on
>> tx-vlan-offload: on
>> ntuple-filters: off
>> receive-hashing: on
>> highdma: on [fixed]
>> rx-vlan-filter: on
>> vlan-challenged: off [fixed]
>> tx-lockless: off [fixed]
>> netns-local: off [fixed]
>> tx-gso-robust: off [fixed]
>> tx-fcoe-segmentation: off [fixed]
>> tx-gre-segmentation: on
>> tx-gre-csum-segmentation: on
>> tx-ipxip4-segmentation: off [fixed]
>> tx-ipxip6-segmentation: off [fixed]
>> tx-udp_tnl-segmentation: on
>> tx-udp_tnl-csum-segmentation: on
>> tx-gso-partial: on
>> tx-sctp-segmentation: off [fixed]
>> tx-esp-segmentation: off [fixed]
>> tx-udp-segmentation: on
>> fcoe-mtu: off [fixed]
>> tx-nocache-copy: off
>> loopback: off [fixed]
>> rx-fcs: off
>> rx-all: off
>> tx-vlan-stag-hw-insert: on
>> rx-vlan-stag-hw-parse: off [fixed]
>> rx-vlan-stag-filter: on [fixed]
>> l2-fwd-offload: off [fixed]
>> hw-tc-offload: off
>> esp-hw-offload: off [fixed]
>> esp-tx-csum-hw-offload: off [fixed]
>> rx-udp_tunnel-port-offload: on
>> tls-hw-tx-offload: off [fixed]
>> tls-hw-rx-offload: off [fixed]
>> rx-gro-hw: off [fixed]
>> tls-hw-record: off [fixed]
>>
>> ethtool -c enp175s0f0
>> Coalesce parameters for enp175s0f0:
>> Adaptive RX: off TX: on
>> stats-block-usecs: 0
>> sample-interval: 0
>> pkt-rate-low: 0
>> pkt-rate-high: 0
>> dmac: 32703
>>
>> rx-usecs: 256
>> rx-frames: 128
>> rx-usecs-irq: 0
>> rx-frames-irq: 0
>>
>> tx-usecs: 8
>> tx-frames: 128
>> tx-usecs-irq: 0
>> tx-frames-irq: 0
>>
>> rx-usecs-low: 0
>> rx-frame-low: 0
>> tx-usecs-low: 0
>> tx-frame-low: 0
>>
>> rx-usecs-high: 0
>> rx-frame-high: 0
>> tx-usecs-high: 0
>> tx-frame-high: 0
>>
>> ethtool -g enp175s0f0
>> Ring parameters for enp175s0f0:
>> Pre-set maximums:
>> RX: 8192
>> RX Mini: 0
>> RX Jumbo: 0
>> TX: 8192
>> Current hardware settings:
>> RX: 4096
>> RX Mini: 0
>> RX Jumbo: 0
>> TX: 4096
>>
I also tuned the coalesce parameters a bit - the best values for this config turned out to be:
ethtool -c enp175s0f0
Coalesce parameters for enp175s0f0:
Adaptive RX: off TX: off
stats-block-usecs: 0
sample-interval: 0
pkt-rate-low: 0
pkt-rate-high: 0
dmac: 32573
rx-usecs: 40
rx-frames: 128
rx-usecs-irq: 0
rx-frames-irq: 0
tx-usecs: 8
tx-frames: 8
tx-usecs-irq: 0
tx-frames-irq: 0
rx-usecs-low: 0
rx-frame-low: 0
tx-usecs-low: 0
tx-frame-low: 0
rx-usecs-high: 0
rx-frame-high: 0
tx-usecs-high: 0
tx-frame-high: 0
Fewer drops on the RX side - and more pps forwarded overall.
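For anyone wanting to reproduce this, here is a sketch of the ethtool invocations matching the coalesce settings quoted above. It only prints the commands (applying them needs root and the actual NICs), and applying the same values to both ports is an assumption - the thread only shows enp175s0f0.

```shell
# Build the ethtool -C commands for the coalesce settings shown above.
# Dry run: the commands are printed, not executed; run them as root on a
# box with these interfaces to actually apply the settings.
cmds=""
for dev in enp175s0f0 enp175s0f1; do
    cmds="$cmds
ethtool -C $dev adaptive-rx off adaptive-tx off rx-usecs 40 rx-frames 128 tx-usecs 8 tx-frames 8"
done
printf '%s\n' "$cmds"
```

Lower rx-usecs trades a few extra interrupts for less RX queue buildup, which matches the "fewer RX drops" result reported here.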
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-01 10:55 ` Jesper Dangaard Brouer
@ 2018-11-01 13:52 ` Paweł Staszewski
2018-11-01 17:23 ` David Ahern
0 siblings, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-01 13:52 UTC (permalink / raw)
To: Jesper Dangaard Brouer, David Ahern; +Cc: netdev, Yoel Caspersen
W dniu 01.11.2018 o 11:55, Jesper Dangaard Brouer pisze:
> On Wed, 31 Oct 2018 21:37:16 -0600 David Ahern <dsahern@gmail.com> wrote:
>
>> This is mainly a forwarding use case? Seems so based on the perf report.
>> I suspect forwarding with XDP would show pretty good improvement.
> Yes, significant performance improvements.
>
> Notice Davids talk: "Leveraging Kernel Tables with XDP"
> http://vger.kernel.org/lpc-networking2018.html#session-1
That will be really interesting.
> It looks like that you are doing "pure" IP-routing, without any
> iptables conntrack stuff (from your perf report data). That will
> actually be a really good use-case for accelerating this with XDP.
Yes, pure IP routing.
iptables is used only for some local input filtering.
>
> I want you to understand the philosophy behind how David and I want
> people to leverage XDP. Think of XDP as a software offload layer for
> the kernel network stack. Setup and use Linux kernel network stack, but
> accelerate parts of it with XDP, e.g. the route FIB lookup.
>
> Sample code avail here:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/samples/bpf/xdp_fwd_kern.c
I can try some tests on the same hardware in a testlab configuration -
will give it a try :)
> (I do warn that we just found a bug/crash in setup+teardown for the
> mlx5 driver you are using, which we/mlx _will_ fix soon)
Ok
>
>
>> You need the vlan changes I have queued up though.
> I know Yoel will be very interested in those changes too! I've
> convinced Yoel to write an XDP program for his Border Network Gateway
> (BNG) production system[1], and his is a heavy VLAN user. And the plan
> is to Open Source this when he have-something-working.
>
> [1] https://www.version2.dk/blog/software-router-del-5-linux-bng-1086060
Ok - for now I need to split the traffic into two separate 100G ports
placed in two different x16 PCI Express slots to check whether the
problem is mainly caused by running out of PCIe x16 bandwidth.
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-01 9:22 ` Jesper Dangaard Brouer
2018-11-01 10:34 ` Paweł Staszewski
@ 2018-11-01 15:27 ` Aaron Lu
2018-11-01 20:23 ` Saeed Mahameed
1 sibling, 1 reply; 77+ messages in thread
From: Aaron Lu @ 2018-11-01 15:27 UTC (permalink / raw)
To: Jesper Dangaard Brouer
Cc: Paweł Staszewski, Eric Dumazet, netdev, Tariq Toukan,
Ilias Apalodimas, Yoel Caspersen, Mel Gorman
On Thu, Nov 01, 2018 at 10:22:13AM +0100, Jesper Dangaard Brouer wrote:
... ...
> Section copied out:
>
> mlx5e_poll_tx_cq
> |
> --16.34%--napi_consume_skb
> |
> |--12.65%--__free_pages_ok
> | |
> | --11.86%--free_one_page
> | |
> | |--10.10%--queued_spin_lock_slowpath
> | |
> | --0.65%--_raw_spin_lock
This callchain looks like it is freeing higher-order pages, not order-0:
__free_pages_ok is only called for pages whose order is bigger than 0.
> |
> |--1.55%--page_frag_free
> |
> --1.44%--skb_release_data
>
>
> Let me explain what (I think) happens. The mlx5 driver RX-page recycle
> mechanism is not effective in this workload, and pages have to go
> through the page allocator. The lock contention happens during mlx5
> DMA TX completion cycle. And the page allocator cannot keep up at
> these speeds.
>
> One solution is extend page allocator with a bulk free API. (This have
> been on my TODO list for a long time, but I don't have a
> micro-benchmark that trick the driver page-recycle to fail). It should
> fit nicely, as I can see that kmem_cache_free_bulk() does get
> activated (bulk freeing SKBs), which means that DMA TX completion do
> have a bulk of packets.
>
> We can (and should) also improve the page recycle scheme in the driver.
> After LPC, I have a project with Tariq and Ilias (Cc'ed) to improve the
> page_pool, and we will (attempt) to generalize this, for both high-end
> mlx5 and more low-end ARM64-boards (macchiatobin and espressobin).
>
> The MM-people is in parallel working to improve the performance of
> order-0 page returns. Thus, the explicit page bulk free API might
> actually become less important. I actually think (Cc.) Aaron have a
> patchset he would like you to test, which removes the (zone->)lock
> you hit in free_one_page().
Thanks Jesper.
Yes, the said patchset is in this branch:
https://github.com/aaronlu/linux no_merge_cluster_alloc_4.19-rc5
But as I said above, I think the lock contention here is for
order > 0 pages so my current patchset will not work here, unfortunately.
BTW, Mel Gorman has suggested an alternative way to improve the page
allocator's scalability and I'm working on it right now; it will
improve the page allocator's scalability for pages of all orders. I
might be able to post it some time next week and will CC all of you
when it's ready.
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-01 11:09 ` Paweł Staszewski
@ 2018-11-01 16:49 ` Paweł Staszewski
2018-11-01 20:37 ` Saeed Mahameed
1 sibling, 0 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-01 16:49 UTC (permalink / raw)
To: Saeed Mahameed, netdev
W dniu 01.11.2018 o 12:09, Paweł Staszewski pisze:
>>> rx_cqe_compress_pkts: 0
>> If this is a pcie bottleneck it might be useful to enable CQE
>> compression (to reduce PCIe completion descriptors transactions)
>> you should see the above rx_cqe_compress_pkts increasing when enabled.
>>
>> $ ethtool --set-priv-flags enp175s0f1 rx_cqe_compress on
>> $ ethtool --show-priv-flags enp175s0f1
>> Private flags for p6p1:
>> rx_cqe_moder : on
>> cqe_moder : off
>> rx_cqe_compress : on
>> ...
>>
>> try this on both interfaces.
> Done
> ethtool --show-priv-flags enp175s0f1
> Private flags for enp175s0f1:
> rx_cqe_moder : on
> tx_cqe_moder : off
> rx_cqe_compress : on
> rx_striding_rq : off
> rx_no_csum_complete: off
>
> ethtool --show-priv-flags enp175s0f0
> Private flags for enp175s0f0:
> rx_cqe_moder : on
> tx_cqe_moder : off
> rx_cqe_compress : on
> rx_striding_rq : off
> rx_no_csum_complete: off
Enabling CQE compression changes nothing - after reaching 64Gbit/s RX /
64Gbit/s TX on the interfaces, the CPUs are saturated at 100%:
ethtool -S enp175s0f1 | grep rx_cqe_compress
rx_cqe_compress_blks: 5657836379
rx_cqe_compress_pkts: 13153761080
ethtool -S enp175s0f0 | grep rx_cqe_compress
rx_cqe_compress_blks: 5994612500
rx_cqe_compress_pkts: 13579014869
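A quick back-of-the-envelope on the enp175s0f1 counters above - plain arithmetic on the two numbers already posted, nothing more:

```python
# rx_cqe_compress_blks / rx_cqe_compress_pkts for enp175s0f1, copied from
# the ethtool -S output above.
blks = 5_657_836_379
pkts = 13_153_761_080

print(f"~{pkts / blks:.2f} packets per compressed CQE block")
```

So on average each compressed block stands in for a bit over two regular CQEs, i.e. compression is engaging, but the completion traffic on the bus is only roughly halved at this packet mix.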
bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
input: /proc/net/dev type: rate
- iface Rx Tx Total
==============================================================================
enp175s0f1: 27.03 Gb/s 37.09 Gb/s
64.12 Gb/s
enp175s0f0: 36.84 Gb/s 26.82 Gb/s
63.66 Gb/s
------------------------------------------------------------------------------
total: 63.85 Gb/s 63.87 Gb/s 127.72 Gb/s
bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
input: /proc/net/dev type: rate
/ iface Rx Tx Total
==============================================================================
enp175s0f1: 3.22 GB/s 4.26 GB/s
7.48 GB/s
enp175s0f0: 4.24 GB/s 3.21 GB/s
7.45 GB/s
------------------------------------------------------------------------------
total: 7.46 GB/s 7.47 GB/s
14.93 GB/s
mpstat
Average: CPU %usr %nice %sys %iowait %irq %soft %steal
%guest %gnice %idle
Average: all 0.05 0.00 0.19 0.02 0.00 42.74 0.00
0.00 0.00 56.99
Average: 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 1 0.00 0.00 0.30 0.00 0.00 0.00 0.00
0.00 0.00 99.70
Average: 2 0.00 0.00 0.20 0.00 0.00 0.00 0.00
0.00 0.00 99.80
Average: 3 0.00 0.00 0.20 1.20 0.00 0.00 0.00
0.00 0.00 98.60
Average: 4 0.10 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 99.90
Average: 5 0.00 0.00 0.10 0.00 0.00 0.00 0.00
0.00 0.00 99.90
Average: 6 0.10 0.00 0.20 0.00 0.00 0.00 0.00
0.00 0.00 99.70
Average: 7 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 8 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 9 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 10 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 11 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 12 1.40 0.00 4.50 0.00 0.00 0.00 0.00
0.00 0.00 94.10
Average: 13 0.00 0.00 1.60 0.00 0.00 0.00 0.00
0.00 0.00 98.40
Average: 14 0.00 0.00 0.00 0.00 0.00 84.10 0.00
0.00 0.00 15.90
Average: 15 0.00 0.00 0.10 0.00 0.00 93.70 0.00
0.00 0.00 6.20
Average: 16 0.00 0.00 0.10 0.00 0.00 94.31 0.00
0.00 0.00 5.59
Average: 17 0.00 0.00 0.00 0.00 0.00 95.30 0.00
0.00 0.00 4.70
Average: 18 0.00 0.00 0.00 0.00 0.00 62.80 0.00
0.00 0.00 37.20
Average: 19 0.00 0.00 0.10 0.00 0.00 98.90 0.00
0.00 0.00 1.00
Average: 20 0.00 0.00 0.00 0.00 0.00 99.30 0.00
0.00 0.00 0.70
Average: 21 0.00 0.00 0.00 0.00 0.00 100.00 0.00
0.00 0.00 0.00
Average: 22 0.00 0.00 0.00 0.00 0.00 99.90 0.00
0.00 0.00 0.10
Average: 23 0.00 0.00 0.10 0.00 0.00 99.90 0.00
0.00 0.00 0.00
Average: 24 0.00 0.00 0.10 0.00 0.00 97.10 0.00
0.00 0.00 2.80
Average: 25 0.00 0.00 0.00 0.00 0.00 64.06 0.00
0.00 0.00 35.94
Average: 26 0.00 0.00 0.10 0.00 0.00 88.50 0.00
0.00 0.00 11.40
Average: 27 0.00 0.00 0.00 0.00 0.00 94.10 0.00
0.00 0.00 5.90
Average: 28 0.80 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 99.20
Average: 29 0.00 0.00 0.10 0.00 0.00 0.00 0.00
0.00 0.00 99.90
Average: 30 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 31 0.20 0.00 0.80 0.00 0.00 0.00 0.00
0.00 0.00 99.00
Average: 32 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 33 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 34 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 35 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 36 0.20 0.00 0.40 0.00 0.00 0.00 0.00
0.00 0.00 99.40
Average: 37 0.00 0.00 0.10 0.00 0.00 0.00 0.00
0.00 0.00 99.90
Average: 38 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 39 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 40 0.10 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 99.90
Average: 41 0.10 0.00 1.20 0.00 0.00 0.00 0.00
0.00 0.00 98.70
Average: 42 0.00 0.00 0.10 0.00 0.00 78.92 0.00
0.00 0.00 20.98
Average: 43 0.00 0.00 0.00 0.00 0.00 81.00 0.00
0.00 0.00 19.00
Average: 44 0.00 0.00 0.00 0.00 0.00 82.58 0.00
0.00 0.00 17.42
Average: 45 0.00 0.00 0.00 0.00 0.00 68.97 0.00
0.00 0.00 31.03
Average: 46 0.00 0.00 0.10 0.00 0.00 79.20 0.00
0.00 0.00 20.70
Average: 47 0.00 0.00 0.00 0.00 0.00 71.33 0.00
0.00 0.00 28.67
Average: 48 0.00 0.00 0.10 0.00 0.00 72.40 0.00
0.00 0.00 27.50
Average: 49 0.00 0.00 0.00 0.00 0.00 90.79 0.00
0.00 0.00 9.21
Average: 50 0.00 0.00 0.10 0.00 0.00 93.20 0.00
0.00 0.00 6.70
Average: 51 0.00 0.00 0.00 0.00 0.00 91.70 0.00
0.00 0.00 8.30
Average: 52 0.00 0.00 0.10 0.00 0.00 79.90 0.00
0.00 0.00 20.00
Average: 53 0.00 0.00 0.00 0.00 0.00 76.20 0.00
0.00 0.00 23.80
Average: 54 0.00 0.00 0.00 0.00 0.00 89.59 0.00
0.00 0.00 10.41
Average: 55 0.00 0.00 0.10 0.00 0.00 65.97 0.00
0.00 0.00 33.93
So yes, it looks like the PCIe x16 limit - PCIe x16 8GT/s gives
8/8 GB/s when used in both directions, or 16 GB/s in one direction.
So a 100Gbit/s network controller would need PCIe x32 :)
I understand that in a normal server host scenario (no forwarding) most
traffic is either outbound or inbound - there are not many situations
where we have 100G input and 100G output at the same time.
I will then replace this 2-port 100G NIC with two ConnectX-5 100G NICs
installed in two different PCIe x16 slots.
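For reference, the nominal PCIe 3.0 x16 arithmetic behind this reasoning - a sketch only: it accounts for the 128b/130b line coding and nothing else; TLP headers, descriptor fetches and completion transactions are not modelled, and the link is nominally full duplex:

```python
# Nominal PCIe gen3 x16 throughput, to compare with the ~7.46 GB/s RX +
# ~7.47 GB/s TX seen in the bwm-ng output above.
gt_per_lane = 8          # GT/s per lane (PCIe 3.0)
lanes = 16
encoding = 128 / 130     # 128b/130b line coding

gbit_per_dir = gt_per_lane * lanes * encoding   # per direction
gbyte_per_dir = gbit_per_dir / 8

print(f"~{gbit_per_dir:.1f} Gbit/s (~{gbyte_per_dir:.2f} GB/s) per direction")
```

On paper the link carries ~126 Gbit/s in each direction simultaneously, so one reading of the wall at ~64 Gbit/s per direction is that a large share of the slot's bandwidth goes to per-packet descriptor and completion transactions rather than payload - which is why reducing completion traffic (CQE compression) and splitting the ports across two slots are the natural experiments.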
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-01 13:52 ` Paweł Staszewski
@ 2018-11-01 17:23 ` David Ahern
2018-11-01 17:30 ` Paweł Staszewski
0 siblings, 1 reply; 77+ messages in thread
From: David Ahern @ 2018-11-01 17:23 UTC (permalink / raw)
To: Paweł Staszewski, Jesper Dangaard Brouer, David Ahern
Cc: netdev, Yoel Caspersen
On 11/1/18 7:52 AM, Paweł Staszewski wrote:
>
>
> W dniu 01.11.2018 o 11:55, Jesper Dangaard Brouer pisze:
>> On Wed, 31 Oct 2018 21:37:16 -0600 David Ahern <dsahern@gmail.com> wrote:
>>
>>> This is mainly a forwarding use case? Seems so based on the perf report.
>>> I suspect forwarding with XDP would show pretty good improvement.
>> Yes, significant performance improvements.
>>
>> Notice Davids talk: "Leveraging Kernel Tables with XDP"
>> http://vger.kernel.org/lpc-networking2018.html#session-1
> It will be rly interesting
It's pushing the exact use case you have: FRR manages the FIB, XDP
programs get access to updates as they happen for fast path forwarding.
>
>> It looks like that you are doing "pure" IP-routing, without any
>> iptables conntrack stuff (from your perf report data). That will
>> actually be a really good use-case for accelerating this with XDP.
> Yes pure IP routing
> iptables used only for some local input filtering.
>
>
>>
>> I want you to understand the philosophy behind how David and I want
>> people to leverage XDP. Think of XDP as a software offload layer for
>> the kernel network stack. Setup and use Linux kernel network stack, but
>> accelerate parts of it with XDP, e.g. the route FIB lookup.
>>
>> Sample code avail here:
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/samples/bpf/xdp_fwd_kern.c
>>
> I can try some tests on same hw but testlab configuration - will give it
> a try :)
>
That version does not work with VLANs. I have patches for it but it
needs a bit more work before sending out. Perhaps I can get back to it
next week.
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-01 17:23 ` David Ahern
@ 2018-11-01 17:30 ` Paweł Staszewski
2018-11-03 17:32 ` David Ahern
0 siblings, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-01 17:30 UTC (permalink / raw)
To: David Ahern, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
W dniu 01.11.2018 o 18:23, David Ahern pisze:
> On 11/1/18 7:52 AM, Paweł Staszewski wrote:
>>
>> W dniu 01.11.2018 o 11:55, Jesper Dangaard Brouer pisze:
>>> On Wed, 31 Oct 2018 21:37:16 -0600 David Ahern <dsahern@gmail.com> wrote:
>>>
>>>> This is mainly a forwarding use case? Seems so based on the perf report.
>>>> I suspect forwarding with XDP would show pretty good improvement.
>>> Yes, significant performance improvements.
>>>
>>> Notice Davids talk: "Leveraging Kernel Tables with XDP"
>>> http://vger.kernel.org/lpc-networking2018.html#session-1
>> It will be rly interesting
> It's pushing the exact use case you have: FRR manages the FIB, XDP
> programs get access to updates as they happen for fast path forwarding.
Can't wait then :)
>>> It looks like that you are doing "pure" IP-routing, without any
>>> iptables conntrack stuff (from your perf report data). That will
>>> actually be a really good use-case for accelerating this with XDP.
>> Yes pure IP routing
>> iptables used only for some local input filtering.
>>
>>
>>> I want you to understand the philosophy behind how David and I want
>>> people to leverage XDP. Think of XDP as a software offload layer for
>>> the kernel network stack. Setup and use Linux kernel network stack, but
>>> accelerate parts of it with XDP, e.g. the route FIB lookup.
>>>
>>> Sample code avail here:
>>>
>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/samples/bpf/xdp_fwd_kern.c
>>>
>> I can try some tests on same hw but testlab configuration - will give it
>> a try :)
>>
> That version does not work with VLANs. I have patches for it but it
> needs a bit more work before sending out. Perhaps I can get back to it
> next week.
>
That will be nice - next week I will be able to replace the network
controller and install two separate 100Gbit NICs into two PCIe x16
slots - so I can test without hitting PCIe bandwidth limits.
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-01 15:27 ` Aaron Lu
@ 2018-11-01 20:23 ` Saeed Mahameed
2018-11-02 5:23 ` Aaron Lu
0 siblings, 1 reply; 77+ messages in thread
From: Saeed Mahameed @ 2018-11-01 20:23 UTC (permalink / raw)
To: aaron.lu, brouer
Cc: pstaszewski, eric.dumazet, netdev, Tariq Toukan,
ilias.apalodimas, yoel, mgorman
On Thu, 2018-11-01 at 23:27 +0800, Aaron Lu wrote:
> On Thu, Nov 01, 2018 at 10:22:13AM +0100, Jesper Dangaard Brouer
> wrote:
> ... ...
> > Section copied out:
> >
> > mlx5e_poll_tx_cq
> > |
> > --16.34%--napi_consume_skb
> > |
> > |--12.65%--__free_pages_ok
> > | |
> > | --11.86%--free_one_page
> > | |
> > | |--10.10%
> > --queued_spin_lock_slowpath
> > | |
> > | --0.65%--_raw_spin_lock
>
> This callchain looks like it is freeing higher order pages than order
> 0:
> __free_pages_ok is only called for pages whose order are bigger than
> 0.
mlx5 RX uses only order-0 pages, so I don't know where these
high-order TX SKBs are coming from..
>
> > |
> > |--1.55%--page_frag_free
> > |
> > --1.44%--skb_release_data
> >
> >
> > Let me explain what (I think) happens. The mlx5 driver RX-page
> > recycle
> > mechanism is not effective in this workload, and pages have to go
> > through the page allocator. The lock contention happens during
> > mlx5
> > DMA TX completion cycle. And the page allocator cannot keep up at
> > these speeds.
> >
> > One solution is extend page allocator with a bulk free API. (This
> > have
> > been on my TODO list for a long time, but I don't have a
> > micro-benchmark that trick the driver page-recycle to fail). It
> > should
> > fit nicely, as I can see that kmem_cache_free_bulk() does get
> > activated (bulk freeing SKBs), which means that DMA TX completion
> > do
> > have a bulk of packets.
> >
> > We can (and should) also improve the page recycle scheme in the
> > driver.
> > After LPC, I have a project with Tariq and Ilias (Cc'ed) to improve
> > the
> > page_pool, and we will (attempt) to generalize this, for both high-
> > end
> > mlx5 and more low-end ARM64-boards (macchiatobin and espressobin).
> >
> > The MM-people is in parallel working to improve the performance of
> > order-0 page returns. Thus, the explicit page bulk free API might
> > actually become less important. I actually think (Cc.) Aaron have
> > a
> > patchset he would like you to test, which removes the (zone->)lock
> > you hit in free_one_page().
>
> Thanks Jesper.
>
> Yes, the said patchset is in this branch:
> https://github.com/aaronlu/linux no_merge_cluster_alloc_4.19-rc5
>
> But as I said above, I think the lock contention here is for
> order > 0 pages so my current patchset will not work here,
> unfortunately.
>
> BTW, Mel Gorman has suggested an alternative way to improve page
> allocator's scalability and I'm working on it right now, it will
> improve page allocator's scalability for all order pages. I might be
> able to post it some time next week, will CC all of you when it's
> ready.
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-01 11:09 ` Paweł Staszewski
2018-11-01 16:49 ` Paweł Staszewski
@ 2018-11-01 20:37 ` Saeed Mahameed
2018-11-01 21:18 ` Paweł Staszewski
2018-11-03 0:18 ` Paweł Staszewski
1 sibling, 2 replies; 77+ messages in thread
From: Saeed Mahameed @ 2018-11-01 20:37 UTC (permalink / raw)
To: pstaszewski, netdev
On Thu, 2018-11-01 at 12:09 +0100, Paweł Staszewski wrote:
>
> W dniu 01.11.2018 o 10:50, Saeed Mahameed pisze:
> > On Wed, 2018-10-31 at 22:57 +0100, Paweł Staszewski wrote:
> > > Hi
> > >
> > > So maybee someone will be interested how linux kernel handles
> > > normal
> > > traffic (not pktgen :) )
> > >
> > >
> > > Server HW configuration:
> > >
> > > CPU : Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
> > >
> > > NIC's: 2x 100G Mellanox ConnectX-4 (connected to x16 pcie 8GT)
> > >
> > >
> > > Server software:
> > >
> > > FRR - as routing daemon
> > >
> > > enp175s0f0 (100G) - 16 vlans from upstreams (28 RSS binded to
> > > local
> > > numa
> > > node)
> > >
> > > enp175s0f1 (100G) - 343 vlans to clients (28 RSS binded to local
> > > numa
> > > node)
> > >
> > >
> > > Maximum traffic that server can handle:
> > >
> > > Bandwidth
> > >
> > > bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
> > > input: /proc/net/dev type: rate
> > > \ iface Rx Tx Total
> > > =================================================================
> > > ====
> > > =========
> > > enp175s0f1: 28.51 Gb/s 37.24
> > > Gb/s
> > > 65.74 Gb/s
> > > enp175s0f0: 38.07 Gb/s 28.44
> > > Gb/s
> > > 66.51 Gb/s
> > > ---------------------------------------------------------------
> > > ----
> > > -----------
> > > total: 66.58 Gb/s 65.67
> > > Gb/s
> > > 132.25 Gb/s
> > >
> > >
> > > Packets per second:
> > >
> > > bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
> > > input: /proc/net/dev type: rate
> > > - iface Rx Tx Total
> > > =================================================================
> > > ====
> > > =========
> > > enp175s0f1: 5248589.00 P/s 3486617.75 P/s
> > > 8735207.00 P/s
> > > enp175s0f0: 3557944.25 P/s 5232516.00 P/s
> > > 8790460.00 P/s
> > > ---------------------------------------------------------------
> > > ----
> > > -----------
> > > total: 8806533.00 P/s 8719134.00 P/s
> > > 17525668.00 P/s
> > >
> > >
> > > After reaching that limits nics on the upstream side (more RX
> > > traffic)
> > > start to drop packets
> > >
> > >
> > > I just dont understand that server can't handle more bandwidth
> > > (~40Gbit/s is limit where all cpu's are 100% util) - where pps on
> > > RX
> > > side are increasing.
> > >
> >
> > Where do you see 40 Gb/s ? you showed that both ports on the same
> > NIC (
> > same pcie link) are doing 66.58 Gb/s (RX) + 65.67 Gb/s (TX) =
> > 132.25
> > Gb/s which aligns with your pcie link limit, what am i missing ?
>
> hmm yes that was my concern also - because I can't find information
> anywhere about whether that bandwidth is uni- or bidirectional - so
> if 126Gbit for x16 8GT is unidir - then bidir will be 126/2 ~63Gbit -
> which will fit the total bw on both ports
I think it is bidir
> This can maybe also explain why the CPU load rises rapidly from
> 120Gbit/s in total to 132Gbit (the bwm-ng counters come from
> /proc/net - so there can be some error in reading them when
> offloading (gro/gso/tso) on the NICs is enabled)
>
> >
> > > Was thinking that maybee reached some pcie x16 limit - but x16
> > > 8GT
> > > is
> > > 126Gbit - and also when testing with pktgen i can reach more bw
> > > and
> > > pps
> > > (like 4x more comparing to normal internet traffic)
> > >
> >
> > Are you forwarding when using pktgen as well or you just testing
> > the RX
> > side pps ?
>
> Yes, pktgen was tested on a single port, RX only
> I can also check forwarding to eliminate PCIe limits
>
So this explains why you have more RX pps, since TX is idle and the
PCIe link is free to do only RX.
[...]
> >
> > > ethtool -S enp175s0f1
> > > NIC statistics:
> > > rx_packets: 173730800927
> > > rx_bytes: 99827422751332
> > > tx_packets: 142532009512
> > > tx_bytes: 184633045911222
> > > tx_tso_packets: 25989113891
> > > tx_tso_bytes: 132933363384458
> > > tx_tso_inner_packets: 0
> > > tx_tso_inner_bytes: 0
> > > tx_added_vlan_packets: 74630239613
> > > tx_nop: 2029817748
> > > rx_lro_packets: 0
> > > rx_lro_bytes: 0
> > > rx_ecn_mark: 0
> > > rx_removed_vlan_packets: 173730800927
> > > rx_csum_unnecessary: 0
> > > rx_csum_none: 434357
> > > rx_csum_complete: 173730366570
> > > rx_csum_unnecessary_inner: 0
> > > rx_xdp_drop: 0
> > > rx_xdp_redirect: 0
> > > rx_xdp_tx_xmit: 0
> > > rx_xdp_tx_full: 0
> > > rx_xdp_tx_err: 0
> > > rx_xdp_tx_cqe: 0
> > > tx_csum_none: 38260960853
> > > tx_csum_partial: 36369278774
> > > tx_csum_partial_inner: 0
> > > tx_queue_stopped: 1
> > > tx_queue_dropped: 0
> > > tx_xmit_more: 748638099
> > > tx_recover: 0
> > > tx_cqes: 73881645031
> > > tx_queue_wake: 1
> > > tx_udp_seg_rem: 0
> > > tx_cqe_err: 0
> > > tx_xdp_xmit: 0
> > > tx_xdp_full: 0
> > > tx_xdp_err: 0
> > > tx_xdp_cqes: 0
> > > rx_wqe_err: 0
> > > rx_mpwqe_filler_cqes: 0
> > > rx_mpwqe_filler_strides: 0
> > > rx_buff_alloc_err: 0
> > > rx_cqe_compress_blks: 0
> > > rx_cqe_compress_pkts: 0
> >
> > If this is a pcie bottleneck it might be useful to enable CQE
> > compression (to reduce PCIe completion descriptors transactions)
> > you should see the above rx_cqe_compress_pkts increasing when
> > enabled.
> >
> > $ ethtool --set-priv-flags enp175s0f1 rx_cqe_compress on
> > $ ethtool --show-priv-flags enp175s0f1
> > Private flags for p6p1:
> > rx_cqe_moder : on
> > cqe_moder : off
> > rx_cqe_compress : on
> > ...
> >
> > try this on both interfaces.
>
> Done
> ethtool --show-priv-flags enp175s0f1
> Private flags for enp175s0f1:
> rx_cqe_moder : on
> tx_cqe_moder : off
> rx_cqe_compress : on
> rx_striding_rq : off
> rx_no_csum_complete: off
>
> ethtool --show-priv-flags enp175s0f0
> Private flags for enp175s0f0:
> rx_cqe_moder : on
> tx_cqe_moder : off
> rx_cqe_compress : on
> rx_striding_rq : off
> rx_no_csum_complete: off
>
Did it help reduce the load on the PCIe? Do you see more pps?
What is the ratio between rx_cqe_compress_pkts and overall rx packets?
[...]
> > > ethtool -S enp175s0f0
> > > NIC statistics:
> > > rx_packets: 141574897253
> > > rx_bytes: 184445040406258
> > > tx_packets: 172569543894
> > > tx_bytes: 99486882076365
> > > tx_tso_packets: 9367664195
> > > tx_tso_bytes: 56435233992948
> > > tx_tso_inner_packets: 0
> > > tx_tso_inner_bytes: 0
> > > tx_added_vlan_packets: 141297671626
> > > tx_nop: 2102916272
> > > rx_lro_packets: 0
> > > rx_lro_bytes: 0
> > > rx_ecn_mark: 0
> > > rx_removed_vlan_packets: 141574897252
> > > rx_csum_unnecessary: 0
> > > rx_csum_none: 23135854
> > > rx_csum_complete: 141551761398
> > > rx_csum_unnecessary_inner: 0
> > > rx_xdp_drop: 0
> > > rx_xdp_redirect: 0
> > > rx_xdp_tx_xmit: 0
> > > rx_xdp_tx_full: 0
> > > rx_xdp_tx_err: 0
> > > rx_xdp_tx_cqe: 0
> > > tx_csum_none: 127934791664
> >
> > It is a good idea to look into this: TX is not requesting hw TX
> > csumming for a lot of packets; maybe you are wasting a lot of CPU
> > on calculating csum, or maybe this is just the rx csum complete..
> >
> > > tx_csum_partial: 13362879974
> > > tx_csum_partial_inner: 0
> > > tx_queue_stopped: 232561
> >
> > TX queues are stalling, which could be an indication of the pcie
> > bottleneck.
> >
> > > tx_queue_dropped: 0
> > > tx_xmit_more: 1266021946
> > > tx_recover: 0
> > > tx_cqes: 140031716469
> > > tx_queue_wake: 232561
> > > tx_udp_seg_rem: 0
> > > tx_cqe_err: 0
> > > tx_xdp_xmit: 0
> > > tx_xdp_full: 0
> > > tx_xdp_err: 0
> > > tx_xdp_cqes: 0
> > > rx_wqe_err: 0
> > > rx_mpwqe_filler_cqes: 0
> > > rx_mpwqe_filler_strides: 0
> > > rx_buff_alloc_err: 0
> > > rx_cqe_compress_blks: 0
> > > rx_cqe_compress_pkts: 0
> > > rx_page_reuse: 0
> > > rx_cache_reuse: 16625975793
> > > rx_cache_full: 54161465914
> > > rx_cache_empty: 258048
> > > rx_cache_busy: 54161472735
> > > rx_cache_waive: 0
> > > rx_congst_umr: 0
> > > rx_arfs_err: 0
> > > ch_events: 40572621887
> > > ch_poll: 40885650979
> > > ch_arm: 40429276692
> > > ch_aff_change: 0
> > > ch_eq_rearm: 0
> > > rx_out_of_buffer: 2791690
> > > rx_if_down_packets: 74
> > > rx_vport_unicast_packets: 141843476308
> > > rx_vport_unicast_bytes: 185421265403318
> > > tx_vport_unicast_packets: 172569484005
> > > tx_vport_unicast_bytes: 100019940094298
> > > rx_vport_multicast_packets: 85122935
> > > rx_vport_multicast_bytes: 5761316431
> > > tx_vport_multicast_packets: 6452
> > > tx_vport_multicast_bytes: 643540
> > > rx_vport_broadcast_packets: 22423624
> > > rx_vport_broadcast_bytes: 1390127090
> > > tx_vport_broadcast_packets: 22024
> > > tx_vport_broadcast_bytes: 1321440
> > > rx_vport_rdma_unicast_packets: 0
> > > rx_vport_rdma_unicast_bytes: 0
> > > tx_vport_rdma_unicast_packets: 0
> > > tx_vport_rdma_unicast_bytes: 0
> > > rx_vport_rdma_multicast_packets: 0
> > > rx_vport_rdma_multicast_bytes: 0
> > > tx_vport_rdma_multicast_packets: 0
> > > tx_vport_rdma_multicast_bytes: 0
> > > tx_packets_phy: 172569501577
> > > rx_packets_phy: 142871314588
> > > rx_crc_errors_phy: 0
> > > tx_bytes_phy: 100710212814151
> > > rx_bytes_phy: 187209224289564
> > > tx_multicast_phy: 6452
> > > tx_broadcast_phy: 22024
> > > rx_multicast_phy: 85122933
> > > rx_broadcast_phy: 22423623
> > > rx_in_range_len_errors_phy: 2
> > > rx_out_of_range_len_phy: 0
> > > rx_oversize_pkts_phy: 0
> > > rx_symbol_err_phy: 0
> > > tx_mac_control_phy: 0
> > > rx_mac_control_phy: 0
> > > rx_unsupported_op_phy: 0
> > > rx_pause_ctrl_phy: 0
> > > tx_pause_ctrl_phy: 0
> > > rx_discards_phy: 920161423
> >
> > Ok, this port seems to be suffering more, RX is congested, maybe
> > due to the pcie bottleneck.
>
> Yes this side is receiving more traffic - second port is +10G more tx
>
[...]
> > > Average: 17 0.00 0.00 16.60 0.00 0.00 52.10
> > > 0.00 0.00 0.00 31.30
> > > Average: 18 0.00 0.00 13.90 0.00 0.00 61.20
> > > 0.00 0.00 0.00 24.90
> > > Average: 19 0.00 0.00 9.99 0.00 0.00 70.33
> > > 0.00 0.00 0.00 19.68
> > > Average: 20 0.00 0.00 9.00 0.00 0.00 73.00
> > > 0.00 0.00 0.00 18.00
> > > Average: 21 0.00 0.00 8.70 0.00 0.00 73.90
> > > 0.00 0.00 0.00 17.40
> > > Average: 22 0.00 0.00 15.42 0.00 0.00 58.56
> > > 0.00 0.00 0.00 26.03
> > > Average: 23 0.00 0.00 10.81 0.00 0.00 71.67
> > > 0.00 0.00 0.00 17.52
> > > Average: 24 0.00 0.00 10.00 0.00 0.00 71.80
> > > 0.00 0.00 0.00 18.20
> > > Average: 25 0.00 0.00 11.19 0.00 0.00 71.13
> > > 0.00 0.00 0.00 17.68
> > > Average: 26 0.00 0.00 11.00 0.00 0.00 70.80
> > > 0.00 0.00 0.00 18.20
> > > Average: 27 0.00 0.00 10.01 0.00 0.00 69.57
> > > 0.00 0.00 0.00 20.42
> >
> > The numa cores are not at 100% util, you have around 20% of idle on
> > each one.
>
> Yes - not 100% CPU - but the difference between 80% and 100% is like
> pushing an additional 1-2Gbit/s
>
Yes, but it doesn't look like the bottleneck is the CPU, although it
is close to it :)..
> >
> > > Average: 28 0.00 0.00 0.00 0.00 0.00 0.00
> > > 0.00
> > > 0.00 0.00 100.00
> > > Average: 29 0.00 0.00 0.00 0.00 0.00 0.00
> > > 0.00
> > > 0.00 0.00 100.00
> > > Average: 30 0.00 0.00 0.00 0.00 0.00 0.00
> > > 0.00
> > > 0.00 0.00 100.00
> > > Average: 31 0.00 0.00 0.00 0.00 0.00 0.00
> > > 0.00
> > > 0.00 0.00 100.00
> > > Average: 32 0.00 0.00 0.00 0.00 0.00 0.00
> > > 0.00
> > > 0.00 0.00 100.00
> > > Average: 33 0.00 0.00 3.90 0.00 0.00 0.00
> > > 0.00
> > > 0.00 0.00 96.10
> > > Average: 34 0.00 0.00 0.00 0.00 0.00 0.00
> > > 0.00
> > > 0.00 0.00 100.00
> > > Average: 35 0.00 0.00 0.00 0.00 0.00 0.00
> > > 0.00
> > > 0.00 0.00 100.00
> > > Average: 36 0.10 0.00 0.20 0.00 0.00 0.00
> > > 0.00
> > > 0.00 0.00 99.70
> > > Average: 37 0.20 0.00 0.30 0.00 0.00 0.00
> > > 0.00
> > > 0.00 0.00 99.50
> > > Average: 38 0.00 0.00 0.00 0.00 0.00 0.00
> > > 0.00
> > > 0.00 0.00 100.00
> > > Average: 39 0.00 0.00 2.60 0.00 0.00 0.00
> > > 0.00
> > > 0.00 0.00 97.40
> > > Average: 40 0.00 0.00 0.90 0.00 0.00 0.00
> > > 0.00
> > > 0.00 0.00 99.10
> > > Average: 41 0.10 0.00 0.50 0.00 0.00 0.00
> > > 0.00
> > > 0.00 0.00 99.40
> > > Average: 42 0.00 0.00 9.91 0.00 0.00 70.67
> > > 0.00 0.00 0.00 19.42
> > > Average: 43 0.00 0.00 15.90 0.00 0.00 57.50
> > > 0.00 0.00 0.00 26.60
> > > Average: 44 0.00 0.00 12.20 0.00 0.00 66.20
> > > 0.00 0.00 0.00 21.60
> > > Average: 45 0.00 0.00 12.00 0.00 0.00 67.50
> > > 0.00 0.00 0.00 20.50
> > > Average: 46 0.00 0.00 12.90 0.00 0.00 65.50
> > > 0.00 0.00 0.00 21.60
> > > Average: 47 0.00 0.00 14.59 0.00 0.00 60.84
> > > 0.00 0.00 0.00 24.58
> > > Average: 48 0.00 0.00 13.59 0.00 0.00 61.74
> > > 0.00 0.00 0.00 24.68
> > > Average: 49 0.00 0.00 18.36 0.00 0.00 53.29
> > > 0.00 0.00 0.00 28.34
> > > Average: 50 0.00 0.00 15.32 0.00 0.00 58.86
> > > 0.00 0.00 0.00 25.83
> > > Average: 51 0.00 0.00 17.60 0.00 0.00 55.20
> > > 0.00 0.00 0.00 27.20
> > > Average: 52 0.00 0.00 15.92 0.00 0.00 56.06
> > > 0.00 0.00 0.00 28.03
> > > Average: 53 0.00 0.00 13.00 0.00 0.00 62.30
> > > 0.00 0.00 0.00 24.70
> > > Average: 54 0.00 0.00 13.20 0.00 0.00 61.50
> > > 0.00 0.00 0.00 25.30
> > > Average: 55 0.00 0.00 14.59 0.00 0.00 58.64
> > > 0.00 0.00 0.00 26.77
> > >
> > >
> > > ethtool -k enp175s0f0
> > > Features for enp175s0f0:
> > > rx-checksumming: on
> > > tx-checksumming: on
> > > tx-checksum-ipv4: on
> > > tx-checksum-ip-generic: off [fixed]
> > > tx-checksum-ipv6: on
> > > tx-checksum-fcoe-crc: off [fixed]
> > > tx-checksum-sctp: off [fixed]
> > > scatter-gather: on
> > > tx-scatter-gather: on
> > > tx-scatter-gather-fraglist: off [fixed]
> > > tcp-segmentation-offload: on
> > > tx-tcp-segmentation: on
> > > tx-tcp-ecn-segmentation: off [fixed]
> > > tx-tcp-mangleid-segmentation: off
> > > tx-tcp6-segmentation: on
> > > udp-fragmentation-offload: off
> > > generic-segmentation-offload: on
> > > generic-receive-offload: on
> > > large-receive-offload: off [fixed]
> > > rx-vlan-offload: on
> > > tx-vlan-offload: on
> > > ntuple-filters: off
> > > receive-hashing: on
> > > highdma: on [fixed]
> > > rx-vlan-filter: on
> > > vlan-challenged: off [fixed]
> > > tx-lockless: off [fixed]
> > > netns-local: off [fixed]
> > > tx-gso-robust: off [fixed]
> > > tx-fcoe-segmentation: off [fixed]
> > > tx-gre-segmentation: on
> > > tx-gre-csum-segmentation: on
> > > tx-ipxip4-segmentation: off [fixed]
> > > tx-ipxip6-segmentation: off [fixed]
> > > tx-udp_tnl-segmentation: on
> > > tx-udp_tnl-csum-segmentation: on
> > > tx-gso-partial: on
> > > tx-sctp-segmentation: off [fixed]
> > > tx-esp-segmentation: off [fixed]
> > > tx-udp-segmentation: on
> > > fcoe-mtu: off [fixed]
> > > tx-nocache-copy: off
> > > loopback: off [fixed]
> > > rx-fcs: off
> > > rx-all: off
> > > tx-vlan-stag-hw-insert: on
> > > rx-vlan-stag-hw-parse: off [fixed]
> > > rx-vlan-stag-filter: on [fixed]
> > > l2-fwd-offload: off [fixed]
> > > hw-tc-offload: off
> > > esp-hw-offload: off [fixed]
> > > esp-tx-csum-hw-offload: off [fixed]
> > > rx-udp_tunnel-port-offload: on
> > > tls-hw-tx-offload: off [fixed]
> > > tls-hw-rx-offload: off [fixed]
> > > rx-gro-hw: off [fixed]
> > > tls-hw-record: off [fixed]
> > >
> > > ethtool -c enp175s0f0
> > > Coalesce parameters for enp175s0f0:
> > > Adaptive RX: off TX: on
> > > stats-block-usecs: 0
> > > sample-interval: 0
> > > pkt-rate-low: 0
> > > pkt-rate-high: 0
> > > dmac: 32703
> > >
> > > rx-usecs: 256
> > > rx-frames: 128
> > > rx-usecs-irq: 0
> > > rx-frames-irq: 0
> > >
> > > tx-usecs: 8
> > > tx-frames: 128
> > > tx-usecs-irq: 0
> > > tx-frames-irq: 0
> > >
> > > rx-usecs-low: 0
> > > rx-frame-low: 0
> > > tx-usecs-low: 0
> > > tx-frame-low: 0
> > >
> > > rx-usecs-high: 0
> > > rx-frame-high: 0
> > > tx-usecs-high: 0
> > > tx-frame-high: 0
> > >
> > > ethtool -g enp175s0f0
> > > Ring parameters for enp175s0f0:
> > > Pre-set maximums:
> > > RX: 8192
> > > RX Mini: 0
> > > RX Jumbo: 0
> > > TX: 8192
> > > Current hardware settings:
> > > RX: 4096
> > > RX Mini: 0
> > > RX Jumbo: 0
> > > TX: 4096
> > >
> > >
> > >
> > >
> > >
> > >
>
> Also changed a little coalesce params - and best for this config are:
> ethtool -c enp175s0f0
> Coalesce parameters for enp175s0f0:
> Adaptive RX: off TX: off
> stats-block-usecs: 0
> sample-interval: 0
> pkt-rate-low: 0
> pkt-rate-high: 0
> dmac: 32573
>
> rx-usecs: 40
> rx-frames: 128
> rx-usecs-irq: 0
> rx-frames-irq: 0
>
> tx-usecs: 8
> tx-frames: 8
> tx-usecs-irq: 0
> tx-frames-irq: 0
>
> rx-usecs-low: 0
> rx-frame-low: 0
> tx-usecs-low: 0
> tx-frame-low: 0
>
> rx-usecs-high: 0
> rx-frame-high: 0
> tx-usecs-high: 0
> tx-frame-high: 0
>
>
> Fewer drops on the RX side - and more pps forwarded overall.
>
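As a rough illustration of the tradeoff behind that change (assuming the rx-frames threshold doesn't fire first, and with adaptive RX moderation off as in this config), the rx-usecs timer bounds how often each RX queue can interrupt:

```python
# Upper bound on IRQs/s per RX queue implied by the rx-usecs coalescing
# timer alone. rx-frames can still fire the interrupt earlier, so this
# is a ceiling, not a measurement.
def max_irqs_per_sec(rx_usecs: int) -> float:
    return 1_000_000 / rx_usecs

print(max_irqs_per_sec(256))  # earlier setting: 3906.25 IRQs/s per queue
print(max_irqs_per_sec(40))   # tuned setting: 25000.0 IRQs/s per queue
```

A lower rx-usecs means packets sit in the ring for less time before an interrupt (fewer drops under bursts), at the cost of a higher interrupt rate.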
How much improvement? Maybe we can improve our adaptive RX coalescing to be
efficient for this workload.
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-01 20:37 ` Saeed Mahameed
@ 2018-11-01 21:18 ` Paweł Staszewski
2018-11-01 21:24 ` Paweł Staszewski
2018-11-03 0:18 ` Paweł Staszewski
1 sibling, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-01 21:18 UTC (permalink / raw)
To: Saeed Mahameed, netdev
W dniu 01.11.2018 o 21:37, Saeed Mahameed pisze:
> On Thu, 2018-11-01 at 12:09 +0100, Paweł Staszewski wrote:
>> W dniu 01.11.2018 o 10:50, Saeed Mahameed pisze:
>>> On Wed, 2018-10-31 at 22:57 +0100, Paweł Staszewski wrote:
>>>> Hi
>>>>
>>>> So maybe someone will be interested in how the Linux kernel handles
>>>> normal traffic (not pktgen :) )
>>>>
>>>>
>>>> Server HW configuration:
>>>>
>>>> CPU : Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
>>>>
>>>> NIC's: 2x 100G Mellanox ConnectX-4 (connected to x16 pcie 8GT)
>>>>
>>>>
>>>> Server software:
>>>>
>>>> FRR - as routing daemon
>>>>
>>>> enp175s0f0 (100G) - 16 vlans from upstreams (28 RSS queues bound to
>>>> local numa node)
>>>>
>>>> enp175s0f1 (100G) - 343 vlans to clients (28 RSS queues bound to
>>>> local numa node)
>>>>
>>>>
>>>> Maximum traffic that server can handle:
>>>>
>>>> Bandwidth
>>>>
>>>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>>>> input: /proc/net/dev type: rate
>>>> \ iface Rx Tx Total
>>>> ==============================================================================
>>>> enp175s0f1: 28.51 Gb/s 37.24 Gb/s 65.74 Gb/s
>>>> enp175s0f0: 38.07 Gb/s 28.44 Gb/s 66.51 Gb/s
>>>> ------------------------------------------------------------------------------
>>>> total: 66.58 Gb/s 65.67 Gb/s 132.25 Gb/s
>>>>
>>>>
>>>> Packets per second:
>>>>
>>>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>>>> input: /proc/net/dev type: rate
>>>> - iface Rx Tx Total
>>>> ==============================================================================
>>>> enp175s0f1: 5248589.00 P/s 3486617.75 P/s 8735207.00 P/s
>>>> enp175s0f0: 3557944.25 P/s 5232516.00 P/s 8790460.00 P/s
>>>> ------------------------------------------------------------------------------
>>>> total: 8806533.00 P/s 8719134.00 P/s 17525668.00 P/s
>>>>
>>>>
>>>> After reaching those limits, the NICs on the upstream side (more RX
>>>> traffic) start to drop packets
>>>>
>>>>
>>>> I just don't understand why the server can't handle more bandwidth
>>>> (~40Gbit/s is the limit where all CPUs are at 100% util) - while pps
>>>> on the RX side keep increasing.
>>>>
>>> Where do you see 40 Gb/s? You showed that both ports on the same NIC
>>> (same pcie link) are doing 66.58 Gb/s (RX) + 65.67 Gb/s (TX) = 132.25
>>> Gb/s, which aligns with your pcie link limit - what am I missing?
>> Hmm, yes, that was my concern too - I can't find information anywhere
>> on whether that bandwidth figure is uni- or bidirectional. If 126Gbit
>> for x16 8GT is unidir, then bidir would be 126/2 ~63Gbit per direction
>> - which would fit the total bw on both ports
> i think it is bidir
>
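For reference, the 126Gbit figure can be reproduced from the PCIe 3.0 parameters (8 GT/s per lane, 128b/130b encoding, 16 lanes). This is only a back-of-envelope line rate: it ignores TLP headers, flow control and completion traffic, which all eat into the usable payload bandwidth on a real NIC workload.

```python
# PCIe 3.0 x16 back-of-envelope line rate. The spec rates the link per
# direction; protocol overhead (TLP headers, completions, doorbells)
# reduces what a NIC can actually move as payload.
GT_PER_LANE = 8e9        # 8 GT/s per lane (PCIe 3.0)
ENCODING = 128 / 130     # 128b/130b line encoding
LANES = 16

gbit_per_direction = GT_PER_LANE * ENCODING * LANES / 1e9
print(f"{gbit_per_direction:.1f} Gbit/s per direction")  # 126.0 Gbit/s
```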
>> This could maybe also explain why CPU load rises rapidly from
>> 120Gbit/s total to 132Gbit (the bwm-ng counters come from
>> /proc/net/dev, so there can be some error in reading them when
>> offloading (gro/gso/tso) is enabled on the NICs)
>>
>>>> Was thinking that maybe I reached some pcie x16 limit - but x16 8GT
>>>> is 126Gbit - and also when testing with pktgen I can reach more bw
>>>> and pps (like 4x more compared to normal internet traffic)
>>>>
>>> Are you forwarding when using pktgen as well, or are you just testing
>>> the RX-side pps?
>> Yes, pktgen was tested on a single port, RX only.
>> I can also check forwarding, to rule out pcie limits.
>>
> So this explains why you have more RX pps, since tx is idle and pcie
> will be free to do only rx.
>
> [...]
>
>
>>>> ethtool -S enp175s0f1
>>>> NIC statistics:
>>>> rx_packets: 173730800927
>>>> rx_bytes: 99827422751332
>>>> tx_packets: 142532009512
>>>> tx_bytes: 184633045911222
>>>> tx_tso_packets: 25989113891
>>>> tx_tso_bytes: 132933363384458
>>>> tx_tso_inner_packets: 0
>>>> tx_tso_inner_bytes: 0
>>>> tx_added_vlan_packets: 74630239613
>>>> tx_nop: 2029817748
>>>> rx_lro_packets: 0
>>>> rx_lro_bytes: 0
>>>> rx_ecn_mark: 0
>>>> rx_removed_vlan_packets: 173730800927
>>>> rx_csum_unnecessary: 0
>>>> rx_csum_none: 434357
>>>> rx_csum_complete: 173730366570
>>>> rx_csum_unnecessary_inner: 0
>>>> rx_xdp_drop: 0
>>>> rx_xdp_redirect: 0
>>>> rx_xdp_tx_xmit: 0
>>>> rx_xdp_tx_full: 0
>>>> rx_xdp_tx_err: 0
>>>> rx_xdp_tx_cqe: 0
>>>> tx_csum_none: 38260960853
>>>> tx_csum_partial: 36369278774
>>>> tx_csum_partial_inner: 0
>>>> tx_queue_stopped: 1
>>>> tx_queue_dropped: 0
>>>> tx_xmit_more: 748638099
>>>> tx_recover: 0
>>>> tx_cqes: 73881645031
>>>> tx_queue_wake: 1
>>>> tx_udp_seg_rem: 0
>>>> tx_cqe_err: 0
>>>> tx_xdp_xmit: 0
>>>> tx_xdp_full: 0
>>>> tx_xdp_err: 0
>>>> tx_xdp_cqes: 0
>>>> rx_wqe_err: 0
>>>> rx_mpwqe_filler_cqes: 0
>>>> rx_mpwqe_filler_strides: 0
>>>> rx_buff_alloc_err: 0
>>>> rx_cqe_compress_blks: 0
>>>> rx_cqe_compress_pkts: 0
>>> If this is a pcie bottleneck, it might be useful to enable CQE
>>> compression (to reduce PCIe completion descriptor transactions).
>>> You should see the rx_cqe_compress_pkts counter above increase when
>>> enabled.
>>>
>>> $ ethtool --set-priv-flags enp175s0f1 rx_cqe_compress on
>>> $ ethtool --show-priv-flags enp175s0f1
>>> Private flags for p6p1:
>>> rx_cqe_moder : on
>>> cqe_moder : off
>>> rx_cqe_compress : on
>>> ...
>>>
>>> try this on both interfaces.
>> Done
>> ethtool --show-priv-flags enp175s0f1
>> Private flags for enp175s0f1:
>> rx_cqe_moder : on
>> tx_cqe_moder : off
>> rx_cqe_compress : on
>> rx_striding_rq : off
>> rx_no_csum_complete: off
>>
>> ethtool --show-priv-flags enp175s0f0
>> Private flags for enp175s0f0:
>> rx_cqe_moder : on
>> tx_cqe_moder : off
>> rx_cqe_compress : on
>> rx_striding_rq : off
>> rx_no_csum_complete: off
>>
> Did it help reduce the load on the pcie? Do you see more pps?
> What is the ratio between rx_cqe_compress_pkts and overall rx packets?
So - a little more pps.
Before the change: top graph; after: bottom (image with graph stats from
/proc/net/dev).
cqe_compress was enabled at 11:55.
Sorry - for real-life traffic it is hard to do any counter comparisons,
because the traffic just keeps rising on its own from minute to minute :)
But for that window the change is visible on the graph - it had been almost
flat for the 20 minutes before the change.
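To answer the ratio question from the posted counters - a rough sketch only, since the NIC totals accumulate since boot while compression was enabled only at 11:55, so this understates the live ratio:

```python
# rx_cqe_compress_pkts vs total rx_packets, values copied from the
# ethtool -S dump in this mail.
rx_packets = 516_522_465_438
rx_cqe_compress_pkts = 25_794_213_324

ratio = rx_cqe_compress_pkts / rx_packets
print(f"{ratio:.1%} of RX packets arrived via compressed CQEs")  # 5.0%
```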
Full ethtool -S output below:
NIC statistics:
rx_packets: 516522465438
rx_bytes: 680052911258729
tx_packets: 677697545586
tx_bytes: 413647643141709
tx_tso_packets: 42530913279
tx_tso_bytes: 235655668554142
tx_tso_inner_packets: 0
tx_tso_inner_bytes: 0
tx_added_vlan_packets: 551156530885
tx_nop: 8536823558
rx_lro_packets: 0
rx_lro_bytes: 0
rx_ecn_mark: 0
rx_removed_vlan_packets: 516522465438
rx_csum_unnecessary: 0
rx_csum_none: 50382868
rx_csum_complete: 516472082570
rx_csum_unnecessary_inner: 0
rx_xdp_drop: 0
rx_xdp_redirect: 0
rx_xdp_tx_xmit: 0
rx_xdp_tx_full: 0
rx_xdp_tx_err: 0
rx_xdp_tx_cqe: 0
tx_csum_none: 494075047017
tx_csum_partial: 57081483898
tx_csum_partial_inner: 0
tx_queue_stopped: 518624
tx_queue_dropped: 0
tx_xmit_more: 1717880628
tx_recover: 0
tx_cqes: 549438869029
tx_queue_wake: 518627
tx_udp_seg_rem: 0
tx_cqe_err: 0
tx_xdp_xmit: 0
tx_xdp_full: 0
tx_xdp_err: 0
tx_xdp_cqes: 0
rx_wqe_err: 0
rx_mpwqe_filler_cqes: 0
rx_mpwqe_filler_strides: 0
rx_buff_alloc_err: 0
rx_cqe_compress_blks: 11483228712
rx_cqe_compress_pkts: 25794213324
rx_page_reuse: 0
rx_cache_reuse: 63610249810
rx_cache_full: 194650916511
rx_cache_empty: 1118208
rx_cache_busy: 194650982430
rx_cache_waive: 0
rx_congst_umr: 0
rx_arfs_err: 0
ch_events: 119556002196
ch_poll: 121107424977
ch_arm: 115856746008
ch_aff_change: 31
ch_eq_rearm: 0
rx_out_of_buffer: 6880325
rx_if_down_packets: 2062529
rx_vport_unicast_packets: 517433716795
rx_vport_unicast_bytes: 683464347301443
tx_vport_unicast_packets: 677697453738
tx_vport_unicast_bytes: 415788589663315
rx_vport_multicast_packets: 208258309
rx_vport_multicast_bytes: 14224046052
tx_vport_multicast_packets: 21689
tx_vport_multicast_bytes: 2158334
rx_vport_broadcast_packets: 75838646
rx_vport_broadcast_bytes: 4697944695
tx_vport_broadcast_packets: 68730
tx_vport_broadcast_bytes: 4123800
rx_vport_rdma_unicast_packets: 0
rx_vport_rdma_unicast_bytes: 0
tx_vport_rdma_unicast_packets: 0
tx_vport_rdma_unicast_bytes: 0
rx_vport_rdma_multicast_packets: 0
rx_vport_rdma_multicast_bytes: 0
tx_vport_rdma_multicast_packets: 0
tx_vport_rdma_multicast_bytes: 0
tx_packets_phy: 677697543252
rx_packets_phy: 521319491878
rx_crc_errors_phy: 0
tx_bytes_phy: 418499385791411
rx_bytes_phy: 690322537017274
tx_multicast_phy: 21689
tx_broadcast_phy: 68730
rx_multicast_phy: 208258305
rx_broadcast_phy: 75838646
rx_in_range_len_errors_phy: 4
rx_out_of_range_len_phy: 0
rx_oversize_pkts_phy: 0
rx_symbol_err_phy: 0
tx_mac_control_phy: 0
rx_mac_control_phy: 0
rx_unsupported_op_phy: 0
rx_pause_ctrl_phy: 0
tx_pause_ctrl_phy: 0
rx_discards_phy: 3601449265
tx_discards_phy: 0
tx_errors_phy: 0
rx_undersize_pkts_phy: 0
rx_fragments_phy: 0
rx_jabbers_phy: 0
rx_64_bytes_phy: 1416456771
rx_65_to_127_bytes_phy: 40750434737
rx_128_to_255_bytes_phy: 11518110310
rx_256_to_511_bytes_phy: 7055850637
rx_512_to_1023_bytes_phy: 7811550424
rx_1024_to_1518_bytes_phy: 265547564845
rx_1519_to_2047_bytes_phy: 187219522899
rx_2048_to_4095_bytes_phy: 0
rx_4096_to_8191_bytes_phy: 0
rx_8192_to_10239_bytes_phy: 0
link_down_events_phy: 0
rx_pcs_symbol_err_phy: 0
rx_corrected_bits_phy: 0
rx_pci_signal_integrity: 0
tx_pci_signal_integrity: 48
rx_prio0_bytes: 688807632117485
rx_prio0_packets: 516310309931
tx_prio0_bytes: 418499382756025
tx_prio0_packets: 677697534982
rx_prio1_bytes: 1497701612877
rx_prio1_packets: 1206768094
tx_prio1_bytes: 0
tx_prio1_packets: 0
rx_prio2_bytes: 112271227
rx_prio2_packets: 337295
tx_prio2_bytes: 0
tx_prio2_packets: 0
rx_prio3_bytes: 1165455555
rx_prio3_packets: 1544310
tx_prio3_bytes: 0
tx_prio3_packets: 0
rx_prio4_bytes: 161857240
rx_prio4_packets: 341392
tx_prio4_bytes: 0
tx_prio4_packets: 0
rx_prio5_bytes: 455031612
rx_prio5_packets: 2861469
tx_prio5_bytes: 0
tx_prio5_packets: 0
rx_prio6_bytes: 1873928697
rx_prio6_packets: 5146981
tx_prio6_bytes: 0
tx_prio6_packets: 0
rx_prio7_bytes: 13423452430
rx_prio7_packets: 190724796
tx_prio7_bytes: 0
tx_prio7_packets: 0
module_unplug: 0
module_bus_stuck: 0
module_high_temp: 0
module_bad_shorted: 0
ch0_events: 4252266777
ch0_poll: 4330804273
ch0_arm: 4120233182
ch0_aff_change: 2
ch0_eq_rearm: 0
ch1_events: 3938415938
ch1_poll: 4012621322
ch1_arm: 3810131188
ch1_aff_change: 2
ch1_eq_rearm: 0
ch2_events: 3897428860
ch2_poll: 3973886848
ch2_arm: 3773019397
ch2_aff_change: 1
ch2_eq_rearm: 0
ch3_events: 4108000541
ch3_poll: 4180139872
ch3_arm: 3982093366
ch3_aff_change: 1
ch3_eq_rearm: 0
ch4_events: 4652570079
ch4_poll: 4720541090
ch4_arm: 4524475054
ch4_aff_change: 2
ch4_eq_rearm: 0
ch5_events: 3899177385
ch5_poll: 3974274186
ch5_arm: 3772299186
ch5_aff_change: 2
ch5_eq_rearm: 0
ch6_events: 3915161350
ch6_poll: 3992338199
ch6_arm: 3794710989
ch6_aff_change: 0
ch6_eq_rearm: 0
ch7_events: 4008175631
ch7_poll: 4081321248
ch7_arm: 3882263723
ch7_aff_change: 0
ch7_eq_rearm: 0
ch8_events: 4207422352
ch8_poll: 4276465449
ch8_arm: 4077650366
ch8_aff_change: 0
ch8_eq_rearm: 0
ch9_events: 4036491879
ch9_poll: 4108975987
ch9_arm: 3914493694
ch9_aff_change: 0
ch9_eq_rearm: 0
ch10_events: 4066261595
ch10_poll: 4134419606
ch10_arm: 3936637711
ch10_aff_change: 1
ch10_eq_rearm: 0
ch11_events: 4440494043
ch11_poll: 4507578730
ch11_arm: 4318629438
ch11_aff_change: 0
ch11_eq_rearm: 0
ch12_events: 4066958252
ch12_poll: 4130191506
ch12_arm: 3934337782
ch12_aff_change: 0
ch12_eq_rearm: 0
ch13_events: 4051309159
ch13_poll: 4118864120
ch13_arm: 3921011919
ch13_aff_change: 0
ch13_eq_rearm: 0
ch14_events: 4321664800
ch14_poll: 4382433680
ch14_arm: 4186130552
ch14_aff_change: 0
ch14_eq_rearm: 0
ch15_events: 4701102075
ch15_poll: 4760373932
ch15_arm: 4570151468
ch15_aff_change: 0
ch15_eq_rearm: 0
ch16_events: 4311052687
ch16_poll: 4345937129
ch16_arm: 4170883819
ch16_aff_change: 0
ch16_eq_rearm: 0
ch17_events: 4647570931
ch17_poll: 4680218533
ch17_arm: 4509426288
ch17_aff_change: 0
ch17_eq_rearm: 0
ch18_events: 4598195702
ch18_poll: 4631314898
ch18_arm: 4457267084
ch18_aff_change: 0
ch18_eq_rearm: 0
ch19_events: 4808094560
ch19_poll: 4841368340
ch19_arm: 4670604358
ch19_aff_change: 0
ch19_eq_rearm: 0
ch20_events: 4240910605
ch20_poll: 4276531502
ch20_arm: 4101767278
ch20_aff_change: 1
ch20_eq_rearm: 0
ch21_events: 4389371472
ch21_poll: 4426870311
ch21_arm: 4249339045
ch21_aff_change: 2
ch21_eq_rearm: 0
ch22_events: 4282958754
ch22_poll: 4319228073
ch22_arm: 4145102991
ch22_aff_change: 2
ch22_eq_rearm: 0
ch23_events: 4440196528
ch23_poll: 4474090188
ch23_arm: 4300837147
ch23_aff_change: 2
ch23_eq_rearm: 0
ch24_events: 4326875785
ch24_poll: 4364971263
ch24_arm: 4186404526
ch24_aff_change: 2
ch24_eq_rearm: 0
ch25_events: 4286528453
ch25_poll: 4324089445
ch25_arm: 4147222616
ch25_aff_change: 3
ch25_eq_rearm: 0
ch26_events: 4098043104
ch26_poll: 4138133745
ch26_arm: 3967438971
ch26_aff_change: 4
ch26_eq_rearm: 0
ch27_events: 4563302840
ch27_poll: 4599441446
ch27_arm: 4432182806
ch27_aff_change: 4
ch27_eq_rearm: 0
ch28_events: 4
ch28_poll: 4
ch28_arm: 4
ch28_aff_change: 0
ch28_eq_rearm: 0
ch29_events: 6
ch29_poll: 6
ch29_arm: 6
ch29_aff_change: 0
ch29_eq_rearm: 0
ch30_events: 4
ch30_poll: 4
ch30_arm: 4
ch30_aff_change: 0
ch30_eq_rearm: 0
ch31_events: 4
ch31_poll: 4
ch31_arm: 4
ch31_aff_change: 0
ch31_eq_rearm: 0
ch32_events: 4
ch32_poll: 4
ch32_arm: 4
ch32_aff_change: 0
ch32_eq_rearm: 0
ch33_events: 4
ch33_poll: 4
ch33_arm: 4
ch33_aff_change: 0
ch33_eq_rearm: 0
ch34_events: 4
ch34_poll: 4
ch34_arm: 4
ch34_aff_change: 0
ch34_eq_rearm: 0
ch35_events: 4
ch35_poll: 4
ch35_arm: 4
ch35_aff_change: 0
ch35_eq_rearm: 0
ch36_events: 4
ch36_poll: 4
ch36_arm: 4
ch36_aff_change: 0
ch36_eq_rearm: 0
ch37_events: 4
ch37_poll: 4
ch37_arm: 4
ch37_aff_change: 0
ch37_eq_rearm: 0
ch38_events: 4
ch38_poll: 4
ch38_arm: 4
ch38_aff_change: 0
ch38_eq_rearm: 0
ch39_events: 4
ch39_poll: 4
ch39_arm: 4
ch39_aff_change: 0
ch39_eq_rearm: 0
ch40_events: 4
ch40_poll: 4
ch40_arm: 4
ch40_aff_change: 0
ch40_eq_rearm: 0
ch41_events: 4
ch41_poll: 4
ch41_arm: 4
ch41_aff_change: 0
ch41_eq_rearm: 0
ch42_events: 4
ch42_poll: 4
ch42_arm: 4
ch42_aff_change: 0
ch42_eq_rearm: 0
ch43_events: 4
ch43_poll: 4
ch43_arm: 4
ch43_aff_change: 0
ch43_eq_rearm: 0
ch44_events: 4
ch44_poll: 4
ch44_arm: 4
ch44_aff_change: 0
ch44_eq_rearm: 0
ch45_events: 4
ch45_poll: 4
ch45_arm: 4
ch45_aff_change: 0
ch45_eq_rearm: 0
ch46_events: 4
ch46_poll: 4
ch46_arm: 4
ch46_aff_change: 0
ch46_eq_rearm: 0
ch47_events: 4
ch47_poll: 4
ch47_arm: 4
ch47_aff_change: 0
ch47_eq_rearm: 0
ch48_events: 4
ch48_poll: 4
ch48_arm: 4
ch48_aff_change: 0
ch48_eq_rearm: 0
ch49_events: 4
ch49_poll: 4
ch49_arm: 4
ch49_aff_change: 0
ch49_eq_rearm: 0
ch50_events: 4
ch50_poll: 4
ch50_arm: 4
ch50_aff_change: 0
ch50_eq_rearm: 0
ch51_events: 4
ch51_poll: 4
ch51_arm: 4
ch51_aff_change: 0
ch51_eq_rearm: 0
ch52_events: 4
ch52_poll: 4
ch52_arm: 4
ch52_aff_change: 0
ch52_eq_rearm: 0
ch53_events: 4
ch53_poll: 4
ch53_arm: 4
ch53_aff_change: 0
ch53_eq_rearm: 0
ch54_events: 4
ch54_poll: 4
ch54_arm: 4
ch54_aff_change: 0
ch54_eq_rearm: 0
ch55_events: 4
ch55_poll: 4
ch55_arm: 4
ch55_aff_change: 0
ch55_eq_rearm: 0
rx0_packets: 21390033774
rx0_bytes: 27326856299122
rx0_csum_complete: 21339650906
rx0_csum_unnecessary: 0
rx0_csum_unnecessary_inner: 0
rx0_csum_none: 50382868
rx0_xdp_drop: 0
rx0_xdp_redirect: 0
rx0_lro_packets: 0
rx0_lro_bytes: 0
rx0_ecn_mark: 0
rx0_removed_vlan_packets: 21390033774
rx0_wqe_err: 0
rx0_mpwqe_filler_cqes: 0
rx0_mpwqe_filler_strides: 0
rx0_buff_alloc_err: 0
rx0_cqe_compress_blks: 481077641
rx0_cqe_compress_pkts: 1085647489
rx0_page_reuse: 0
rx0_cache_reuse: 19050049
rx0_cache_full: 10675964285
rx0_cache_empty: 37376
rx0_cache_busy: 10675966819
rx0_cache_waive: 0
rx0_congst_umr: 0
rx0_arfs_err: 0
rx0_xdp_tx_xmit: 0
rx0_xdp_tx_full: 0
rx0_xdp_tx_err: 0
rx0_xdp_tx_cqes: 0
rx1_packets: 19868919527
rx1_bytes: 26149716991561
rx1_csum_complete: 19868919527
rx1_csum_unnecessary: 0
rx1_csum_unnecessary_inner: 0
rx1_csum_none: 0
rx1_xdp_drop: 0
rx1_xdp_redirect: 0
rx1_lro_packets: 0
rx1_lro_bytes: 0
rx1_ecn_mark: 0
rx1_removed_vlan_packets: 19868919527
rx1_wqe_err: 0
rx1_mpwqe_filler_cqes: 0
rx1_mpwqe_filler_strides: 0
rx1_buff_alloc_err: 0
rx1_cqe_compress_blks: 420210560
rx1_cqe_compress_pkts: 941233388
rx1_page_reuse: 0
rx1_cache_reuse: 46200002
rx1_cache_full: 9888257242
rx1_cache_empty: 37376
rx1_cache_busy: 9888259746
rx1_cache_waive: 0
rx1_congst_umr: 0
rx1_arfs_err: 0
rx1_xdp_tx_xmit: 0
rx1_xdp_tx_full: 0
rx1_xdp_tx_err: 0
rx1_xdp_tx_cqes: 0
rx2_packets: 19575013662
rx2_bytes: 25759818417945
rx2_csum_complete: 19575013662
rx2_csum_unnecessary: 0
rx2_csum_unnecessary_inner: 0
rx2_csum_none: 0
rx2_xdp_drop: 0
rx2_xdp_redirect: 0
rx2_lro_packets: 0
rx2_lro_bytes: 0
rx2_ecn_mark: 0
rx2_removed_vlan_packets: 19575013662
rx2_wqe_err: 0
rx2_mpwqe_filler_cqes: 0
rx2_mpwqe_filler_strides: 0
rx2_buff_alloc_err: 0
rx2_cqe_compress_blks: 412345511
rx2_cqe_compress_pkts: 923376167
rx2_page_reuse: 0
rx2_cache_reuse: 38837731
rx2_cache_full: 9748666548
rx2_cache_empty: 37376
rx2_cache_busy: 9748669093
rx2_cache_waive: 0
rx2_congst_umr: 0
rx2_arfs_err: 0
rx2_xdp_tx_xmit: 0
rx2_xdp_tx_full: 0
rx2_xdp_tx_err: 0
rx2_xdp_tx_cqes: 0
rx3_packets: 19795911749
rx3_bytes: 25969475566905
rx3_csum_complete: 19795911749
rx3_csum_unnecessary: 0
rx3_csum_unnecessary_inner: 0
rx3_csum_none: 0
rx3_xdp_drop: 0
rx3_xdp_redirect: 0
rx3_lro_packets: 0
rx3_lro_bytes: 0
rx3_ecn_mark: 0
rx3_removed_vlan_packets: 19795911749
rx3_wqe_err: 0
rx3_mpwqe_filler_cqes: 0
rx3_mpwqe_filler_strides: 0
rx3_buff_alloc_err: 0
rx3_cqe_compress_blks: 416658765
rx3_cqe_compress_pkts: 934986266
rx3_page_reuse: 0
rx3_cache_reuse: 34542124
rx3_cache_full: 9863411232
rx3_cache_empty: 37376
rx3_cache_busy: 9863413732
rx3_cache_waive: 0
rx3_congst_umr: 0
rx3_arfs_err: 0
rx3_xdp_tx_xmit: 0
rx3_xdp_tx_full: 0
rx3_xdp_tx_err: 0
rx3_xdp_tx_cqes: 0
rx4_packets: 20445652378
rx4_bytes: 26949065110265
rx4_csum_complete: 20445652378
rx4_csum_unnecessary: 0
rx4_csum_unnecessary_inner: 0
rx4_csum_none: 0
rx4_xdp_drop: 0
rx4_xdp_redirect: 0
rx4_lro_packets: 0
rx4_lro_bytes: 0
rx4_ecn_mark: 0
rx4_removed_vlan_packets: 20445652378
rx4_wqe_err: 0
rx4_mpwqe_filler_cqes: 0
rx4_mpwqe_filler_strides: 0
rx4_buff_alloc_err: 0
rx4_cqe_compress_blks: 506085858
rx4_cqe_compress_pkts: 1147860328
rx4_page_reuse: 0
rx4_cache_reuse: 10122542864
rx4_cache_full: 100281206
rx4_cache_empty: 37376
rx4_cache_busy: 100283304
rx4_cache_waive: 0
rx4_congst_umr: 0
rx4_arfs_err: 0
rx4_xdp_tx_xmit: 0
rx4_xdp_tx_full: 0
rx4_xdp_tx_err: 0
rx4_xdp_tx_cqes: 0
rx5_packets: 19622362246
rx5_bytes: 25843450982982
rx5_csum_complete: 19622362246
rx5_csum_unnecessary: 0
rx5_csum_unnecessary_inner: 0
rx5_csum_none: 0
rx5_xdp_drop: 0
rx5_xdp_redirect: 0
rx5_lro_packets: 0
rx5_lro_bytes: 0
rx5_ecn_mark: 0
rx5_removed_vlan_packets: 19622362246
rx5_wqe_err: 0
rx5_mpwqe_filler_cqes: 0
rx5_mpwqe_filler_strides: 0
rx5_buff_alloc_err: 0
rx5_cqe_compress_blks: 422840924
rx5_cqe_compress_pkts: 948005878
rx5_page_reuse: 0
rx5_cache_reuse: 31285453
rx5_cache_full: 9779893117
rx5_cache_empty: 37376
rx5_cache_busy: 9779895647
rx5_cache_waive: 0
rx5_congst_umr: 0
rx5_arfs_err: 0
rx5_xdp_tx_xmit: 0
rx5_xdp_tx_full: 0
rx5_xdp_tx_err: 0
rx5_xdp_tx_cqes: 0
rx6_packets: 19788231278
rx6_bytes: 25985783006486
rx6_csum_complete: 19788231278
rx6_csum_unnecessary: 0
rx6_csum_unnecessary_inner: 0
rx6_csum_none: 0
rx6_xdp_drop: 0
rx6_xdp_redirect: 0
rx6_lro_packets: 0
rx6_lro_bytes: 0
rx6_ecn_mark: 0
rx6_removed_vlan_packets: 19788231278
rx6_wqe_err: 0
rx6_mpwqe_filler_cqes: 0
rx6_mpwqe_filler_strides: 0
rx6_buff_alloc_err: 0
rx6_cqe_compress_blks: 418799056
rx6_cqe_compress_pkts: 938282685
rx6_page_reuse: 0
rx6_cache_reuse: 18114793
rx6_cache_full: 9875998295
rx6_cache_empty: 37376
rx6_cache_busy: 9876000831
rx6_cache_waive: 0
rx6_congst_umr: 0
rx6_arfs_err: 0
rx6_xdp_tx_xmit: 0
rx6_xdp_tx_full: 0
rx6_xdp_tx_err: 0
rx6_xdp_tx_cqes: 0
rx7_packets: 19795759168
rx7_bytes: 26085056586860
rx7_csum_complete: 19795759168
rx7_csum_unnecessary: 0
rx7_csum_unnecessary_inner: 0
rx7_csum_none: 0
rx7_xdp_drop: 0
rx7_xdp_redirect: 0
rx7_lro_packets: 0
rx7_lro_bytes: 0
rx7_ecn_mark: 0
rx7_removed_vlan_packets: 19795759168
rx7_wqe_err: 0
rx7_mpwqe_filler_cqes: 0
rx7_mpwqe_filler_strides: 0
rx7_buff_alloc_err: 0
rx7_cqe_compress_blks: 413959224
rx7_cqe_compress_pkts: 927675936
rx7_page_reuse: 0
rx7_cache_reuse: 23902990
rx7_cache_full: 9873974042
rx7_cache_empty: 37376
rx7_cache_busy: 9873976574
rx7_cache_waive: 0
rx7_congst_umr: 0
rx7_arfs_err: 0
rx7_xdp_tx_xmit: 0
rx7_xdp_tx_full: 0
rx7_xdp_tx_err: 0
rx7_xdp_tx_cqes: 0
rx8_packets: 19963477439
rx8_bytes: 26384640501789
rx8_csum_complete: 19963477439
rx8_csum_unnecessary: 0
rx8_csum_unnecessary_inner: 0
rx8_csum_none: 0
rx8_xdp_drop: 0
rx8_xdp_redirect: 0
rx8_lro_packets: 0
rx8_lro_bytes: 0
rx8_ecn_mark: 0
rx8_removed_vlan_packets: 19963477439
rx8_wqe_err: 0
rx8_mpwqe_filler_cqes: 0
rx8_mpwqe_filler_strides: 0
rx8_buff_alloc_err: 0
rx8_cqe_compress_blks: 420422857
rx8_cqe_compress_pkts: 942720292
rx8_page_reuse: 0
rx8_cache_reuse: 88181713
rx8_cache_full: 9893554525
rx8_cache_empty: 37376
rx8_cache_busy: 9893556983
rx8_cache_waive: 0
rx8_congst_umr: 0
rx8_arfs_err: 0
rx8_xdp_tx_xmit: 0
rx8_xdp_tx_full: 0
rx8_xdp_tx_err: 0
rx8_xdp_tx_cqes: 0
rx9_packets: 19726642138
rx9_bytes: 26063924286499
rx9_csum_complete: 19726642138
rx9_csum_unnecessary: 0
rx9_csum_unnecessary_inner: 0
rx9_csum_none: 0
rx9_xdp_drop: 0
rx9_xdp_redirect: 0
rx9_lro_packets: 0
rx9_lro_bytes: 0
rx9_ecn_mark: 0
rx9_removed_vlan_packets: 19726642138
rx9_wqe_err: 0
rx9_mpwqe_filler_cqes: 0
rx9_mpwqe_filler_strides: 0
rx9_buff_alloc_err: 0
rx9_cqe_compress_blks: 424227411
rx9_cqe_compress_pkts: 951534873
rx9_page_reuse: 0
rx9_cache_reuse: 482901440
rx9_cache_full: 9380417487
rx9_cache_empty: 37376
rx9_cache_busy: 9380419608
rx9_cache_waive: 0
rx9_congst_umr: 0
rx9_arfs_err: 0
rx9_xdp_tx_xmit: 0
rx9_xdp_tx_full: 0
rx9_xdp_tx_err: 0
rx9_xdp_tx_cqes: 0
rx10_packets: 19901229170
rx10_bytes: 26300854495044
rx10_csum_complete: 19901229170
rx10_csum_unnecessary: 0
rx10_csum_unnecessary_inner: 0
rx10_csum_none: 0
rx10_xdp_drop: 0
rx10_xdp_redirect: 0
rx10_lro_packets: 0
rx10_lro_bytes: 0
rx10_ecn_mark: 0
rx10_removed_vlan_packets: 19901229170
rx10_wqe_err: 0
rx10_mpwqe_filler_cqes: 0
rx10_mpwqe_filler_strides: 0
rx10_buff_alloc_err: 0
rx10_cqe_compress_blks: 419082938
rx10_cqe_compress_pkts: 940791347
rx10_page_reuse: 0
rx10_cache_reuse: 14896055
rx10_cache_full: 9935715977
rx10_cache_empty: 37376
rx10_cache_busy: 9935718513
rx10_cache_waive: 0
rx10_congst_umr: 0
rx10_arfs_err: 0
rx10_xdp_tx_xmit: 0
rx10_xdp_tx_full: 0
rx10_xdp_tx_err: 0
rx10_xdp_tx_cqes: 0
rx11_packets: 20352190494
rx11_bytes: 26851034425372
rx11_csum_complete: 20352190494
rx11_csum_unnecessary: 0
rx11_csum_unnecessary_inner: 0
rx11_csum_none: 0
rx11_xdp_drop: 0
rx11_xdp_redirect: 0
rx11_lro_packets: 0
rx11_lro_bytes: 0
rx11_ecn_mark: 0
rx11_removed_vlan_packets: 20352190494
rx11_wqe_err: 0
rx11_mpwqe_filler_cqes: 0
rx11_mpwqe_filler_strides: 0
rx11_buff_alloc_err: 0
rx11_cqe_compress_blks: 501992147
rx11_cqe_compress_pkts: 1140398610
rx11_page_reuse: 0
rx11_cache_reuse: 10071721531
rx11_cache_full: 104371621
rx11_cache_empty: 37376
rx11_cache_busy: 104373697
rx11_cache_waive: 0
rx11_congst_umr: 0
rx11_arfs_err: 0
rx11_xdp_tx_xmit: 0
rx11_xdp_tx_full: 0
rx11_xdp_tx_err: 0
rx11_xdp_tx_cqes: 0
rx12_packets: 19934747149
rx12_bytes: 26296478787829
rx12_csum_complete: 19934747149
rx12_csum_unnecessary: 0
rx12_csum_unnecessary_inner: 0
rx12_csum_none: 0
rx12_xdp_drop: 0
rx12_xdp_redirect: 0
rx12_lro_packets: 0
rx12_lro_bytes: 0
rx12_ecn_mark: 0
rx12_removed_vlan_packets: 19934747149
rx12_wqe_err: 0
rx12_mpwqe_filler_cqes: 0
rx12_mpwqe_filler_strides: 0
rx12_buff_alloc_err: 0
rx12_cqe_compress_blks: 443350570
rx12_cqe_compress_pkts: 995997220
rx12_page_reuse: 0
rx12_cache_reuse: 9864934174
rx12_cache_full: 102437428
rx12_cache_empty: 37376
rx12_cache_busy: 102439382
rx12_cache_waive: 0
rx12_congst_umr: 0
rx12_arfs_err: 0
rx12_xdp_tx_xmit: 0
rx12_xdp_tx_full: 0
rx12_xdp_tx_err: 0
rx12_xdp_tx_cqes: 0
rx13_packets: 19866908096
rx13_bytes: 26160931936186
rx13_csum_complete: 19866908096
rx13_csum_unnecessary: 0
rx13_csum_unnecessary_inner: 0
rx13_csum_none: 0
rx13_xdp_drop: 0
rx13_xdp_redirect: 0
rx13_lro_packets: 0
rx13_lro_bytes: 0
rx13_ecn_mark: 0
rx13_removed_vlan_packets: 19866908096
rx13_wqe_err: 0
rx13_mpwqe_filler_cqes: 0
rx13_mpwqe_filler_strides: 0
rx13_buff_alloc_err: 0
rx13_cqe_compress_blks: 413640141
rx13_cqe_compress_pkts: 926175066
rx13_page_reuse: 0
rx13_cache_reuse: 36358610
rx13_cache_full: 9897092921
rx13_cache_empty: 37376
rx13_cache_busy: 9897095422
rx13_cache_waive: 0
rx13_congst_umr: 0
rx13_arfs_err: 0
rx13_xdp_tx_xmit: 0
rx13_xdp_tx_full: 0
rx13_xdp_tx_err: 0
rx13_xdp_tx_cqes: 0
rx14_packets: 20229035746
rx14_bytes: 26655092809172
rx14_csum_complete: 20229035746
rx14_csum_unnecessary: 0
rx14_csum_unnecessary_inner: 0
rx14_csum_none: 0
rx14_xdp_drop: 0
rx14_xdp_redirect: 0
rx14_lro_packets: 0
rx14_lro_bytes: 0
rx14_ecn_mark: 0
rx14_removed_vlan_packets: 20229035746
rx14_wqe_err: 0
rx14_mpwqe_filler_cqes: 0
rx14_mpwqe_filler_strides: 0
rx14_buff_alloc_err: 0
rx14_cqe_compress_blks: 460990337
rx14_cqe_compress_pkts: 1041287948
rx14_page_reuse: 0
rx14_cache_reuse: 25649275
rx14_cache_full: 10088866045
rx14_cache_empty: 37376
rx14_cache_busy: 10088868574
rx14_cache_waive: 0
rx14_congst_umr: 0
rx14_arfs_err: 0
rx14_xdp_tx_xmit: 0
rx14_xdp_tx_full: 0
rx14_xdp_tx_err: 0
rx14_xdp_tx_cqes: 0
rx15_packets: 20528177154
rx15_bytes: 27029263893264
rx15_csum_complete: 20528177154
rx15_csum_unnecessary: 0
rx15_csum_unnecessary_inner: 0
rx15_csum_none: 0
rx15_xdp_drop: 0
rx15_xdp_redirect: 0
rx15_lro_packets: 0
rx15_lro_bytes: 0
rx15_ecn_mark: 0
rx15_removed_vlan_packets: 20528177154
rx15_wqe_err: 0
rx15_mpwqe_filler_cqes: 0
rx15_mpwqe_filler_strides: 0
rx15_buff_alloc_err: 0
rx15_cqe_compress_blks: 476776176
rx15_cqe_compress_pkts: 1076153263
rx15_page_reuse: 0
rx15_cache_reuse: 48426735
rx15_cache_full: 10215659289
rx15_cache_empty: 37376
rx15_cache_busy: 10215661817
rx15_cache_waive: 0
rx15_congst_umr: 0
rx15_arfs_err: 0
rx15_xdp_tx_xmit: 0
rx15_xdp_tx_full: 0
rx15_xdp_tx_err: 0
rx15_xdp_tx_cqes: 0
rx16_packets: 16104078098
rx16_bytes: 21256361789679
rx16_csum_complete: 16104078098
rx16_csum_unnecessary: 0
rx16_csum_unnecessary_inner: 0
rx16_csum_none: 0
rx16_xdp_drop: 0
rx16_xdp_redirect: 0
rx16_lro_packets: 0
rx16_lro_bytes: 0
rx16_ecn_mark: 0
rx16_removed_vlan_packets: 16104078098
rx16_wqe_err: 0
rx16_mpwqe_filler_cqes: 0
rx16_mpwqe_filler_strides: 0
rx16_buff_alloc_err: 0
rx16_cqe_compress_blks: 352082054
rx16_cqe_compress_pkts: 787161670
rx16_page_reuse: 0
rx16_cache_reuse: 25912567
rx16_cache_full: 8026124051
rx16_cache_empty: 37376
rx16_cache_busy: 8026126465
rx16_cache_waive: 0
rx16_congst_umr: 0
rx16_arfs_err: 0
rx16_xdp_tx_xmit: 0
rx16_xdp_tx_full: 0
rx16_xdp_tx_err: 0
rx16_xdp_tx_cqes: 0
rx17_packets: 16314055017
rx17_bytes: 21589139030173
rx17_csum_complete: 16314055017
rx17_csum_unnecessary: 0
rx17_csum_unnecessary_inner: 0
rx17_csum_none: 0
rx17_xdp_drop: 0
rx17_xdp_redirect: 0
rx17_lro_packets: 0
rx17_lro_bytes: 0
rx17_ecn_mark: 0
rx17_removed_vlan_packets: 16314055017
rx17_wqe_err: 0
rx17_mpwqe_filler_cqes: 0
rx17_mpwqe_filler_strides: 0
rx17_buff_alloc_err: 0
rx17_cqe_compress_blks: 387834541
rx17_cqe_compress_pkts: 871851081
rx17_page_reuse: 0
rx17_cache_reuse: 24021313
rx17_cache_full: 8133003829
rx17_cache_empty: 37376
rx17_cache_busy: 8133006175
rx17_cache_waive: 0
rx17_congst_umr: 0
rx17_arfs_err: 0
rx17_xdp_tx_xmit: 0
rx17_xdp_tx_full: 0
rx17_xdp_tx_err: 0
rx17_xdp_tx_cqes: 0
rx18_packets: 16439016814
rx18_bytes: 21648651917475
rx18_csum_complete: 16439016814
rx18_csum_unnecessary: 0
rx18_csum_unnecessary_inner: 0
rx18_csum_none: 0
rx18_xdp_drop: 0
rx18_xdp_redirect: 0
rx18_lro_packets: 0
rx18_lro_bytes: 0
rx18_ecn_mark: 0
rx18_removed_vlan_packets: 16439016814
rx18_wqe_err: 0
rx18_mpwqe_filler_cqes: 0
rx18_mpwqe_filler_strides: 0
rx18_buff_alloc_err: 0
rx18_cqe_compress_blks: 375066666
rx18_cqe_compress_pkts: 843563974
rx18_page_reuse: 0
rx18_cache_reuse: 8151064266
rx18_cache_full: 68442025
rx18_cache_empty: 37376
rx18_cache_busy: 68444122
rx18_cache_waive: 0
rx18_congst_umr: 0
rx18_arfs_err: 0
rx18_xdp_tx_xmit: 0
rx18_xdp_tx_full: 0
rx18_xdp_tx_err: 0
rx18_xdp_tx_cqes: 0
rx19_packets: 16641223506
rx19_bytes: 21964749940935
rx19_csum_complete: 16641223506
rx19_csum_unnecessary: 0
rx19_csum_unnecessary_inner: 0
rx19_csum_none: 0
rx19_xdp_drop: 0
rx19_xdp_redirect: 0
rx19_lro_packets: 0
rx19_lro_bytes: 0
rx19_ecn_mark: 0
rx19_removed_vlan_packets: 16641223506
rx19_wqe_err: 0
rx19_mpwqe_filler_cqes: 0
rx19_mpwqe_filler_strides: 0
rx19_buff_alloc_err: 0
rx19_cqe_compress_blks: 387825932
rx19_cqe_compress_pkts: 872266355
rx19_page_reuse: 0
rx19_cache_reuse: 116433620
rx19_cache_full: 8204175954
rx19_cache_empty: 37376
rx19_cache_busy: 8204178120
rx19_cache_waive: 0
rx19_congst_umr: 0
rx19_arfs_err: 0
rx19_xdp_tx_xmit: 0
rx19_xdp_tx_full: 0
rx19_xdp_tx_err: 0
rx19_xdp_tx_cqes: 0
rx20_packets: 16206927741
rx20_bytes: 21387447038430
rx20_csum_complete: 16206927741
rx20_csum_unnecessary: 0
rx20_csum_unnecessary_inner: 0
rx20_csum_none: 0
rx20_xdp_drop: 0
rx20_xdp_redirect: 0
rx20_lro_packets: 0
rx20_lro_bytes: 0
rx20_ecn_mark: 0
rx20_removed_vlan_packets: 16206927741
rx20_wqe_err: 0
rx20_mpwqe_filler_cqes: 0
rx20_mpwqe_filler_strides: 0
rx20_buff_alloc_err: 0
rx20_cqe_compress_blks: 370144620
rx20_cqe_compress_pkts: 829122671
rx20_page_reuse: 0
rx20_cache_reuse: 8053733744
rx20_cache_full: 49728026
rx20_cache_empty: 37376
rx20_cache_busy: 49730116
rx20_cache_waive: 0
rx20_congst_umr: 0
rx20_arfs_err: 0
rx20_xdp_tx_xmit: 0
rx20_xdp_tx_full: 0
rx20_xdp_tx_err: 0
rx20_xdp_tx_cqes: 0
rx21_packets: 16562361314
rx21_bytes: 21856653284356
rx21_csum_complete: 16562361314
rx21_csum_unnecessary: 0
rx21_csum_unnecessary_inner: 0
rx21_csum_none: 0
rx21_xdp_drop: 0
rx21_xdp_redirect: 0
rx21_lro_packets: 0
rx21_lro_bytes: 0
rx21_ecn_mark: 0
rx21_removed_vlan_packets: 16562361314
rx21_wqe_err: 0
rx21_mpwqe_filler_cqes: 0
rx21_mpwqe_filler_strides: 0
rx21_buff_alloc_err: 0
rx21_cqe_compress_blks: 350790425
rx21_cqe_compress_pkts: 783850729
rx21_page_reuse: 0
rx21_cache_reuse: 28077493
rx21_cache_full: 8253100706
rx21_cache_empty: 37376
rx21_cache_busy: 8253103147
rx21_cache_waive: 0
rx21_congst_umr: 0
rx21_arfs_err: 0
rx21_xdp_tx_xmit: 0
rx21_xdp_tx_full: 0
rx21_xdp_tx_err: 0
rx21_xdp_tx_cqes: 0
rx22_packets: 16350307571
rx22_bytes: 21408575325592
rx22_csum_complete: 16350307571
rx22_csum_unnecessary: 0
rx22_csum_unnecessary_inner: 0
rx22_csum_none: 0
rx22_xdp_drop: 0
rx22_xdp_redirect: 0
rx22_lro_packets: 0
rx22_lro_bytes: 0
rx22_ecn_mark: 0
rx22_removed_vlan_packets: 16350307571
rx22_wqe_err: 0
rx22_mpwqe_filler_cqes: 0
rx22_mpwqe_filler_strides: 0
rx22_buff_alloc_err: 0
rx22_cqe_compress_blks: 353531065
rx22_cqe_compress_pkts: 790814415
rx22_page_reuse: 0
rx22_cache_reuse: 16934343
rx22_cache_full: 8158216889
rx22_cache_empty: 37376
rx22_cache_busy: 8158219417
rx22_cache_waive: 0
rx22_congst_umr: 0
rx22_arfs_err: 0
rx22_xdp_tx_xmit: 0
rx22_xdp_tx_full: 0
rx22_xdp_tx_err: 0
rx22_xdp_tx_cqes: 0
rx23_packets: 16019811764
rx23_bytes: 21137182570985
rx23_csum_complete: 16019811764
rx23_csum_unnecessary: 0
rx23_csum_unnecessary_inner: 0
rx23_csum_none: 0
rx23_xdp_drop: 0
rx23_xdp_redirect: 0
rx23_lro_packets: 0
rx23_lro_bytes: 0
rx23_ecn_mark: 0
rx23_removed_vlan_packets: 16019811764
rx23_wqe_err: 0
rx23_mpwqe_filler_cqes: 0
rx23_mpwqe_filler_strides: 0
rx23_buff_alloc_err: 0
rx23_cqe_compress_blks: 349733033
rx23_cqe_compress_pkts: 781248862
rx23_page_reuse: 0
rx23_cache_reuse: 33422343
rx23_cache_full: 7976481152
rx23_cache_empty: 37376
rx23_cache_busy: 7976483525
rx23_cache_waive: 0
rx23_congst_umr: 0
rx23_arfs_err: 0
rx23_xdp_tx_xmit: 0
rx23_xdp_tx_full: 0
rx23_xdp_tx_err: 0
rx23_xdp_tx_cqes: 0
rx24_packets: 16212040646
rx24_bytes: 21393399325700
rx24_csum_complete: 16212040646
rx24_csum_unnecessary: 0
rx24_csum_unnecessary_inner: 0
rx24_csum_none: 0
rx24_xdp_drop: 0
rx24_xdp_redirect: 0
rx24_lro_packets: 0
rx24_lro_bytes: 0
rx24_ecn_mark: 0
rx24_removed_vlan_packets: 16212040646
rx24_wqe_err: 0
rx24_mpwqe_filler_cqes: 0
rx24_mpwqe_filler_strides: 0
rx24_buff_alloc_err: 0
rx24_cqe_compress_blks: 379833752
rx24_cqe_compress_pkts: 852020179
rx24_page_reuse: 0
rx24_cache_reuse: 8033552512
rx24_cache_full: 72465843
rx24_cache_empty: 37376
rx24_cache_busy: 72467789
rx24_cache_waive: 0
rx24_congst_umr: 0
rx24_arfs_err: 0
rx24_xdp_tx_xmit: 0
rx24_xdp_tx_full: 0
rx24_xdp_tx_err: 0
rx24_xdp_tx_cqes: 0
rx25_packets: 16412186257
rx25_bytes: 21651198388407
rx25_csum_complete: 16412186257
rx25_csum_unnecessary: 0
rx25_csum_unnecessary_inner: 0
rx25_csum_none: 0
rx25_xdp_drop: 0
rx25_xdp_redirect: 0
rx25_lro_packets: 0
rx25_lro_bytes: 0
rx25_ecn_mark: 0
rx25_removed_vlan_packets: 16412186257
rx25_wqe_err: 0
rx25_mpwqe_filler_cqes: 0
rx25_mpwqe_filler_strides: 0
rx25_buff_alloc_err: 0
rx25_cqe_compress_blks: 383979685
rx25_cqe_compress_pkts: 861985772
rx25_page_reuse: 0
rx25_cache_reuse: 8129807841
rx25_cache_full: 76283342
rx25_cache_empty: 37376
rx25_cache_busy: 76285271
rx25_cache_waive: 0
rx25_congst_umr: 0
rx25_arfs_err: 0
rx25_xdp_tx_xmit: 0
rx25_xdp_tx_full: 0
rx25_xdp_tx_err: 0
rx25_xdp_tx_cqes: 0
rx26_packets: 16304310003
rx26_bytes: 21571217538721
rx26_csum_complete: 16304310003
rx26_csum_unnecessary: 0
rx26_csum_unnecessary_inner: 0
rx26_csum_none: 0
rx26_xdp_drop: 0
rx26_xdp_redirect: 0
rx26_lro_packets: 0
rx26_lro_bytes: 0
rx26_ecn_mark: 0
rx26_removed_vlan_packets: 16304310003
rx26_wqe_err: 0
rx26_mpwqe_filler_cqes: 0
rx26_mpwqe_filler_strides: 0
rx26_buff_alloc_err: 0
rx26_cqe_compress_blks: 353314041
rx26_cqe_compress_pkts: 788838424
rx26_page_reuse: 0
rx26_cache_reuse: 19673790
rx26_cache_full: 8132478659
rx26_cache_empty: 37376
rx26_cache_busy: 8132481198
rx26_cache_waive: 0
rx26_congst_umr: 0
rx26_arfs_err: 0
rx26_xdp_tx_xmit: 0
rx26_xdp_tx_full: 0
rx26_xdp_tx_err: 0
rx26_xdp_tx_cqes: 0
rx27_packets: 16171856079
rx27_bytes: 21376891736540
rx27_csum_complete: 16171856079
rx27_csum_unnecessary: 0
rx27_csum_unnecessary_inner: 0
rx27_csum_none: 0
rx27_xdp_drop: 0
rx27_xdp_redirect: 0
rx27_lro_packets: 0
rx27_lro_bytes: 0
rx27_ecn_mark: 0
rx27_removed_vlan_packets: 16171856079
rx27_wqe_err: 0
rx27_mpwqe_filler_cqes: 0
rx27_mpwqe_filler_strides: 0
rx27_buff_alloc_err: 0
rx27_cqe_compress_blks: 386632845
rx27_cqe_compress_pkts: 869362576
rx27_page_reuse: 0
rx27_cache_reuse: 10070560
rx27_cache_full: 8075854928
rx27_cache_empty: 37376
rx27_cache_busy: 8075857468
rx27_cache_waive: 0
rx27_congst_umr: 0
rx27_arfs_err: 0
rx27_xdp_tx_xmit: 0
rx27_xdp_tx_full: 0
rx27_xdp_tx_err: 0
rx27_xdp_tx_cqes: 0
rx28 through rx55: all counters 0, except rxN_cache_empty: 2560 on each
(these queues carry no traffic - only 28 RSS queues are bound to the
local NUMA node, so rx28..rx55 are unused)
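For anyone wading through a dump like the above, a small helper can group the per-queue `name: value` lines and flag queues whose page cache is mostly missing (high `cache_full` vs. `cache_reuse`, as on rx17/rx19/rx21-rx23/rx26/rx27 here). This is just a sketch; the counter names and the reuse-ratio interpretation are assumptions based on the mlx5 stats shown, not an official tool.

```python
# Sketch: summarize per-queue counters from an `ethtool -S <iface>`-style
# dump. Assumes mlx5-style "rxN_<name>: <value>" / "txN_<name>: <value>"
# lines as in the output above.
import re
from collections import defaultdict

def summarize(stats_text):
    """Return {queue: (packets, cache_reuse_ratio)} per rx/tx queue."""
    queues = defaultdict(dict)
    for line in stats_text.splitlines():
        m = re.match(r"\s*(rx|tx)(\d+)_(\w+):\s*(\d+)\s*$", line)
        if m:
            direction, qid, name, value = m.groups()
            queues[(direction, int(qid))][name] = int(value)
    report = {}
    for (direction, qid), c in sorted(queues.items()):
        pkts = c.get("packets", 0)
        reuse = c.get("cache_reuse", 0)
        full = c.get("cache_full", 0)
        # Fraction of page-cache lookups that actually reused a page;
        # near 0 means the driver page cache is effectively bypassed.
        ratio = reuse / (reuse + full) if (reuse + full) else 0.0
        report[f"{direction}{qid}"] = (pkts, round(ratio, 3))
    return report

# Example with rx17's numbers from the dump above:
sample = """rx17_packets: 16314055017
rx17_cache_reuse: 24021313
rx17_cache_full: 8133003829
"""
print(summarize(sample))  # rx17 reuse ratio comes out near 0
```

Run against the full dump, this makes the split obvious at a glance: some queues (rx18, rx20, rx24, rx25) reuse the cache almost always, while the rest almost never do.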
tx0_packets: 24512439668
tx0_bytes: 15287569052791
tx0_tso_packets: 1536157106
tx0_tso_bytes: 8571753637944
tx0_tso_inner_packets: 0
tx0_tso_inner_bytes: 0
tx0_csum_partial: 2132156117
tx0_csum_partial_inner: 0
tx0_added_vlan_packets: 19906601448
tx0_nop: 308098536
tx0_csum_none: 17774445331
tx0_stopped: 19625
tx0_dropped: 0
tx0_xmit_more: 67864870
tx0_recover: 0
tx0_cqes: 19838744246
tx0_wake: 19624
tx0_cqe_err: 0
tx1_packets: 22598557053
tx1_bytes: 13568850145010
tx1_tso_packets: 1369529475
tx1_tso_bytes: 7661777265382
tx1_tso_inner_packets: 0
tx1_tso_inner_bytes: 0
tx1_csum_partial: 1884639496
tx1_csum_partial_inner: 0
tx1_added_vlan_packets: 18468333696
tx1_nop: 281301783
tx1_csum_none: 16583694200
tx1_stopped: 19457
tx1_dropped: 0
tx1_xmit_more: 55170875
tx1_recover: 0
tx1_cqes: 18413169824
tx1_wake: 19455
tx1_cqe_err: 0
tx2_packets: 22821611433
tx2_bytes: 13752535163683
tx2_tso_packets: 1396978825
tx2_tso_bytes: 7774704508463
tx2_tso_inner_packets: 0
tx2_tso_inner_bytes: 0
tx2_csum_partial: 1897834558
tx2_csum_partial_inner: 0
tx2_added_vlan_packets: 18641958085
tx2_nop: 286934891
tx2_csum_none: 16744123527
tx2_stopped: 13214
tx2_dropped: 0
tx2_xmit_more: 61749446
tx2_recover: 0
tx2_cqes: 18580215654
tx2_wake: 13214
tx2_cqe_err: 0
tx3_packets: 22580809948
tx3_bytes: 13730542936609
tx3_tso_packets: 1370434579
tx3_tso_bytes: 7605636711455
tx3_tso_inner_packets: 0
tx3_tso_inner_bytes: 0
tx3_csum_partial: 1865573748
tx3_csum_partial_inner: 0
tx3_added_vlan_packets: 18491873644
tx3_nop: 281195875
tx3_csum_none: 16626299896
tx3_stopped: 12542
tx3_dropped: 0
tx3_xmit_more: 57681647
tx3_recover: 0
tx3_cqes: 18434198757
tx3_wake: 12540
tx3_cqe_err: 0
tx4_packets: 27801801208
tx4_bytes: 17058453171137
tx4_tso_packets: 1740500105
tx4_tso_bytes: 9474905622036
tx4_tso_inner_packets: 0
tx4_tso_inner_bytes: 0
tx4_csum_partial: 2279225376
tx4_csum_partial_inner: 0
tx4_added_vlan_packets: 22744081633
tx4_nop: 349753979
tx4_csum_none: 20464856257
tx4_stopped: 14816
tx4_dropped: 0
tx4_xmit_more: 65469322
tx4_recover: 0
tx4_cqes: 22678618972
tx4_wake: 14816
tx4_cqe_err: 0
tx5_packets: 25099783024
tx5_bytes: 14917740698381
tx5_tso_packets: 1512988013
tx5_tso_bytes: 8571208921023
tx5_tso_inner_packets: 0
tx5_tso_inner_bytes: 0
tx5_csum_partial: 2078498561
tx5_csum_partial_inner: 0
tx5_added_vlan_packets: 20465533760
tx5_nop: 312614719
tx5_csum_none: 18387035199
tx5_stopped: 4605
tx5_dropped: 0
tx5_xmit_more: 64188936
tx5_recover: 0
tx5_cqes: 20401350718
tx5_wake: 4604
tx5_cqe_err: 0
tx6_packets: 25025504896
tx6_bytes: 14908021946070
tx6_tso_packets: 1515718977
tx6_tso_bytes: 8511442522461
tx6_tso_inner_packets: 0
tx6_tso_inner_bytes: 0
tx6_csum_partial: 2056378610
tx6_csum_partial_inner: 0
tx6_added_vlan_packets: 20434066400
tx6_nop: 310594020
tx6_csum_none: 18377687790
tx6_stopped: 15234
tx6_dropped: 0
tx6_xmit_more: 61130422
tx6_recover: 0
tx6_cqes: 20372943611
tx6_wake: 15234
tx6_cqe_err: 0
tx7_packets: 25457096169
tx7_bytes: 15456289446172
tx7_tso_packets: 1553342799
tx7_tso_bytes: 8764550988105
tx7_tso_inner_packets: 0
tx7_tso_inner_bytes: 0
tx7_csum_partial: 2105765233
tx7_csum_partial_inner: 0
tx7_added_vlan_packets: 20720382377
tx7_nop: 319044853
tx7_csum_none: 18614617145
tx7_stopped: 18745
tx7_dropped: 0
tx7_xmit_more: 57050107
tx7_recover: 0
tx7_cqes: 20663340775
tx7_wake: 18746
tx7_cqe_err: 0
tx8_packets: 25389771649
tx8_bytes: 15225503883962
tx8_tso_packets: 1563367648
tx8_tso_bytes: 8710384514258
tx8_tso_inner_packets: 0
tx8_tso_inner_bytes: 0
tx8_csum_partial: 2106586634
tx8_csum_partial_inner: 0
tx8_added_vlan_packets: 20704676274
tx8_nop: 318149261
tx8_csum_none: 18598089640
tx8_stopped: 4733
tx8_dropped: 0
tx8_xmit_more: 61014317
tx8_recover: 0
tx8_cqes: 20643667301
tx8_wake: 4735
tx8_cqe_err: 0
tx9_packets: 25521500166
tx9_bytes: 15302293145755
tx9_tso_packets: 1546316697
tx9_tso_bytes: 8770688145926
tx9_tso_inner_packets: 0
tx9_tso_inner_bytes: 0
tx9_csum_partial: 2097652880
tx9_csum_partial_inner: 0
tx9_added_vlan_packets: 20778408432
tx9_nop: 318538543
tx9_csum_none: 18680755556
tx9_stopped: 16118
tx9_dropped: 0
tx9_xmit_more: 68509728
tx9_recover: 0
tx9_cqes: 20709906498
tx9_wake: 16118
tx9_cqe_err: 0
tx10_packets: 25451605829
tx10_bytes: 15386896170792
tx10_tso_packets: 1576473520
tx10_tso_bytes: 8880888676383
tx10_tso_inner_packets: 0
tx10_tso_inner_bytes: 0
tx10_csum_partial: 2129796141
tx10_csum_partial_inner: 0
tx10_added_vlan_packets: 20659622590
tx10_nop: 319117433
tx10_csum_none: 18529826450
tx10_stopped: 20187
tx10_dropped: 0
tx10_xmit_more: 58892184
tx10_recover: 0
tx10_cqes: 20600737739
tx10_wake: 20188
tx10_cqe_err: 0
tx11_packets: 27008919793
tx11_bytes: 16587719213058
tx11_tso_packets: 1734884654
tx11_tso_bytes: 9475681471870
tx11_tso_inner_packets: 0
tx11_tso_inner_bytes: 0
tx11_csum_partial: 2296162292
tx11_csum_partial_inner: 0
tx11_added_vlan_packets: 21943096263
tx11_nop: 344188182
tx11_csum_none: 19646933971
tx11_stopped: 9703
tx11_dropped: 0
tx11_xmit_more: 66530718
tx11_recover: 0
tx11_cqes: 21876571667
tx11_wake: 9704
tx11_cqe_err: 0
tx12_packets: 25969493269
tx12_bytes: 15980767963416
tx12_tso_packets: 1671396456
tx12_tso_bytes: 9268973672821
tx12_tso_inner_packets: 0
tx12_tso_inner_bytes: 0
tx12_csum_partial: 2243809182
tx12_csum_partial_inner: 0
tx12_added_vlan_packets: 20980642456
tx12_nop: 330241007
tx12_csum_none: 18736833276
tx12_stopped: 10341
tx12_dropped: 0
tx12_xmit_more: 57834100
tx12_recover: 0
tx12_cqes: 20922815079
tx12_wake: 10342
tx12_cqe_err: 0
tx13_packets: 25332762261
tx13_bytes: 15353213283280
tx13_tso_packets: 1577433599
tx13_tso_bytes: 8785240284281
tx13_tso_inner_packets: 0
tx13_tso_inner_bytes: 0
tx13_csum_partial: 2110640515
tx13_csum_partial_inner: 0
tx13_added_vlan_packets: 20605670910
tx13_nop: 319805741
tx13_csum_none: 18495030395
tx13_stopped: 7006
tx13_dropped: 0
tx13_xmit_more: 58314402
tx13_recover: 0
tx13_cqes: 20547362770
tx13_wake: 7008
tx13_cqe_err: 0
tx14_packets: 26333743548
tx14_bytes: 16070719060573
tx14_tso_packets: 1677922970
tx14_tso_bytes: 9240299765487
tx14_tso_inner_packets: 0
tx14_tso_inner_bytes: 0
tx14_csum_partial: 2215668906
tx14_csum_partial_inner: 0
tx14_added_vlan_packets: 21384410786
tx14_nop: 332734939
tx14_csum_none: 19168741880
tx14_stopped: 13160
tx14_dropped: 0
tx14_xmit_more: 57650391
tx14_recover: 0
tx14_cqes: 21326767783
tx14_wake: 13161
tx14_cqe_err: 0
tx15_packets: 26824968971
tx15_bytes: 16687994233452
tx15_tso_packets: 1755745052
tx15_tso_bytes: 9533814012441
tx15_tso_inner_packets: 0
tx15_tso_inner_bytes: 0
tx15_csum_partial: 2304778064
tx15_csum_partial_inner: 0
tx15_added_vlan_packets: 21740906107
tx15_nop: 344143287
tx15_csum_none: 19436128058
tx15_stopped: 75
tx15_dropped: 0
tx15_xmit_more: 63325832
tx15_recover: 0
tx15_cqes: 21677585345
tx15_wake: 74
tx15_cqe_err: 0
tx16_packets: 24488158946
tx16_bytes: 15027415004570
tx16_tso_packets: 1559127391
tx16_tso_bytes: 8658691917845
tx16_tso_inner_packets: 0
tx16_tso_inner_bytes: 0
tx16_csum_partial: 2075856395
tx16_csum_partial_inner: 0
tx16_added_vlan_packets: 19835695731
tx16_nop: 308464189
tx16_csum_none: 17759839340
tx16_stopped: 4567
tx16_dropped: 0
tx16_xmit_more: 62631422
tx16_recover: 0
tx16_cqes: 19773070012
tx16_wake: 4568
tx16_cqe_err: 0
tx17_packets: 24700413784
tx17_bytes: 15216529713715
tx17_tso_packets: 1597555108
tx17_tso_bytes: 8773728661243
tx17_tso_inner_packets: 0
tx17_tso_inner_bytes: 0
tx17_csum_partial: 2127177297
tx17_csum_partial_inner: 0
tx17_added_vlan_packets: 20003144561
tx17_nop: 313356918
tx17_csum_none: 17875967264
tx17_stopped: 12572
tx17_dropped: 0
tx17_xmit_more: 62742980
tx17_recover: 0
tx17_cqes: 19940407615
tx17_wake: 12573
tx17_cqe_err: 0
tx18_packets: 24887710046
tx18_bytes: 15245034034664
tx18_tso_packets: 1582550520
tx18_tso_bytes: 8782692335483
tx18_tso_inner_packets: 0
tx18_tso_inner_bytes: 0
tx18_csum_partial: 2084514331
tx18_csum_partial_inner: 0
tx18_added_vlan_packets: 20173879181
tx18_nop: 314818702
tx18_csum_none: 18089364850
tx18_stopped: 21366
tx18_dropped: 0
tx18_xmit_more: 62485819
tx18_recover: 0
tx18_cqes: 20111400935
tx18_wake: 21366
tx18_cqe_err: 0
tx19_packets: 24831057648
tx19_bytes: 15164663890576
tx19_tso_packets: 1599135489
tx19_tso_bytes: 8756045449746
tx19_tso_inner_packets: 0
tx19_tso_inner_bytes: 0
tx19_csum_partial: 2119746608
tx19_csum_partial_inner: 0
tx19_added_vlan_packets: 20143573903
tx19_nop: 316966450
tx19_csum_none: 18023827295
tx19_stopped: 11431
tx19_dropped: 0
tx19_xmit_more: 57535904
tx19_recover: 0
tx19_cqes: 20086045325
tx19_wake: 11431
tx19_cqe_err: 0
tx20_packets: 21943735263
tx20_bytes: 13528749492187
tx20_tso_packets: 1390048103
tx20_tso_bytes: 7629058809637
tx20_tso_inner_packets: 0
tx20_tso_inner_bytes: 0
tx20_csum_partial: 1848533941
tx20_csum_partial_inner: 0
tx20_added_vlan_packets: 17861417651
tx20_nop: 276840365
tx20_csum_none: 16012883710
tx20_stopped: 38457
tx20_dropped: 0
tx20_xmit_more: 57042753
tx20_recover: 0
tx20_cqes: 17804384839
tx20_wake: 38457
tx20_cqe_err: 0
tx21_packets: 21476926958
tx21_bytes: 13096410597896
tx21_tso_packets: 1367724090
tx21_tso_bytes: 7568364585127
tx21_tso_inner_packets: 0
tx21_tso_inner_bytes: 0
tx21_csum_partial: 1830570727
tx21_csum_partial_inner: 0
tx21_added_vlan_packets: 17421087814
tx21_nop: 270611519
tx21_csum_none: 15590517087
tx21_stopped: 31213
tx21_dropped: 0
tx21_xmit_more: 60305389
tx21_recover: 0
tx21_cqes: 17360791205
tx21_wake: 31213
tx21_cqe_err: 0
tx22_packets: 21819106444
tx22_bytes: 13492871887100
tx22_tso_packets: 1387002018
tx22_tso_bytes: 7617705727669
tx22_tso_inner_packets: 0
tx22_tso_inner_bytes: 0
tx22_csum_partial: 1853632107
tx22_csum_partial_inner: 0
tx22_added_vlan_packets: 17743255447
tx22_nop: 274820992
tx22_csum_none: 15889623340
tx22_stopped: 24814
tx22_dropped: 0
tx22_xmit_more: 60811304
tx22_recover: 0
tx22_cqes: 17682451111
tx22_wake: 24815
tx22_cqe_err: 0
tx23_packets: 21830455800
tx23_bytes: 13427551902532
tx23_tso_packets: 1388556038
tx23_tso_bytes: 7604040587125
tx23_tso_inner_packets: 0
tx23_tso_inner_bytes: 0
tx23_csum_partial: 1850819694
tx23_csum_partial_inner: 0
tx23_added_vlan_packets: 17761271122
tx23_nop: 275142775
tx23_csum_none: 15910451428
tx23_stopped: 29899
tx23_dropped: 0
tx23_xmit_more: 58924909
tx23_recover: 0
tx23_cqes: 17702355187
tx23_wake: 29898
tx23_cqe_err: 0
tx24_packets: 21961484213
tx24_bytes: 13531373062497
tx24_tso_packets: 1394697504
tx24_tso_bytes: 7663866609308
tx24_tso_inner_packets: 0
tx24_tso_inner_bytes: 0
tx24_csum_partial: 1857072074
tx24_csum_partial_inner: 0
tx24_added_vlan_packets: 17856887568
tx24_nop: 276352855
tx24_csum_none: 15999815494
tx24_stopped: 33924
tx24_dropped: 0
tx24_xmit_more: 63992426
tx24_recover: 0
tx24_cqes: 17792905243
tx24_wake: 33923
tx24_cqe_err: 0
tx25_packets: 21853593838
tx25_bytes: 13357487830519
tx25_tso_packets: 1398822411
tx25_tso_bytes: 7691191518838
tx25_tso_inner_packets: 0
tx25_tso_inner_bytes: 0
tx25_csum_partial: 1869483109
tx25_csum_partial_inner: 0
tx25_added_vlan_packets: 17734634614
tx25_nop: 276327643
tx25_csum_none: 15865151505
tx25_stopped: 38651
tx25_dropped: 0
tx25_xmit_more: 56410535
tx25_recover: 0
tx25_cqes: 17678234537
tx25_wake: 38650
tx25_cqe_err: 0
tx26_packets: 21480261205
tx26_bytes: 13148973015935
tx26_tso_packets: 1348132284
tx26_tso_bytes: 7523489481775
tx26_tso_inner_packets: 0
tx26_tso_inner_bytes: 0
tx26_csum_partial: 1839740745
tx26_csum_partial_inner: 0
tx26_added_vlan_packets: 17430592911
tx26_nop: 270367836
tx26_csum_none: 15590852166
tx26_stopped: 34044
tx26_dropped: 0
tx26_xmit_more: 59870114
tx26_recover: 0
tx26_cqes: 17370736612
tx26_wake: 34043
tx26_cqe_err: 0
tx27_packets: 22694273108
tx27_bytes: 14135473431004
tx27_tso_packets: 1418371875
tx27_tso_bytes: 7784842263038
tx27_tso_inner_packets: 0
tx27_tso_inner_bytes: 0
tx27_csum_partial: 1919170584
tx27_csum_partial_inner: 0
tx27_added_vlan_packets: 18520826023
tx27_nop: 286296272
tx27_csum_none: 16601655439
tx27_stopped: 38125
tx27_dropped: 0
tx27_xmit_more: 72749775
tx27_recover: 0
tx27_cqes: 18448090270
tx27_wake: 38127
tx27_cqe_err: 0
tx28_packets: 0
tx28_bytes: 0
tx28_tso_packets: 0
tx28_tso_bytes: 0
tx28_tso_inner_packets: 0
tx28_tso_inner_bytes: 0
tx28_csum_partial: 0
tx28_csum_partial_inner: 0
tx28_added_vlan_packets: 0
tx28_nop: 0
tx28_csum_none: 0
tx28_stopped: 0
tx28_dropped: 0
tx28_xmit_more: 0
tx28_recover: 0
tx28_cqes: 0
tx28_wake: 0
tx28_cqe_err: 0
tx29_packets: 3
tx29_bytes: 266
tx29_tso_packets: 0
tx29_tso_bytes: 0
tx29_tso_inner_packets: 0
tx29_tso_inner_bytes: 0
tx29_csum_partial: 0
tx29_csum_partial_inner: 0
tx29_added_vlan_packets: 0
tx29_nop: 0
tx29_csum_none: 3
tx29_stopped: 0
tx29_dropped: 0
tx29_xmit_more: 1
tx29_recover: 0
tx29_cqes: 2
tx29_wake: 0
tx29_cqe_err: 0
tx30_packets: 0
tx30_bytes: 0
tx30_tso_packets: 0
tx30_tso_bytes: 0
tx30_tso_inner_packets: 0
tx30_tso_inner_bytes: 0
tx30_csum_partial: 0
tx30_csum_partial_inner: 0
tx30_added_vlan_packets: 0
tx30_nop: 0
tx30_csum_none: 0
tx30_stopped: 0
tx30_dropped: 0
tx30_xmit_more: 0
tx30_recover: 0
tx30_cqes: 0
tx30_wake: 0
tx30_cqe_err: 0
tx31_packets: 0
tx31_bytes: 0
tx31_tso_packets: 0
tx31_tso_bytes: 0
tx31_tso_inner_packets: 0
tx31_tso_inner_bytes: 0
tx31_csum_partial: 0
tx31_csum_partial_inner: 0
tx31_added_vlan_packets: 0
tx31_nop: 0
tx31_csum_none: 0
tx31_stopped: 0
tx31_dropped: 0
tx31_xmit_more: 0
tx31_recover: 0
tx31_cqes: 0
tx31_wake: 0
tx31_cqe_err: 0
tx32_packets: 0
tx32_bytes: 0
tx32_tso_packets: 0
tx32_tso_bytes: 0
tx32_tso_inner_packets: 0
tx32_tso_inner_bytes: 0
tx32_csum_partial: 0
tx32_csum_partial_inner: 0
tx32_added_vlan_packets: 0
tx32_nop: 0
tx32_csum_none: 0
tx32_stopped: 0
tx32_dropped: 0
tx32_xmit_more: 0
tx32_recover: 0
tx32_cqes: 0
tx32_wake: 0
tx32_cqe_err: 0
tx33_packets: 0
tx33_bytes: 0
tx33_tso_packets: 0
tx33_tso_bytes: 0
tx33_tso_inner_packets: 0
tx33_tso_inner_bytes: 0
tx33_csum_partial: 0
tx33_csum_partial_inner: 0
tx33_added_vlan_packets: 0
tx33_nop: 0
tx33_csum_none: 0
tx33_stopped: 0
tx33_dropped: 0
tx33_xmit_more: 0
tx33_recover: 0
tx33_cqes: 0
tx33_wake: 0
tx33_cqe_err: 0
tx34_packets: 0
tx34_bytes: 0
tx34_tso_packets: 0
tx34_tso_bytes: 0
tx34_tso_inner_packets: 0
tx34_tso_inner_bytes: 0
tx34_csum_partial: 0
tx34_csum_partial_inner: 0
tx34_added_vlan_packets: 0
tx34_nop: 0
tx34_csum_none: 0
tx34_stopped: 0
tx34_dropped: 0
tx34_xmit_more: 0
tx34_recover: 0
tx34_cqes: 0
tx34_wake: 0
tx34_cqe_err: 0
tx35_packets: 0
tx35_bytes: 0
tx35_tso_packets: 0
tx35_tso_bytes: 0
tx35_tso_inner_packets: 0
tx35_tso_inner_bytes: 0
tx35_csum_partial: 0
tx35_csum_partial_inner: 0
tx35_added_vlan_packets: 0
tx35_nop: 0
tx35_csum_none: 0
tx35_stopped: 0
tx35_dropped: 0
tx35_xmit_more: 0
tx35_recover: 0
tx35_cqes: 0
tx35_wake: 0
tx35_cqe_err: 0
tx36_packets: 0
tx36_bytes: 0
tx36_tso_packets: 0
tx36_tso_bytes: 0
tx36_tso_inner_packets: 0
tx36_tso_inner_bytes: 0
tx36_csum_partial: 0
tx36_csum_partial_inner: 0
tx36_added_vlan_packets: 0
tx36_nop: 0
tx36_csum_none: 0
tx36_stopped: 0
tx36_dropped: 0
tx36_xmit_more: 0
tx36_recover: 0
tx36_cqes: 0
tx36_wake: 0
tx36_cqe_err: 0
tx37_packets: 0
tx37_bytes: 0
tx37_tso_packets: 0
tx37_tso_bytes: 0
tx37_tso_inner_packets: 0
tx37_tso_inner_bytes: 0
tx37_csum_partial: 0
tx37_csum_partial_inner: 0
tx37_added_vlan_packets: 0
tx37_nop: 0
tx37_csum_none: 0
tx37_stopped: 0
tx37_dropped: 0
tx37_xmit_more: 0
tx37_recover: 0
tx37_cqes: 0
tx37_wake: 0
tx37_cqe_err: 0
tx38_packets: 0
tx38_bytes: 0
tx38_tso_packets: 0
tx38_tso_bytes: 0
tx38_tso_inner_packets: 0
tx38_tso_inner_bytes: 0
tx38_csum_partial: 0
tx38_csum_partial_inner: 0
tx38_added_vlan_packets: 0
tx38_nop: 0
tx38_csum_none: 0
tx38_stopped: 0
tx38_dropped: 0
tx38_xmit_more: 0
tx38_recover: 0
tx38_cqes: 0
tx38_wake: 0
tx38_cqe_err: 0
tx39_packets: 0
tx39_bytes: 0
tx39_tso_packets: 0
tx39_tso_bytes: 0
tx39_tso_inner_packets: 0
tx39_tso_inner_bytes: 0
tx39_csum_partial: 0
tx39_csum_partial_inner: 0
tx39_added_vlan_packets: 0
tx39_nop: 0
tx39_csum_none: 0
tx39_stopped: 0
tx39_dropped: 0
tx39_xmit_more: 0
tx39_recover: 0
tx39_cqes: 0
tx39_wake: 0
tx39_cqe_err: 0
tx40_packets: 0
tx40_bytes: 0
tx40_tso_packets: 0
tx40_tso_bytes: 0
tx40_tso_inner_packets: 0
tx40_tso_inner_bytes: 0
tx40_csum_partial: 0
tx40_csum_partial_inner: 0
tx40_added_vlan_packets: 0
tx40_nop: 0
tx40_csum_none: 0
tx40_stopped: 0
tx40_dropped: 0
tx40_xmit_more: 0
tx40_recover: 0
tx40_cqes: 0
tx40_wake: 0
tx40_cqe_err: 0
tx41_packets: 0
tx41_bytes: 0
tx41_tso_packets: 0
tx41_tso_bytes: 0
tx41_tso_inner_packets: 0
tx41_tso_inner_bytes: 0
tx41_csum_partial: 0
tx41_csum_partial_inner: 0
tx41_added_vlan_packets: 0
tx41_nop: 0
tx41_csum_none: 0
tx41_stopped: 0
tx41_dropped: 0
tx41_xmit_more: 0
tx41_recover: 0
tx41_cqes: 0
tx41_wake: 0
tx41_cqe_err: 0
tx42_packets: 0
tx42_bytes: 0
tx42_tso_packets: 0
tx42_tso_bytes: 0
tx42_tso_inner_packets: 0
tx42_tso_inner_bytes: 0
tx42_csum_partial: 0
tx42_csum_partial_inner: 0
tx42_added_vlan_packets: 0
tx42_nop: 0
tx42_csum_none: 0
tx42_stopped: 0
tx42_dropped: 0
tx42_xmit_more: 0
tx42_recover: 0
tx42_cqes: 0
tx42_wake: 0
tx42_cqe_err: 0
tx43_packets: 0
tx43_bytes: 0
tx43_tso_packets: 0
tx43_tso_bytes: 0
tx43_tso_inner_packets: 0
tx43_tso_inner_bytes: 0
tx43_csum_partial: 0
tx43_csum_partial_inner: 0
tx43_added_vlan_packets: 0
tx43_nop: 0
tx43_csum_none: 0
tx43_stopped: 0
tx43_dropped: 0
tx43_xmit_more: 0
tx43_recover: 0
tx43_cqes: 0
tx43_wake: 0
tx43_cqe_err: 0
tx44_packets: 0
tx44_bytes: 0
tx44_tso_packets: 0
tx44_tso_bytes: 0
tx44_tso_inner_packets: 0
tx44_tso_inner_bytes: 0
tx44_csum_partial: 0
tx44_csum_partial_inner: 0
tx44_added_vlan_packets: 0
tx44_nop: 0
tx44_csum_none: 0
tx44_stopped: 0
tx44_dropped: 0
tx44_xmit_more: 0
tx44_recover: 0
tx44_cqes: 0
tx44_wake: 0
tx44_cqe_err: 0
tx45_packets: 0
tx45_bytes: 0
tx45_tso_packets: 0
tx45_tso_bytes: 0
tx45_tso_inner_packets: 0
tx45_tso_inner_bytes: 0
tx45_csum_partial: 0
tx45_csum_partial_inner: 0
tx45_added_vlan_packets: 0
tx45_nop: 0
tx45_csum_none: 0
tx45_stopped: 0
tx45_dropped: 0
tx45_xmit_more: 0
tx45_recover: 0
tx45_cqes: 0
tx45_wake: 0
tx45_cqe_err: 0
tx46_packets: 0
tx46_bytes: 0
tx46_tso_packets: 0
tx46_tso_bytes: 0
tx46_tso_inner_packets: 0
tx46_tso_inner_bytes: 0
tx46_csum_partial: 0
tx46_csum_partial_inner: 0
tx46_added_vlan_packets: 0
tx46_nop: 0
tx46_csum_none: 0
tx46_stopped: 0
tx46_dropped: 0
tx46_xmit_more: 0
tx46_recover: 0
tx46_cqes: 0
tx46_wake: 0
tx46_cqe_err: 0
tx47_packets: 0
tx47_bytes: 0
tx47_tso_packets: 0
tx47_tso_bytes: 0
tx47_tso_inner_packets: 0
tx47_tso_inner_bytes: 0
tx47_csum_partial: 0
tx47_csum_partial_inner: 0
tx47_added_vlan_packets: 0
tx47_nop: 0
tx47_csum_none: 0
tx47_stopped: 0
tx47_dropped: 0
tx47_xmit_more: 0
tx47_recover: 0
tx47_cqes: 0
tx47_wake: 0
tx47_cqe_err: 0
tx48_packets: 0
tx48_bytes: 0
tx48_tso_packets: 0
tx48_tso_bytes: 0
tx48_tso_inner_packets: 0
tx48_tso_inner_bytes: 0
tx48_csum_partial: 0
tx48_csum_partial_inner: 0
tx48_added_vlan_packets: 0
tx48_nop: 0
tx48_csum_none: 0
tx48_stopped: 0
tx48_dropped: 0
tx48_xmit_more: 0
tx48_recover: 0
tx48_cqes: 0
tx48_wake: 0
tx48_cqe_err: 0
tx49_packets: 0
tx49_bytes: 0
tx49_tso_packets: 0
tx49_tso_bytes: 0
tx49_tso_inner_packets: 0
tx49_tso_inner_bytes: 0
tx49_csum_partial: 0
tx49_csum_partial_inner: 0
tx49_added_vlan_packets: 0
tx49_nop: 0
tx49_csum_none: 0
tx49_stopped: 0
tx49_dropped: 0
tx49_xmit_more: 0
tx49_recover: 0
tx49_cqes: 0
tx49_wake: 0
tx49_cqe_err: 0
tx50_packets: 0
tx50_bytes: 0
tx50_tso_packets: 0
tx50_tso_bytes: 0
tx50_tso_inner_packets: 0
tx50_tso_inner_bytes: 0
tx50_csum_partial: 0
tx50_csum_partial_inner: 0
tx50_added_vlan_packets: 0
tx50_nop: 0
tx50_csum_none: 0
tx50_stopped: 0
tx50_dropped: 0
tx50_xmit_more: 0
tx50_recover: 0
tx50_cqes: 0
tx50_wake: 0
tx50_cqe_err: 0
tx51_packets: 0
tx51_bytes: 0
tx51_tso_packets: 0
tx51_tso_bytes: 0
tx51_tso_inner_packets: 0
tx51_tso_inner_bytes: 0
tx51_csum_partial: 0
tx51_csum_partial_inner: 0
tx51_added_vlan_packets: 0
tx51_nop: 0
tx51_csum_none: 0
tx51_stopped: 0
tx51_dropped: 0
tx51_xmit_more: 0
tx51_recover: 0
tx51_cqes: 0
tx51_wake: 0
tx51_cqe_err: 0
tx52_packets: 0
tx52_bytes: 0
tx52_tso_packets: 0
tx52_tso_bytes: 0
tx52_tso_inner_packets: 0
tx52_tso_inner_bytes: 0
tx52_csum_partial: 0
tx52_csum_partial_inner: 0
tx52_added_vlan_packets: 0
tx52_nop: 0
tx52_csum_none: 0
tx52_stopped: 0
tx52_dropped: 0
tx52_xmit_more: 0
tx52_recover: 0
tx52_cqes: 0
tx52_wake: 0
tx52_cqe_err: 0
tx53_packets: 0
tx53_bytes: 0
tx53_tso_packets: 0
tx53_tso_bytes: 0
tx53_tso_inner_packets: 0
tx53_tso_inner_bytes: 0
tx53_csum_partial: 0
tx53_csum_partial_inner: 0
tx53_added_vlan_packets: 0
tx53_nop: 0
tx53_csum_none: 0
tx53_stopped: 0
tx53_dropped: 0
tx53_xmit_more: 0
tx53_recover: 0
tx53_cqes: 0
tx53_wake: 0
tx53_cqe_err: 0
tx54_packets: 0
tx54_bytes: 0
tx54_tso_packets: 0
tx54_tso_bytes: 0
tx54_tso_inner_packets: 0
tx54_tso_inner_bytes: 0
tx54_csum_partial: 0
tx54_csum_partial_inner: 0
tx54_added_vlan_packets: 0
tx54_nop: 0
tx54_csum_none: 0
tx54_stopped: 0
tx54_dropped: 0
tx54_xmit_more: 0
tx54_recover: 0
tx54_cqes: 0
tx54_wake: 0
tx54_cqe_err: 0
tx55_packets: 0
tx55_bytes: 0
tx55_tso_packets: 0
tx55_tso_bytes: 0
tx55_tso_inner_packets: 0
tx55_tso_inner_bytes: 0
tx55_csum_partial: 0
tx55_csum_partial_inner: 0
tx55_added_vlan_packets: 0
tx55_nop: 0
tx55_csum_none: 0
tx55_stopped: 0
tx55_dropped: 0
tx55_xmit_more: 0
tx55_recover: 0
tx55_cqes: 0
tx55_wake: 0
tx55_cqe_err: 0
tx0_xdp_xmit: 0
tx0_xdp_full: 0
tx0_xdp_err: 0
tx0_xdp_cqes: 0
tx1_xdp_xmit: 0
tx1_xdp_full: 0
tx1_xdp_err: 0
tx1_xdp_cqes: 0
tx2_xdp_xmit: 0
tx2_xdp_full: 0
tx2_xdp_err: 0
tx2_xdp_cqes: 0
tx3_xdp_xmit: 0
tx3_xdp_full: 0
tx3_xdp_err: 0
tx3_xdp_cqes: 0
tx4_xdp_xmit: 0
tx4_xdp_full: 0
tx4_xdp_err: 0
tx4_xdp_cqes: 0
tx5_xdp_xmit: 0
tx5_xdp_full: 0
tx5_xdp_err: 0
tx5_xdp_cqes: 0
tx6_xdp_xmit: 0
tx6_xdp_full: 0
tx6_xdp_err: 0
tx6_xdp_cqes: 0
tx7_xdp_xmit: 0
tx7_xdp_full: 0
tx7_xdp_err: 0
tx7_xdp_cqes: 0
tx8_xdp_xmit: 0
tx8_xdp_full: 0
tx8_xdp_err: 0
tx8_xdp_cqes: 0
tx9_xdp_xmit: 0
tx9_xdp_full: 0
tx9_xdp_err: 0
tx9_xdp_cqes: 0
tx10_xdp_xmit: 0
tx10_xdp_full: 0
tx10_xdp_err: 0
tx10_xdp_cqes: 0
tx11_xdp_xmit: 0
tx11_xdp_full: 0
tx11_xdp_err: 0
tx11_xdp_cqes: 0
tx12_xdp_xmit: 0
tx12_xdp_full: 0
tx12_xdp_err: 0
tx12_xdp_cqes: 0
tx13_xdp_xmit: 0
tx13_xdp_full: 0
tx13_xdp_err: 0
tx13_xdp_cqes: 0
tx14_xdp_xmit: 0
tx14_xdp_full: 0
tx14_xdp_err: 0
tx14_xdp_cqes: 0
tx15_xdp_xmit: 0
tx15_xdp_full: 0
tx15_xdp_err: 0
tx15_xdp_cqes: 0
tx16_xdp_xmit: 0
tx16_xdp_full: 0
tx16_xdp_err: 0
tx16_xdp_cqes: 0
tx17_xdp_xmit: 0
tx17_xdp_full: 0
tx17_xdp_err: 0
tx17_xdp_cqes: 0
tx18_xdp_xmit: 0
tx18_xdp_full: 0
tx18_xdp_err: 0
tx18_xdp_cqes: 0
tx19_xdp_xmit: 0
tx19_xdp_full: 0
tx19_xdp_err: 0
tx19_xdp_cqes: 0
tx20_xdp_xmit: 0
tx20_xdp_full: 0
tx20_xdp_err: 0
tx20_xdp_cqes: 0
tx21_xdp_xmit: 0
tx21_xdp_full: 0
tx21_xdp_err: 0
tx21_xdp_cqes: 0
tx22_xdp_xmit: 0
tx22_xdp_full: 0
tx22_xdp_err: 0
tx22_xdp_cqes: 0
tx23_xdp_xmit: 0
tx23_xdp_full: 0
tx23_xdp_err: 0
tx23_xdp_cqes: 0
tx24_xdp_xmit: 0
tx24_xdp_full: 0
tx24_xdp_err: 0
tx24_xdp_cqes: 0
tx25_xdp_xmit: 0
tx25_xdp_full: 0
tx25_xdp_err: 0
tx25_xdp_cqes: 0
tx26_xdp_xmit: 0
tx26_xdp_full: 0
tx26_xdp_err: 0
tx26_xdp_cqes: 0
tx27_xdp_xmit: 0
tx27_xdp_full: 0
tx27_xdp_err: 0
tx27_xdp_cqes: 0
tx28_xdp_xmit: 0
tx28_xdp_full: 0
tx28_xdp_err: 0
tx28_xdp_cqes: 0
tx29_xdp_xmit: 0
tx29_xdp_full: 0
tx29_xdp_err: 0
tx29_xdp_cqes: 0
tx30_xdp_xmit: 0
tx30_xdp_full: 0
tx30_xdp_err: 0
tx30_xdp_cqes: 0
tx31_xdp_xmit: 0
tx31_xdp_full: 0
tx31_xdp_err: 0
tx31_xdp_cqes: 0
tx32_xdp_xmit: 0
tx32_xdp_full: 0
tx32_xdp_err: 0
tx32_xdp_cqes: 0
tx33_xdp_xmit: 0
tx33_xdp_full: 0
tx33_xdp_err: 0
tx33_xdp_cqes: 0
tx34_xdp_xmit: 0
tx34_xdp_full: 0
tx34_xdp_err: 0
tx34_xdp_cqes: 0
tx35_xdp_xmit: 0
tx35_xdp_full: 0
tx35_xdp_err: 0
tx35_xdp_cqes: 0
tx36_xdp_xmit: 0
tx36_xdp_full: 0
tx36_xdp_err: 0
tx36_xdp_cqes: 0
tx37_xdp_xmit: 0
tx37_xdp_full: 0
tx37_xdp_err: 0
tx37_xdp_cqes: 0
tx38_xdp_xmit: 0
tx38_xdp_full: 0
tx38_xdp_err: 0
tx38_xdp_cqes: 0
tx39_xdp_xmit: 0
tx39_xdp_full: 0
tx39_xdp_err: 0
tx39_xdp_cqes: 0
tx40_xdp_xmit: 0
tx40_xdp_full: 0
tx40_xdp_err: 0
tx40_xdp_cqes: 0
tx41_xdp_xmit: 0
tx41_xdp_full: 0
tx41_xdp_err: 0
tx41_xdp_cqes: 0
tx42_xdp_xmit: 0
tx42_xdp_full: 0
tx42_xdp_err: 0
tx42_xdp_cqes: 0
tx43_xdp_xmit: 0
tx43_xdp_full: 0
tx43_xdp_err: 0
tx43_xdp_cqes: 0
tx44_xdp_xmit: 0
tx44_xdp_full: 0
tx44_xdp_err: 0
tx44_xdp_cqes: 0
tx45_xdp_xmit: 0
tx45_xdp_full: 0
tx45_xdp_err: 0
tx45_xdp_cqes: 0
tx46_xdp_xmit: 0
tx46_xdp_full: 0
tx46_xdp_err: 0
tx46_xdp_cqes: 0
tx47_xdp_xmit: 0
tx47_xdp_full: 0
tx47_xdp_err: 0
tx47_xdp_cqes: 0
tx48_xdp_xmit: 0
tx48_xdp_full: 0
tx48_xdp_err: 0
tx48_xdp_cqes: 0
tx49_xdp_xmit: 0
tx49_xdp_full: 0
tx49_xdp_err: 0
tx49_xdp_cqes: 0
tx50_xdp_xmit: 0
tx50_xdp_full: 0
tx50_xdp_err: 0
tx50_xdp_cqes: 0
tx51_xdp_xmit: 0
tx51_xdp_full: 0
tx51_xdp_err: 0
tx51_xdp_cqes: 0
tx52_xdp_xmit: 0
tx52_xdp_full: 0
tx52_xdp_err: 0
tx52_xdp_cqes: 0
tx53_xdp_xmit: 0
tx53_xdp_full: 0
tx53_xdp_err: 0
tx53_xdp_cqes: 0
tx54_xdp_xmit: 0
tx54_xdp_full: 0
tx54_xdp_err: 0
tx54_xdp_cqes: 0
tx55_xdp_xmit: 0
tx55_xdp_full: 0
tx55_xdp_err: 0
tx55_xdp_cqes: 0
> [...]
>
>>>> ethtool -S enp175s0f0
>>>> NIC statistics:
>>>> rx_packets: 141574897253
>>>> rx_bytes: 184445040406258
>>>> tx_packets: 172569543894
>>>> tx_bytes: 99486882076365
>>>> tx_tso_packets: 9367664195
>>>> tx_tso_bytes: 56435233992948
>>>> tx_tso_inner_packets: 0
>>>> tx_tso_inner_bytes: 0
>>>> tx_added_vlan_packets: 141297671626
>>>> tx_nop: 2102916272
>>>> rx_lro_packets: 0
>>>> rx_lro_bytes: 0
>>>> rx_ecn_mark: 0
>>>> rx_removed_vlan_packets: 141574897252
>>>> rx_csum_unnecessary: 0
>>>> rx_csum_none: 23135854
>>>> rx_csum_complete: 141551761398
>>>> rx_csum_unnecessary_inner: 0
>>>> rx_xdp_drop: 0
>>>> rx_xdp_redirect: 0
>>>> rx_xdp_tx_xmit: 0
>>>> rx_xdp_tx_full: 0
>>>> rx_xdp_tx_err: 0
>>>> rx_xdp_tx_cqe: 0
>>>> tx_csum_none: 127934791664
>>> It is a good idea to look into this: TX is not requesting hardware
>>> checksumming for a lot of packets, so you may be wasting a lot of CPU
>>> calculating checksums in software - or maybe this is just the rx
>>> csum-complete traffic being forwarded.
>>>
>>>> tx_csum_partial: 13362879974
>>>> tx_csum_partial_inner: 0
>>>> tx_queue_stopped: 232561
>>> TX queues are stalling; this could be an indication of the pcie
>>> bottleneck.
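To see how the stall total above relates to the per-queue `txN_stopped` counters earlier in the output, a small parser can aggregate them from the raw `ethtool -S` text. This is a sketch; `sum_queue_counter` is a hypothetical helper name, and the sample values are copied from queues 16-18 above.

```python
import re

def sum_queue_counter(ethtool_output: str, counter: str) -> int:
    """Sum a per-queue TX counter (e.g. 'stopped') across all txN_ lines."""
    pat = re.compile(r"tx\d+_%s:\s+(\d+)" % re.escape(counter))
    return sum(int(m.group(1)) for m in pat.finditer(ethtool_output))

sample = """
tx16_stopped: 4567
tx17_stopped: 12572
tx18_stopped: 21366
"""
print(sum_queue_counter(sample, "stopped"))  # -> 38505
```

In practice you would feed it the full output of `subprocess.run(["ethtool", "-S", "enp175s0f0"], ...)` and compare the sum against the aggregate `tx_queue_stopped` counter.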
>>>
>>>> tx_queue_dropped: 0
>>>> tx_xmit_more: 1266021946
>>>> tx_recover: 0
>>>> tx_cqes: 140031716469
>>>> tx_queue_wake: 232561
>>>> tx_udp_seg_rem: 0
>>>> tx_cqe_err: 0
>>>> tx_xdp_xmit: 0
>>>> tx_xdp_full: 0
>>>> tx_xdp_err: 0
>>>> tx_xdp_cqes: 0
>>>> rx_wqe_err: 0
>>>> rx_mpwqe_filler_cqes: 0
>>>> rx_mpwqe_filler_strides: 0
>>>> rx_buff_alloc_err: 0
>>>> rx_cqe_compress_blks: 0
>>>> rx_cqe_compress_pkts: 0
>>>> rx_page_reuse: 0
>>>> rx_cache_reuse: 16625975793
>>>> rx_cache_full: 54161465914
>>>> rx_cache_empty: 258048
>>>> rx_cache_busy: 54161472735
>>>> rx_cache_waive: 0
>>>> rx_congst_umr: 0
>>>> rx_arfs_err: 0
>>>> ch_events: 40572621887
>>>> ch_poll: 40885650979
>>>> ch_arm: 40429276692
>>>> ch_aff_change: 0
>>>> ch_eq_rearm: 0
>>>> rx_out_of_buffer: 2791690
>>>> rx_if_down_packets: 74
>>>> rx_vport_unicast_packets: 141843476308
>>>> rx_vport_unicast_bytes: 185421265403318
>>>> tx_vport_unicast_packets: 172569484005
>>>> tx_vport_unicast_bytes: 100019940094298
>>>> rx_vport_multicast_packets: 85122935
>>>> rx_vport_multicast_bytes: 5761316431
>>>> tx_vport_multicast_packets: 6452
>>>> tx_vport_multicast_bytes: 643540
>>>> rx_vport_broadcast_packets: 22423624
>>>> rx_vport_broadcast_bytes: 1390127090
>>>> tx_vport_broadcast_packets: 22024
>>>> tx_vport_broadcast_bytes: 1321440
>>>> rx_vport_rdma_unicast_packets: 0
>>>> rx_vport_rdma_unicast_bytes: 0
>>>> tx_vport_rdma_unicast_packets: 0
>>>> tx_vport_rdma_unicast_bytes: 0
>>>> rx_vport_rdma_multicast_packets: 0
>>>> rx_vport_rdma_multicast_bytes: 0
>>>> tx_vport_rdma_multicast_packets: 0
>>>> tx_vport_rdma_multicast_bytes: 0
>>>> tx_packets_phy: 172569501577
>>>> rx_packets_phy: 142871314588
>>>> rx_crc_errors_phy: 0
>>>> tx_bytes_phy: 100710212814151
>>>> rx_bytes_phy: 187209224289564
>>>> tx_multicast_phy: 6452
>>>> tx_broadcast_phy: 22024
>>>> rx_multicast_phy: 85122933
>>>> rx_broadcast_phy: 22423623
>>>> rx_in_range_len_errors_phy: 2
>>>> rx_out_of_range_len_phy: 0
>>>> rx_oversize_pkts_phy: 0
>>>> rx_symbol_err_phy: 0
>>>> tx_mac_control_phy: 0
>>>> rx_mac_control_phy: 0
>>>> rx_unsupported_op_phy: 0
>>>> rx_pause_ctrl_phy: 0
>>>> tx_pause_ctrl_phy: 0
>>>> rx_discards_phy: 920161423
>>> Ok, this port seems to be suffering more; RX is congested, maybe due
>>> to the pcie bottleneck.
>> Yes, this side is receiving more traffic - the second port has ~10G more TX
>>
> [...]
>
>
>>>> Average: 17 0.00 0.00 16.60 0.00 0.00 52.10
>>>> 0.00 0.00 0.00 31.30
>>>> Average: 18 0.00 0.00 13.90 0.00 0.00 61.20
>>>> 0.00 0.00 0.00 24.90
>>>> Average: 19 0.00 0.00 9.99 0.00 0.00 70.33
>>>> 0.00 0.00 0.00 19.68
>>>> Average: 20 0.00 0.00 9.00 0.00 0.00 73.00
>>>> 0.00 0.00 0.00 18.00
>>>> Average: 21 0.00 0.00 8.70 0.00 0.00 73.90
>>>> 0.00 0.00 0.00 17.40
>>>> Average: 22 0.00 0.00 15.42 0.00 0.00 58.56
>>>> 0.00 0.00 0.00 26.03
>>>> Average: 23 0.00 0.00 10.81 0.00 0.00 71.67
>>>> 0.00 0.00 0.00 17.52
>>>> Average: 24 0.00 0.00 10.00 0.00 0.00 71.80
>>>> 0.00 0.00 0.00 18.20
>>>> Average: 25 0.00 0.00 11.19 0.00 0.00 71.13
>>>> 0.00 0.00 0.00 17.68
>>>> Average: 26 0.00 0.00 11.00 0.00 0.00 70.80
>>>> 0.00 0.00 0.00 18.20
>>>> Average: 27 0.00 0.00 10.01 0.00 0.00 69.57
>>>> 0.00 0.00 0.00 20.42
>>> The NUMA-local cores are not at 100% utilization; you have around 20%
>>> idle on each one.
>> Yes - not 100% CPU - but the difference between 80% and 100% is like
>> pushing an additional 1-2 Gbit/s.
>>
> yes, but it doesn't look like the bottleneck is the cpu, although it is
> close to being one :)
>
>>>> Average: 28 0.00 0.00 0.00 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 100.00
>>>> Average: 29 0.00 0.00 0.00 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 100.00
>>>> Average: 30 0.00 0.00 0.00 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 100.00
>>>> Average: 31 0.00 0.00 0.00 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 100.00
>>>> Average: 32 0.00 0.00 0.00 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 100.00
>>>> Average: 33 0.00 0.00 3.90 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 96.10
>>>> Average: 34 0.00 0.00 0.00 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 100.00
>>>> Average: 35 0.00 0.00 0.00 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 100.00
>>>> Average: 36 0.10 0.00 0.20 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 99.70
>>>> Average: 37 0.20 0.00 0.30 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 99.50
>>>> Average: 38 0.00 0.00 0.00 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 100.00
>>>> Average: 39 0.00 0.00 2.60 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 97.40
>>>> Average: 40 0.00 0.00 0.90 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 99.10
>>>> Average: 41 0.10 0.00 0.50 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 99.40
>>>> Average: 42 0.00 0.00 9.91 0.00 0.00 70.67
>>>> 0.00 0.00 0.00 19.42
>>>> Average: 43 0.00 0.00 15.90 0.00 0.00 57.50
>>>> 0.00 0.00 0.00 26.60
>>>> Average: 44 0.00 0.00 12.20 0.00 0.00 66.20
>>>> 0.00 0.00 0.00 21.60
>>>> Average: 45 0.00 0.00 12.00 0.00 0.00 67.50
>>>> 0.00 0.00 0.00 20.50
>>>> Average: 46 0.00 0.00 12.90 0.00 0.00 65.50
>>>> 0.00 0.00 0.00 21.60
>>>> Average: 47 0.00 0.00 14.59 0.00 0.00 60.84
>>>> 0.00 0.00 0.00 24.58
>>>> Average: 48 0.00 0.00 13.59 0.00 0.00 61.74
>>>> 0.00 0.00 0.00 24.68
>>>> Average: 49 0.00 0.00 18.36 0.00 0.00 53.29
>>>> 0.00 0.00 0.00 28.34
>>>> Average: 50 0.00 0.00 15.32 0.00 0.00 58.86
>>>> 0.00 0.00 0.00 25.83
>>>> Average: 51 0.00 0.00 17.60 0.00 0.00 55.20
>>>> 0.00 0.00 0.00 27.20
>>>> Average: 52 0.00 0.00 15.92 0.00 0.00 56.06
>>>> 0.00 0.00 0.00 28.03
>>>> Average: 53 0.00 0.00 13.00 0.00 0.00 62.30
>>>> 0.00 0.00 0.00 24.70
>>>> Average: 54 0.00 0.00 13.20 0.00 0.00 61.50
>>>> 0.00 0.00 0.00 25.30
>>>> Average: 55 0.00 0.00 14.59 0.00 0.00 58.64
>>>> 0.00 0.00 0.00 26.77
>>>>
>>>>
>>>> ethtool -k enp175s0f0
>>>> Features for enp175s0f0:
>>>> rx-checksumming: on
>>>> tx-checksumming: on
>>>> tx-checksum-ipv4: on
>>>> tx-checksum-ip-generic: off [fixed]
>>>> tx-checksum-ipv6: on
>>>> tx-checksum-fcoe-crc: off [fixed]
>>>> tx-checksum-sctp: off [fixed]
>>>> scatter-gather: on
>>>> tx-scatter-gather: on
>>>> tx-scatter-gather-fraglist: off [fixed]
>>>> tcp-segmentation-offload: on
>>>> tx-tcp-segmentation: on
>>>> tx-tcp-ecn-segmentation: off [fixed]
>>>> tx-tcp-mangleid-segmentation: off
>>>> tx-tcp6-segmentation: on
>>>> udp-fragmentation-offload: off
>>>> generic-segmentation-offload: on
>>>> generic-receive-offload: on
>>>> large-receive-offload: off [fixed]
>>>> rx-vlan-offload: on
>>>> tx-vlan-offload: on
>>>> ntuple-filters: off
>>>> receive-hashing: on
>>>> highdma: on [fixed]
>>>> rx-vlan-filter: on
>>>> vlan-challenged: off [fixed]
>>>> tx-lockless: off [fixed]
>>>> netns-local: off [fixed]
>>>> tx-gso-robust: off [fixed]
>>>> tx-fcoe-segmentation: off [fixed]
>>>> tx-gre-segmentation: on
>>>> tx-gre-csum-segmentation: on
>>>> tx-ipxip4-segmentation: off [fixed]
>>>> tx-ipxip6-segmentation: off [fixed]
>>>> tx-udp_tnl-segmentation: on
>>>> tx-udp_tnl-csum-segmentation: on
>>>> tx-gso-partial: on
>>>> tx-sctp-segmentation: off [fixed]
>>>> tx-esp-segmentation: off [fixed]
>>>> tx-udp-segmentation: on
>>>> fcoe-mtu: off [fixed]
>>>> tx-nocache-copy: off
>>>> loopback: off [fixed]
>>>> rx-fcs: off
>>>> rx-all: off
>>>> tx-vlan-stag-hw-insert: on
>>>> rx-vlan-stag-hw-parse: off [fixed]
>>>> rx-vlan-stag-filter: on [fixed]
>>>> l2-fwd-offload: off [fixed]
>>>> hw-tc-offload: off
>>>> esp-hw-offload: off [fixed]
>>>> esp-tx-csum-hw-offload: off [fixed]
>>>> rx-udp_tunnel-port-offload: on
>>>> tls-hw-tx-offload: off [fixed]
>>>> tls-hw-rx-offload: off [fixed]
>>>> rx-gro-hw: off [fixed]
>>>> tls-hw-record: off [fixed]
>>>>
>>>> ethtool -c enp175s0f0
>>>> Coalesce parameters for enp175s0f0:
>>>> Adaptive RX: off TX: on
>>>> stats-block-usecs: 0
>>>> sample-interval: 0
>>>> pkt-rate-low: 0
>>>> pkt-rate-high: 0
>>>> dmac: 32703
>>>>
>>>> rx-usecs: 256
>>>> rx-frames: 128
>>>> rx-usecs-irq: 0
>>>> rx-frames-irq: 0
>>>>
>>>> tx-usecs: 8
>>>> tx-frames: 128
>>>> tx-usecs-irq: 0
>>>> tx-frames-irq: 0
>>>>
>>>> rx-usecs-low: 0
>>>> rx-frame-low: 0
>>>> tx-usecs-low: 0
>>>> tx-frame-low: 0
>>>>
>>>> rx-usecs-high: 0
>>>> rx-frame-high: 0
>>>> tx-usecs-high: 0
>>>> tx-frame-high: 0
>>>>
>>>> ethtool -g enp175s0f0
>>>> Ring parameters for enp175s0f0:
>>>> Pre-set maximums:
>>>> RX: 8192
>>>> RX Mini: 0
>>>> RX Jumbo: 0
>>>> TX: 8192
>>>> Current hardware settings:
>>>> RX: 4096
>>>> RX Mini: 0
>>>> RX Jumbo: 0
>>>> TX: 4096
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>> Also changed a little coalesce params - and best for this config are:
>> ethtool -c enp175s0f0
>> Coalesce parameters for enp175s0f0:
>> Adaptive RX: off TX: off
>> stats-block-usecs: 0
>> sample-interval: 0
>> pkt-rate-low: 0
>> pkt-rate-high: 0
>> dmac: 32573
>>
>> rx-usecs: 40
>> rx-frames: 128
>> rx-usecs-irq: 0
>> rx-frames-irq: 0
>>
>> tx-usecs: 8
>> tx-frames: 8
>> tx-usecs-irq: 0
>> tx-frames-irq: 0
>>
>> rx-usecs-low: 0
>> rx-frame-low: 0
>> tx-usecs-low: 0
>> tx-frame-low: 0
>>
>> rx-usecs-high: 0
>> rx-frame-high: 0
>> tx-usecs-high: 0
>> tx-frame-high: 0
>>
>>
>> Less drops on RX side - and more pps in overall forwarded.
>>
> How much improvement? Maybe we can improve our adaptive RX coalescing to
> be efficient for this workload.
>
>
I can prepare more ethtool stats to compare - but normally I tested with
simple ICMP forwarded from interface to interface:
- before changing the coalescing params:
adaptive-rx off rx-usecs 384 rx-frames 128
3% ICMP loss
- after changing to:
adaptive-rx off rx-usecs 40 rx-frames 128 adaptive-tx off tx-usecs 8
tx-frames 8
2% ICMP loss
But yes - to know better I will need to compare rx/tx counters from
ethtool + /proc/net/dev.
I also tried turning adaptive tx+rx on - but hit 100% saturation at
43Gbit/s RX / 43Gbit/s TX.
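The effect of the rx-usecs change can be roughly modeled: the interrupt fires after rx-usecs microseconds or after rx-frames packets, whichever comes first. The sketch below uses the ~5.25 Mpps / 28 RX queues figures from earlier in the thread; the function names are mine, and this ignores NAPI polling, which keeps interrupts masked under sustained load.

```python
def max_irq_rate(rx_usecs: int) -> float:
    """Interrupt-rate ceiling imposed by the rx-usecs timer alone."""
    return 1e6 / rx_usecs

def pkts_per_irq(pps_per_queue: float, rx_usecs: int, rx_frames: int) -> float:
    """Packets coalesced per interrupt; rx-frames fires the IRQ early."""
    return min(pps_per_queue / max_irq_rate(rx_usecs), rx_frames)

queue_pps = 5_250_000 / 28   # ~187.5 kpps per RX queue (from the bwm-ng numbers)
print(round(max_irq_rate(40)))                     # 25000 interrupts/s ceiling
print(round(pkts_per_irq(queue_pps, 40, 128), 1))  # ~7.5 packets per interrupt
```

At rx-usecs 40 the timer, not the 128-frame budget, dominates for this per-queue rate, so lowering rx-usecs trades more interrupts for lower queue occupancy and fewer RX drops.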
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-01 21:18 ` Paweł Staszewski
@ 2018-11-01 21:24 ` Paweł Staszewski
2018-11-01 21:34 ` Paweł Staszewski
0 siblings, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-01 21:24 UTC (permalink / raw)
To: Saeed Mahameed, netdev
On 01.11.2018 at 22:18, Paweł Staszewski wrote:
>
>
> On 01.11.2018 at 21:37, Saeed Mahameed wrote:
>> On Thu, 2018-11-01 at 12:09 +0100, Paweł Staszewski wrote:
>>> On 01.11.2018 at 10:50, Saeed Mahameed wrote:
>>>> On Wed, 2018-10-31 at 22:57 +0100, Paweł Staszewski wrote:
>>>>> Hi
>>>>>
>>>>> So maybe someone will be interested in how the linux kernel handles
>>>>> normal traffic (not pktgen :) )
>>>>>
>>>>>
>>>>> Server HW configuration:
>>>>>
>>>>> CPU : Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
>>>>>
>>>>> NIC's: 2x 100G Mellanox ConnectX-4 (connected to x16 pcie 8GT)
>>>>>
>>>>>
>>>>> Server software:
>>>>>
>>>>> FRR - as routing daemon
>>>>>
>>>>> enp175s0f0 (100G) - 16 vlans from upstreams (28 RSS binded to
>>>>> local
>>>>> numa
>>>>> node)
>>>>>
>>>>> enp175s0f1 (100G) - 343 vlans to clients (28 RSS binded to local
>>>>> numa
>>>>> node)
>>>>>
>>>>>
>>>>> Maximum traffic that server can handle:
>>>>>
>>>>> Bandwidth
>>>>>
>>>>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>>>>> input: /proc/net/dev type: rate
>>>>> \ iface Rx Tx Total
>>>>> =================================================================
>>>>> ====
>>>>> =========
>>>>> enp175s0f1: 28.51 Gb/s 37.24
>>>>> Gb/s
>>>>> 65.74 Gb/s
>>>>> enp175s0f0: 38.07 Gb/s 28.44
>>>>> Gb/s
>>>>> 66.51 Gb/s
>>>>> ---------------------------------------------------------------
>>>>> ----
>>>>> -----------
>>>>> total: 66.58 Gb/s 65.67
>>>>> Gb/s
>>>>> 132.25 Gb/s
>>>>>
>>>>>
>>>>> Packets per second:
>>>>>
>>>>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>>>>> input: /proc/net/dev type: rate
>>>>> - iface Rx Tx Total
>>>>> =================================================================
>>>>> ====
>>>>> =========
>>>>> enp175s0f1: 5248589.00 P/s 3486617.75 P/s
>>>>> 8735207.00 P/s
>>>>> enp175s0f0: 3557944.25 P/s 5232516.00 P/s
>>>>> 8790460.00 P/s
>>>>> ---------------------------------------------------------------
>>>>> ----
>>>>> -----------
>>>>> total: 8806533.00 P/s 8719134.00 P/s
>>>>> 17525668.00 P/s
>>>>>
>>>>>
>>>>> After reaching that limits nics on the upstream side (more RX
>>>>> traffic)
>>>>> start to drop packets
>>>>>
>>>>>
>>>>> I just don't understand why the server can't handle more bandwidth
>>>>> (~40Gbit/s is the limit where all CPUs are at 100% util) - while
>>>>> pps on the RX side keep increasing.
>>>>>
>>>> Where do you see 40 Gb/s? You showed that both ports on the same NIC
>>>> (same pcie link) are doing 66.58 Gb/s (RX) + 65.67 Gb/s (TX) =
>>>> 132.25 Gb/s, which aligns with your pcie link limit. What am I
>>>> missing?
>>> hmm, yes, that was my concern also - I can't find information
>>> anywhere on whether that bandwidth figure is uni- or bidirectional -
>>> so if 126Gbit for x16 8GT is unidir, then bidir would be 126/2
>>> ~63Gbit - which would fit the total bw on both ports
>> i think it is bidir
>>
>>> Maybe this can also explain why cpu load rises rapidly from 120Gbit/s
>>> in total to 132Gbit (bwm-ng's counters come from /proc/net - so there
>>> can be some error in reading them when offloading (gro/gso/tso) is
>>> enabled on the nics)
>>>>> I was thinking that maybe I'd reached some pcie x16 limit - but x16
>>>>> 8GT is 126Gbit - and also when testing with pktgen I can reach more
>>>>> bw and pps (like 4x more compared to normal internet traffic)
>>>>>
>>>> Are you forwarding when using pktgen as well, or are you just
>>>> testing RX-side pps?
>>> Yes, pktgen was tested on a single port, RX only.
>>> I can also check forwarding to rule out pcie limits.
>>>
>> So this explains why you have more RX pps, since tx is idle and pcie
>> will be free to do only rx.
>>
>> [...]
>>
>>
>>>>> ethtool -S enp175s0f1
>>>>> NIC statistics:
>>>>> rx_packets: 173730800927
>>>>> rx_bytes: 99827422751332
>>>>> tx_packets: 142532009512
>>>>> tx_bytes: 184633045911222
>>>>> tx_tso_packets: 25989113891
>>>>> tx_tso_bytes: 132933363384458
>>>>> tx_tso_inner_packets: 0
>>>>> tx_tso_inner_bytes: 0
>>>>> tx_added_vlan_packets: 74630239613
>>>>> tx_nop: 2029817748
>>>>> rx_lro_packets: 0
>>>>> rx_lro_bytes: 0
>>>>> rx_ecn_mark: 0
>>>>> rx_removed_vlan_packets: 173730800927
>>>>> rx_csum_unnecessary: 0
>>>>> rx_csum_none: 434357
>>>>> rx_csum_complete: 173730366570
>>>>> rx_csum_unnecessary_inner: 0
>>>>> rx_xdp_drop: 0
>>>>> rx_xdp_redirect: 0
>>>>> rx_xdp_tx_xmit: 0
>>>>> rx_xdp_tx_full: 0
>>>>> rx_xdp_tx_err: 0
>>>>> rx_xdp_tx_cqe: 0
>>>>> tx_csum_none: 38260960853
>>>>> tx_csum_partial: 36369278774
>>>>> tx_csum_partial_inner: 0
>>>>> tx_queue_stopped: 1
>>>>> tx_queue_dropped: 0
>>>>> tx_xmit_more: 748638099
>>>>> tx_recover: 0
>>>>> tx_cqes: 73881645031
>>>>> tx_queue_wake: 1
>>>>> tx_udp_seg_rem: 0
>>>>> tx_cqe_err: 0
>>>>> tx_xdp_xmit: 0
>>>>> tx_xdp_full: 0
>>>>> tx_xdp_err: 0
>>>>> tx_xdp_cqes: 0
>>>>> rx_wqe_err: 0
>>>>> rx_mpwqe_filler_cqes: 0
>>>>> rx_mpwqe_filler_strides: 0
>>>>> rx_buff_alloc_err: 0
>>>>> rx_cqe_compress_blks: 0
>>>>> rx_cqe_compress_pkts: 0
>>>> If this is a PCIe bottleneck, it might be useful to enable CQE
>>>> compression (to reduce PCIe completion-descriptor transactions). You
>>>> should see the rx_cqe_compress_pkts counter above increasing when
>>>> enabled.
>>>>
>>>> $ ethtool --set-priv-flags enp175s0f1 rx_cqe_compress on
>>>> $ ethtool --show-priv-flags enp175s0f1
>>>> Private flags for p6p1:
>>>> rx_cqe_moder : on
>>>> cqe_moder : off
>>>> rx_cqe_compress : on
>>>> ...
>>>>
>>>> try this on both interfaces.
>>> Done
>>> ethtool --show-priv-flags enp175s0f1
>>> Private flags for enp175s0f1:
>>> rx_cqe_moder : on
>>> tx_cqe_moder : off
>>> rx_cqe_compress : on
>>> rx_striding_rq : off
>>> rx_no_csum_complete: off
>>>
>>> ethtool --show-priv-flags enp175s0f0
>>> Private flags for enp175s0f0:
>>> rx_cqe_moder : on
>>> tx_cqe_moder : off
>>> rx_cqe_compress : on
>>> rx_striding_rq : off
>>> rx_no_csum_complete: off
>>>
>> Did it help reduce the load on the PCIe link? Do you see more pps?
>> What is the ratio between rx_cqe_compress_pkts and overall rx_packets?
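The ratio asked about here can be computed directly from the ethtool counters posted in the reply below. A caveat in this sketch: rx_cqe_compress_pkts only counts packets received since the flag was enabled, while rx_packets covers the interface's whole uptime, so the ratio understates the steady-state effect.

```python
# Share of RX packets whose completion entries were CQE-compressed,
# using the global counters from the ethtool -S dump below.
rx_cqe_compress_pkts = 25_794_213_324
rx_packets = 516_522_465_438

ratio = rx_cqe_compress_pkts / rx_packets
print(f"compressed CQE share: {ratio:.1%}")
```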
> So - a little more pps.
> Before the change: top graph; after: bottom graph (stats read from
> /proc/net/dev).
Attached link to graph
https://uploadfiles.io/5vgbh
> cqe_compress enabled at 11:55
>
> Sorry - with real-life traffic it is hard to compare counter deltas,
> because traffic keeps rising on its own from minute to minute :) But the
> change is visible on the graph for that time window - it had been almost
> flat for the 20 minutes before the change.
>
>
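Since the traffic level drifts minute to minute, one way to still compare before/after is to sample the counters over short fixed intervals and look at rates rather than absolute deltas, which is essentially what bwm-ng does. A minimal sketch (the interface name is the one from this setup; the column positions follow the standard /proc/net/dev layout):

```python
import time

def parse_proc_net_dev(text, iface):
    """Return (rx_packets, tx_packets) for iface from /proc/net/dev text."""
    for line in text.splitlines():
        name, sep, rest = line.partition(":")
        if sep and name.strip() == iface:
            fields = rest.split()
            # column 1 is RX packets, column 9 is TX packets
            return int(fields[1]), int(fields[9])
    raise ValueError(f"{iface} not found")

def pps(iface, interval=1.0):
    """Sample the counters twice, `interval` seconds apart, and return
    (rx_pps, tx_pps) as rates instead of raw deltas."""
    with open("/proc/net/dev") as f:
        rx0, tx0 = parse_proc_net_dev(f.read(), iface)
    time.sleep(interval)
    with open("/proc/net/dev") as f:
        rx1, tx1 = parse_proc_net_dev(f.read(), iface)
    return (rx1 - rx0) / interval, (tx1 - tx0) / interval
```

Calling `pps("enp175s0f1")` before and after flipping a private flag gives rate pairs that are directly comparable even while absolute counters keep growing.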
> full ethtool below:
> NIC statistics:
> rx_packets: 516522465438
> rx_bytes: 680052911258729
> tx_packets: 677697545586
> tx_bytes: 413647643141709
> tx_tso_packets: 42530913279
> tx_tso_bytes: 235655668554142
> tx_tso_inner_packets: 0
> tx_tso_inner_bytes: 0
> tx_added_vlan_packets: 551156530885
> tx_nop: 8536823558
> rx_lro_packets: 0
> rx_lro_bytes: 0
> rx_ecn_mark: 0
> rx_removed_vlan_packets: 516522465438
> rx_csum_unnecessary: 0
> rx_csum_none: 50382868
> rx_csum_complete: 516472082570
> rx_csum_unnecessary_inner: 0
> rx_xdp_drop: 0
> rx_xdp_redirect: 0
> rx_xdp_tx_xmit: 0
> rx_xdp_tx_full: 0
> rx_xdp_tx_err: 0
> rx_xdp_tx_cqe: 0
> tx_csum_none: 494075047017
> tx_csum_partial: 57081483898
> tx_csum_partial_inner: 0
> tx_queue_stopped: 518624
> tx_queue_dropped: 0
> tx_xmit_more: 1717880628
> tx_recover: 0
> tx_cqes: 549438869029
> tx_queue_wake: 518627
> tx_udp_seg_rem: 0
> tx_cqe_err: 0
> tx_xdp_xmit: 0
> tx_xdp_full: 0
> tx_xdp_err: 0
> tx_xdp_cqes: 0
> rx_wqe_err: 0
> rx_mpwqe_filler_cqes: 0
> rx_mpwqe_filler_strides: 0
> rx_buff_alloc_err: 0
> rx_cqe_compress_blks: 11483228712
> rx_cqe_compress_pkts: 25794213324
> rx_page_reuse: 0
> rx_cache_reuse: 63610249810
> rx_cache_full: 194650916511
> rx_cache_empty: 1118208
> rx_cache_busy: 194650982430
> rx_cache_waive: 0
> rx_congst_umr: 0
> rx_arfs_err: 0
> ch_events: 119556002196
> ch_poll: 121107424977
> ch_arm: 115856746008
> ch_aff_change: 31
> ch_eq_rearm: 0
> rx_out_of_buffer: 6880325
> rx_if_down_packets: 2062529
> rx_vport_unicast_packets: 517433716795
> rx_vport_unicast_bytes: 683464347301443
> tx_vport_unicast_packets: 677697453738
> tx_vport_unicast_bytes: 415788589663315
> rx_vport_multicast_packets: 208258309
> rx_vport_multicast_bytes: 14224046052
> tx_vport_multicast_packets: 21689
> tx_vport_multicast_bytes: 2158334
> rx_vport_broadcast_packets: 75838646
> rx_vport_broadcast_bytes: 4697944695
> tx_vport_broadcast_packets: 68730
> tx_vport_broadcast_bytes: 4123800
> rx_vport_rdma_unicast_packets: 0
> rx_vport_rdma_unicast_bytes: 0
> tx_vport_rdma_unicast_packets: 0
> tx_vport_rdma_unicast_bytes: 0
> rx_vport_rdma_multicast_packets: 0
> rx_vport_rdma_multicast_bytes: 0
> tx_vport_rdma_multicast_packets: 0
> tx_vport_rdma_multicast_bytes: 0
> tx_packets_phy: 677697543252
> rx_packets_phy: 521319491878
> rx_crc_errors_phy: 0
> tx_bytes_phy: 418499385791411
> rx_bytes_phy: 690322537017274
> tx_multicast_phy: 21689
> tx_broadcast_phy: 68730
> rx_multicast_phy: 208258305
> rx_broadcast_phy: 75838646
> rx_in_range_len_errors_phy: 4
> rx_out_of_range_len_phy: 0
> rx_oversize_pkts_phy: 0
> rx_symbol_err_phy: 0
> tx_mac_control_phy: 0
> rx_mac_control_phy: 0
> rx_unsupported_op_phy: 0
> rx_pause_ctrl_phy: 0
> tx_pause_ctrl_phy: 0
> rx_discards_phy: 3601449265
> tx_discards_phy: 0
> tx_errors_phy: 0
> rx_undersize_pkts_phy: 0
> rx_fragments_phy: 0
> rx_jabbers_phy: 0
> rx_64_bytes_phy: 1416456771
> rx_65_to_127_bytes_phy: 40750434737
> rx_128_to_255_bytes_phy: 11518110310
> rx_256_to_511_bytes_phy: 7055850637
> rx_512_to_1023_bytes_phy: 7811550424
> rx_1024_to_1518_bytes_phy: 265547564845
> rx_1519_to_2047_bytes_phy: 187219522899
> rx_2048_to_4095_bytes_phy: 0
> rx_4096_to_8191_bytes_phy: 0
> rx_8192_to_10239_bytes_phy: 0
> link_down_events_phy: 0
> rx_pcs_symbol_err_phy: 0
> rx_corrected_bits_phy: 0
> rx_pci_signal_integrity: 0
> tx_pci_signal_integrity: 48
> rx_prio0_bytes: 688807632117485
> rx_prio0_packets: 516310309931
> tx_prio0_bytes: 418499382756025
> tx_prio0_packets: 677697534982
> rx_prio1_bytes: 1497701612877
> rx_prio1_packets: 1206768094
> tx_prio1_bytes: 0
> tx_prio1_packets: 0
> rx_prio2_bytes: 112271227
> rx_prio2_packets: 337295
> tx_prio2_bytes: 0
> tx_prio2_packets: 0
> rx_prio3_bytes: 1165455555
> rx_prio3_packets: 1544310
> tx_prio3_bytes: 0
> tx_prio3_packets: 0
> rx_prio4_bytes: 161857240
> rx_prio4_packets: 341392
> tx_prio4_bytes: 0
> tx_prio4_packets: 0
> rx_prio5_bytes: 455031612
> rx_prio5_packets: 2861469
> tx_prio5_bytes: 0
> tx_prio5_packets: 0
> rx_prio6_bytes: 1873928697
> rx_prio6_packets: 5146981
> tx_prio6_bytes: 0
> tx_prio6_packets: 0
> rx_prio7_bytes: 13423452430
> rx_prio7_packets: 190724796
> tx_prio7_bytes: 0
> tx_prio7_packets: 0
> module_unplug: 0
> module_bus_stuck: 0
> module_high_temp: 0
> module_bad_shorted: 0
> ch0_events: 4252266777
> ch0_poll: 4330804273
> ch0_arm: 4120233182
> ch0_aff_change: 2
> ch0_eq_rearm: 0
> ch1_events: 3938415938
> ch1_poll: 4012621322
> ch1_arm: 3810131188
> ch1_aff_change: 2
> ch1_eq_rearm: 0
> ch2_events: 3897428860
> ch2_poll: 3973886848
> ch2_arm: 3773019397
> ch2_aff_change: 1
> ch2_eq_rearm: 0
> ch3_events: 4108000541
> ch3_poll: 4180139872
> ch3_arm: 3982093366
> ch3_aff_change: 1
> ch3_eq_rearm: 0
> ch4_events: 4652570079
> ch4_poll: 4720541090
> ch4_arm: 4524475054
> ch4_aff_change: 2
> ch4_eq_rearm: 0
> ch5_events: 3899177385
> ch5_poll: 3974274186
> ch5_arm: 3772299186
> ch5_aff_change: 2
> ch5_eq_rearm: 0
> ch6_events: 3915161350
> ch6_poll: 3992338199
> ch6_arm: 3794710989
> ch6_aff_change: 0
> ch6_eq_rearm: 0
> ch7_events: 4008175631
> ch7_poll: 4081321248
> ch7_arm: 3882263723
> ch7_aff_change: 0
> ch7_eq_rearm: 0
> ch8_events: 4207422352
> ch8_poll: 4276465449
> ch8_arm: 4077650366
> ch8_aff_change: 0
> ch8_eq_rearm: 0
> ch9_events: 4036491879
> ch9_poll: 4108975987
> ch9_arm: 3914493694
> ch9_aff_change: 0
> ch9_eq_rearm: 0
> ch10_events: 4066261595
> ch10_poll: 4134419606
> ch10_arm: 3936637711
> ch10_aff_change: 1
> ch10_eq_rearm: 0
> ch11_events: 4440494043
> ch11_poll: 4507578730
> ch11_arm: 4318629438
> ch11_aff_change: 0
> ch11_eq_rearm: 0
> ch12_events: 4066958252
> ch12_poll: 4130191506
> ch12_arm: 3934337782
> ch12_aff_change: 0
> ch12_eq_rearm: 0
> ch13_events: 4051309159
> ch13_poll: 4118864120
> ch13_arm: 3921011919
> ch13_aff_change: 0
> ch13_eq_rearm: 0
> ch14_events: 4321664800
> ch14_poll: 4382433680
> ch14_arm: 4186130552
> ch14_aff_change: 0
> ch14_eq_rearm: 0
> ch15_events: 4701102075
> ch15_poll: 4760373932
> ch15_arm: 4570151468
> ch15_aff_change: 0
> ch15_eq_rearm: 0
> ch16_events: 4311052687
> ch16_poll: 4345937129
> ch16_arm: 4170883819
> ch16_aff_change: 0
> ch16_eq_rearm: 0
> ch17_events: 4647570931
> ch17_poll: 4680218533
> ch17_arm: 4509426288
> ch17_aff_change: 0
> ch17_eq_rearm: 0
> ch18_events: 4598195702
> ch18_poll: 4631314898
> ch18_arm: 4457267084
> ch18_aff_change: 0
> ch18_eq_rearm: 0
> ch19_events: 4808094560
> ch19_poll: 4841368340
> ch19_arm: 4670604358
> ch19_aff_change: 0
> ch19_eq_rearm: 0
> ch20_events: 4240910605
> ch20_poll: 4276531502
> ch20_arm: 4101767278
> ch20_aff_change: 1
> ch20_eq_rearm: 0
> ch21_events: 4389371472
> ch21_poll: 4426870311
> ch21_arm: 4249339045
> ch21_aff_change: 2
> ch21_eq_rearm: 0
> ch22_events: 4282958754
> ch22_poll: 4319228073
> ch22_arm: 4145102991
> ch22_aff_change: 2
> ch22_eq_rearm: 0
> ch23_events: 4440196528
> ch23_poll: 4474090188
> ch23_arm: 4300837147
> ch23_aff_change: 2
> ch23_eq_rearm: 0
> ch24_events: 4326875785
> ch24_poll: 4364971263
> ch24_arm: 4186404526
> ch24_aff_change: 2
> ch24_eq_rearm: 0
> ch25_events: 4286528453
> ch25_poll: 4324089445
> ch25_arm: 4147222616
> ch25_aff_change: 3
> ch25_eq_rearm: 0
> ch26_events: 4098043104
> ch26_poll: 4138133745
> ch26_arm: 3967438971
> ch26_aff_change: 4
> ch26_eq_rearm: 0
> ch27_events: 4563302840
> ch27_poll: 4599441446
> ch27_arm: 4432182806
> ch27_aff_change: 4
> ch27_eq_rearm: 0
> ch28_events: 4
> ch28_poll: 4
> ch28_arm: 4
> ch28_aff_change: 0
> ch28_eq_rearm: 0
> ch29_events: 6
> ch29_poll: 6
> ch29_arm: 6
> ch29_aff_change: 0
> ch29_eq_rearm: 0
> ch30_events: 4
> ch30_poll: 4
> ch30_arm: 4
> ch30_aff_change: 0
> ch30_eq_rearm: 0
> ch31_events: 4
> ch31_poll: 4
> ch31_arm: 4
> ch31_aff_change: 0
> ch31_eq_rearm: 0
> ch32_events: 4
> ch32_poll: 4
> ch32_arm: 4
> ch32_aff_change: 0
> ch32_eq_rearm: 0
> ch33_events: 4
> ch33_poll: 4
> ch33_arm: 4
> ch33_aff_change: 0
> ch33_eq_rearm: 0
> ch34_events: 4
> ch34_poll: 4
> ch34_arm: 4
> ch34_aff_change: 0
> ch34_eq_rearm: 0
> ch35_events: 4
> ch35_poll: 4
> ch35_arm: 4
> ch35_aff_change: 0
> ch35_eq_rearm: 0
> ch36_events: 4
> ch36_poll: 4
> ch36_arm: 4
> ch36_aff_change: 0
> ch36_eq_rearm: 0
> ch37_events: 4
> ch37_poll: 4
> ch37_arm: 4
> ch37_aff_change: 0
> ch37_eq_rearm: 0
> ch38_events: 4
> ch38_poll: 4
> ch38_arm: 4
> ch38_aff_change: 0
> ch38_eq_rearm: 0
> ch39_events: 4
> ch39_poll: 4
> ch39_arm: 4
> ch39_aff_change: 0
> ch39_eq_rearm: 0
> ch40_events: 4
> ch40_poll: 4
> ch40_arm: 4
> ch40_aff_change: 0
> ch40_eq_rearm: 0
> ch41_events: 4
> ch41_poll: 4
> ch41_arm: 4
> ch41_aff_change: 0
> ch41_eq_rearm: 0
> ch42_events: 4
> ch42_poll: 4
> ch42_arm: 4
> ch42_aff_change: 0
> ch42_eq_rearm: 0
> ch43_events: 4
> ch43_poll: 4
> ch43_arm: 4
> ch43_aff_change: 0
> ch43_eq_rearm: 0
> ch44_events: 4
> ch44_poll: 4
> ch44_arm: 4
> ch44_aff_change: 0
> ch44_eq_rearm: 0
> ch45_events: 4
> ch45_poll: 4
> ch45_arm: 4
> ch45_aff_change: 0
> ch45_eq_rearm: 0
> ch46_events: 4
> ch46_poll: 4
> ch46_arm: 4
> ch46_aff_change: 0
> ch46_eq_rearm: 0
> ch47_events: 4
> ch47_poll: 4
> ch47_arm: 4
> ch47_aff_change: 0
> ch47_eq_rearm: 0
> ch48_events: 4
> ch48_poll: 4
> ch48_arm: 4
> ch48_aff_change: 0
> ch48_eq_rearm: 0
> ch49_events: 4
> ch49_poll: 4
> ch49_arm: 4
> ch49_aff_change: 0
> ch49_eq_rearm: 0
> ch50_events: 4
> ch50_poll: 4
> ch50_arm: 4
> ch50_aff_change: 0
> ch50_eq_rearm: 0
> ch51_events: 4
> ch51_poll: 4
> ch51_arm: 4
> ch51_aff_change: 0
> ch51_eq_rearm: 0
> ch52_events: 4
> ch52_poll: 4
> ch52_arm: 4
> ch52_aff_change: 0
> ch52_eq_rearm: 0
> ch53_events: 4
> ch53_poll: 4
> ch53_arm: 4
> ch53_aff_change: 0
> ch53_eq_rearm: 0
> ch54_events: 4
> ch54_poll: 4
> ch54_arm: 4
> ch54_aff_change: 0
> ch54_eq_rearm: 0
> ch55_events: 4
> ch55_poll: 4
> ch55_arm: 4
> ch55_aff_change: 0
> ch55_eq_rearm: 0
> rx0_packets: 21390033774
> rx0_bytes: 27326856299122
> rx0_csum_complete: 21339650906
> rx0_csum_unnecessary: 0
> rx0_csum_unnecessary_inner: 0
> rx0_csum_none: 50382868
> rx0_xdp_drop: 0
> rx0_xdp_redirect: 0
> rx0_lro_packets: 0
> rx0_lro_bytes: 0
> rx0_ecn_mark: 0
> rx0_removed_vlan_packets: 21390033774
> rx0_wqe_err: 0
> rx0_mpwqe_filler_cqes: 0
> rx0_mpwqe_filler_strides: 0
> rx0_buff_alloc_err: 0
> rx0_cqe_compress_blks: 481077641
> rx0_cqe_compress_pkts: 1085647489
> rx0_page_reuse: 0
> rx0_cache_reuse: 19050049
> rx0_cache_full: 10675964285
> rx0_cache_empty: 37376
> rx0_cache_busy: 10675966819
> rx0_cache_waive: 0
> rx0_congst_umr: 0
> rx0_arfs_err: 0
> rx0_xdp_tx_xmit: 0
> rx0_xdp_tx_full: 0
> rx0_xdp_tx_err: 0
> rx0_xdp_tx_cqes: 0
> rx1_packets: 19868919527
> rx1_bytes: 26149716991561
> rx1_csum_complete: 19868919527
> rx1_csum_unnecessary: 0
> rx1_csum_unnecessary_inner: 0
> rx1_csum_none: 0
> rx1_xdp_drop: 0
> rx1_xdp_redirect: 0
> rx1_lro_packets: 0
> rx1_lro_bytes: 0
> rx1_ecn_mark: 0
> rx1_removed_vlan_packets: 19868919527
> rx1_wqe_err: 0
> rx1_mpwqe_filler_cqes: 0
> rx1_mpwqe_filler_strides: 0
> rx1_buff_alloc_err: 0
> rx1_cqe_compress_blks: 420210560
> rx1_cqe_compress_pkts: 941233388
> rx1_page_reuse: 0
> rx1_cache_reuse: 46200002
> rx1_cache_full: 9888257242
> rx1_cache_empty: 37376
> rx1_cache_busy: 9888259746
> rx1_cache_waive: 0
> rx1_congst_umr: 0
> rx1_arfs_err: 0
> rx1_xdp_tx_xmit: 0
> rx1_xdp_tx_full: 0
> rx1_xdp_tx_err: 0
> rx1_xdp_tx_cqes: 0
> rx2_packets: 19575013662
> rx2_bytes: 25759818417945
> rx2_csum_complete: 19575013662
> rx2_csum_unnecessary: 0
> rx2_csum_unnecessary_inner: 0
> rx2_csum_none: 0
> rx2_xdp_drop: 0
> rx2_xdp_redirect: 0
> rx2_lro_packets: 0
> rx2_lro_bytes: 0
> rx2_ecn_mark: 0
> rx2_removed_vlan_packets: 19575013662
> rx2_wqe_err: 0
> rx2_mpwqe_filler_cqes: 0
> rx2_mpwqe_filler_strides: 0
> rx2_buff_alloc_err: 0
> rx2_cqe_compress_blks: 412345511
> rx2_cqe_compress_pkts: 923376167
> rx2_page_reuse: 0
> rx2_cache_reuse: 38837731
> rx2_cache_full: 9748666548
> rx2_cache_empty: 37376
> rx2_cache_busy: 9748669093
> rx2_cache_waive: 0
> rx2_congst_umr: 0
> rx2_arfs_err: 0
> rx2_xdp_tx_xmit: 0
> rx2_xdp_tx_full: 0
> rx2_xdp_tx_err: 0
> rx2_xdp_tx_cqes: 0
> rx3_packets: 19795911749
> rx3_bytes: 25969475566905
> rx3_csum_complete: 19795911749
> rx3_csum_unnecessary: 0
> rx3_csum_unnecessary_inner: 0
> rx3_csum_none: 0
> rx3_xdp_drop: 0
> rx3_xdp_redirect: 0
> rx3_lro_packets: 0
> rx3_lro_bytes: 0
> rx3_ecn_mark: 0
> rx3_removed_vlan_packets: 19795911749
> rx3_wqe_err: 0
> rx3_mpwqe_filler_cqes: 0
> rx3_mpwqe_filler_strides: 0
> rx3_buff_alloc_err: 0
> rx3_cqe_compress_blks: 416658765
> rx3_cqe_compress_pkts: 934986266
> rx3_page_reuse: 0
> rx3_cache_reuse: 34542124
> rx3_cache_full: 9863411232
> rx3_cache_empty: 37376
> rx3_cache_busy: 9863413732
> rx3_cache_waive: 0
> rx3_congst_umr: 0
> rx3_arfs_err: 0
> rx3_xdp_tx_xmit: 0
> rx3_xdp_tx_full: 0
> rx3_xdp_tx_err: 0
> rx3_xdp_tx_cqes: 0
> rx4_packets: 20445652378
> rx4_bytes: 26949065110265
> rx4_csum_complete: 20445652378
> rx4_csum_unnecessary: 0
> rx4_csum_unnecessary_inner: 0
> rx4_csum_none: 0
> rx4_xdp_drop: 0
> rx4_xdp_redirect: 0
> rx4_lro_packets: 0
> rx4_lro_bytes: 0
> rx4_ecn_mark: 0
> rx4_removed_vlan_packets: 20445652378
> rx4_wqe_err: 0
> rx4_mpwqe_filler_cqes: 0
> rx4_mpwqe_filler_strides: 0
> rx4_buff_alloc_err: 0
> rx4_cqe_compress_blks: 506085858
> rx4_cqe_compress_pkts: 1147860328
> rx4_page_reuse: 0
> rx4_cache_reuse: 10122542864
> rx4_cache_full: 100281206
> rx4_cache_empty: 37376
> rx4_cache_busy: 100283304
> rx4_cache_waive: 0
> rx4_congst_umr: 0
> rx4_arfs_err: 0
> rx4_xdp_tx_xmit: 0
> rx4_xdp_tx_full: 0
> rx4_xdp_tx_err: 0
> rx4_xdp_tx_cqes: 0
> rx5_packets: 19622362246
> rx5_bytes: 25843450982982
> rx5_csum_complete: 19622362246
> rx5_csum_unnecessary: 0
> rx5_csum_unnecessary_inner: 0
> rx5_csum_none: 0
> rx5_xdp_drop: 0
> rx5_xdp_redirect: 0
> rx5_lro_packets: 0
> rx5_lro_bytes: 0
> rx5_ecn_mark: 0
> rx5_removed_vlan_packets: 19622362246
> rx5_wqe_err: 0
> rx5_mpwqe_filler_cqes: 0
> rx5_mpwqe_filler_strides: 0
> rx5_buff_alloc_err: 0
> rx5_cqe_compress_blks: 422840924
> rx5_cqe_compress_pkts: 948005878
> rx5_page_reuse: 0
> rx5_cache_reuse: 31285453
> rx5_cache_full: 9779893117
> rx5_cache_empty: 37376
> rx5_cache_busy: 9779895647
> rx5_cache_waive: 0
> rx5_congst_umr: 0
> rx5_arfs_err: 0
> rx5_xdp_tx_xmit: 0
> rx5_xdp_tx_full: 0
> rx5_xdp_tx_err: 0
> rx5_xdp_tx_cqes: 0
> rx6_packets: 19788231278
> rx6_bytes: 25985783006486
> rx6_csum_complete: 19788231278
> rx6_csum_unnecessary: 0
> rx6_csum_unnecessary_inner: 0
> rx6_csum_none: 0
> rx6_xdp_drop: 0
> rx6_xdp_redirect: 0
> rx6_lro_packets: 0
> rx6_lro_bytes: 0
> rx6_ecn_mark: 0
> rx6_removed_vlan_packets: 19788231278
> rx6_wqe_err: 0
> rx6_mpwqe_filler_cqes: 0
> rx6_mpwqe_filler_strides: 0
> rx6_buff_alloc_err: 0
> rx6_cqe_compress_blks: 418799056
> rx6_cqe_compress_pkts: 938282685
> rx6_page_reuse: 0
> rx6_cache_reuse: 18114793
> rx6_cache_full: 9875998295
> rx6_cache_empty: 37376
> rx6_cache_busy: 9876000831
> rx6_cache_waive: 0
> rx6_congst_umr: 0
> rx6_arfs_err: 0
> rx6_xdp_tx_xmit: 0
> rx6_xdp_tx_full: 0
> rx6_xdp_tx_err: 0
> rx6_xdp_tx_cqes: 0
> rx7_packets: 19795759168
> rx7_bytes: 26085056586860
> rx7_csum_complete: 19795759168
> rx7_csum_unnecessary: 0
> rx7_csum_unnecessary_inner: 0
> rx7_csum_none: 0
> rx7_xdp_drop: 0
> rx7_xdp_redirect: 0
> rx7_lro_packets: 0
> rx7_lro_bytes: 0
> rx7_ecn_mark: 0
> rx7_removed_vlan_packets: 19795759168
> rx7_wqe_err: 0
> rx7_mpwqe_filler_cqes: 0
> rx7_mpwqe_filler_strides: 0
> rx7_buff_alloc_err: 0
> rx7_cqe_compress_blks: 413959224
> rx7_cqe_compress_pkts: 927675936
> rx7_page_reuse: 0
> rx7_cache_reuse: 23902990
> rx7_cache_full: 9873974042
> rx7_cache_empty: 37376
> rx7_cache_busy: 9873976574
> rx7_cache_waive: 0
> rx7_congst_umr: 0
> rx7_arfs_err: 0
> rx7_xdp_tx_xmit: 0
> rx7_xdp_tx_full: 0
> rx7_xdp_tx_err: 0
> rx7_xdp_tx_cqes: 0
> rx8_packets: 19963477439
> rx8_bytes: 26384640501789
> rx8_csum_complete: 19963477439
> rx8_csum_unnecessary: 0
> rx8_csum_unnecessary_inner: 0
> rx8_csum_none: 0
> rx8_xdp_drop: 0
> rx8_xdp_redirect: 0
> rx8_lro_packets: 0
> rx8_lro_bytes: 0
> rx8_ecn_mark: 0
> rx8_removed_vlan_packets: 19963477439
> rx8_wqe_err: 0
> rx8_mpwqe_filler_cqes: 0
> rx8_mpwqe_filler_strides: 0
> rx8_buff_alloc_err: 0
> rx8_cqe_compress_blks: 420422857
> rx8_cqe_compress_pkts: 942720292
> rx8_page_reuse: 0
> rx8_cache_reuse: 88181713
> rx8_cache_full: 9893554525
> rx8_cache_empty: 37376
> rx8_cache_busy: 9893556983
> rx8_cache_waive: 0
> rx8_congst_umr: 0
> rx8_arfs_err: 0
> rx8_xdp_tx_xmit: 0
> rx8_xdp_tx_full: 0
> rx8_xdp_tx_err: 0
> rx8_xdp_tx_cqes: 0
> rx9_packets: 19726642138
> rx9_bytes: 26063924286499
> rx9_csum_complete: 19726642138
> rx9_csum_unnecessary: 0
> rx9_csum_unnecessary_inner: 0
> rx9_csum_none: 0
> rx9_xdp_drop: 0
> rx9_xdp_redirect: 0
> rx9_lro_packets: 0
> rx9_lro_bytes: 0
> rx9_ecn_mark: 0
> rx9_removed_vlan_packets: 19726642138
> rx9_wqe_err: 0
> rx9_mpwqe_filler_cqes: 0
> rx9_mpwqe_filler_strides: 0
> rx9_buff_alloc_err: 0
> rx9_cqe_compress_blks: 424227411
> rx9_cqe_compress_pkts: 951534873
> rx9_page_reuse: 0
> rx9_cache_reuse: 482901440
> rx9_cache_full: 9380417487
> rx9_cache_empty: 37376
> rx9_cache_busy: 9380419608
> rx9_cache_waive: 0
> rx9_congst_umr: 0
> rx9_arfs_err: 0
> rx9_xdp_tx_xmit: 0
> rx9_xdp_tx_full: 0
> rx9_xdp_tx_err: 0
> rx9_xdp_tx_cqes: 0
> rx10_packets: 19901229170
> rx10_bytes: 26300854495044
> rx10_csum_complete: 19901229170
> rx10_csum_unnecessary: 0
> rx10_csum_unnecessary_inner: 0
> rx10_csum_none: 0
> rx10_xdp_drop: 0
> rx10_xdp_redirect: 0
> rx10_lro_packets: 0
> rx10_lro_bytes: 0
> rx10_ecn_mark: 0
> rx10_removed_vlan_packets: 19901229170
> rx10_wqe_err: 0
> rx10_mpwqe_filler_cqes: 0
> rx10_mpwqe_filler_strides: 0
> rx10_buff_alloc_err: 0
> rx10_cqe_compress_blks: 419082938
> rx10_cqe_compress_pkts: 940791347
> rx10_page_reuse: 0
> rx10_cache_reuse: 14896055
> rx10_cache_full: 9935715977
> rx10_cache_empty: 37376
> rx10_cache_busy: 9935718513
> rx10_cache_waive: 0
> rx10_congst_umr: 0
> rx10_arfs_err: 0
> rx10_xdp_tx_xmit: 0
> rx10_xdp_tx_full: 0
> rx10_xdp_tx_err: 0
> rx10_xdp_tx_cqes: 0
> rx11_packets: 20352190494
> rx11_bytes: 26851034425372
> rx11_csum_complete: 20352190494
> rx11_csum_unnecessary: 0
> rx11_csum_unnecessary_inner: 0
> rx11_csum_none: 0
> rx11_xdp_drop: 0
> rx11_xdp_redirect: 0
> rx11_lro_packets: 0
> rx11_lro_bytes: 0
> rx11_ecn_mark: 0
> rx11_removed_vlan_packets: 20352190494
> rx11_wqe_err: 0
> rx11_mpwqe_filler_cqes: 0
> rx11_mpwqe_filler_strides: 0
> rx11_buff_alloc_err: 0
> rx11_cqe_compress_blks: 501992147
> rx11_cqe_compress_pkts: 1140398610
> rx11_page_reuse: 0
> rx11_cache_reuse: 10071721531
> rx11_cache_full: 104371621
> rx11_cache_empty: 37376
> rx11_cache_busy: 104373697
> rx11_cache_waive: 0
> rx11_congst_umr: 0
> rx11_arfs_err: 0
> rx11_xdp_tx_xmit: 0
> rx11_xdp_tx_full: 0
> rx11_xdp_tx_err: 0
> rx11_xdp_tx_cqes: 0
> rx12_packets: 19934747149
> rx12_bytes: 26296478787829
> rx12_csum_complete: 19934747149
> rx12_csum_unnecessary: 0
> rx12_csum_unnecessary_inner: 0
> rx12_csum_none: 0
> rx12_xdp_drop: 0
> rx12_xdp_redirect: 0
> rx12_lro_packets: 0
> rx12_lro_bytes: 0
> rx12_ecn_mark: 0
> rx12_removed_vlan_packets: 19934747149
> rx12_wqe_err: 0
> rx12_mpwqe_filler_cqes: 0
> rx12_mpwqe_filler_strides: 0
> rx12_buff_alloc_err: 0
> rx12_cqe_compress_blks: 443350570
> rx12_cqe_compress_pkts: 995997220
> rx12_page_reuse: 0
> rx12_cache_reuse: 9864934174
> rx12_cache_full: 102437428
> rx12_cache_empty: 37376
> rx12_cache_busy: 102439382
> rx12_cache_waive: 0
> rx12_congst_umr: 0
> rx12_arfs_err: 0
> rx12_xdp_tx_xmit: 0
> rx12_xdp_tx_full: 0
> rx12_xdp_tx_err: 0
> rx12_xdp_tx_cqes: 0
> rx13_packets: 19866908096
> rx13_bytes: 26160931936186
> rx13_csum_complete: 19866908096
> rx13_csum_unnecessary: 0
> rx13_csum_unnecessary_inner: 0
> rx13_csum_none: 0
> rx13_xdp_drop: 0
> rx13_xdp_redirect: 0
> rx13_lro_packets: 0
> rx13_lro_bytes: 0
> rx13_ecn_mark: 0
> rx13_removed_vlan_packets: 19866908096
> rx13_wqe_err: 0
> rx13_mpwqe_filler_cqes: 0
> rx13_mpwqe_filler_strides: 0
> rx13_buff_alloc_err: 0
> rx13_cqe_compress_blks: 413640141
> rx13_cqe_compress_pkts: 926175066
> rx13_page_reuse: 0
> rx13_cache_reuse: 36358610
> rx13_cache_full: 9897092921
> rx13_cache_empty: 37376
> rx13_cache_busy: 9897095422
> rx13_cache_waive: 0
> rx13_congst_umr: 0
> rx13_arfs_err: 0
> rx13_xdp_tx_xmit: 0
> rx13_xdp_tx_full: 0
> rx13_xdp_tx_err: 0
> rx13_xdp_tx_cqes: 0
> rx14_packets: 20229035746
> rx14_bytes: 26655092809172
> rx14_csum_complete: 20229035746
> rx14_csum_unnecessary: 0
> rx14_csum_unnecessary_inner: 0
> rx14_csum_none: 0
> rx14_xdp_drop: 0
> rx14_xdp_redirect: 0
> rx14_lro_packets: 0
> rx14_lro_bytes: 0
> rx14_ecn_mark: 0
> rx14_removed_vlan_packets: 20229035746
> rx14_wqe_err: 0
> rx14_mpwqe_filler_cqes: 0
> rx14_mpwqe_filler_strides: 0
> rx14_buff_alloc_err: 0
> rx14_cqe_compress_blks: 460990337
> rx14_cqe_compress_pkts: 1041287948
> rx14_page_reuse: 0
> rx14_cache_reuse: 25649275
> rx14_cache_full: 10088866045
> rx14_cache_empty: 37376
> rx14_cache_busy: 10088868574
> rx14_cache_waive: 0
> rx14_congst_umr: 0
> rx14_arfs_err: 0
> rx14_xdp_tx_xmit: 0
> rx14_xdp_tx_full: 0
> rx14_xdp_tx_err: 0
> rx14_xdp_tx_cqes: 0
> rx15_packets: 20528177154
> rx15_bytes: 27029263893264
> rx15_csum_complete: 20528177154
> rx15_csum_unnecessary: 0
> rx15_csum_unnecessary_inner: 0
> rx15_csum_none: 0
> rx15_xdp_drop: 0
> rx15_xdp_redirect: 0
> rx15_lro_packets: 0
> rx15_lro_bytes: 0
> rx15_ecn_mark: 0
> rx15_removed_vlan_packets: 20528177154
> rx15_wqe_err: 0
> rx15_mpwqe_filler_cqes: 0
> rx15_mpwqe_filler_strides: 0
> rx15_buff_alloc_err: 0
> rx15_cqe_compress_blks: 476776176
> rx15_cqe_compress_pkts: 1076153263
> rx15_page_reuse: 0
> rx15_cache_reuse: 48426735
> rx15_cache_full: 10215659289
> rx15_cache_empty: 37376
> rx15_cache_busy: 10215661817
> rx15_cache_waive: 0
> rx15_congst_umr: 0
> rx15_arfs_err: 0
> rx15_xdp_tx_xmit: 0
> rx15_xdp_tx_full: 0
> rx15_xdp_tx_err: 0
> rx15_xdp_tx_cqes: 0
> rx16_packets: 16104078098
> rx16_bytes: 21256361789679
> rx16_csum_complete: 16104078098
> rx16_csum_unnecessary: 0
> rx16_csum_unnecessary_inner: 0
> rx16_csum_none: 0
> rx16_xdp_drop: 0
> rx16_xdp_redirect: 0
> rx16_lro_packets: 0
> rx16_lro_bytes: 0
> rx16_ecn_mark: 0
> rx16_removed_vlan_packets: 16104078098
> rx16_wqe_err: 0
> rx16_mpwqe_filler_cqes: 0
> rx16_mpwqe_filler_strides: 0
> rx16_buff_alloc_err: 0
> rx16_cqe_compress_blks: 352082054
> rx16_cqe_compress_pkts: 787161670
> rx16_page_reuse: 0
> rx16_cache_reuse: 25912567
> rx16_cache_full: 8026124051
> rx16_cache_empty: 37376
> rx16_cache_busy: 8026126465
> rx16_cache_waive: 0
> rx16_congst_umr: 0
> rx16_arfs_err: 0
> rx16_xdp_tx_xmit: 0
> rx16_xdp_tx_full: 0
> rx16_xdp_tx_err: 0
> rx16_xdp_tx_cqes: 0
> rx17_packets: 16314055017
> rx17_bytes: 21589139030173
> rx17_csum_complete: 16314055017
> rx17_csum_unnecessary: 0
> rx17_csum_unnecessary_inner: 0
> rx17_csum_none: 0
> rx17_xdp_drop: 0
> rx17_xdp_redirect: 0
> rx17_lro_packets: 0
> rx17_lro_bytes: 0
> rx17_ecn_mark: 0
> rx17_removed_vlan_packets: 16314055017
> rx17_wqe_err: 0
> rx17_mpwqe_filler_cqes: 0
> rx17_mpwqe_filler_strides: 0
> rx17_buff_alloc_err: 0
> rx17_cqe_compress_blks: 387834541
> rx17_cqe_compress_pkts: 871851081
> rx17_page_reuse: 0
> rx17_cache_reuse: 24021313
> rx17_cache_full: 8133003829
> rx17_cache_empty: 37376
> rx17_cache_busy: 8133006175
> rx17_cache_waive: 0
> rx17_congst_umr: 0
> rx17_arfs_err: 0
> rx17_xdp_tx_xmit: 0
> rx17_xdp_tx_full: 0
> rx17_xdp_tx_err: 0
> rx17_xdp_tx_cqes: 0
> rx18_packets: 16439016814
> rx18_bytes: 21648651917475
> rx18_csum_complete: 16439016814
> rx18_csum_unnecessary: 0
> rx18_csum_unnecessary_inner: 0
> rx18_csum_none: 0
> rx18_xdp_drop: 0
> rx18_xdp_redirect: 0
> rx18_lro_packets: 0
> rx18_lro_bytes: 0
> rx18_ecn_mark: 0
> rx18_removed_vlan_packets: 16439016814
> rx18_wqe_err: 0
> rx18_mpwqe_filler_cqes: 0
> rx18_mpwqe_filler_strides: 0
> rx18_buff_alloc_err: 0
> rx18_cqe_compress_blks: 375066666
> rx18_cqe_compress_pkts: 843563974
> rx18_page_reuse: 0
> rx18_cache_reuse: 8151064266
> rx18_cache_full: 68442025
> rx18_cache_empty: 37376
> rx18_cache_busy: 68444122
> rx18_cache_waive: 0
> rx18_congst_umr: 0
> rx18_arfs_err: 0
> rx18_xdp_tx_xmit: 0
> rx18_xdp_tx_full: 0
> rx18_xdp_tx_err: 0
> rx18_xdp_tx_cqes: 0
> rx19_packets: 16641223506
> rx19_bytes: 21964749940935
> rx19_csum_complete: 16641223506
> rx19_csum_unnecessary: 0
> rx19_csum_unnecessary_inner: 0
> rx19_csum_none: 0
> rx19_xdp_drop: 0
> rx19_xdp_redirect: 0
> rx19_lro_packets: 0
> rx19_lro_bytes: 0
> rx19_ecn_mark: 0
> rx19_removed_vlan_packets: 16641223506
> rx19_wqe_err: 0
> rx19_mpwqe_filler_cqes: 0
> rx19_mpwqe_filler_strides: 0
> rx19_buff_alloc_err: 0
> rx19_cqe_compress_blks: 387825932
> rx19_cqe_compress_pkts: 872266355
> rx19_page_reuse: 0
> rx19_cache_reuse: 116433620
> rx19_cache_full: 8204175954
> rx19_cache_empty: 37376
> rx19_cache_busy: 8204178120
> rx19_cache_waive: 0
> rx19_congst_umr: 0
> rx19_arfs_err: 0
> rx19_xdp_tx_xmit: 0
> rx19_xdp_tx_full: 0
> rx19_xdp_tx_err: 0
> rx19_xdp_tx_cqes: 0
> rx20_packets: 16206927741
> rx20_bytes: 21387447038430
> rx20_csum_complete: 16206927741
> rx20_csum_unnecessary: 0
> rx20_csum_unnecessary_inner: 0
> rx20_csum_none: 0
> rx20_xdp_drop: 0
> rx20_xdp_redirect: 0
> rx20_lro_packets: 0
> rx20_lro_bytes: 0
> rx20_ecn_mark: 0
> rx20_removed_vlan_packets: 16206927741
> rx20_wqe_err: 0
> rx20_mpwqe_filler_cqes: 0
> rx20_mpwqe_filler_strides: 0
> rx20_buff_alloc_err: 0
> rx20_cqe_compress_blks: 370144620
> rx20_cqe_compress_pkts: 829122671
> rx20_page_reuse: 0
> rx20_cache_reuse: 8053733744
> rx20_cache_full: 49728026
> rx20_cache_empty: 37376
> rx20_cache_busy: 49730116
> rx20_cache_waive: 0
> rx20_congst_umr: 0
> rx20_arfs_err: 0
> rx20_xdp_tx_xmit: 0
> rx20_xdp_tx_full: 0
> rx20_xdp_tx_err: 0
> rx20_xdp_tx_cqes: 0
> rx21_packets: 16562361314
> rx21_bytes: 21856653284356
> rx21_csum_complete: 16562361314
> rx21_csum_unnecessary: 0
> rx21_csum_unnecessary_inner: 0
> rx21_csum_none: 0
> rx21_xdp_drop: 0
> rx21_xdp_redirect: 0
> rx21_lro_packets: 0
> rx21_lro_bytes: 0
> rx21_ecn_mark: 0
> rx21_removed_vlan_packets: 16562361314
> rx21_wqe_err: 0
> rx21_mpwqe_filler_cqes: 0
> rx21_mpwqe_filler_strides: 0
> rx21_buff_alloc_err: 0
> rx21_cqe_compress_blks: 350790425
> rx21_cqe_compress_pkts: 783850729
> rx21_page_reuse: 0
> rx21_cache_reuse: 28077493
> rx21_cache_full: 8253100706
> rx21_cache_empty: 37376
> rx21_cache_busy: 8253103147
> rx21_cache_waive: 0
> rx21_congst_umr: 0
> rx21_arfs_err: 0
> rx21_xdp_tx_xmit: 0
> rx21_xdp_tx_full: 0
> rx21_xdp_tx_err: 0
> rx21_xdp_tx_cqes: 0
> rx22_packets: 16350307571
> rx22_bytes: 21408575325592
> rx22_csum_complete: 16350307571
> rx22_csum_unnecessary: 0
> rx22_csum_unnecessary_inner: 0
> rx22_csum_none: 0
> rx22_xdp_drop: 0
> rx22_xdp_redirect: 0
> rx22_lro_packets: 0
> rx22_lro_bytes: 0
> rx22_ecn_mark: 0
> rx22_removed_vlan_packets: 16350307571
> rx22_wqe_err: 0
> rx22_mpwqe_filler_cqes: 0
> rx22_mpwqe_filler_strides: 0
> rx22_buff_alloc_err: 0
> rx22_cqe_compress_blks: 353531065
> rx22_cqe_compress_pkts: 790814415
> rx22_page_reuse: 0
> rx22_cache_reuse: 16934343
> rx22_cache_full: 8158216889
> rx22_cache_empty: 37376
> rx22_cache_busy: 8158219417
> rx22_cache_waive: 0
> rx22_congst_umr: 0
> rx22_arfs_err: 0
> rx22_xdp_tx_xmit: 0
> rx22_xdp_tx_full: 0
> rx22_xdp_tx_err: 0
> rx22_xdp_tx_cqes: 0
> rx23_packets: 16019811764
> rx23_bytes: 21137182570985
> rx23_csum_complete: 16019811764
> rx23_csum_unnecessary: 0
> rx23_csum_unnecessary_inner: 0
> rx23_csum_none: 0
> rx23_xdp_drop: 0
> rx23_xdp_redirect: 0
> rx23_lro_packets: 0
> rx23_lro_bytes: 0
> rx23_ecn_mark: 0
> rx23_removed_vlan_packets: 16019811764
> rx23_wqe_err: 0
> rx23_mpwqe_filler_cqes: 0
> rx23_mpwqe_filler_strides: 0
> rx23_buff_alloc_err: 0
> rx23_cqe_compress_blks: 349733033
> rx23_cqe_compress_pkts: 781248862
> rx23_page_reuse: 0
> rx23_cache_reuse: 33422343
> rx23_cache_full: 7976481152
> rx23_cache_empty: 37376
> rx23_cache_busy: 7976483525
> rx23_cache_waive: 0
> rx23_congst_umr: 0
> rx23_arfs_err: 0
> rx23_xdp_tx_xmit: 0
> rx23_xdp_tx_full: 0
> rx23_xdp_tx_err: 0
> rx23_xdp_tx_cqes: 0
> rx24_packets: 16212040646
> rx24_bytes: 21393399325700
> rx24_csum_complete: 16212040646
> rx24_csum_unnecessary: 0
> rx24_csum_unnecessary_inner: 0
> rx24_csum_none: 0
> rx24_xdp_drop: 0
> rx24_xdp_redirect: 0
> rx24_lro_packets: 0
> rx24_lro_bytes: 0
> rx24_ecn_mark: 0
> rx24_removed_vlan_packets: 16212040646
> rx24_wqe_err: 0
> rx24_mpwqe_filler_cqes: 0
> rx24_mpwqe_filler_strides: 0
> rx24_buff_alloc_err: 0
> rx24_cqe_compress_blks: 379833752
> rx24_cqe_compress_pkts: 852020179
> rx24_page_reuse: 0
> rx24_cache_reuse: 8033552512
> rx24_cache_full: 72465843
> rx24_cache_empty: 37376
> rx24_cache_busy: 72467789
> rx24_cache_waive: 0
> rx24_congst_umr: 0
> rx24_arfs_err: 0
> rx24_xdp_tx_xmit: 0
> rx24_xdp_tx_full: 0
> rx24_xdp_tx_err: 0
> rx24_xdp_tx_cqes: 0
> rx25_packets: 16412186257
> rx25_bytes: 21651198388407
> rx25_csum_complete: 16412186257
> rx25_csum_unnecessary: 0
> rx25_csum_unnecessary_inner: 0
> rx25_csum_none: 0
> rx25_xdp_drop: 0
> rx25_xdp_redirect: 0
> rx25_lro_packets: 0
> rx25_lro_bytes: 0
> rx25_ecn_mark: 0
> rx25_removed_vlan_packets: 16412186257
> rx25_wqe_err: 0
> rx25_mpwqe_filler_cqes: 0
> rx25_mpwqe_filler_strides: 0
> rx25_buff_alloc_err: 0
> rx25_cqe_compress_blks: 383979685
> rx25_cqe_compress_pkts: 861985772
> rx25_page_reuse: 0
> rx25_cache_reuse: 8129807841
> rx25_cache_full: 76283342
> rx25_cache_empty: 37376
> rx25_cache_busy: 76285271
> rx25_cache_waive: 0
> rx25_congst_umr: 0
> rx25_arfs_err: 0
> rx25_xdp_tx_xmit: 0
> rx25_xdp_tx_full: 0
> rx25_xdp_tx_err: 0
> rx25_xdp_tx_cqes: 0
> rx26_packets: 16304310003
> rx26_bytes: 21571217538721
> rx26_csum_complete: 16304310003
> rx26_csum_unnecessary: 0
> rx26_csum_unnecessary_inner: 0
> rx26_csum_none: 0
> rx26_xdp_drop: 0
> rx26_xdp_redirect: 0
> rx26_lro_packets: 0
> rx26_lro_bytes: 0
> rx26_ecn_mark: 0
> rx26_removed_vlan_packets: 16304310003
> rx26_wqe_err: 0
> rx26_mpwqe_filler_cqes: 0
> rx26_mpwqe_filler_strides: 0
> rx26_buff_alloc_err: 0
> rx26_cqe_compress_blks: 353314041
> rx26_cqe_compress_pkts: 788838424
> rx26_page_reuse: 0
> rx26_cache_reuse: 19673790
> rx26_cache_full: 8132478659
> rx26_cache_empty: 37376
> rx26_cache_busy: 8132481198
> rx26_cache_waive: 0
> rx26_congst_umr: 0
> rx26_arfs_err: 0
> rx26_xdp_tx_xmit: 0
> rx26_xdp_tx_full: 0
> rx26_xdp_tx_err: 0
> rx26_xdp_tx_cqes: 0
> rx27_packets: 16171856079
> rx27_bytes: 21376891736540
> rx27_csum_complete: 16171856079
> rx27_csum_unnecessary: 0
> rx27_csum_unnecessary_inner: 0
> rx27_csum_none: 0
> rx27_xdp_drop: 0
> rx27_xdp_redirect: 0
> rx27_lro_packets: 0
> rx27_lro_bytes: 0
> rx27_ecn_mark: 0
> rx27_removed_vlan_packets: 16171856079
> rx27_wqe_err: 0
> rx27_mpwqe_filler_cqes: 0
> rx27_mpwqe_filler_strides: 0
> rx27_buff_alloc_err: 0
> rx27_cqe_compress_blks: 386632845
> rx27_cqe_compress_pkts: 869362576
> rx27_page_reuse: 0
> rx27_cache_reuse: 10070560
> rx27_cache_full: 8075854928
> rx27_cache_empty: 37376
> rx27_cache_busy: 8075857468
> rx27_cache_waive: 0
> rx27_congst_umr: 0
> rx27_arfs_err: 0
> rx27_xdp_tx_xmit: 0
> rx27_xdp_tx_full: 0
> rx27_xdp_tx_err: 0
> rx27_xdp_tx_cqes: 0
> rx28_packets: 0
> rx28_bytes: 0
> rx28_csum_complete: 0
> rx28_csum_unnecessary: 0
> rx28_csum_unnecessary_inner: 0
> rx28_csum_none: 0
> rx28_xdp_drop: 0
> rx28_xdp_redirect: 0
> rx28_lro_packets: 0
> rx28_lro_bytes: 0
> rx28_ecn_mark: 0
> rx28_removed_vlan_packets: 0
> rx28_wqe_err: 0
> rx28_mpwqe_filler_cqes: 0
> rx28_mpwqe_filler_strides: 0
> rx28_buff_alloc_err: 0
> rx28_cqe_compress_blks: 0
> rx28_cqe_compress_pkts: 0
> rx28_page_reuse: 0
> rx28_cache_reuse: 0
> rx28_cache_full: 0
> rx28_cache_empty: 2560
> rx28_cache_busy: 0
> rx28_cache_waive: 0
> rx28_congst_umr: 0
> rx28_arfs_err: 0
> rx28_xdp_tx_xmit: 0
> rx28_xdp_tx_full: 0
> rx28_xdp_tx_err: 0
> rx28_xdp_tx_cqes: 0
> rx29_packets: 0
> rx29_bytes: 0
> rx29_csum_complete: 0
> rx29_csum_unnecessary: 0
> rx29_csum_unnecessary_inner: 0
> rx29_csum_none: 0
> rx29_xdp_drop: 0
> rx29_xdp_redirect: 0
> rx29_lro_packets: 0
> rx29_lro_bytes: 0
> rx29_ecn_mark: 0
> rx29_removed_vlan_packets: 0
> rx29_wqe_err: 0
> rx29_mpwqe_filler_cqes: 0
> rx29_mpwqe_filler_strides: 0
> rx29_buff_alloc_err: 0
> rx29_cqe_compress_blks: 0
> rx29_cqe_compress_pkts: 0
> rx29_page_reuse: 0
> rx29_cache_reuse: 0
> rx29_cache_full: 0
> rx29_cache_empty: 2560
> rx29_cache_busy: 0
> rx29_cache_waive: 0
> rx29_congst_umr: 0
> rx29_arfs_err: 0
> rx29_xdp_tx_xmit: 0
> rx29_xdp_tx_full: 0
> rx29_xdp_tx_err: 0
> rx29_xdp_tx_cqes: 0
> rx30_packets: 0
> rx30_bytes: 0
> rx30_csum_complete: 0
> rx30_csum_unnecessary: 0
> rx30_csum_unnecessary_inner: 0
> rx30_csum_none: 0
> rx30_xdp_drop: 0
> rx30_xdp_redirect: 0
> rx30_lro_packets: 0
> rx30_lro_bytes: 0
> rx30_ecn_mark: 0
> rx30_removed_vlan_packets: 0
> rx30_wqe_err: 0
> rx30_mpwqe_filler_cqes: 0
> rx30_mpwqe_filler_strides: 0
> rx30_buff_alloc_err: 0
> rx30_cqe_compress_blks: 0
> rx30_cqe_compress_pkts: 0
> rx30_page_reuse: 0
> rx30_cache_reuse: 0
> rx30_cache_full: 0
> rx30_cache_empty: 2560
> rx30_cache_busy: 0
> rx30_cache_waive: 0
> rx30_congst_umr: 0
> rx30_arfs_err: 0
> rx30_xdp_tx_xmit: 0
> rx30_xdp_tx_full: 0
> rx30_xdp_tx_err: 0
> rx30_xdp_tx_cqes: 0
> rx31_packets: 0
> rx31_bytes: 0
> rx31_csum_complete: 0
> rx31_csum_unnecessary: 0
> rx31_csum_unnecessary_inner: 0
> rx31_csum_none: 0
> rx31_xdp_drop: 0
> rx31_xdp_redirect: 0
> rx31_lro_packets: 0
> rx31_lro_bytes: 0
> rx31_ecn_mark: 0
> rx31_removed_vlan_packets: 0
> rx31_wqe_err: 0
> rx31_mpwqe_filler_cqes: 0
> rx31_mpwqe_filler_strides: 0
> rx31_buff_alloc_err: 0
> rx31_cqe_compress_blks: 0
> rx31_cqe_compress_pkts: 0
> rx31_page_reuse: 0
> rx31_cache_reuse: 0
> rx31_cache_full: 0
> rx31_cache_empty: 2560
> rx31_cache_busy: 0
> rx31_cache_waive: 0
> rx31_congst_umr: 0
> rx31_arfs_err: 0
> rx31_xdp_tx_xmit: 0
> rx31_xdp_tx_full: 0
> rx31_xdp_tx_err: 0
> rx31_xdp_tx_cqes: 0
> rx32_packets: 0
> rx32_bytes: 0
> rx32_csum_complete: 0
> rx32_csum_unnecessary: 0
> rx32_csum_unnecessary_inner: 0
> rx32_csum_none: 0
> rx32_xdp_drop: 0
> rx32_xdp_redirect: 0
> rx32_lro_packets: 0
> rx32_lro_bytes: 0
> rx32_ecn_mark: 0
> rx32_removed_vlan_packets: 0
> rx32_wqe_err: 0
> rx32_mpwqe_filler_cqes: 0
> rx32_mpwqe_filler_strides: 0
> rx32_buff_alloc_err: 0
> rx32_cqe_compress_blks: 0
> rx32_cqe_compress_pkts: 0
> rx32_page_reuse: 0
> rx32_cache_reuse: 0
> rx32_cache_full: 0
> rx32_cache_empty: 2560
> rx32_cache_busy: 0
> rx32_cache_waive: 0
> rx32_congst_umr: 0
> rx32_arfs_err: 0
> rx32_xdp_tx_xmit: 0
> rx32_xdp_tx_full: 0
> rx32_xdp_tx_err: 0
> rx32_xdp_tx_cqes: 0
> rx33_packets: 0
> rx33_bytes: 0
> rx33_csum_complete: 0
> rx33_csum_unnecessary: 0
> rx33_csum_unnecessary_inner: 0
> rx33_csum_none: 0
> rx33_xdp_drop: 0
> rx33_xdp_redirect: 0
> rx33_lro_packets: 0
> rx33_lro_bytes: 0
> rx33_ecn_mark: 0
> rx33_removed_vlan_packets: 0
> rx33_wqe_err: 0
> rx33_mpwqe_filler_cqes: 0
> rx33_mpwqe_filler_strides: 0
> rx33_buff_alloc_err: 0
> rx33_cqe_compress_blks: 0
> rx33_cqe_compress_pkts: 0
> rx33_page_reuse: 0
> rx33_cache_reuse: 0
> rx33_cache_full: 0
> rx33_cache_empty: 2560
> rx33_cache_busy: 0
> rx33_cache_waive: 0
> rx33_congst_umr: 0
> rx33_arfs_err: 0
> rx33_xdp_tx_xmit: 0
> rx33_xdp_tx_full: 0
> rx33_xdp_tx_err: 0
> rx33_xdp_tx_cqes: 0
> rx34_packets: 0
> rx34_bytes: 0
> rx34_csum_complete: 0
> rx34_csum_unnecessary: 0
> rx34_csum_unnecessary_inner: 0
> rx34_csum_none: 0
> rx34_xdp_drop: 0
> rx34_xdp_redirect: 0
> rx34_lro_packets: 0
> rx34_lro_bytes: 0
> rx34_ecn_mark: 0
> rx34_removed_vlan_packets: 0
> rx34_wqe_err: 0
> rx34_mpwqe_filler_cqes: 0
> rx34_mpwqe_filler_strides: 0
> rx34_buff_alloc_err: 0
> rx34_cqe_compress_blks: 0
> rx34_cqe_compress_pkts: 0
> rx34_page_reuse: 0
> rx34_cache_reuse: 0
> rx34_cache_full: 0
> rx34_cache_empty: 2560
> rx34_cache_busy: 0
> rx34_cache_waive: 0
> rx34_congst_umr: 0
> rx34_arfs_err: 0
> rx34_xdp_tx_xmit: 0
> rx34_xdp_tx_full: 0
> rx34_xdp_tx_err: 0
> rx34_xdp_tx_cqes: 0
> rx35_packets: 0
> rx35_bytes: 0
> rx35_csum_complete: 0
> rx35_csum_unnecessary: 0
> rx35_csum_unnecessary_inner: 0
> rx35_csum_none: 0
> rx35_xdp_drop: 0
> rx35_xdp_redirect: 0
> rx35_lro_packets: 0
> rx35_lro_bytes: 0
> rx35_ecn_mark: 0
> rx35_removed_vlan_packets: 0
> rx35_wqe_err: 0
> rx35_mpwqe_filler_cqes: 0
> rx35_mpwqe_filler_strides: 0
> rx35_buff_alloc_err: 0
> rx35_cqe_compress_blks: 0
> rx35_cqe_compress_pkts: 0
> rx35_page_reuse: 0
> rx35_cache_reuse: 0
> rx35_cache_full: 0
> rx35_cache_empty: 2560
> rx35_cache_busy: 0
> rx35_cache_waive: 0
> rx35_congst_umr: 0
> rx35_arfs_err: 0
> rx35_xdp_tx_xmit: 0
> rx35_xdp_tx_full: 0
> rx35_xdp_tx_err: 0
> rx35_xdp_tx_cqes: 0
> rx36_packets: 0
> rx36_bytes: 0
> rx36_csum_complete: 0
> rx36_csum_unnecessary: 0
> rx36_csum_unnecessary_inner: 0
> rx36_csum_none: 0
> rx36_xdp_drop: 0
> rx36_xdp_redirect: 0
> rx36_lro_packets: 0
> rx36_lro_bytes: 0
> rx36_ecn_mark: 0
> rx36_removed_vlan_packets: 0
> rx36_wqe_err: 0
> rx36_mpwqe_filler_cqes: 0
> rx36_mpwqe_filler_strides: 0
> rx36_buff_alloc_err: 0
> rx36_cqe_compress_blks: 0
> rx36_cqe_compress_pkts: 0
> rx36_page_reuse: 0
> rx36_cache_reuse: 0
> rx36_cache_full: 0
> rx36_cache_empty: 2560
> rx36_cache_busy: 0
> rx36_cache_waive: 0
> rx36_congst_umr: 0
> rx36_arfs_err: 0
> rx36_xdp_tx_xmit: 0
> rx36_xdp_tx_full: 0
> rx36_xdp_tx_err: 0
> rx36_xdp_tx_cqes: 0
> rx37_packets: 0
> rx37_bytes: 0
> rx37_csum_complete: 0
> rx37_csum_unnecessary: 0
> rx37_csum_unnecessary_inner: 0
> rx37_csum_none: 0
> rx37_xdp_drop: 0
> rx37_xdp_redirect: 0
> rx37_lro_packets: 0
> rx37_lro_bytes: 0
> rx37_ecn_mark: 0
> rx37_removed_vlan_packets: 0
> rx37_wqe_err: 0
> rx37_mpwqe_filler_cqes: 0
> rx37_mpwqe_filler_strides: 0
> rx37_buff_alloc_err: 0
> rx37_cqe_compress_blks: 0
> rx37_cqe_compress_pkts: 0
> rx37_page_reuse: 0
> rx37_cache_reuse: 0
> rx37_cache_full: 0
> rx37_cache_empty: 2560
> rx37_cache_busy: 0
> rx37_cache_waive: 0
> rx37_congst_umr: 0
> rx37_arfs_err: 0
> rx37_xdp_tx_xmit: 0
> rx37_xdp_tx_full: 0
> rx37_xdp_tx_err: 0
> rx37_xdp_tx_cqes: 0
> rx38_packets: 0
> rx38_bytes: 0
> rx38_csum_complete: 0
> rx38_csum_unnecessary: 0
> rx38_csum_unnecessary_inner: 0
> rx38_csum_none: 0
> rx38_xdp_drop: 0
> rx38_xdp_redirect: 0
> rx38_lro_packets: 0
> rx38_lro_bytes: 0
> rx38_ecn_mark: 0
> rx38_removed_vlan_packets: 0
> rx38_wqe_err: 0
> rx38_mpwqe_filler_cqes: 0
> rx38_mpwqe_filler_strides: 0
> rx38_buff_alloc_err: 0
> rx38_cqe_compress_blks: 0
> rx38_cqe_compress_pkts: 0
> rx38_page_reuse: 0
> rx38_cache_reuse: 0
> rx38_cache_full: 0
> rx38_cache_empty: 2560
> rx38_cache_busy: 0
> rx38_cache_waive: 0
> rx38_congst_umr: 0
> rx38_arfs_err: 0
> rx38_xdp_tx_xmit: 0
> rx38_xdp_tx_full: 0
> rx38_xdp_tx_err: 0
> rx38_xdp_tx_cqes: 0
> rx39_packets: 0
> rx39_bytes: 0
> rx39_csum_complete: 0
> rx39_csum_unnecessary: 0
> rx39_csum_unnecessary_inner: 0
> rx39_csum_none: 0
> rx39_xdp_drop: 0
> rx39_xdp_redirect: 0
> rx39_lro_packets: 0
> rx39_lro_bytes: 0
> rx39_ecn_mark: 0
> rx39_removed_vlan_packets: 0
> rx39_wqe_err: 0
> rx39_mpwqe_filler_cqes: 0
> rx39_mpwqe_filler_strides: 0
> rx39_buff_alloc_err: 0
> rx39_cqe_compress_blks: 0
> rx39_cqe_compress_pkts: 0
> rx39_page_reuse: 0
> rx39_cache_reuse: 0
> rx39_cache_full: 0
> rx39_cache_empty: 2560
> rx39_cache_busy: 0
> rx39_cache_waive: 0
> rx39_congst_umr: 0
> rx39_arfs_err: 0
> rx39_xdp_tx_xmit: 0
> rx39_xdp_tx_full: 0
> rx39_xdp_tx_err: 0
> rx39_xdp_tx_cqes: 0
> rx40_packets: 0
> rx40_bytes: 0
> rx40_csum_complete: 0
> rx40_csum_unnecessary: 0
> rx40_csum_unnecessary_inner: 0
> rx40_csum_none: 0
> rx40_xdp_drop: 0
> rx40_xdp_redirect: 0
> rx40_lro_packets: 0
> rx40_lro_bytes: 0
> rx40_ecn_mark: 0
> rx40_removed_vlan_packets: 0
> rx40_wqe_err: 0
> rx40_mpwqe_filler_cqes: 0
> rx40_mpwqe_filler_strides: 0
> rx40_buff_alloc_err: 0
> rx40_cqe_compress_blks: 0
> rx40_cqe_compress_pkts: 0
> rx40_page_reuse: 0
> rx40_cache_reuse: 0
> rx40_cache_full: 0
> rx40_cache_empty: 2560
> rx40_cache_busy: 0
> rx40_cache_waive: 0
> rx40_congst_umr: 0
> rx40_arfs_err: 0
> rx40_xdp_tx_xmit: 0
> rx40_xdp_tx_full: 0
> rx40_xdp_tx_err: 0
> rx40_xdp_tx_cqes: 0
> rx41_packets: 0
> rx41_bytes: 0
> rx41_csum_complete: 0
> rx41_csum_unnecessary: 0
> rx41_csum_unnecessary_inner: 0
> rx41_csum_none: 0
> rx41_xdp_drop: 0
> rx41_xdp_redirect: 0
> rx41_lro_packets: 0
> rx41_lro_bytes: 0
> rx41_ecn_mark: 0
> rx41_removed_vlan_packets: 0
> rx41_wqe_err: 0
> rx41_mpwqe_filler_cqes: 0
> rx41_mpwqe_filler_strides: 0
> rx41_buff_alloc_err: 0
> rx41_cqe_compress_blks: 0
> rx41_cqe_compress_pkts: 0
> rx41_page_reuse: 0
> rx41_cache_reuse: 0
> rx41_cache_full: 0
> rx41_cache_empty: 2560
> rx41_cache_busy: 0
> rx41_cache_waive: 0
> rx41_congst_umr: 0
> rx41_arfs_err: 0
> rx41_xdp_tx_xmit: 0
> rx41_xdp_tx_full: 0
> rx41_xdp_tx_err: 0
> rx41_xdp_tx_cqes: 0
> rx42_packets: 0
> rx42_bytes: 0
> rx42_csum_complete: 0
> rx42_csum_unnecessary: 0
> rx42_csum_unnecessary_inner: 0
> rx42_csum_none: 0
> rx42_xdp_drop: 0
> rx42_xdp_redirect: 0
> rx42_lro_packets: 0
> rx42_lro_bytes: 0
> rx42_ecn_mark: 0
> rx42_removed_vlan_packets: 0
> rx42_wqe_err: 0
> rx42_mpwqe_filler_cqes: 0
> rx42_mpwqe_filler_strides: 0
> rx42_buff_alloc_err: 0
> rx42_cqe_compress_blks: 0
> rx42_cqe_compress_pkts: 0
> rx42_page_reuse: 0
> rx42_cache_reuse: 0
> rx42_cache_full: 0
> rx42_cache_empty: 2560
> rx42_cache_busy: 0
> rx42_cache_waive: 0
> rx42_congst_umr: 0
> rx42_arfs_err: 0
> rx42_xdp_tx_xmit: 0
> rx42_xdp_tx_full: 0
> rx42_xdp_tx_err: 0
> rx42_xdp_tx_cqes: 0
> rx43_packets: 0
> rx43_bytes: 0
> rx43_csum_complete: 0
> rx43_csum_unnecessary: 0
> rx43_csum_unnecessary_inner: 0
> rx43_csum_none: 0
> rx43_xdp_drop: 0
> rx43_xdp_redirect: 0
> rx43_lro_packets: 0
> rx43_lro_bytes: 0
> rx43_ecn_mark: 0
> rx43_removed_vlan_packets: 0
> rx43_wqe_err: 0
> rx43_mpwqe_filler_cqes: 0
> rx43_mpwqe_filler_strides: 0
> rx43_buff_alloc_err: 0
> rx43_cqe_compress_blks: 0
> rx43_cqe_compress_pkts: 0
> rx43_page_reuse: 0
> rx43_cache_reuse: 0
> rx43_cache_full: 0
> rx43_cache_empty: 2560
> rx43_cache_busy: 0
> rx43_cache_waive: 0
> rx43_congst_umr: 0
> rx43_arfs_err: 0
> rx43_xdp_tx_xmit: 0
> rx43_xdp_tx_full: 0
> rx43_xdp_tx_err: 0
> rx43_xdp_tx_cqes: 0
> rx44_packets: 0
> rx44_bytes: 0
> rx44_csum_complete: 0
> rx44_csum_unnecessary: 0
> rx44_csum_unnecessary_inner: 0
> rx44_csum_none: 0
> rx44_xdp_drop: 0
> rx44_xdp_redirect: 0
> rx44_lro_packets: 0
> rx44_lro_bytes: 0
> rx44_ecn_mark: 0
> rx44_removed_vlan_packets: 0
> rx44_wqe_err: 0
> rx44_mpwqe_filler_cqes: 0
> rx44_mpwqe_filler_strides: 0
> rx44_buff_alloc_err: 0
> rx44_cqe_compress_blks: 0
> rx44_cqe_compress_pkts: 0
> rx44_page_reuse: 0
> rx44_cache_reuse: 0
> rx44_cache_full: 0
> rx44_cache_empty: 2560
> rx44_cache_busy: 0
> rx44_cache_waive: 0
> rx44_congst_umr: 0
> rx44_arfs_err: 0
> rx44_xdp_tx_xmit: 0
> rx44_xdp_tx_full: 0
> rx44_xdp_tx_err: 0
> rx44_xdp_tx_cqes: 0
> rx45_packets: 0
> rx45_bytes: 0
> rx45_csum_complete: 0
> rx45_csum_unnecessary: 0
> rx45_csum_unnecessary_inner: 0
> rx45_csum_none: 0
> rx45_xdp_drop: 0
> rx45_xdp_redirect: 0
> rx45_lro_packets: 0
> rx45_lro_bytes: 0
> rx45_ecn_mark: 0
> rx45_removed_vlan_packets: 0
> rx45_wqe_err: 0
> rx45_mpwqe_filler_cqes: 0
> rx45_mpwqe_filler_strides: 0
> rx45_buff_alloc_err: 0
> rx45_cqe_compress_blks: 0
> rx45_cqe_compress_pkts: 0
> rx45_page_reuse: 0
> rx45_cache_reuse: 0
> rx45_cache_full: 0
> rx45_cache_empty: 2560
> rx45_cache_busy: 0
> rx45_cache_waive: 0
> rx45_congst_umr: 0
> rx45_arfs_err: 0
> rx45_xdp_tx_xmit: 0
> rx45_xdp_tx_full: 0
> rx45_xdp_tx_err: 0
> rx45_xdp_tx_cqes: 0
> rx46_packets: 0
> rx46_bytes: 0
> rx46_csum_complete: 0
> rx46_csum_unnecessary: 0
> rx46_csum_unnecessary_inner: 0
> rx46_csum_none: 0
> rx46_xdp_drop: 0
> rx46_xdp_redirect: 0
> rx46_lro_packets: 0
> rx46_lro_bytes: 0
> rx46_ecn_mark: 0
> rx46_removed_vlan_packets: 0
> rx46_wqe_err: 0
> rx46_mpwqe_filler_cqes: 0
> rx46_mpwqe_filler_strides: 0
> rx46_buff_alloc_err: 0
> rx46_cqe_compress_blks: 0
> rx46_cqe_compress_pkts: 0
> rx46_page_reuse: 0
> rx46_cache_reuse: 0
> rx46_cache_full: 0
> rx46_cache_empty: 2560
> rx46_cache_busy: 0
> rx46_cache_waive: 0
> rx46_congst_umr: 0
> rx46_arfs_err: 0
> rx46_xdp_tx_xmit: 0
> rx46_xdp_tx_full: 0
> rx46_xdp_tx_err: 0
> rx46_xdp_tx_cqes: 0
> rx47_packets: 0
> rx47_bytes: 0
> rx47_csum_complete: 0
> rx47_csum_unnecessary: 0
> rx47_csum_unnecessary_inner: 0
> rx47_csum_none: 0
> rx47_xdp_drop: 0
> rx47_xdp_redirect: 0
> rx47_lro_packets: 0
> rx47_lro_bytes: 0
> rx47_ecn_mark: 0
> rx47_removed_vlan_packets: 0
> rx47_wqe_err: 0
> rx47_mpwqe_filler_cqes: 0
> rx47_mpwqe_filler_strides: 0
> rx47_buff_alloc_err: 0
> rx47_cqe_compress_blks: 0
> rx47_cqe_compress_pkts: 0
> rx47_page_reuse: 0
> rx47_cache_reuse: 0
> rx47_cache_full: 0
> rx47_cache_empty: 2560
> rx47_cache_busy: 0
> rx47_cache_waive: 0
> rx47_congst_umr: 0
> rx47_arfs_err: 0
> rx47_xdp_tx_xmit: 0
> rx47_xdp_tx_full: 0
> rx47_xdp_tx_err: 0
> rx47_xdp_tx_cqes: 0
> rx48_packets: 0
> rx48_bytes: 0
> rx48_csum_complete: 0
> rx48_csum_unnecessary: 0
> rx48_csum_unnecessary_inner: 0
> rx48_csum_none: 0
> rx48_xdp_drop: 0
> rx48_xdp_redirect: 0
> rx48_lro_packets: 0
> rx48_lro_bytes: 0
> rx48_ecn_mark: 0
> rx48_removed_vlan_packets: 0
> rx48_wqe_err: 0
> rx48_mpwqe_filler_cqes: 0
> rx48_mpwqe_filler_strides: 0
> rx48_buff_alloc_err: 0
> rx48_cqe_compress_blks: 0
> rx48_cqe_compress_pkts: 0
> rx48_page_reuse: 0
> rx48_cache_reuse: 0
> rx48_cache_full: 0
> rx48_cache_empty: 2560
> rx48_cache_busy: 0
> rx48_cache_waive: 0
> rx48_congst_umr: 0
> rx48_arfs_err: 0
> rx48_xdp_tx_xmit: 0
> rx48_xdp_tx_full: 0
> rx48_xdp_tx_err: 0
> rx48_xdp_tx_cqes: 0
> rx49_packets: 0
> rx49_bytes: 0
> rx49_csum_complete: 0
> rx49_csum_unnecessary: 0
> rx49_csum_unnecessary_inner: 0
> rx49_csum_none: 0
> rx49_xdp_drop: 0
> rx49_xdp_redirect: 0
> rx49_lro_packets: 0
> rx49_lro_bytes: 0
> rx49_ecn_mark: 0
> rx49_removed_vlan_packets: 0
> rx49_wqe_err: 0
> rx49_mpwqe_filler_cqes: 0
> rx49_mpwqe_filler_strides: 0
> rx49_buff_alloc_err: 0
> rx49_cqe_compress_blks: 0
> rx49_cqe_compress_pkts: 0
> rx49_page_reuse: 0
> rx49_cache_reuse: 0
> rx49_cache_full: 0
> rx49_cache_empty: 2560
> rx49_cache_busy: 0
> rx49_cache_waive: 0
> rx49_congst_umr: 0
> rx49_arfs_err: 0
> rx49_xdp_tx_xmit: 0
> rx49_xdp_tx_full: 0
> rx49_xdp_tx_err: 0
> rx49_xdp_tx_cqes: 0
> rx50_packets: 0
> rx50_bytes: 0
> rx50_csum_complete: 0
> rx50_csum_unnecessary: 0
> rx50_csum_unnecessary_inner: 0
> rx50_csum_none: 0
> rx50_xdp_drop: 0
> rx50_xdp_redirect: 0
> rx50_lro_packets: 0
> rx50_lro_bytes: 0
> rx50_ecn_mark: 0
> rx50_removed_vlan_packets: 0
> rx50_wqe_err: 0
> rx50_mpwqe_filler_cqes: 0
> rx50_mpwqe_filler_strides: 0
> rx50_buff_alloc_err: 0
> rx50_cqe_compress_blks: 0
> rx50_cqe_compress_pkts: 0
> rx50_page_reuse: 0
> rx50_cache_reuse: 0
> rx50_cache_full: 0
> rx50_cache_empty: 2560
> rx50_cache_busy: 0
> rx50_cache_waive: 0
> rx50_congst_umr: 0
> rx50_arfs_err: 0
> rx50_xdp_tx_xmit: 0
> rx50_xdp_tx_full: 0
> rx50_xdp_tx_err: 0
> rx50_xdp_tx_cqes: 0
> rx51_packets: 0
> rx51_bytes: 0
> rx51_csum_complete: 0
> rx51_csum_unnecessary: 0
> rx51_csum_unnecessary_inner: 0
> rx51_csum_none: 0
> rx51_xdp_drop: 0
> rx51_xdp_redirect: 0
> rx51_lro_packets: 0
> rx51_lro_bytes: 0
> rx51_ecn_mark: 0
> rx51_removed_vlan_packets: 0
> rx51_wqe_err: 0
> rx51_mpwqe_filler_cqes: 0
> rx51_mpwqe_filler_strides: 0
> rx51_buff_alloc_err: 0
> rx51_cqe_compress_blks: 0
> rx51_cqe_compress_pkts: 0
> rx51_page_reuse: 0
> rx51_cache_reuse: 0
> rx51_cache_full: 0
> rx51_cache_empty: 2560
> rx51_cache_busy: 0
> rx51_cache_waive: 0
> rx51_congst_umr: 0
> rx51_arfs_err: 0
> rx51_xdp_tx_xmit: 0
> rx51_xdp_tx_full: 0
> rx51_xdp_tx_err: 0
> rx51_xdp_tx_cqes: 0
> rx52_packets: 0
> rx52_bytes: 0
> rx52_csum_complete: 0
> rx52_csum_unnecessary: 0
> rx52_csum_unnecessary_inner: 0
> rx52_csum_none: 0
> rx52_xdp_drop: 0
> rx52_xdp_redirect: 0
> rx52_lro_packets: 0
> rx52_lro_bytes: 0
> rx52_ecn_mark: 0
> rx52_removed_vlan_packets: 0
> rx52_wqe_err: 0
> rx52_mpwqe_filler_cqes: 0
> rx52_mpwqe_filler_strides: 0
> rx52_buff_alloc_err: 0
> rx52_cqe_compress_blks: 0
> rx52_cqe_compress_pkts: 0
> rx52_page_reuse: 0
> rx52_cache_reuse: 0
> rx52_cache_full: 0
> rx52_cache_empty: 2560
> rx52_cache_busy: 0
> rx52_cache_waive: 0
> rx52_congst_umr: 0
> rx52_arfs_err: 0
> rx52_xdp_tx_xmit: 0
> rx52_xdp_tx_full: 0
> rx52_xdp_tx_err: 0
> rx52_xdp_tx_cqes: 0
> rx53_packets: 0
> rx53_bytes: 0
> rx53_csum_complete: 0
> rx53_csum_unnecessary: 0
> rx53_csum_unnecessary_inner: 0
> rx53_csum_none: 0
> rx53_xdp_drop: 0
> rx53_xdp_redirect: 0
> rx53_lro_packets: 0
> rx53_lro_bytes: 0
> rx53_ecn_mark: 0
> rx53_removed_vlan_packets: 0
> rx53_wqe_err: 0
> rx53_mpwqe_filler_cqes: 0
> rx53_mpwqe_filler_strides: 0
> rx53_buff_alloc_err: 0
> rx53_cqe_compress_blks: 0
> rx53_cqe_compress_pkts: 0
> rx53_page_reuse: 0
> rx53_cache_reuse: 0
> rx53_cache_full: 0
> rx53_cache_empty: 2560
> rx53_cache_busy: 0
> rx53_cache_waive: 0
> rx53_congst_umr: 0
> rx53_arfs_err: 0
> rx53_xdp_tx_xmit: 0
> rx53_xdp_tx_full: 0
> rx53_xdp_tx_err: 0
> rx53_xdp_tx_cqes: 0
> rx54_packets: 0
> rx54_bytes: 0
> rx54_csum_complete: 0
> rx54_csum_unnecessary: 0
> rx54_csum_unnecessary_inner: 0
> rx54_csum_none: 0
> rx54_xdp_drop: 0
> rx54_xdp_redirect: 0
> rx54_lro_packets: 0
> rx54_lro_bytes: 0
> rx54_ecn_mark: 0
> rx54_removed_vlan_packets: 0
> rx54_wqe_err: 0
> rx54_mpwqe_filler_cqes: 0
> rx54_mpwqe_filler_strides: 0
> rx54_buff_alloc_err: 0
> rx54_cqe_compress_blks: 0
> rx54_cqe_compress_pkts: 0
> rx54_page_reuse: 0
> rx54_cache_reuse: 0
> rx54_cache_full: 0
> rx54_cache_empty: 2560
> rx54_cache_busy: 0
> rx54_cache_waive: 0
> rx54_congst_umr: 0
> rx54_arfs_err: 0
> rx54_xdp_tx_xmit: 0
> rx54_xdp_tx_full: 0
> rx54_xdp_tx_err: 0
> rx54_xdp_tx_cqes: 0
> rx55_packets: 0
> rx55_bytes: 0
> rx55_csum_complete: 0
> rx55_csum_unnecessary: 0
> rx55_csum_unnecessary_inner: 0
> rx55_csum_none: 0
> rx55_xdp_drop: 0
> rx55_xdp_redirect: 0
> rx55_lro_packets: 0
> rx55_lro_bytes: 0
> rx55_ecn_mark: 0
> rx55_removed_vlan_packets: 0
> rx55_wqe_err: 0
> rx55_mpwqe_filler_cqes: 0
> rx55_mpwqe_filler_strides: 0
> rx55_buff_alloc_err: 0
> rx55_cqe_compress_blks: 0
> rx55_cqe_compress_pkts: 0
> rx55_page_reuse: 0
> rx55_cache_reuse: 0
> rx55_cache_full: 0
> rx55_cache_empty: 2560
> rx55_cache_busy: 0
> rx55_cache_waive: 0
> rx55_congst_umr: 0
> rx55_arfs_err: 0
> rx55_xdp_tx_xmit: 0
> rx55_xdp_tx_full: 0
> rx55_xdp_tx_err: 0
> rx55_xdp_tx_cqes: 0
> tx0_packets: 24512439668
> tx0_bytes: 15287569052791
> tx0_tso_packets: 1536157106
> tx0_tso_bytes: 8571753637944
> tx0_tso_inner_packets: 0
> tx0_tso_inner_bytes: 0
> tx0_csum_partial: 2132156117
> tx0_csum_partial_inner: 0
> tx0_added_vlan_packets: 19906601448
> tx0_nop: 308098536
> tx0_csum_none: 17774445331
> tx0_stopped: 19625
> tx0_dropped: 0
> tx0_xmit_more: 67864870
> tx0_recover: 0
> tx0_cqes: 19838744246
> tx0_wake: 19624
> tx0_cqe_err: 0
> tx1_packets: 22598557053
> tx1_bytes: 13568850145010
> tx1_tso_packets: 1369529475
> tx1_tso_bytes: 7661777265382
> tx1_tso_inner_packets: 0
> tx1_tso_inner_bytes: 0
> tx1_csum_partial: 1884639496
> tx1_csum_partial_inner: 0
> tx1_added_vlan_packets: 18468333696
> tx1_nop: 281301783
> tx1_csum_none: 16583694200
> tx1_stopped: 19457
> tx1_dropped: 0
> tx1_xmit_more: 55170875
> tx1_recover: 0
> tx1_cqes: 18413169824
> tx1_wake: 19455
> tx1_cqe_err: 0
> tx2_packets: 22821611433
> tx2_bytes: 13752535163683
> tx2_tso_packets: 1396978825
> tx2_tso_bytes: 7774704508463
> tx2_tso_inner_packets: 0
> tx2_tso_inner_bytes: 0
> tx2_csum_partial: 1897834558
> tx2_csum_partial_inner: 0
> tx2_added_vlan_packets: 18641958085
> tx2_nop: 286934891
> tx2_csum_none: 16744123527
> tx2_stopped: 13214
> tx2_dropped: 0
> tx2_xmit_more: 61749446
> tx2_recover: 0
> tx2_cqes: 18580215654
> tx2_wake: 13214
> tx2_cqe_err: 0
> tx3_packets: 22580809948
> tx3_bytes: 13730542936609
> tx3_tso_packets: 1370434579
> tx3_tso_bytes: 7605636711455
> tx3_tso_inner_packets: 0
> tx3_tso_inner_bytes: 0
> tx3_csum_partial: 1865573748
> tx3_csum_partial_inner: 0
> tx3_added_vlan_packets: 18491873644
> tx3_nop: 281195875
> tx3_csum_none: 16626299896
> tx3_stopped: 12542
> tx3_dropped: 0
> tx3_xmit_more: 57681647
> tx3_recover: 0
> tx3_cqes: 18434198757
> tx3_wake: 12540
> tx3_cqe_err: 0
> tx4_packets: 27801801208
> tx4_bytes: 17058453171137
> tx4_tso_packets: 1740500105
> tx4_tso_bytes: 9474905622036
> tx4_tso_inner_packets: 0
> tx4_tso_inner_bytes: 0
> tx4_csum_partial: 2279225376
> tx4_csum_partial_inner: 0
> tx4_added_vlan_packets: 22744081633
> tx4_nop: 349753979
> tx4_csum_none: 20464856257
> tx4_stopped: 14816
> tx4_dropped: 0
> tx4_xmit_more: 65469322
> tx4_recover: 0
> tx4_cqes: 22678618972
> tx4_wake: 14816
> tx4_cqe_err: 0
> tx5_packets: 25099783024
> tx5_bytes: 14917740698381
> tx5_tso_packets: 1512988013
> tx5_tso_bytes: 8571208921023
> tx5_tso_inner_packets: 0
> tx5_tso_inner_bytes: 0
> tx5_csum_partial: 2078498561
> tx5_csum_partial_inner: 0
> tx5_added_vlan_packets: 20465533760
> tx5_nop: 312614719
> tx5_csum_none: 18387035199
> tx5_stopped: 4605
> tx5_dropped: 0
> tx5_xmit_more: 64188936
> tx5_recover: 0
> tx5_cqes: 20401350718
> tx5_wake: 4604
> tx5_cqe_err: 0
> tx6_packets: 25025504896
> tx6_bytes: 14908021946070
> tx6_tso_packets: 1515718977
> tx6_tso_bytes: 8511442522461
> tx6_tso_inner_packets: 0
> tx6_tso_inner_bytes: 0
> tx6_csum_partial: 2056378610
> tx6_csum_partial_inner: 0
> tx6_added_vlan_packets: 20434066400
> tx6_nop: 310594020
> tx6_csum_none: 18377687790
> tx6_stopped: 15234
> tx6_dropped: 0
> tx6_xmit_more: 61130422
> tx6_recover: 0
> tx6_cqes: 20372943611
> tx6_wake: 15234
> tx6_cqe_err: 0
> tx7_packets: 25457096169
> tx7_bytes: 15456289446172
> tx7_tso_packets: 1553342799
> tx7_tso_bytes: 8764550988105
> tx7_tso_inner_packets: 0
> tx7_tso_inner_bytes: 0
> tx7_csum_partial: 2105765233
> tx7_csum_partial_inner: 0
> tx7_added_vlan_packets: 20720382377
> tx7_nop: 319044853
> tx7_csum_none: 18614617145
> tx7_stopped: 18745
> tx7_dropped: 0
> tx7_xmit_more: 57050107
> tx7_recover: 0
> tx7_cqes: 20663340775
> tx7_wake: 18746
> tx7_cqe_err: 0
> tx8_packets: 25389771649
> tx8_bytes: 15225503883962
> tx8_tso_packets: 1563367648
> tx8_tso_bytes: 8710384514258
> tx8_tso_inner_packets: 0
> tx8_tso_inner_bytes: 0
> tx8_csum_partial: 2106586634
> tx8_csum_partial_inner: 0
> tx8_added_vlan_packets: 20704676274
> tx8_nop: 318149261
> tx8_csum_none: 18598089640
> tx8_stopped: 4733
> tx8_dropped: 0
> tx8_xmit_more: 61014317
> tx8_recover: 0
> tx8_cqes: 20643667301
> tx8_wake: 4735
> tx8_cqe_err: 0
> tx9_packets: 25521500166
> tx9_bytes: 15302293145755
> tx9_tso_packets: 1546316697
> tx9_tso_bytes: 8770688145926
> tx9_tso_inner_packets: 0
> tx9_tso_inner_bytes: 0
> tx9_csum_partial: 2097652880
> tx9_csum_partial_inner: 0
> tx9_added_vlan_packets: 20778408432
> tx9_nop: 318538543
> tx9_csum_none: 18680755556
> tx9_stopped: 16118
> tx9_dropped: 0
> tx9_xmit_more: 68509728
> tx9_recover: 0
> tx9_cqes: 20709906498
> tx9_wake: 16118
> tx9_cqe_err: 0
> tx10_packets: 25451605829
> tx10_bytes: 15386896170792
> tx10_tso_packets: 1576473520
> tx10_tso_bytes: 8880888676383
> tx10_tso_inner_packets: 0
> tx10_tso_inner_bytes: 0
> tx10_csum_partial: 2129796141
> tx10_csum_partial_inner: 0
> tx10_added_vlan_packets: 20659622590
> tx10_nop: 319117433
> tx10_csum_none: 18529826450
> tx10_stopped: 20187
> tx10_dropped: 0
> tx10_xmit_more: 58892184
> tx10_recover: 0
> tx10_cqes: 20600737739
> tx10_wake: 20188
> tx10_cqe_err: 0
> tx11_packets: 27008919793
> tx11_bytes: 16587719213058
> tx11_tso_packets: 1734884654
> tx11_tso_bytes: 9475681471870
> tx11_tso_inner_packets: 0
> tx11_tso_inner_bytes: 0
> tx11_csum_partial: 2296162292
> tx11_csum_partial_inner: 0
> tx11_added_vlan_packets: 21943096263
> tx11_nop: 344188182
> tx11_csum_none: 19646933971
> tx11_stopped: 9703
> tx11_dropped: 0
> tx11_xmit_more: 66530718
> tx11_recover: 0
> tx11_cqes: 21876571667
> tx11_wake: 9704
> tx11_cqe_err: 0
> tx12_packets: 25969493269
> tx12_bytes: 15980767963416
> tx12_tso_packets: 1671396456
> tx12_tso_bytes: 9268973672821
> tx12_tso_inner_packets: 0
> tx12_tso_inner_bytes: 0
> tx12_csum_partial: 2243809182
> tx12_csum_partial_inner: 0
> tx12_added_vlan_packets: 20980642456
> tx12_nop: 330241007
> tx12_csum_none: 18736833276
> tx12_stopped: 10341
> tx12_dropped: 0
> tx12_xmit_more: 57834100
> tx12_recover: 0
> tx12_cqes: 20922815079
> tx12_wake: 10342
> tx12_cqe_err: 0
> tx13_packets: 25332762261
> tx13_bytes: 15353213283280
> tx13_tso_packets: 1577433599
> tx13_tso_bytes: 8785240284281
> tx13_tso_inner_packets: 0
> tx13_tso_inner_bytes: 0
> tx13_csum_partial: 2110640515
> tx13_csum_partial_inner: 0
> tx13_added_vlan_packets: 20605670910
> tx13_nop: 319805741
> tx13_csum_none: 18495030395
> tx13_stopped: 7006
> tx13_dropped: 0
> tx13_xmit_more: 58314402
> tx13_recover: 0
> tx13_cqes: 20547362770
> tx13_wake: 7008
> tx13_cqe_err: 0
> tx14_packets: 26333743548
> tx14_bytes: 16070719060573
> tx14_tso_packets: 1677922970
> tx14_tso_bytes: 9240299765487
> tx14_tso_inner_packets: 0
> tx14_tso_inner_bytes: 0
> tx14_csum_partial: 2215668906
> tx14_csum_partial_inner: 0
> tx14_added_vlan_packets: 21384410786
> tx14_nop: 332734939
> tx14_csum_none: 19168741880
> tx14_stopped: 13160
> tx14_dropped: 0
> tx14_xmit_more: 57650391
> tx14_recover: 0
> tx14_cqes: 21326767783
> tx14_wake: 13161
> tx14_cqe_err: 0
> tx15_packets: 26824968971
> tx15_bytes: 16687994233452
> tx15_tso_packets: 1755745052
> tx15_tso_bytes: 9533814012441
> tx15_tso_inner_packets: 0
> tx15_tso_inner_bytes: 0
> tx15_csum_partial: 2304778064
> tx15_csum_partial_inner: 0
> tx15_added_vlan_packets: 21740906107
> tx15_nop: 344143287
> tx15_csum_none: 19436128058
> tx15_stopped: 75
> tx15_dropped: 0
> tx15_xmit_more: 63325832
> tx15_recover: 0
> tx15_cqes: 21677585345
> tx15_wake: 74
> tx15_cqe_err: 0
> tx16_packets: 24488158946
> tx16_bytes: 15027415004570
> tx16_tso_packets: 1559127391
> tx16_tso_bytes: 8658691917845
> tx16_tso_inner_packets: 0
> tx16_tso_inner_bytes: 0
> tx16_csum_partial: 2075856395
> tx16_csum_partial_inner: 0
> tx16_added_vlan_packets: 19835695731
> tx16_nop: 308464189
> tx16_csum_none: 17759839340
> tx16_stopped: 4567
> tx16_dropped: 0
> tx16_xmit_more: 62631422
> tx16_recover: 0
> tx16_cqes: 19773070012
> tx16_wake: 4568
> tx16_cqe_err: 0
> tx17_packets: 24700413784
> tx17_bytes: 15216529713715
> tx17_tso_packets: 1597555108
> tx17_tso_bytes: 8773728661243
> tx17_tso_inner_packets: 0
> tx17_tso_inner_bytes: 0
> tx17_csum_partial: 2127177297
> tx17_csum_partial_inner: 0
> tx17_added_vlan_packets: 20003144561
> tx17_nop: 313356918
> tx17_csum_none: 17875967264
> tx17_stopped: 12572
> tx17_dropped: 0
> tx17_xmit_more: 62742980
> tx17_recover: 0
> tx17_cqes: 19940407615
> tx17_wake: 12573
> tx17_cqe_err: 0
> tx18_packets: 24887710046
> tx18_bytes: 15245034034664
> tx18_tso_packets: 1582550520
> tx18_tso_bytes: 8782692335483
> tx18_tso_inner_packets: 0
> tx18_tso_inner_bytes: 0
> tx18_csum_partial: 2084514331
> tx18_csum_partial_inner: 0
> tx18_added_vlan_packets: 20173879181
> tx18_nop: 314818702
> tx18_csum_none: 18089364850
> tx18_stopped: 21366
> tx18_dropped: 0
> tx18_xmit_more: 62485819
> tx18_recover: 0
> tx18_cqes: 20111400935
> tx18_wake: 21366
> tx18_cqe_err: 0
> tx19_packets: 24831057648
> tx19_bytes: 15164663890576
> tx19_tso_packets: 1599135489
> tx19_tso_bytes: 8756045449746
> tx19_tso_inner_packets: 0
> tx19_tso_inner_bytes: 0
> tx19_csum_partial: 2119746608
> tx19_csum_partial_inner: 0
> tx19_added_vlan_packets: 20143573903
> tx19_nop: 316966450
> tx19_csum_none: 18023827295
> tx19_stopped: 11431
> tx19_dropped: 0
> tx19_xmit_more: 57535904
> tx19_recover: 0
> tx19_cqes: 20086045325
> tx19_wake: 11431
> tx19_cqe_err: 0
> tx20_packets: 21943735263
> tx20_bytes: 13528749492187
> tx20_tso_packets: 1390048103
> tx20_tso_bytes: 7629058809637
> tx20_tso_inner_packets: 0
> tx20_tso_inner_bytes: 0
> tx20_csum_partial: 1848533941
> tx20_csum_partial_inner: 0
> tx20_added_vlan_packets: 17861417651
> tx20_nop: 276840365
> tx20_csum_none: 16012883710
> tx20_stopped: 38457
> tx20_dropped: 0
> tx20_xmit_more: 57042753
> tx20_recover: 0
> tx20_cqes: 17804384839
> tx20_wake: 38457
> tx20_cqe_err: 0
> tx21_packets: 21476926958
> tx21_bytes: 13096410597896
> tx21_tso_packets: 1367724090
> tx21_tso_bytes: 7568364585127
> tx21_tso_inner_packets: 0
> tx21_tso_inner_bytes: 0
> tx21_csum_partial: 1830570727
> tx21_csum_partial_inner: 0
> tx21_added_vlan_packets: 17421087814
> tx21_nop: 270611519
> tx21_csum_none: 15590517087
> tx21_stopped: 31213
> tx21_dropped: 0
> tx21_xmit_more: 60305389
> tx21_recover: 0
> tx21_cqes: 17360791205
> tx21_wake: 31213
> tx21_cqe_err: 0
> tx22_packets: 21819106444
> tx22_bytes: 13492871887100
> tx22_tso_packets: 1387002018
> tx22_tso_bytes: 7617705727669
> tx22_tso_inner_packets: 0
> tx22_tso_inner_bytes: 0
> tx22_csum_partial: 1853632107
> tx22_csum_partial_inner: 0
> tx22_added_vlan_packets: 17743255447
> tx22_nop: 274820992
> tx22_csum_none: 15889623340
> tx22_stopped: 24814
> tx22_dropped: 0
> tx22_xmit_more: 60811304
> tx22_recover: 0
> tx22_cqes: 17682451111
> tx22_wake: 24815
> tx22_cqe_err: 0
> tx23_packets: 21830455800
> tx23_bytes: 13427551902532
> tx23_tso_packets: 1388556038
> tx23_tso_bytes: 7604040587125
> tx23_tso_inner_packets: 0
> tx23_tso_inner_bytes: 0
> tx23_csum_partial: 1850819694
> tx23_csum_partial_inner: 0
> tx23_added_vlan_packets: 17761271122
> tx23_nop: 275142775
> tx23_csum_none: 15910451428
> tx23_stopped: 29899
> tx23_dropped: 0
> tx23_xmit_more: 58924909
> tx23_recover: 0
> tx23_cqes: 17702355187
> tx23_wake: 29898
> tx23_cqe_err: 0
> tx24_packets: 21961484213
> tx24_bytes: 13531373062497
> tx24_tso_packets: 1394697504
> tx24_tso_bytes: 7663866609308
> tx24_tso_inner_packets: 0
> tx24_tso_inner_bytes: 0
> tx24_csum_partial: 1857072074
> tx24_csum_partial_inner: 0
> tx24_added_vlan_packets: 17856887568
> tx24_nop: 276352855
> tx24_csum_none: 15999815494
> tx24_stopped: 33924
> tx24_dropped: 0
> tx24_xmit_more: 63992426
> tx24_recover: 0
> tx24_cqes: 17792905243
> tx24_wake: 33923
> tx24_cqe_err: 0
> tx25_packets: 21853593838
> tx25_bytes: 13357487830519
> tx25_tso_packets: 1398822411
> tx25_tso_bytes: 7691191518838
> tx25_tso_inner_packets: 0
> tx25_tso_inner_bytes: 0
> tx25_csum_partial: 1869483109
> tx25_csum_partial_inner: 0
> tx25_added_vlan_packets: 17734634614
> tx25_nop: 276327643
> tx25_csum_none: 15865151505
> tx25_stopped: 38651
> tx25_dropped: 0
> tx25_xmit_more: 56410535
> tx25_recover: 0
> tx25_cqes: 17678234537
> tx25_wake: 38650
> tx25_cqe_err: 0
> tx26_packets: 21480261205
> tx26_bytes: 13148973015935
> tx26_tso_packets: 1348132284
> tx26_tso_bytes: 7523489481775
> tx26_tso_inner_packets: 0
> tx26_tso_inner_bytes: 0
> tx26_csum_partial: 1839740745
> tx26_csum_partial_inner: 0
> tx26_added_vlan_packets: 17430592911
> tx26_nop: 270367836
> tx26_csum_none: 15590852166
> tx26_stopped: 34044
> tx26_dropped: 0
> tx26_xmit_more: 59870114
> tx26_recover: 0
> tx26_cqes: 17370736612
> tx26_wake: 34043
> tx26_cqe_err: 0
> tx27_packets: 22694273108
> tx27_bytes: 14135473431004
> tx27_tso_packets: 1418371875
> tx27_tso_bytes: 7784842263038
> tx27_tso_inner_packets: 0
> tx27_tso_inner_bytes: 0
> tx27_csum_partial: 1919170584
> tx27_csum_partial_inner: 0
> tx27_added_vlan_packets: 18520826023
> tx27_nop: 286296272
> tx27_csum_none: 16601655439
> tx27_stopped: 38125
> tx27_dropped: 0
> tx27_xmit_more: 72749775
> tx27_recover: 0
> tx27_cqes: 18448090270
> tx27_wake: 38127
> tx27_cqe_err: 0
> tx28_packets: 0
> tx28_bytes: 0
> tx28_tso_packets: 0
> tx28_tso_bytes: 0
> tx28_tso_inner_packets: 0
> tx28_tso_inner_bytes: 0
> tx28_csum_partial: 0
> tx28_csum_partial_inner: 0
> tx28_added_vlan_packets: 0
> tx28_nop: 0
> tx28_csum_none: 0
> tx28_stopped: 0
> tx28_dropped: 0
> tx28_xmit_more: 0
> tx28_recover: 0
> tx28_cqes: 0
> tx28_wake: 0
> tx28_cqe_err: 0
> tx29_packets: 3
> tx29_bytes: 266
> tx29_tso_packets: 0
> tx29_tso_bytes: 0
> tx29_tso_inner_packets: 0
> tx29_tso_inner_bytes: 0
> tx29_csum_partial: 0
> tx29_csum_partial_inner: 0
> tx29_added_vlan_packets: 0
> tx29_nop: 0
> tx29_csum_none: 3
> tx29_stopped: 0
> tx29_dropped: 0
> tx29_xmit_more: 1
> tx29_recover: 0
> tx29_cqes: 2
> tx29_wake: 0
> tx29_cqe_err: 0
> tx30_packets: 0
> tx30_bytes: 0
> tx30_tso_packets: 0
> tx30_tso_bytes: 0
> tx30_tso_inner_packets: 0
> tx30_tso_inner_bytes: 0
> tx30_csum_partial: 0
> tx30_csum_partial_inner: 0
> tx30_added_vlan_packets: 0
> tx30_nop: 0
> tx30_csum_none: 0
> tx30_stopped: 0
> tx30_dropped: 0
> tx30_xmit_more: 0
> tx30_recover: 0
> tx30_cqes: 0
> tx30_wake: 0
> tx30_cqe_err: 0
> tx31_packets: 0
> tx31_bytes: 0
> tx31_tso_packets: 0
> tx31_tso_bytes: 0
> tx31_tso_inner_packets: 0
> tx31_tso_inner_bytes: 0
> tx31_csum_partial: 0
> tx31_csum_partial_inner: 0
> tx31_added_vlan_packets: 0
> tx31_nop: 0
> tx31_csum_none: 0
> tx31_stopped: 0
> tx31_dropped: 0
> tx31_xmit_more: 0
> tx31_recover: 0
> tx31_cqes: 0
> tx31_wake: 0
> tx31_cqe_err: 0
> tx32_packets: 0
> tx32_bytes: 0
> tx32_tso_packets: 0
> tx32_tso_bytes: 0
> tx32_tso_inner_packets: 0
> tx32_tso_inner_bytes: 0
> tx32_csum_partial: 0
> tx32_csum_partial_inner: 0
> tx32_added_vlan_packets: 0
> tx32_nop: 0
> tx32_csum_none: 0
> tx32_stopped: 0
> tx32_dropped: 0
> tx32_xmit_more: 0
> tx32_recover: 0
> tx32_cqes: 0
> tx32_wake: 0
> tx32_cqe_err: 0
> tx33_packets: 0
> tx33_bytes: 0
> tx33_tso_packets: 0
> tx33_tso_bytes: 0
> tx33_tso_inner_packets: 0
> tx33_tso_inner_bytes: 0
> tx33_csum_partial: 0
> tx33_csum_partial_inner: 0
> tx33_added_vlan_packets: 0
> tx33_nop: 0
> tx33_csum_none: 0
> tx33_stopped: 0
> tx33_dropped: 0
> tx33_xmit_more: 0
> tx33_recover: 0
> tx33_cqes: 0
> tx33_wake: 0
> tx33_cqe_err: 0
> tx34_packets: 0
> tx34_bytes: 0
> tx34_tso_packets: 0
> tx34_tso_bytes: 0
> tx34_tso_inner_packets: 0
> tx34_tso_inner_bytes: 0
> tx34_csum_partial: 0
> tx34_csum_partial_inner: 0
> tx34_added_vlan_packets: 0
> tx34_nop: 0
> tx34_csum_none: 0
> tx34_stopped: 0
> tx34_dropped: 0
> tx34_xmit_more: 0
> tx34_recover: 0
> tx34_cqes: 0
> tx34_wake: 0
> tx34_cqe_err: 0
> tx35_packets: 0
> tx35_bytes: 0
> tx35_tso_packets: 0
> tx35_tso_bytes: 0
> tx35_tso_inner_packets: 0
> tx35_tso_inner_bytes: 0
> tx35_csum_partial: 0
> tx35_csum_partial_inner: 0
> tx35_added_vlan_packets: 0
> tx35_nop: 0
> tx35_csum_none: 0
> tx35_stopped: 0
> tx35_dropped: 0
> tx35_xmit_more: 0
> tx35_recover: 0
> tx35_cqes: 0
> tx35_wake: 0
> tx35_cqe_err: 0
> tx36_packets: 0
> tx36_bytes: 0
> tx36_tso_packets: 0
> tx36_tso_bytes: 0
> tx36_tso_inner_packets: 0
> tx36_tso_inner_bytes: 0
> tx36_csum_partial: 0
> tx36_csum_partial_inner: 0
> tx36_added_vlan_packets: 0
> tx36_nop: 0
> tx36_csum_none: 0
> tx36_stopped: 0
> tx36_dropped: 0
> tx36_xmit_more: 0
> tx36_recover: 0
> tx36_cqes: 0
> tx36_wake: 0
> tx36_cqe_err: 0
> tx37_packets: 0
> tx37_bytes: 0
> tx37_tso_packets: 0
> tx37_tso_bytes: 0
> tx37_tso_inner_packets: 0
> tx37_tso_inner_bytes: 0
> tx37_csum_partial: 0
> tx37_csum_partial_inner: 0
> tx37_added_vlan_packets: 0
> tx37_nop: 0
> tx37_csum_none: 0
> tx37_stopped: 0
> tx37_dropped: 0
> tx37_xmit_more: 0
> tx37_recover: 0
> tx37_cqes: 0
> tx37_wake: 0
> tx37_cqe_err: 0
> tx38_packets: 0
> tx38_bytes: 0
> tx38_tso_packets: 0
> tx38_tso_bytes: 0
> tx38_tso_inner_packets: 0
> tx38_tso_inner_bytes: 0
> tx38_csum_partial: 0
> tx38_csum_partial_inner: 0
> tx38_added_vlan_packets: 0
> tx38_nop: 0
> tx38_csum_none: 0
> tx38_stopped: 0
> tx38_dropped: 0
> tx38_xmit_more: 0
> tx38_recover: 0
> tx38_cqes: 0
> tx38_wake: 0
> tx38_cqe_err: 0
> tx39_packets: 0
> tx39_bytes: 0
> tx39_tso_packets: 0
> tx39_tso_bytes: 0
> tx39_tso_inner_packets: 0
> tx39_tso_inner_bytes: 0
> tx39_csum_partial: 0
> tx39_csum_partial_inner: 0
> tx39_added_vlan_packets: 0
> tx39_nop: 0
> tx39_csum_none: 0
> tx39_stopped: 0
> tx39_dropped: 0
> tx39_xmit_more: 0
> tx39_recover: 0
> tx39_cqes: 0
> tx39_wake: 0
> tx39_cqe_err: 0
> tx40_packets: 0
> tx40_bytes: 0
> tx40_tso_packets: 0
> tx40_tso_bytes: 0
> tx40_tso_inner_packets: 0
> tx40_tso_inner_bytes: 0
> tx40_csum_partial: 0
> tx40_csum_partial_inner: 0
> tx40_added_vlan_packets: 0
> tx40_nop: 0
> tx40_csum_none: 0
> tx40_stopped: 0
> tx40_dropped: 0
> tx40_xmit_more: 0
> tx40_recover: 0
> tx40_cqes: 0
> tx40_wake: 0
> tx40_cqe_err: 0
> tx41_packets: 0
> tx41_bytes: 0
> tx41_tso_packets: 0
> tx41_tso_bytes: 0
> tx41_tso_inner_packets: 0
> tx41_tso_inner_bytes: 0
> tx41_csum_partial: 0
> tx41_csum_partial_inner: 0
> tx41_added_vlan_packets: 0
> tx41_nop: 0
> tx41_csum_none: 0
> tx41_stopped: 0
> tx41_dropped: 0
> tx41_xmit_more: 0
> tx41_recover: 0
> tx41_cqes: 0
> tx41_wake: 0
> tx41_cqe_err: 0
> tx42_packets: 0
> tx42_bytes: 0
> tx42_tso_packets: 0
> tx42_tso_bytes: 0
> tx42_tso_inner_packets: 0
> tx42_tso_inner_bytes: 0
> tx42_csum_partial: 0
> tx42_csum_partial_inner: 0
> tx42_added_vlan_packets: 0
> tx42_nop: 0
> tx42_csum_none: 0
> tx42_stopped: 0
> tx42_dropped: 0
> tx42_xmit_more: 0
> tx42_recover: 0
> tx42_cqes: 0
> tx42_wake: 0
> tx42_cqe_err: 0
> tx43_packets: 0
> tx43_bytes: 0
> tx43_tso_packets: 0
> tx43_tso_bytes: 0
> tx43_tso_inner_packets: 0
> tx43_tso_inner_bytes: 0
> tx43_csum_partial: 0
> tx43_csum_partial_inner: 0
> tx43_added_vlan_packets: 0
> tx43_nop: 0
> tx43_csum_none: 0
> tx43_stopped: 0
> tx43_dropped: 0
> tx43_xmit_more: 0
> tx43_recover: 0
> tx43_cqes: 0
> tx43_wake: 0
> tx43_cqe_err: 0
> tx44_packets: 0
> tx44_bytes: 0
> tx44_tso_packets: 0
> tx44_tso_bytes: 0
> tx44_tso_inner_packets: 0
> tx44_tso_inner_bytes: 0
> tx44_csum_partial: 0
> tx44_csum_partial_inner: 0
> tx44_added_vlan_packets: 0
> tx44_nop: 0
> tx44_csum_none: 0
> tx44_stopped: 0
> tx44_dropped: 0
> tx44_xmit_more: 0
> tx44_recover: 0
> tx44_cqes: 0
> tx44_wake: 0
> tx44_cqe_err: 0
> tx45_packets: 0
> tx45_bytes: 0
> tx45_tso_packets: 0
> tx45_tso_bytes: 0
> tx45_tso_inner_packets: 0
> tx45_tso_inner_bytes: 0
> tx45_csum_partial: 0
> tx45_csum_partial_inner: 0
> tx45_added_vlan_packets: 0
> tx45_nop: 0
> tx45_csum_none: 0
> tx45_stopped: 0
> tx45_dropped: 0
> tx45_xmit_more: 0
> tx45_recover: 0
> tx45_cqes: 0
> tx45_wake: 0
> tx45_cqe_err: 0
> tx46_packets: 0
> tx46_bytes: 0
> tx46_tso_packets: 0
> tx46_tso_bytes: 0
> tx46_tso_inner_packets: 0
> tx46_tso_inner_bytes: 0
> tx46_csum_partial: 0
> tx46_csum_partial_inner: 0
> tx46_added_vlan_packets: 0
> tx46_nop: 0
> tx46_csum_none: 0
> tx46_stopped: 0
> tx46_dropped: 0
> tx46_xmit_more: 0
> tx46_recover: 0
> tx46_cqes: 0
> tx46_wake: 0
> tx46_cqe_err: 0
> tx47_packets: 0
> tx47_bytes: 0
> tx47_tso_packets: 0
> tx47_tso_bytes: 0
> tx47_tso_inner_packets: 0
> tx47_tso_inner_bytes: 0
> tx47_csum_partial: 0
> tx47_csum_partial_inner: 0
> tx47_added_vlan_packets: 0
> tx47_nop: 0
> tx47_csum_none: 0
> tx47_stopped: 0
> tx47_dropped: 0
> tx47_xmit_more: 0
> tx47_recover: 0
> tx47_cqes: 0
> tx47_wake: 0
> tx47_cqe_err: 0
> tx48_packets: 0
> tx48_bytes: 0
> tx48_tso_packets: 0
> tx48_tso_bytes: 0
> tx48_tso_inner_packets: 0
> tx48_tso_inner_bytes: 0
> tx48_csum_partial: 0
> tx48_csum_partial_inner: 0
> tx48_added_vlan_packets: 0
> tx48_nop: 0
> tx48_csum_none: 0
> tx48_stopped: 0
> tx48_dropped: 0
> tx48_xmit_more: 0
> tx48_recover: 0
> tx48_cqes: 0
> tx48_wake: 0
> tx48_cqe_err: 0
> tx49_packets: 0
> tx49_bytes: 0
> tx49_tso_packets: 0
> tx49_tso_bytes: 0
> tx49_tso_inner_packets: 0
> tx49_tso_inner_bytes: 0
> tx49_csum_partial: 0
> tx49_csum_partial_inner: 0
> tx49_added_vlan_packets: 0
> tx49_nop: 0
> tx49_csum_none: 0
> tx49_stopped: 0
> tx49_dropped: 0
> tx49_xmit_more: 0
> tx49_recover: 0
> tx49_cqes: 0
> tx49_wake: 0
> tx49_cqe_err: 0
> tx50_packets: 0
> tx50_bytes: 0
> tx50_tso_packets: 0
> tx50_tso_bytes: 0
> tx50_tso_inner_packets: 0
> tx50_tso_inner_bytes: 0
> tx50_csum_partial: 0
> tx50_csum_partial_inner: 0
> tx50_added_vlan_packets: 0
> tx50_nop: 0
> tx50_csum_none: 0
> tx50_stopped: 0
> tx50_dropped: 0
> tx50_xmit_more: 0
> tx50_recover: 0
> tx50_cqes: 0
> tx50_wake: 0
> tx50_cqe_err: 0
> tx51_packets: 0
> tx51_bytes: 0
> tx51_tso_packets: 0
> tx51_tso_bytes: 0
> tx51_tso_inner_packets: 0
> tx51_tso_inner_bytes: 0
> tx51_csum_partial: 0
> tx51_csum_partial_inner: 0
> tx51_added_vlan_packets: 0
> tx51_nop: 0
> tx51_csum_none: 0
> tx51_stopped: 0
> tx51_dropped: 0
> tx51_xmit_more: 0
> tx51_recover: 0
> tx51_cqes: 0
> tx51_wake: 0
> tx51_cqe_err: 0
> tx52_packets: 0
> tx52_bytes: 0
> tx52_tso_packets: 0
> tx52_tso_bytes: 0
> tx52_tso_inner_packets: 0
> tx52_tso_inner_bytes: 0
> tx52_csum_partial: 0
> tx52_csum_partial_inner: 0
> tx52_added_vlan_packets: 0
> tx52_nop: 0
> tx52_csum_none: 0
> tx52_stopped: 0
> tx52_dropped: 0
> tx52_xmit_more: 0
> tx52_recover: 0
> tx52_cqes: 0
> tx52_wake: 0
> tx52_cqe_err: 0
> tx53_packets: 0
> tx53_bytes: 0
> tx53_tso_packets: 0
> tx53_tso_bytes: 0
> tx53_tso_inner_packets: 0
> tx53_tso_inner_bytes: 0
> tx53_csum_partial: 0
> tx53_csum_partial_inner: 0
> tx53_added_vlan_packets: 0
> tx53_nop: 0
> tx53_csum_none: 0
> tx53_stopped: 0
> tx53_dropped: 0
> tx53_xmit_more: 0
> tx53_recover: 0
> tx53_cqes: 0
> tx53_wake: 0
> tx53_cqe_err: 0
> tx54_packets: 0
> tx54_bytes: 0
> tx54_tso_packets: 0
> tx54_tso_bytes: 0
> tx54_tso_inner_packets: 0
> tx54_tso_inner_bytes: 0
> tx54_csum_partial: 0
> tx54_csum_partial_inner: 0
> tx54_added_vlan_packets: 0
> tx54_nop: 0
> tx54_csum_none: 0
> tx54_stopped: 0
> tx54_dropped: 0
> tx54_xmit_more: 0
> tx54_recover: 0
> tx54_cqes: 0
> tx54_wake: 0
> tx54_cqe_err: 0
> tx55_packets: 0
> tx55_bytes: 0
> tx55_tso_packets: 0
> tx55_tso_bytes: 0
> tx55_tso_inner_packets: 0
> tx55_tso_inner_bytes: 0
> tx55_csum_partial: 0
> tx55_csum_partial_inner: 0
> tx55_added_vlan_packets: 0
> tx55_nop: 0
> tx55_csum_none: 0
> tx55_stopped: 0
> tx55_dropped: 0
> tx55_xmit_more: 0
> tx55_recover: 0
> tx55_cqes: 0
> tx55_wake: 0
> tx55_cqe_err: 0
> tx0_xdp_xmit: 0
> tx0_xdp_full: 0
> tx0_xdp_err: 0
> tx0_xdp_cqes: 0
> tx1_xdp_xmit: 0
> tx1_xdp_full: 0
> tx1_xdp_err: 0
> tx1_xdp_cqes: 0
> tx2_xdp_xmit: 0
> tx2_xdp_full: 0
> tx2_xdp_err: 0
> tx2_xdp_cqes: 0
> tx3_xdp_xmit: 0
> tx3_xdp_full: 0
> tx3_xdp_err: 0
> tx3_xdp_cqes: 0
> tx4_xdp_xmit: 0
> tx4_xdp_full: 0
> tx4_xdp_err: 0
> tx4_xdp_cqes: 0
> tx5_xdp_xmit: 0
> tx5_xdp_full: 0
> tx5_xdp_err: 0
> tx5_xdp_cqes: 0
> tx6_xdp_xmit: 0
> tx6_xdp_full: 0
> tx6_xdp_err: 0
> tx6_xdp_cqes: 0
> tx7_xdp_xmit: 0
> tx7_xdp_full: 0
> tx7_xdp_err: 0
> tx7_xdp_cqes: 0
> tx8_xdp_xmit: 0
> tx8_xdp_full: 0
> tx8_xdp_err: 0
> tx8_xdp_cqes: 0
> tx9_xdp_xmit: 0
> tx9_xdp_full: 0
> tx9_xdp_err: 0
> tx9_xdp_cqes: 0
> tx10_xdp_xmit: 0
> tx10_xdp_full: 0
> tx10_xdp_err: 0
> tx10_xdp_cqes: 0
> tx11_xdp_xmit: 0
> tx11_xdp_full: 0
> tx11_xdp_err: 0
> tx11_xdp_cqes: 0
> tx12_xdp_xmit: 0
> tx12_xdp_full: 0
> tx12_xdp_err: 0
> tx12_xdp_cqes: 0
> tx13_xdp_xmit: 0
> tx13_xdp_full: 0
> tx13_xdp_err: 0
> tx13_xdp_cqes: 0
> tx14_xdp_xmit: 0
> tx14_xdp_full: 0
> tx14_xdp_err: 0
> tx14_xdp_cqes: 0
> tx15_xdp_xmit: 0
> tx15_xdp_full: 0
> tx15_xdp_err: 0
> tx15_xdp_cqes: 0
> tx16_xdp_xmit: 0
> tx16_xdp_full: 0
> tx16_xdp_err: 0
> tx16_xdp_cqes: 0
> tx17_xdp_xmit: 0
> tx17_xdp_full: 0
> tx17_xdp_err: 0
> tx17_xdp_cqes: 0
> tx18_xdp_xmit: 0
> tx18_xdp_full: 0
> tx18_xdp_err: 0
> tx18_xdp_cqes: 0
> tx19_xdp_xmit: 0
> tx19_xdp_full: 0
> tx19_xdp_err: 0
> tx19_xdp_cqes: 0
> tx20_xdp_xmit: 0
> tx20_xdp_full: 0
> tx20_xdp_err: 0
> tx20_xdp_cqes: 0
> tx21_xdp_xmit: 0
> tx21_xdp_full: 0
> tx21_xdp_err: 0
> tx21_xdp_cqes: 0
> tx22_xdp_xmit: 0
> tx22_xdp_full: 0
> tx22_xdp_err: 0
> tx22_xdp_cqes: 0
> tx23_xdp_xmit: 0
> tx23_xdp_full: 0
> tx23_xdp_err: 0
> tx23_xdp_cqes: 0
> tx24_xdp_xmit: 0
> tx24_xdp_full: 0
> tx24_xdp_err: 0
> tx24_xdp_cqes: 0
> tx25_xdp_xmit: 0
> tx25_xdp_full: 0
> tx25_xdp_err: 0
> tx25_xdp_cqes: 0
> tx26_xdp_xmit: 0
> tx26_xdp_full: 0
> tx26_xdp_err: 0
> tx26_xdp_cqes: 0
> tx27_xdp_xmit: 0
> tx27_xdp_full: 0
> tx27_xdp_err: 0
> tx27_xdp_cqes: 0
> tx28_xdp_xmit: 0
> tx28_xdp_full: 0
> tx28_xdp_err: 0
> tx28_xdp_cqes: 0
> tx29_xdp_xmit: 0
> tx29_xdp_full: 0
> tx29_xdp_err: 0
> tx29_xdp_cqes: 0
> tx30_xdp_xmit: 0
> tx30_xdp_full: 0
> tx30_xdp_err: 0
> tx30_xdp_cqes: 0
> tx31_xdp_xmit: 0
> tx31_xdp_full: 0
> tx31_xdp_err: 0
> tx31_xdp_cqes: 0
> tx32_xdp_xmit: 0
> tx32_xdp_full: 0
> tx32_xdp_err: 0
> tx32_xdp_cqes: 0
> tx33_xdp_xmit: 0
> tx33_xdp_full: 0
> tx33_xdp_err: 0
> tx33_xdp_cqes: 0
> tx34_xdp_xmit: 0
> tx34_xdp_full: 0
> tx34_xdp_err: 0
> tx34_xdp_cqes: 0
> tx35_xdp_xmit: 0
> tx35_xdp_full: 0
> tx35_xdp_err: 0
> tx35_xdp_cqes: 0
> tx36_xdp_xmit: 0
> tx36_xdp_full: 0
> tx36_xdp_err: 0
> tx36_xdp_cqes: 0
> tx37_xdp_xmit: 0
> tx37_xdp_full: 0
> tx37_xdp_err: 0
> tx37_xdp_cqes: 0
> tx38_xdp_xmit: 0
> tx38_xdp_full: 0
> tx38_xdp_err: 0
> tx38_xdp_cqes: 0
> tx39_xdp_xmit: 0
> tx39_xdp_full: 0
> tx39_xdp_err: 0
> tx39_xdp_cqes: 0
> tx40_xdp_xmit: 0
> tx40_xdp_full: 0
> tx40_xdp_err: 0
> tx40_xdp_cqes: 0
> tx41_xdp_xmit: 0
> tx41_xdp_full: 0
> tx41_xdp_err: 0
> tx41_xdp_cqes: 0
> tx42_xdp_xmit: 0
> tx42_xdp_full: 0
> tx42_xdp_err: 0
> tx42_xdp_cqes: 0
> tx43_xdp_xmit: 0
> tx43_xdp_full: 0
> tx43_xdp_err: 0
> tx43_xdp_cqes: 0
> tx44_xdp_xmit: 0
> tx44_xdp_full: 0
> tx44_xdp_err: 0
> tx44_xdp_cqes: 0
> tx45_xdp_xmit: 0
> tx45_xdp_full: 0
> tx45_xdp_err: 0
> tx45_xdp_cqes: 0
> tx46_xdp_xmit: 0
> tx46_xdp_full: 0
> tx46_xdp_err: 0
> tx46_xdp_cqes: 0
> tx47_xdp_xmit: 0
> tx47_xdp_full: 0
> tx47_xdp_err: 0
> tx47_xdp_cqes: 0
> tx48_xdp_xmit: 0
> tx48_xdp_full: 0
> tx48_xdp_err: 0
> tx48_xdp_cqes: 0
> tx49_xdp_xmit: 0
> tx49_xdp_full: 0
> tx49_xdp_err: 0
> tx49_xdp_cqes: 0
> tx50_xdp_xmit: 0
> tx50_xdp_full: 0
> tx50_xdp_err: 0
> tx50_xdp_cqes: 0
> tx51_xdp_xmit: 0
> tx51_xdp_full: 0
> tx51_xdp_err: 0
> tx51_xdp_cqes: 0
> tx52_xdp_xmit: 0
> tx52_xdp_full: 0
> tx52_xdp_err: 0
> tx52_xdp_cqes: 0
> tx53_xdp_xmit: 0
> tx53_xdp_full: 0
> tx53_xdp_err: 0
> tx53_xdp_cqes: 0
> tx54_xdp_xmit: 0
> tx54_xdp_full: 0
> tx54_xdp_err: 0
> tx54_xdp_cqes: 0
> tx55_xdp_xmit: 0
> tx55_xdp_full: 0
> tx55_xdp_err: 0
> tx55_xdp_cqes: 0
>
>
>> [...]
>>
>>>>> ethtool -S enp175s0f0
>>>>> NIC statistics:
>>>>> rx_packets: 141574897253
>>>>> rx_bytes: 184445040406258
>>>>> tx_packets: 172569543894
>>>>> tx_bytes: 99486882076365
>>>>> tx_tso_packets: 9367664195
>>>>> tx_tso_bytes: 56435233992948
>>>>> tx_tso_inner_packets: 0
>>>>> tx_tso_inner_bytes: 0
>>>>> tx_added_vlan_packets: 141297671626
>>>>> tx_nop: 2102916272
>>>>> rx_lro_packets: 0
>>>>> rx_lro_bytes: 0
>>>>> rx_ecn_mark: 0
>>>>> rx_removed_vlan_packets: 141574897252
>>>>> rx_csum_unnecessary: 0
>>>>> rx_csum_none: 23135854
>>>>> rx_csum_complete: 141551761398
>>>>> rx_csum_unnecessary_inner: 0
>>>>> rx_xdp_drop: 0
>>>>> rx_xdp_redirect: 0
>>>>> rx_xdp_tx_xmit: 0
>>>>> rx_xdp_tx_full: 0
>>>>> rx_xdp_tx_err: 0
>>>>> rx_xdp_tx_cqe: 0
>>>>> tx_csum_none: 127934791664
>>>> It is a good idea to look into this: tx is not requesting hw tx
>>>> csumming for a lot of packets. Maybe you are wasting a lot of cpu
>>>> on calculating csums, or maybe this is just the rx csum complete..
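To quantify that split, the two counters can be compared directly. A minimal sketch using the values quoted in this message (on a live box, feed `ethtool -S enp175s0f0` into the awk script instead of the here-doc):

```shell
# Share of tx packets that did NOT request hw checksum offload,
# from the tx_csum_none / tx_csum_partial counters quoted here.
awk '
/tx_csum_none:/    { none = $2 }
/tx_csum_partial:/ { part = $2 }
END { printf "csum_none share: %.1f%%\n", 100 * none / (none + part) }
' <<'EOF'
     tx_csum_none: 127934791664
     tx_csum_partial: 13362879974
EOF
```

With these numbers roughly 90% of tx packets go out without requesting hw csum, which for pure forwarding is expected (forwarded packets are not re-checksummed by the host).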
>>>>
>>>>> tx_csum_partial: 13362879974
>>>>> tx_csum_partial_inner: 0
>>>>> tx_queue_stopped: 232561
>>>> TX queues are stalling, which could be an indication of the pcie
>>>> bottleneck.
>>>>
>>>>> tx_queue_dropped: 0
>>>>> tx_xmit_more: 1266021946
>>>>> tx_recover: 0
>>>>> tx_cqes: 140031716469
>>>>> tx_queue_wake: 232561
>>>>> tx_udp_seg_rem: 0
>>>>> tx_cqe_err: 0
>>>>> tx_xdp_xmit: 0
>>>>> tx_xdp_full: 0
>>>>> tx_xdp_err: 0
>>>>> tx_xdp_cqes: 0
>>>>> rx_wqe_err: 0
>>>>> rx_mpwqe_filler_cqes: 0
>>>>> rx_mpwqe_filler_strides: 0
>>>>> rx_buff_alloc_err: 0
>>>>> rx_cqe_compress_blks: 0
>>>>> rx_cqe_compress_pkts: 0
>>>>> rx_page_reuse: 0
>>>>> rx_cache_reuse: 16625975793
>>>>> rx_cache_full: 54161465914
>>>>> rx_cache_empty: 258048
>>>>> rx_cache_busy: 54161472735
>>>>> rx_cache_waive: 0
>>>>> rx_congst_umr: 0
>>>>> rx_arfs_err: 0
>>>>> ch_events: 40572621887
>>>>> ch_poll: 40885650979
>>>>> ch_arm: 40429276692
>>>>> ch_aff_change: 0
>>>>> ch_eq_rearm: 0
>>>>> rx_out_of_buffer: 2791690
>>>>> rx_if_down_packets: 74
>>>>> rx_vport_unicast_packets: 141843476308
>>>>> rx_vport_unicast_bytes: 185421265403318
>>>>> tx_vport_unicast_packets: 172569484005
>>>>> tx_vport_unicast_bytes: 100019940094298
>>>>> rx_vport_multicast_packets: 85122935
>>>>> rx_vport_multicast_bytes: 5761316431
>>>>> tx_vport_multicast_packets: 6452
>>>>> tx_vport_multicast_bytes: 643540
>>>>> rx_vport_broadcast_packets: 22423624
>>>>> rx_vport_broadcast_bytes: 1390127090
>>>>> tx_vport_broadcast_packets: 22024
>>>>> tx_vport_broadcast_bytes: 1321440
>>>>> rx_vport_rdma_unicast_packets: 0
>>>>> rx_vport_rdma_unicast_bytes: 0
>>>>> tx_vport_rdma_unicast_packets: 0
>>>>> tx_vport_rdma_unicast_bytes: 0
>>>>> rx_vport_rdma_multicast_packets: 0
>>>>> rx_vport_rdma_multicast_bytes: 0
>>>>> tx_vport_rdma_multicast_packets: 0
>>>>> tx_vport_rdma_multicast_bytes: 0
>>>>> tx_packets_phy: 172569501577
>>>>> rx_packets_phy: 142871314588
>>>>> rx_crc_errors_phy: 0
>>>>> tx_bytes_phy: 100710212814151
>>>>> rx_bytes_phy: 187209224289564
>>>>> tx_multicast_phy: 6452
>>>>> tx_broadcast_phy: 22024
>>>>> rx_multicast_phy: 85122933
>>>>> rx_broadcast_phy: 22423623
>>>>> rx_in_range_len_errors_phy: 2
>>>>> rx_out_of_range_len_phy: 0
>>>>> rx_oversize_pkts_phy: 0
>>>>> rx_symbol_err_phy: 0
>>>>> tx_mac_control_phy: 0
>>>>> rx_mac_control_phy: 0
>>>>> rx_unsupported_op_phy: 0
>>>>> rx_pause_ctrl_phy: 0
>>>>> tx_pause_ctrl_phy: 0
>>>>> rx_discards_phy: 920161423
>>>> Ok, this port seems to be suffering more; RX is congested, maybe due
>>>> to the pcie bottleneck.
>>> Yes, this side is receiving more traffic - the second port is doing +10G more tx
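rx_discards_phy is easier to reason about as a rate than as a raw total. A sketch with two hypothetical snapshot values (on the router, take the two readings from `ethtool -S enp175s0f0` ten seconds apart):

```shell
# Turn rx_discards_phy into a drop rate instead of a raw total.
# v1/v2 are hypothetical snapshot values; read the real ones with:
#   ethtool -S enp175s0f0 | awk '/rx_discards_phy:/ {print $2}'
v1=920161423   # first snapshot
v2=920265423   # second snapshot, taken 10 seconds later
interval=10
echo "drop rate: $(( (v2 - v1) / interval )) pkts/s"
```

A sustained non-zero rate here while rx_out_of_buffer stays flat points at the port/pcie side rather than at ring exhaustion.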
>>>
>> [...]
>>
>>
>>>>> Average:      17    0.00    0.00   16.60    0.00    0.00   52.10    0.00    0.00    0.00   31.30
>>>>> Average:      18    0.00    0.00   13.90    0.00    0.00   61.20    0.00    0.00    0.00   24.90
>>>>> Average:      19    0.00    0.00    9.99    0.00    0.00   70.33    0.00    0.00    0.00   19.68
>>>>> Average:      20    0.00    0.00    9.00    0.00    0.00   73.00    0.00    0.00    0.00   18.00
>>>>> Average:      21    0.00    0.00    8.70    0.00    0.00   73.90    0.00    0.00    0.00   17.40
>>>>> Average:      22    0.00    0.00   15.42    0.00    0.00   58.56    0.00    0.00    0.00   26.03
>>>>> Average:      23    0.00    0.00   10.81    0.00    0.00   71.67    0.00    0.00    0.00   17.52
>>>>> Average:      24    0.00    0.00   10.00    0.00    0.00   71.80    0.00    0.00    0.00   18.20
>>>>> Average:      25    0.00    0.00   11.19    0.00    0.00   71.13    0.00    0.00    0.00   17.68
>>>>> Average:      26    0.00    0.00   11.00    0.00    0.00   70.80    0.00    0.00    0.00   18.20
>>>>> Average:      27    0.00    0.00   10.01    0.00    0.00   69.57    0.00    0.00    0.00   20.42
>>>> The numa cores are not at 100% util, you have around 20% idle on
>>>> each one.
>>> Yes - not 100% cpu - but the difference between 80% and 100% is like
>>> pushing an additional 1-2Gbit/s
>>>
>> yes, but it doesn't look like the bottleneck is the cpu, although it is
>> close to being one :)..
>>
>>>>> Average:      28    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
>>>>> Average:      29    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
>>>>> Average:      30    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
>>>>> Average:      31    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
>>>>> Average:      32    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
>>>>> Average:      33    0.00    0.00    3.90    0.00    0.00    0.00    0.00    0.00    0.00   96.10
>>>>> Average:      34    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
>>>>> Average:      35    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
>>>>> Average:      36    0.10    0.00    0.20    0.00    0.00    0.00    0.00    0.00    0.00   99.70
>>>>> Average:      37    0.20    0.00    0.30    0.00    0.00    0.00    0.00    0.00    0.00   99.50
>>>>> Average:      38    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
>>>>> Average:      39    0.00    0.00    2.60    0.00    0.00    0.00    0.00    0.00    0.00   97.40
>>>>> Average:      40    0.00    0.00    0.90    0.00    0.00    0.00    0.00    0.00    0.00   99.10
>>>>> Average:      41    0.10    0.00    0.50    0.00    0.00    0.00    0.00    0.00    0.00   99.40
>>>>> Average:      42    0.00    0.00    9.91    0.00    0.00   70.67    0.00    0.00    0.00   19.42
>>>>> Average:      43    0.00    0.00   15.90    0.00    0.00   57.50    0.00    0.00    0.00   26.60
>>>>> Average:      44    0.00    0.00   12.20    0.00    0.00   66.20    0.00    0.00    0.00   21.60
>>>>> Average:      45    0.00    0.00   12.00    0.00    0.00   67.50    0.00    0.00    0.00   20.50
>>>>> Average:      46    0.00    0.00   12.90    0.00    0.00   65.50    0.00    0.00    0.00   21.60
>>>>> Average:      47    0.00    0.00   14.59    0.00    0.00   60.84    0.00    0.00    0.00   24.58
>>>>> Average:      48    0.00    0.00   13.59    0.00    0.00   61.74    0.00    0.00    0.00   24.68
>>>>> Average:      49    0.00    0.00   18.36    0.00    0.00   53.29    0.00    0.00    0.00   28.34
>>>>> Average:      50    0.00    0.00   15.32    0.00    0.00   58.86    0.00    0.00    0.00   25.83
>>>>> Average:      51    0.00    0.00   17.60    0.00    0.00   55.20    0.00    0.00    0.00   27.20
>>>>> Average:      52    0.00    0.00   15.92    0.00    0.00   56.06    0.00    0.00    0.00   28.03
>>>>> Average:      53    0.00    0.00   13.00    0.00    0.00   62.30    0.00    0.00    0.00   24.70
>>>>> Average:      54    0.00    0.00   13.20    0.00    0.00   61.50    0.00    0.00    0.00   25.30
>>>>> Average:      55    0.00    0.00   14.59    0.00    0.00   58.64    0.00    0.00    0.00   26.77
>>>>>
>>>>>
>>>>> ethtool -k enp175s0f0
>>>>> Features for enp175s0f0:
>>>>> rx-checksumming: on
>>>>> tx-checksumming: on
>>>>> tx-checksum-ipv4: on
>>>>> tx-checksum-ip-generic: off [fixed]
>>>>> tx-checksum-ipv6: on
>>>>> tx-checksum-fcoe-crc: off [fixed]
>>>>> tx-checksum-sctp: off [fixed]
>>>>> scatter-gather: on
>>>>> tx-scatter-gather: on
>>>>> tx-scatter-gather-fraglist: off [fixed]
>>>>> tcp-segmentation-offload: on
>>>>> tx-tcp-segmentation: on
>>>>> tx-tcp-ecn-segmentation: off [fixed]
>>>>> tx-tcp-mangleid-segmentation: off
>>>>> tx-tcp6-segmentation: on
>>>>> udp-fragmentation-offload: off
>>>>> generic-segmentation-offload: on
>>>>> generic-receive-offload: on
>>>>> large-receive-offload: off [fixed]
>>>>> rx-vlan-offload: on
>>>>> tx-vlan-offload: on
>>>>> ntuple-filters: off
>>>>> receive-hashing: on
>>>>> highdma: on [fixed]
>>>>> rx-vlan-filter: on
>>>>> vlan-challenged: off [fixed]
>>>>> tx-lockless: off [fixed]
>>>>> netns-local: off [fixed]
>>>>> tx-gso-robust: off [fixed]
>>>>> tx-fcoe-segmentation: off [fixed]
>>>>> tx-gre-segmentation: on
>>>>> tx-gre-csum-segmentation: on
>>>>> tx-ipxip4-segmentation: off [fixed]
>>>>> tx-ipxip6-segmentation: off [fixed]
>>>>> tx-udp_tnl-segmentation: on
>>>>> tx-udp_tnl-csum-segmentation: on
>>>>> tx-gso-partial: on
>>>>> tx-sctp-segmentation: off [fixed]
>>>>> tx-esp-segmentation: off [fixed]
>>>>> tx-udp-segmentation: on
>>>>> fcoe-mtu: off [fixed]
>>>>> tx-nocache-copy: off
>>>>> loopback: off [fixed]
>>>>> rx-fcs: off
>>>>> rx-all: off
>>>>> tx-vlan-stag-hw-insert: on
>>>>> rx-vlan-stag-hw-parse: off [fixed]
>>>>> rx-vlan-stag-filter: on [fixed]
>>>>> l2-fwd-offload: off [fixed]
>>>>> hw-tc-offload: off
>>>>> esp-hw-offload: off [fixed]
>>>>> esp-tx-csum-hw-offload: off [fixed]
>>>>> rx-udp_tunnel-port-offload: on
>>>>> tls-hw-tx-offload: off [fixed]
>>>>> tls-hw-rx-offload: off [fixed]
>>>>> rx-gro-hw: off [fixed]
>>>>> tls-hw-record: off [fixed]
>>>>>
>>>>> ethtool -c enp175s0f0
>>>>> Coalesce parameters for enp175s0f0:
>>>>> Adaptive RX: off TX: on
>>>>> stats-block-usecs: 0
>>>>> sample-interval: 0
>>>>> pkt-rate-low: 0
>>>>> pkt-rate-high: 0
>>>>> dmac: 32703
>>>>>
>>>>> rx-usecs: 256
>>>>> rx-frames: 128
>>>>> rx-usecs-irq: 0
>>>>> rx-frames-irq: 0
>>>>>
>>>>> tx-usecs: 8
>>>>> tx-frames: 128
>>>>> tx-usecs-irq: 0
>>>>> tx-frames-irq: 0
>>>>>
>>>>> rx-usecs-low: 0
>>>>> rx-frame-low: 0
>>>>> tx-usecs-low: 0
>>>>> tx-frame-low: 0
>>>>>
>>>>> rx-usecs-high: 0
>>>>> rx-frame-high: 0
>>>>> tx-usecs-high: 0
>>>>> tx-frame-high: 0
>>>>>
>>>>> ethtool -g enp175s0f0
>>>>> Ring parameters for enp175s0f0:
>>>>> Pre-set maximums:
>>>>> RX: 8192
>>>>> RX Mini: 0
>>>>> RX Jumbo: 0
>>>>> TX: 8192
>>>>> Current hardware settings:
>>>>> RX: 4096
>>>>> RX Mini: 0
>>>>> RX Jumbo: 0
>>>>> TX: 4096
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>> I also changed the coalesce params a little - the best for this config are:
>>> ethtool -c enp175s0f0
>>> Coalesce parameters for enp175s0f0:
>>> Adaptive RX: off TX: off
>>> stats-block-usecs: 0
>>> sample-interval: 0
>>> pkt-rate-low: 0
>>> pkt-rate-high: 0
>>> dmac: 32573
>>>
>>> rx-usecs: 40
>>> rx-frames: 128
>>> rx-usecs-irq: 0
>>> rx-frames-irq: 0
>>>
>>> tx-usecs: 8
>>> tx-frames: 8
>>> tx-usecs-irq: 0
>>> tx-frames-irq: 0
>>>
>>> rx-usecs-low: 0
>>> rx-frame-low: 0
>>> tx-usecs-low: 0
>>> tx-frame-low: 0
>>>
>>> rx-usecs-high: 0
>>> rx-frame-high: 0
>>> tx-usecs-high: 0
>>> tx-frame-high: 0
>>>
>>>
>>> Fewer drops on the RX side - and more pps forwarded overall.
>>>
>> How much improvement? Maybe we can improve our adaptive rx coalescing
>> to be efficient for this workload.
>>
>>
> I can maybe prepare more ethtool stats to compare - but normally I
> tested with simple icmp forwarded from interface to interface
> - before changing coalescing params:
> adaptive-rx off rx-usecs 384 rx-frames 128
> 3% loss for icmp
> - after changing to:
> adaptive-rx off rx-usecs 40 rx-frames 128 adaptive-tx off tx-usecs 8
> tx-frames 8
> 2% loss for icmp
>
> But yes - to know better will need to compare rx/tx counters from
> ethtool + /proc/net/dev
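The /proc/net/dev comparison can be scripted. This sketch parses two sample snapshot lines (the second line is hypothetical, chosen to match the ~3.5 Mpps RX rate bwm-ng reports for enp175s0f0; on the router, read the real lines with `grep enp175s0f0 /proc/net/dev` twice, one second apart):

```shell
# Packets-per-second from two /proc/net/dev snapshots taken 1s apart.
# Fields: iface: rx_bytes rx_packets errs drop ... tx_bytes tx_packets ...
snap1='enp175s0f0: 184445040406258 141574897253 0 0 0 0 0 0 99486882076365 172569543894 0 0 0 0 0 0'
snap2='enp175s0f0: 184448597067211 141578455197 0 0 0 0 0 0 99487967076365 172574776410 0 0 0 0 0 0'
rx1=$(echo "$snap1" | awk '{print $3}')   # rx packet counter, snapshot 1
rx2=$(echo "$snap2" | awk '{print $3}')   # rx packet counter, snapshot 2
echo "rx pps: $(( rx2 - rx1 ))"
```

The same delta taken from the ethtool per-ring counters would show whether the coalescing change shifted drops between the phy and the host rings.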
>
>
> I was trying to turn on adaptive tx+rx - but got 100% saturation at
> 43Gbit/s RX / 43Gbit/s TX
>
>
>
>
>
>
>
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-01 21:24 ` Paweł Staszewski
@ 2018-11-01 21:34 ` Paweł Staszewski
0 siblings, 0 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-01 21:34 UTC (permalink / raw)
To: Saeed Mahameed, netdev
W dniu 01.11.2018 o 22:24, Paweł Staszewski pisze:
>
>
> W dniu 01.11.2018 o 22:18, Paweł Staszewski pisze:
>>
>>
>> W dniu 01.11.2018 o 21:37, Saeed Mahameed pisze:
>>> On Thu, 2018-11-01 at 12:09 +0100, Paweł Staszewski wrote:
>>>> W dniu 01.11.2018 o 10:50, Saeed Mahameed pisze:
>>>>> On Wed, 2018-10-31 at 22:57 +0100, Paweł Staszewski wrote:
>>>>>> Hi
>>>>>>
>>>>>> So maybe someone will be interested in how the linux kernel handles
>>>>>> normal traffic (not pktgen :) )
>>>>>>
>>>>>>
>>>>>> Server HW configuration:
>>>>>>
>>>>>> CPU : Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
>>>>>>
>>>>>> NIC's: 2x 100G Mellanox ConnectX-4 (connected to x16 pcie 8GT)
>>>>>>
>>>>>>
>>>>>> Server software:
>>>>>>
>>>>>> FRR - as routing daemon
>>>>>>
>>>>>> enp175s0f0 (100G) - 16 vlans from upstreams (28 RSS binded to
>>>>>> local
>>>>>> numa
>>>>>> node)
>>>>>>
>>>>>> enp175s0f1 (100G) - 343 vlans to clients (28 RSS binded to local
>>>>>> numa
>>>>>> node)
>>>>>>
>>>>>>
>>>>>> Maximum traffic that server can handle:
>>>>>>
>>>>>> Bandwidth
>>>>>>
>>>>>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>>>>>> input: /proc/net/dev type: rate
>>>>>> \ iface Rx Tx Total
>>>>>> ==============================================================================
>>>>>> enp175s0f1:           28.51 Gb/s            37.24 Gb/s            65.74 Gb/s
>>>>>> enp175s0f0:           38.07 Gb/s            28.44 Gb/s            66.51 Gb/s
>>>>>> ------------------------------------------------------------------------------
>>>>>> total:                66.58 Gb/s            65.67 Gb/s           132.25 Gb/s
>>>>>>
>>>>>>
>>>>>> Packets per second:
>>>>>>
>>>>>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>>>>>> input: /proc/net/dev type: rate
>>>>>> - iface Rx Tx Total
>>>>>> ==============================================================================
>>>>>> enp175s0f1:       5248589.00 P/s       3486617.75 P/s       8735207.00 P/s
>>>>>> enp175s0f0:       3557944.25 P/s       5232516.00 P/s       8790460.00 P/s
>>>>>> ------------------------------------------------------------------------------
>>>>>> total:            8806533.00 P/s       8719134.00 P/s      17525668.00 P/s
>>>>>>
>>>>>>
>>>>>> After reaching that limits nics on the upstream side (more RX
>>>>>> traffic)
>>>>>> start to drop packets
>>>>>>
>>>>>>
>>>>>> I just dont understand that server can't handle more bandwidth
>>>>>> (~40Gbit/s is limit where all cpu's are 100% util) - where pps on
>>>>>> RX
>>>>>> side are increasing.
>>>>>>
>>>>> Where do you see 40 Gb/s? You showed that both ports on the same
>>>>> NIC (same pcie link) are doing 66.58 Gb/s (RX) + 65.67 Gb/s (TX) =
>>>>> 132.25 Gb/s, which aligns with your pcie link limit. What am I missing?
>>>> hmm, yes, that was my concern also - I can't find information anywhere
>>>> on whether that bandwidth is uni- or bidirectional - so if 126Gbit for
>>>> x16 8GT is unidir, then bidir would be 126/2 ~ 68Gbit, which would fit
>>>> the total bw on both ports
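For reference, the raw per-direction figure for a PCIe 3.0 x16 link follows from the link parameters (a back-of-the-envelope sketch; the link is nominally full duplex, and TLP/protocol overhead further reduces the usable number):

```shell
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding, 16 lanes.
# Raw payload bandwidth per direction, before TLP/protocol overhead:
awk 'BEGIN {
    gts   = 8          # transfers per second per lane (GT/s)
    lanes = 16
    enc   = 128 / 130  # 128b/130b line-encoding efficiency
    printf "%.2f Gbit/s per direction\n", gts * lanes * enc
}'
```

That is where the 126Gbit figure quoted above comes from; whether the observed ceiling is one direction or the sum of both is exactly what the bwm-ng totals should distinguish.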
>>> i think it is bidir
>>>
>>>> Maybe this also explains why cpu load rises rapidly from
>>>> 120Gbit/s in total to 132Gbit (bwm-ng counters come from /proc/net -
>>>> so there can be some error in reading them when offloading
>>>> (gro/gso/tso) is enabled on the nics)
>>>>>> Was thinking that maybe I reached some pcie x16 limit - but x16 8GT
>>>>>> is 126Gbit - and also when testing with pktgen I can reach more bw
>>>>>> and pps (like 4x more compared to normal internet traffic)
>>>>>>
>>>>> Are you forwarding when using pktgen as well, or just testing the
>>>>> RX-side pps?
>>>> Yes, pktgen was tested on a single port, RX only.
>>>> I can also check forwarding to rule out pcie limits.
>>>>
>>> So this explains why you have more RX pps, since tx is idle and pcie
>>> will be free to do only rx.
>>>
>>> [...]
>>>
>>>
>>>>>> ethtool -S enp175s0f1
>>>>>> NIC statistics:
>>>>>> rx_packets: 173730800927
>>>>>> rx_bytes: 99827422751332
>>>>>> tx_packets: 142532009512
>>>>>> tx_bytes: 184633045911222
>>>>>> tx_tso_packets: 25989113891
>>>>>> tx_tso_bytes: 132933363384458
>>>>>> tx_tso_inner_packets: 0
>>>>>> tx_tso_inner_bytes: 0
>>>>>> tx_added_vlan_packets: 74630239613
>>>>>> tx_nop: 2029817748
>>>>>> rx_lro_packets: 0
>>>>>> rx_lro_bytes: 0
>>>>>> rx_ecn_mark: 0
>>>>>> rx_removed_vlan_packets: 173730800927
>>>>>> rx_csum_unnecessary: 0
>>>>>> rx_csum_none: 434357
>>>>>> rx_csum_complete: 173730366570
>>>>>> rx_csum_unnecessary_inner: 0
>>>>>> rx_xdp_drop: 0
>>>>>> rx_xdp_redirect: 0
>>>>>> rx_xdp_tx_xmit: 0
>>>>>> rx_xdp_tx_full: 0
>>>>>> rx_xdp_tx_err: 0
>>>>>> rx_xdp_tx_cqe: 0
>>>>>> tx_csum_none: 38260960853
>>>>>> tx_csum_partial: 36369278774
>>>>>> tx_csum_partial_inner: 0
>>>>>> tx_queue_stopped: 1
>>>>>> tx_queue_dropped: 0
>>>>>> tx_xmit_more: 748638099
>>>>>> tx_recover: 0
>>>>>> tx_cqes: 73881645031
>>>>>> tx_queue_wake: 1
>>>>>> tx_udp_seg_rem: 0
>>>>>> tx_cqe_err: 0
>>>>>> tx_xdp_xmit: 0
>>>>>> tx_xdp_full: 0
>>>>>> tx_xdp_err: 0
>>>>>> tx_xdp_cqes: 0
>>>>>> rx_wqe_err: 0
>>>>>> rx_mpwqe_filler_cqes: 0
>>>>>> rx_mpwqe_filler_strides: 0
>>>>>> rx_buff_alloc_err: 0
>>>>>> rx_cqe_compress_blks: 0
>>>>>> rx_cqe_compress_pkts: 0
>>>>> If this is a PCIe bottleneck, it might be useful to enable CQE
>>>>> compression (to reduce PCIe completion-descriptor transactions).
>>>>> You should see the rx_cqe_compress_pkts counter above increasing
>>>>> when enabled.
>>>>>
>>>>> $ ethtool --set-priv-flags enp175s0f1 rx_cqe_compress on
>>>>> $ ethtool --show-priv-flags enp175s0f1
>>>>> Private flags for p6p1:
>>>>> rx_cqe_moder : on
>>>>> cqe_moder : off
>>>>> rx_cqe_compress : on
>>>>> ...
>>>>>
>>>>> Try this on both interfaces.
>>>> Done
>>>> ethtool --show-priv-flags enp175s0f1
>>>> Private flags for enp175s0f1:
>>>> rx_cqe_moder : on
>>>> tx_cqe_moder : off
>>>> rx_cqe_compress : on
>>>> rx_striding_rq : off
>>>> rx_no_csum_complete: off
>>>>
>>>> ethtool --show-priv-flags enp175s0f0
>>>> Private flags for enp175s0f0:
>>>> rx_cqe_moder : on
>>>> tx_cqe_moder : off
>>>> rx_cqe_compress : on
>>>> rx_striding_rq : off
>>>> rx_no_csum_complete: off
>>>>
>>> Did it help reduce the load on the PCIe link? Do you see more pps?
>>> What is the ratio between rx_cqe_compress_pkts and overall
>>> rx_packets?
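[Editor's note: that ratio can be read straight off the cumulative counters in the ethtool dump below. Both counters are since boot, and compression was only enabled partway through, so this understates the post-enable compression coverage.]

```python
# Rough CQE-compression coverage from the cumulative ethtool counters
# posted later in this message (values copied from the dump below).
rx_cqe_compress_pkts = 25_794_213_324
rx_packets = 516_522_465_438

ratio = rx_cqe_compress_pkts / rx_packets
print(f"compressed CQE coverage since boot: {ratio:.1%}")
# ~5% of all RX packets so far arrived via compressed CQEs; the true
# ratio after enabling the flag is higher, since the denominator
# includes all traffic received before 11:55.
```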
>> So - a little more pps.
>> Before the change: top graph; after the change: bottom graph - image
>> with stats from /proc/net/dev.
> Attached link to graph
> https://uploadfiles.io/5vgbh
>
>
Enabling tx_cqe_moder for both ports
helps a lot with CPU load at the same traffic level - the change is
about -20% across all 28 cores.
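[Editor's note: for anyone replaying this, the flag is toggled the same way as rx_cqe_compress earlier in the thread; tx_cqe_moder is the mlx5 private-flag name shown in the --show-priv-flags output above. Interface names are from this setup.]

```shell
# Enable TX CQE moderation on both mlx5 ports (private flag).
ethtool --set-priv-flags enp175s0f0 tx_cqe_moder on
ethtool --set-priv-flags enp175s0f1 tx_cqe_moder on
# Verify the flag took effect:
ethtool --show-priv-flags enp175s0f0
```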
>> cqe_compress enabled at 11:55
>>
>> Sorry - with real-life traffic it is hard to do precise counter
>> comparisons, because traffic keeps rising on its own from minute to
>> minute :)
>> But the change for that period is visible on the graph, since load
>> had been almost flat for the 20 minutes before the change.
>>
>>
>> full ethtool below:
>> NIC statistics:
>> rx_packets: 516522465438
>> rx_bytes: 680052911258729
>> tx_packets: 677697545586
>> tx_bytes: 413647643141709
>> tx_tso_packets: 42530913279
>> tx_tso_bytes: 235655668554142
>> tx_tso_inner_packets: 0
>> tx_tso_inner_bytes: 0
>> tx_added_vlan_packets: 551156530885
>> tx_nop: 8536823558
>> rx_lro_packets: 0
>> rx_lro_bytes: 0
>> rx_ecn_mark: 0
>> rx_removed_vlan_packets: 516522465438
>> rx_csum_unnecessary: 0
>> rx_csum_none: 50382868
>> rx_csum_complete: 516472082570
>> rx_csum_unnecessary_inner: 0
>> rx_xdp_drop: 0
>> rx_xdp_redirect: 0
>> rx_xdp_tx_xmit: 0
>> rx_xdp_tx_full: 0
>> rx_xdp_tx_err: 0
>> rx_xdp_tx_cqe: 0
>> tx_csum_none: 494075047017
>> tx_csum_partial: 57081483898
>> tx_csum_partial_inner: 0
>> tx_queue_stopped: 518624
>> tx_queue_dropped: 0
>> tx_xmit_more: 1717880628
>> tx_recover: 0
>> tx_cqes: 549438869029
>> tx_queue_wake: 518627
>> tx_udp_seg_rem: 0
>> tx_cqe_err: 0
>> tx_xdp_xmit: 0
>> tx_xdp_full: 0
>> tx_xdp_err: 0
>> tx_xdp_cqes: 0
>> rx_wqe_err: 0
>> rx_mpwqe_filler_cqes: 0
>> rx_mpwqe_filler_strides: 0
>> rx_buff_alloc_err: 0
>> rx_cqe_compress_blks: 11483228712
>> rx_cqe_compress_pkts: 25794213324
>> rx_page_reuse: 0
>> rx_cache_reuse: 63610249810
>> rx_cache_full: 194650916511
>> rx_cache_empty: 1118208
>> rx_cache_busy: 194650982430
>> rx_cache_waive: 0
>> rx_congst_umr: 0
>> rx_arfs_err: 0
>> ch_events: 119556002196
>> ch_poll: 121107424977
>> ch_arm: 115856746008
>> ch_aff_change: 31
>> ch_eq_rearm: 0
>> rx_out_of_buffer: 6880325
>> rx_if_down_packets: 2062529
>> rx_vport_unicast_packets: 517433716795
>> rx_vport_unicast_bytes: 683464347301443
>> tx_vport_unicast_packets: 677697453738
>> tx_vport_unicast_bytes: 415788589663315
>> rx_vport_multicast_packets: 208258309
>> rx_vport_multicast_bytes: 14224046052
>> tx_vport_multicast_packets: 21689
>> tx_vport_multicast_bytes: 2158334
>> rx_vport_broadcast_packets: 75838646
>> rx_vport_broadcast_bytes: 4697944695
>> tx_vport_broadcast_packets: 68730
>> tx_vport_broadcast_bytes: 4123800
>> rx_vport_rdma_unicast_packets: 0
>> rx_vport_rdma_unicast_bytes: 0
>> tx_vport_rdma_unicast_packets: 0
>> tx_vport_rdma_unicast_bytes: 0
>> rx_vport_rdma_multicast_packets: 0
>> rx_vport_rdma_multicast_bytes: 0
>> tx_vport_rdma_multicast_packets: 0
>> tx_vport_rdma_multicast_bytes: 0
>> tx_packets_phy: 677697543252
>> rx_packets_phy: 521319491878
>> rx_crc_errors_phy: 0
>> tx_bytes_phy: 418499385791411
>> rx_bytes_phy: 690322537017274
>> tx_multicast_phy: 21689
>> tx_broadcast_phy: 68730
>> rx_multicast_phy: 208258305
>> rx_broadcast_phy: 75838646
>> rx_in_range_len_errors_phy: 4
>> rx_out_of_range_len_phy: 0
>> rx_oversize_pkts_phy: 0
>> rx_symbol_err_phy: 0
>> tx_mac_control_phy: 0
>> rx_mac_control_phy: 0
>> rx_unsupported_op_phy: 0
>> rx_pause_ctrl_phy: 0
>> tx_pause_ctrl_phy: 0
>> rx_discards_phy: 3601449265
>> tx_discards_phy: 0
>> tx_errors_phy: 0
>> rx_undersize_pkts_phy: 0
>> rx_fragments_phy: 0
>> rx_jabbers_phy: 0
>> rx_64_bytes_phy: 1416456771
>> rx_65_to_127_bytes_phy: 40750434737
>> rx_128_to_255_bytes_phy: 11518110310
>> rx_256_to_511_bytes_phy: 7055850637
>> rx_512_to_1023_bytes_phy: 7811550424
>> rx_1024_to_1518_bytes_phy: 265547564845
>> rx_1519_to_2047_bytes_phy: 187219522899
>> rx_2048_to_4095_bytes_phy: 0
>> rx_4096_to_8191_bytes_phy: 0
>> rx_8192_to_10239_bytes_phy: 0
>> link_down_events_phy: 0
>> rx_pcs_symbol_err_phy: 0
>> rx_corrected_bits_phy: 0
>> rx_pci_signal_integrity: 0
>> tx_pci_signal_integrity: 48
>> rx_prio0_bytes: 688807632117485
>> rx_prio0_packets: 516310309931
>> tx_prio0_bytes: 418499382756025
>> tx_prio0_packets: 677697534982
>> rx_prio1_bytes: 1497701612877
>> rx_prio1_packets: 1206768094
>> tx_prio1_bytes: 0
>> tx_prio1_packets: 0
>> rx_prio2_bytes: 112271227
>> rx_prio2_packets: 337295
>> tx_prio2_bytes: 0
>> tx_prio2_packets: 0
>> rx_prio3_bytes: 1165455555
>> rx_prio3_packets: 1544310
>> tx_prio3_bytes: 0
>> tx_prio3_packets: 0
>> rx_prio4_bytes: 161857240
>> rx_prio4_packets: 341392
>> tx_prio4_bytes: 0
>> tx_prio4_packets: 0
>> rx_prio5_bytes: 455031612
>> rx_prio5_packets: 2861469
>> tx_prio5_bytes: 0
>> tx_prio5_packets: 0
>> rx_prio6_bytes: 1873928697
>> rx_prio6_packets: 5146981
>> tx_prio6_bytes: 0
>> tx_prio6_packets: 0
>> rx_prio7_bytes: 13423452430
>> rx_prio7_packets: 190724796
>> tx_prio7_bytes: 0
>> tx_prio7_packets: 0
>> module_unplug: 0
>> module_bus_stuck: 0
>> module_high_temp: 0
>> module_bad_shorted: 0
>> ch0_events: 4252266777
>> ch0_poll: 4330804273
>> ch0_arm: 4120233182
>> ch0_aff_change: 2
>> ch0_eq_rearm: 0
>> ch1_events: 3938415938
>> ch1_poll: 4012621322
>> ch1_arm: 3810131188
>> ch1_aff_change: 2
>> ch1_eq_rearm: 0
>> ch2_events: 3897428860
>> ch2_poll: 3973886848
>> ch2_arm: 3773019397
>> ch2_aff_change: 1
>> ch2_eq_rearm: 0
>> ch3_events: 4108000541
>> ch3_poll: 4180139872
>> ch3_arm: 3982093366
>> ch3_aff_change: 1
>> ch3_eq_rearm: 0
>> ch4_events: 4652570079
>> ch4_poll: 4720541090
>> ch4_arm: 4524475054
>> ch4_aff_change: 2
>> ch4_eq_rearm: 0
>> ch5_events: 3899177385
>> ch5_poll: 3974274186
>> ch5_arm: 3772299186
>> ch5_aff_change: 2
>> ch5_eq_rearm: 0
>> ch6_events: 3915161350
>> ch6_poll: 3992338199
>> ch6_arm: 3794710989
>> ch6_aff_change: 0
>> ch6_eq_rearm: 0
>> ch7_events: 4008175631
>> ch7_poll: 4081321248
>> ch7_arm: 3882263723
>> ch7_aff_change: 0
>> ch7_eq_rearm: 0
>> ch8_events: 4207422352
>> ch8_poll: 4276465449
>> ch8_arm: 4077650366
>> ch8_aff_change: 0
>> ch8_eq_rearm: 0
>> ch9_events: 4036491879
>> ch9_poll: 4108975987
>> ch9_arm: 3914493694
>> ch9_aff_change: 0
>> ch9_eq_rearm: 0
>> ch10_events: 4066261595
>> ch10_poll: 4134419606
>> ch10_arm: 3936637711
>> ch10_aff_change: 1
>> ch10_eq_rearm: 0
>> ch11_events: 4440494043
>> ch11_poll: 4507578730
>> ch11_arm: 4318629438
>> ch11_aff_change: 0
>> ch11_eq_rearm: 0
>> ch12_events: 4066958252
>> ch12_poll: 4130191506
>> ch12_arm: 3934337782
>> ch12_aff_change: 0
>> ch12_eq_rearm: 0
>> ch13_events: 4051309159
>> ch13_poll: 4118864120
>> ch13_arm: 3921011919
>> ch13_aff_change: 0
>> ch13_eq_rearm: 0
>> ch14_events: 4321664800
>> ch14_poll: 4382433680
>> ch14_arm: 4186130552
>> ch14_aff_change: 0
>> ch14_eq_rearm: 0
>> ch15_events: 4701102075
>> ch15_poll: 4760373932
>> ch15_arm: 4570151468
>> ch15_aff_change: 0
>> ch15_eq_rearm: 0
>> ch16_events: 4311052687
>> ch16_poll: 4345937129
>> ch16_arm: 4170883819
>> ch16_aff_change: 0
>> ch16_eq_rearm: 0
>> ch17_events: 4647570931
>> ch17_poll: 4680218533
>> ch17_arm: 4509426288
>> ch17_aff_change: 0
>> ch17_eq_rearm: 0
>> ch18_events: 4598195702
>> ch18_poll: 4631314898
>> ch18_arm: 4457267084
>> ch18_aff_change: 0
>> ch18_eq_rearm: 0
>> ch19_events: 4808094560
>> ch19_poll: 4841368340
>> ch19_arm: 4670604358
>> ch19_aff_change: 0
>> ch19_eq_rearm: 0
>> ch20_events: 4240910605
>> ch20_poll: 4276531502
>> ch20_arm: 4101767278
>> ch20_aff_change: 1
>> ch20_eq_rearm: 0
>> ch21_events: 4389371472
>> ch21_poll: 4426870311
>> ch21_arm: 4249339045
>> ch21_aff_change: 2
>> ch21_eq_rearm: 0
>> ch22_events: 4282958754
>> ch22_poll: 4319228073
>> ch22_arm: 4145102991
>> ch22_aff_change: 2
>> ch22_eq_rearm: 0
>> ch23_events: 4440196528
>> ch23_poll: 4474090188
>> ch23_arm: 4300837147
>> ch23_aff_change: 2
>> ch23_eq_rearm: 0
>> ch24_events: 4326875785
>> ch24_poll: 4364971263
>> ch24_arm: 4186404526
>> ch24_aff_change: 2
>> ch24_eq_rearm: 0
>> ch25_events: 4286528453
>> ch25_poll: 4324089445
>> ch25_arm: 4147222616
>> ch25_aff_change: 3
>> ch25_eq_rearm: 0
>> ch26_events: 4098043104
>> ch26_poll: 4138133745
>> ch26_arm: 3967438971
>> ch26_aff_change: 4
>> ch26_eq_rearm: 0
>> ch27_events: 4563302840
>> ch27_poll: 4599441446
>> ch27_arm: 4432182806
>> ch27_aff_change: 4
>> ch27_eq_rearm: 0
>> ch28_events: 4
>> ch28_poll: 4
>> ch28_arm: 4
>> ch28_aff_change: 0
>> ch28_eq_rearm: 0
>> ch29_events: 6
>> ch29_poll: 6
>> ch29_arm: 6
>> ch29_aff_change: 0
>> ch29_eq_rearm: 0
>> ch30_events: 4
>> ch30_poll: 4
>> ch30_arm: 4
>> ch30_aff_change: 0
>> ch30_eq_rearm: 0
>> ch31_events: 4
>> ch31_poll: 4
>> ch31_arm: 4
>> ch31_aff_change: 0
>> ch31_eq_rearm: 0
>> ch32_events: 4
>> ch32_poll: 4
>> ch32_arm: 4
>> ch32_aff_change: 0
>> ch32_eq_rearm: 0
>> ch33_events: 4
>> ch33_poll: 4
>> ch33_arm: 4
>> ch33_aff_change: 0
>> ch33_eq_rearm: 0
>> ch34_events: 4
>> ch34_poll: 4
>> ch34_arm: 4
>> ch34_aff_change: 0
>> ch34_eq_rearm: 0
>> ch35_events: 4
>> ch35_poll: 4
>> ch35_arm: 4
>> ch35_aff_change: 0
>> ch35_eq_rearm: 0
>> ch36_events: 4
>> ch36_poll: 4
>> ch36_arm: 4
>> ch36_aff_change: 0
>> ch36_eq_rearm: 0
>> ch37_events: 4
>> ch37_poll: 4
>> ch37_arm: 4
>> ch37_aff_change: 0
>> ch37_eq_rearm: 0
>> ch38_events: 4
>> ch38_poll: 4
>> ch38_arm: 4
>> ch38_aff_change: 0
>> ch38_eq_rearm: 0
>> ch39_events: 4
>> ch39_poll: 4
>> ch39_arm: 4
>> ch39_aff_change: 0
>> ch39_eq_rearm: 0
>> ch40_events: 4
>> ch40_poll: 4
>> ch40_arm: 4
>> ch40_aff_change: 0
>> ch40_eq_rearm: 0
>> ch41_events: 4
>> ch41_poll: 4
>> ch41_arm: 4
>> ch41_aff_change: 0
>> ch41_eq_rearm: 0
>> ch42_events: 4
>> ch42_poll: 4
>> ch42_arm: 4
>> ch42_aff_change: 0
>> ch42_eq_rearm: 0
>> ch43_events: 4
>> ch43_poll: 4
>> ch43_arm: 4
>> ch43_aff_change: 0
>> ch43_eq_rearm: 0
>> ch44_events: 4
>> ch44_poll: 4
>> ch44_arm: 4
>> ch44_aff_change: 0
>> ch44_eq_rearm: 0
>> ch45_events: 4
>> ch45_poll: 4
>> ch45_arm: 4
>> ch45_aff_change: 0
>> ch45_eq_rearm: 0
>> ch46_events: 4
>> ch46_poll: 4
>> ch46_arm: 4
>> ch46_aff_change: 0
>> ch46_eq_rearm: 0
>> ch47_events: 4
>> ch47_poll: 4
>> ch47_arm: 4
>> ch47_aff_change: 0
>> ch47_eq_rearm: 0
>> ch48_events: 4
>> ch48_poll: 4
>> ch48_arm: 4
>> ch48_aff_change: 0
>> ch48_eq_rearm: 0
>> ch49_events: 4
>> ch49_poll: 4
>> ch49_arm: 4
>> ch49_aff_change: 0
>> ch49_eq_rearm: 0
>> ch50_events: 4
>> ch50_poll: 4
>> ch50_arm: 4
>> ch50_aff_change: 0
>> ch50_eq_rearm: 0
>> ch51_events: 4
>> ch51_poll: 4
>> ch51_arm: 4
>> ch51_aff_change: 0
>> ch51_eq_rearm: 0
>> ch52_events: 4
>> ch52_poll: 4
>> ch52_arm: 4
>> ch52_aff_change: 0
>> ch52_eq_rearm: 0
>> ch53_events: 4
>> ch53_poll: 4
>> ch53_arm: 4
>> ch53_aff_change: 0
>> ch53_eq_rearm: 0
>> ch54_events: 4
>> ch54_poll: 4
>> ch54_arm: 4
>> ch54_aff_change: 0
>> ch54_eq_rearm: 0
>> ch55_events: 4
>> ch55_poll: 4
>> ch55_arm: 4
>> ch55_aff_change: 0
>> ch55_eq_rearm: 0
>> rx0_packets: 21390033774
>> rx0_bytes: 27326856299122
>> rx0_csum_complete: 21339650906
>> rx0_csum_unnecessary: 0
>> rx0_csum_unnecessary_inner: 0
>> rx0_csum_none: 50382868
>> rx0_xdp_drop: 0
>> rx0_xdp_redirect: 0
>> rx0_lro_packets: 0
>> rx0_lro_bytes: 0
>> rx0_ecn_mark: 0
>> rx0_removed_vlan_packets: 21390033774
>> rx0_wqe_err: 0
>> rx0_mpwqe_filler_cqes: 0
>> rx0_mpwqe_filler_strides: 0
>> rx0_buff_alloc_err: 0
>> rx0_cqe_compress_blks: 481077641
>> rx0_cqe_compress_pkts: 1085647489
>> rx0_page_reuse: 0
>> rx0_cache_reuse: 19050049
>> rx0_cache_full: 10675964285
>> rx0_cache_empty: 37376
>> rx0_cache_busy: 10675966819
>> rx0_cache_waive: 0
>> rx0_congst_umr: 0
>> rx0_arfs_err: 0
>> rx0_xdp_tx_xmit: 0
>> rx0_xdp_tx_full: 0
>> rx0_xdp_tx_err: 0
>> rx0_xdp_tx_cqes: 0
>> rx1_packets: 19868919527
>> rx1_bytes: 26149716991561
>> rx1_csum_complete: 19868919527
>> rx1_csum_unnecessary: 0
>> rx1_csum_unnecessary_inner: 0
>> rx1_csum_none: 0
>> rx1_xdp_drop: 0
>> rx1_xdp_redirect: 0
>> rx1_lro_packets: 0
>> rx1_lro_bytes: 0
>> rx1_ecn_mark: 0
>> rx1_removed_vlan_packets: 19868919527
>> rx1_wqe_err: 0
>> rx1_mpwqe_filler_cqes: 0
>> rx1_mpwqe_filler_strides: 0
>> rx1_buff_alloc_err: 0
>> rx1_cqe_compress_blks: 420210560
>> rx1_cqe_compress_pkts: 941233388
>> rx1_page_reuse: 0
>> rx1_cache_reuse: 46200002
>> rx1_cache_full: 9888257242
>> rx1_cache_empty: 37376
>> rx1_cache_busy: 9888259746
>> rx1_cache_waive: 0
>> rx1_congst_umr: 0
>> rx1_arfs_err: 0
>> rx1_xdp_tx_xmit: 0
>> rx1_xdp_tx_full: 0
>> rx1_xdp_tx_err: 0
>> rx1_xdp_tx_cqes: 0
>> rx2_packets: 19575013662
>> rx2_bytes: 25759818417945
>> rx2_csum_complete: 19575013662
>> rx2_csum_unnecessary: 0
>> rx2_csum_unnecessary_inner: 0
>> rx2_csum_none: 0
>> rx2_xdp_drop: 0
>> rx2_xdp_redirect: 0
>> rx2_lro_packets: 0
>> rx2_lro_bytes: 0
>> rx2_ecn_mark: 0
>> rx2_removed_vlan_packets: 19575013662
>> rx2_wqe_err: 0
>> rx2_mpwqe_filler_cqes: 0
>> rx2_mpwqe_filler_strides: 0
>> rx2_buff_alloc_err: 0
>> rx2_cqe_compress_blks: 412345511
>> rx2_cqe_compress_pkts: 923376167
>> rx2_page_reuse: 0
>> rx2_cache_reuse: 38837731
>> rx2_cache_full: 9748666548
>> rx2_cache_empty: 37376
>> rx2_cache_busy: 9748669093
>> rx2_cache_waive: 0
>> rx2_congst_umr: 0
>> rx2_arfs_err: 0
>> rx2_xdp_tx_xmit: 0
>> rx2_xdp_tx_full: 0
>> rx2_xdp_tx_err: 0
>> rx2_xdp_tx_cqes: 0
>> rx3_packets: 19795911749
>> rx3_bytes: 25969475566905
>> rx3_csum_complete: 19795911749
>> rx3_csum_unnecessary: 0
>> rx3_csum_unnecessary_inner: 0
>> rx3_csum_none: 0
>> rx3_xdp_drop: 0
>> rx3_xdp_redirect: 0
>> rx3_lro_packets: 0
>> rx3_lro_bytes: 0
>> rx3_ecn_mark: 0
>> rx3_removed_vlan_packets: 19795911749
>> rx3_wqe_err: 0
>> rx3_mpwqe_filler_cqes: 0
>> rx3_mpwqe_filler_strides: 0
>> rx3_buff_alloc_err: 0
>> rx3_cqe_compress_blks: 416658765
>> rx3_cqe_compress_pkts: 934986266
>> rx3_page_reuse: 0
>> rx3_cache_reuse: 34542124
>> rx3_cache_full: 9863411232
>> rx3_cache_empty: 37376
>> rx3_cache_busy: 9863413732
>> rx3_cache_waive: 0
>> rx3_congst_umr: 0
>> rx3_arfs_err: 0
>> rx3_xdp_tx_xmit: 0
>> rx3_xdp_tx_full: 0
>> rx3_xdp_tx_err: 0
>> rx3_xdp_tx_cqes: 0
>> rx4_packets: 20445652378
>> rx4_bytes: 26949065110265
>> rx4_csum_complete: 20445652378
>> rx4_csum_unnecessary: 0
>> rx4_csum_unnecessary_inner: 0
>> rx4_csum_none: 0
>> rx4_xdp_drop: 0
>> rx4_xdp_redirect: 0
>> rx4_lro_packets: 0
>> rx4_lro_bytes: 0
>> rx4_ecn_mark: 0
>> rx4_removed_vlan_packets: 20445652378
>> rx4_wqe_err: 0
>> rx4_mpwqe_filler_cqes: 0
>> rx4_mpwqe_filler_strides: 0
>> rx4_buff_alloc_err: 0
>> rx4_cqe_compress_blks: 506085858
>> rx4_cqe_compress_pkts: 1147860328
>> rx4_page_reuse: 0
>> rx4_cache_reuse: 10122542864
>> rx4_cache_full: 100281206
>> rx4_cache_empty: 37376
>> rx4_cache_busy: 100283304
>> rx4_cache_waive: 0
>> rx4_congst_umr: 0
>> rx4_arfs_err: 0
>> rx4_xdp_tx_xmit: 0
>> rx4_xdp_tx_full: 0
>> rx4_xdp_tx_err: 0
>> rx4_xdp_tx_cqes: 0
>> rx5_packets: 19622362246
>> rx5_bytes: 25843450982982
>> rx5_csum_complete: 19622362246
>> rx5_csum_unnecessary: 0
>> rx5_csum_unnecessary_inner: 0
>> rx5_csum_none: 0
>> rx5_xdp_drop: 0
>> rx5_xdp_redirect: 0
>> rx5_lro_packets: 0
>> rx5_lro_bytes: 0
>> rx5_ecn_mark: 0
>> rx5_removed_vlan_packets: 19622362246
>> rx5_wqe_err: 0
>> rx5_mpwqe_filler_cqes: 0
>> rx5_mpwqe_filler_strides: 0
>> rx5_buff_alloc_err: 0
>> rx5_cqe_compress_blks: 422840924
>> rx5_cqe_compress_pkts: 948005878
>> rx5_page_reuse: 0
>> rx5_cache_reuse: 31285453
>> rx5_cache_full: 9779893117
>> rx5_cache_empty: 37376
>> rx5_cache_busy: 9779895647
>> rx5_cache_waive: 0
>> rx5_congst_umr: 0
>> rx5_arfs_err: 0
>> rx5_xdp_tx_xmit: 0
>> rx5_xdp_tx_full: 0
>> rx5_xdp_tx_err: 0
>> rx5_xdp_tx_cqes: 0
>> rx6_packets: 19788231278
>> rx6_bytes: 25985783006486
>> rx6_csum_complete: 19788231278
>> rx6_csum_unnecessary: 0
>> rx6_csum_unnecessary_inner: 0
>> rx6_csum_none: 0
>> rx6_xdp_drop: 0
>> rx6_xdp_redirect: 0
>> rx6_lro_packets: 0
>> rx6_lro_bytes: 0
>> rx6_ecn_mark: 0
>> rx6_removed_vlan_packets: 19788231278
>> rx6_wqe_err: 0
>> rx6_mpwqe_filler_cqes: 0
>> rx6_mpwqe_filler_strides: 0
>> rx6_buff_alloc_err: 0
>> rx6_cqe_compress_blks: 418799056
>> rx6_cqe_compress_pkts: 938282685
>> rx6_page_reuse: 0
>> rx6_cache_reuse: 18114793
>> rx6_cache_full: 9875998295
>> rx6_cache_empty: 37376
>> rx6_cache_busy: 9876000831
>> rx6_cache_waive: 0
>> rx6_congst_umr: 0
>> rx6_arfs_err: 0
>> rx6_xdp_tx_xmit: 0
>> rx6_xdp_tx_full: 0
>> rx6_xdp_tx_err: 0
>> rx6_xdp_tx_cqes: 0
>> rx7_packets: 19795759168
>> rx7_bytes: 26085056586860
>> rx7_csum_complete: 19795759168
>> rx7_csum_unnecessary: 0
>> rx7_csum_unnecessary_inner: 0
>> rx7_csum_none: 0
>> rx7_xdp_drop: 0
>> rx7_xdp_redirect: 0
>> rx7_lro_packets: 0
>> rx7_lro_bytes: 0
>> rx7_ecn_mark: 0
>> rx7_removed_vlan_packets: 19795759168
>> rx7_wqe_err: 0
>> rx7_mpwqe_filler_cqes: 0
>> rx7_mpwqe_filler_strides: 0
>> rx7_buff_alloc_err: 0
>> rx7_cqe_compress_blks: 413959224
>> rx7_cqe_compress_pkts: 927675936
>> rx7_page_reuse: 0
>> rx7_cache_reuse: 23902990
>> rx7_cache_full: 9873974042
>> rx7_cache_empty: 37376
>> rx7_cache_busy: 9873976574
>> rx7_cache_waive: 0
>> rx7_congst_umr: 0
>> rx7_arfs_err: 0
>> rx7_xdp_tx_xmit: 0
>> rx7_xdp_tx_full: 0
>> rx7_xdp_tx_err: 0
>> rx7_xdp_tx_cqes: 0
>> rx8_packets: 19963477439
>> rx8_bytes: 26384640501789
>> rx8_csum_complete: 19963477439
>> rx8_csum_unnecessary: 0
>> rx8_csum_unnecessary_inner: 0
>> rx8_csum_none: 0
>> rx8_xdp_drop: 0
>> rx8_xdp_redirect: 0
>> rx8_lro_packets: 0
>> rx8_lro_bytes: 0
>> rx8_ecn_mark: 0
>> rx8_removed_vlan_packets: 19963477439
>> rx8_wqe_err: 0
>> rx8_mpwqe_filler_cqes: 0
>> rx8_mpwqe_filler_strides: 0
>> rx8_buff_alloc_err: 0
>> rx8_cqe_compress_blks: 420422857
>> rx8_cqe_compress_pkts: 942720292
>> rx8_page_reuse: 0
>> rx8_cache_reuse: 88181713
>> rx8_cache_full: 9893554525
>> rx8_cache_empty: 37376
>> rx8_cache_busy: 9893556983
>> rx8_cache_waive: 0
>> rx8_congst_umr: 0
>> rx8_arfs_err: 0
>> rx8_xdp_tx_xmit: 0
>> rx8_xdp_tx_full: 0
>> rx8_xdp_tx_err: 0
>> rx8_xdp_tx_cqes: 0
>> rx9_packets: 19726642138
>> rx9_bytes: 26063924286499
>> rx9_csum_complete: 19726642138
>> rx9_csum_unnecessary: 0
>> rx9_csum_unnecessary_inner: 0
>> rx9_csum_none: 0
>> rx9_xdp_drop: 0
>> rx9_xdp_redirect: 0
>> rx9_lro_packets: 0
>> rx9_lro_bytes: 0
>> rx9_ecn_mark: 0
>> rx9_removed_vlan_packets: 19726642138
>> rx9_wqe_err: 0
>> rx9_mpwqe_filler_cqes: 0
>> rx9_mpwqe_filler_strides: 0
>> rx9_buff_alloc_err: 0
>> rx9_cqe_compress_blks: 424227411
>> rx9_cqe_compress_pkts: 951534873
>> rx9_page_reuse: 0
>> rx9_cache_reuse: 482901440
>> rx9_cache_full: 9380417487
>> rx9_cache_empty: 37376
>> rx9_cache_busy: 9380419608
>> rx9_cache_waive: 0
>> rx9_congst_umr: 0
>> rx9_arfs_err: 0
>> rx9_xdp_tx_xmit: 0
>> rx9_xdp_tx_full: 0
>> rx9_xdp_tx_err: 0
>> rx9_xdp_tx_cqes: 0
>> rx10_packets: 19901229170
>> rx10_bytes: 26300854495044
>> rx10_csum_complete: 19901229170
>> rx10_csum_unnecessary: 0
>> rx10_csum_unnecessary_inner: 0
>> rx10_csum_none: 0
>> rx10_xdp_drop: 0
>> rx10_xdp_redirect: 0
>> rx10_lro_packets: 0
>> rx10_lro_bytes: 0
>> rx10_ecn_mark: 0
>> rx10_removed_vlan_packets: 19901229170
>> rx10_wqe_err: 0
>> rx10_mpwqe_filler_cqes: 0
>> rx10_mpwqe_filler_strides: 0
>> rx10_buff_alloc_err: 0
>> rx10_cqe_compress_blks: 419082938
>> rx10_cqe_compress_pkts: 940791347
>> rx10_page_reuse: 0
>> rx10_cache_reuse: 14896055
>> rx10_cache_full: 9935715977
>> rx10_cache_empty: 37376
>> rx10_cache_busy: 9935718513
>> rx10_cache_waive: 0
>> rx10_congst_umr: 0
>> rx10_arfs_err: 0
>> rx10_xdp_tx_xmit: 0
>> rx10_xdp_tx_full: 0
>> rx10_xdp_tx_err: 0
>> rx10_xdp_tx_cqes: 0
>> rx11_packets: 20352190494
>> rx11_bytes: 26851034425372
>> rx11_csum_complete: 20352190494
>> rx11_csum_unnecessary: 0
>> rx11_csum_unnecessary_inner: 0
>> rx11_csum_none: 0
>> rx11_xdp_drop: 0
>> rx11_xdp_redirect: 0
>> rx11_lro_packets: 0
>> rx11_lro_bytes: 0
>> rx11_ecn_mark: 0
>> rx11_removed_vlan_packets: 20352190494
>> rx11_wqe_err: 0
>> rx11_mpwqe_filler_cqes: 0
>> rx11_mpwqe_filler_strides: 0
>> rx11_buff_alloc_err: 0
>> rx11_cqe_compress_blks: 501992147
>> rx11_cqe_compress_pkts: 1140398610
>> rx11_page_reuse: 0
>> rx11_cache_reuse: 10071721531
>> rx11_cache_full: 104371621
>> rx11_cache_empty: 37376
>> rx11_cache_busy: 104373697
>> rx11_cache_waive: 0
>> rx11_congst_umr: 0
>> rx11_arfs_err: 0
>> rx11_xdp_tx_xmit: 0
>> rx11_xdp_tx_full: 0
>> rx11_xdp_tx_err: 0
>> rx11_xdp_tx_cqes: 0
>> rx12_packets: 19934747149
>> rx12_bytes: 26296478787829
>> rx12_csum_complete: 19934747149
>> rx12_csum_unnecessary: 0
>> rx12_csum_unnecessary_inner: 0
>> rx12_csum_none: 0
>> rx12_xdp_drop: 0
>> rx12_xdp_redirect: 0
>> rx12_lro_packets: 0
>> rx12_lro_bytes: 0
>> rx12_ecn_mark: 0
>> rx12_removed_vlan_packets: 19934747149
>> rx12_wqe_err: 0
>> rx12_mpwqe_filler_cqes: 0
>> rx12_mpwqe_filler_strides: 0
>> rx12_buff_alloc_err: 0
>> rx12_cqe_compress_blks: 443350570
>> rx12_cqe_compress_pkts: 995997220
>> rx12_page_reuse: 0
>> rx12_cache_reuse: 9864934174
>> rx12_cache_full: 102437428
>> rx12_cache_empty: 37376
>> rx12_cache_busy: 102439382
>> rx12_cache_waive: 0
>> rx12_congst_umr: 0
>> rx12_arfs_err: 0
>> rx12_xdp_tx_xmit: 0
>> rx12_xdp_tx_full: 0
>> rx12_xdp_tx_err: 0
>> rx12_xdp_tx_cqes: 0
>> rx13_packets: 19866908096
>> rx13_bytes: 26160931936186
>> rx13_csum_complete: 19866908096
>> rx13_csum_unnecessary: 0
>> rx13_csum_unnecessary_inner: 0
>> rx13_csum_none: 0
>> rx13_xdp_drop: 0
>> rx13_xdp_redirect: 0
>> rx13_lro_packets: 0
>> rx13_lro_bytes: 0
>> rx13_ecn_mark: 0
>> rx13_removed_vlan_packets: 19866908096
>> rx13_wqe_err: 0
>> rx13_mpwqe_filler_cqes: 0
>> rx13_mpwqe_filler_strides: 0
>> rx13_buff_alloc_err: 0
>> rx13_cqe_compress_blks: 413640141
>> rx13_cqe_compress_pkts: 926175066
>> rx13_page_reuse: 0
>> rx13_cache_reuse: 36358610
>> rx13_cache_full: 9897092921
>> rx13_cache_empty: 37376
>> rx13_cache_busy: 9897095422
>> rx13_cache_waive: 0
>> rx13_congst_umr: 0
>> rx13_arfs_err: 0
>> rx13_xdp_tx_xmit: 0
>> rx13_xdp_tx_full: 0
>> rx13_xdp_tx_err: 0
>> rx13_xdp_tx_cqes: 0
>> rx14_packets: 20229035746
>> rx14_bytes: 26655092809172
>> rx14_csum_complete: 20229035746
>> rx14_csum_unnecessary: 0
>> rx14_csum_unnecessary_inner: 0
>> rx14_csum_none: 0
>> rx14_xdp_drop: 0
>> rx14_xdp_redirect: 0
>> rx14_lro_packets: 0
>> rx14_lro_bytes: 0
>> rx14_ecn_mark: 0
>> rx14_removed_vlan_packets: 20229035746
>> rx14_wqe_err: 0
>> rx14_mpwqe_filler_cqes: 0
>> rx14_mpwqe_filler_strides: 0
>> rx14_buff_alloc_err: 0
>> rx14_cqe_compress_blks: 460990337
>> rx14_cqe_compress_pkts: 1041287948
>> rx14_page_reuse: 0
>> rx14_cache_reuse: 25649275
>> rx14_cache_full: 10088866045
>> rx14_cache_empty: 37376
>> rx14_cache_busy: 10088868574
>> rx14_cache_waive: 0
>> rx14_congst_umr: 0
>> rx14_arfs_err: 0
>> rx14_xdp_tx_xmit: 0
>> rx14_xdp_tx_full: 0
>> rx14_xdp_tx_err: 0
>> rx14_xdp_tx_cqes: 0
>> rx15_packets: 20528177154
>> rx15_bytes: 27029263893264
>> rx15_csum_complete: 20528177154
>> rx15_csum_unnecessary: 0
>> rx15_csum_unnecessary_inner: 0
>> rx15_csum_none: 0
>> rx15_xdp_drop: 0
>> rx15_xdp_redirect: 0
>> rx15_lro_packets: 0
>> rx15_lro_bytes: 0
>> rx15_ecn_mark: 0
>> rx15_removed_vlan_packets: 20528177154
>> rx15_wqe_err: 0
>> rx15_mpwqe_filler_cqes: 0
>> rx15_mpwqe_filler_strides: 0
>> rx15_buff_alloc_err: 0
>> rx15_cqe_compress_blks: 476776176
>> rx15_cqe_compress_pkts: 1076153263
>> rx15_page_reuse: 0
>> rx15_cache_reuse: 48426735
>> rx15_cache_full: 10215659289
>> rx15_cache_empty: 37376
>> rx15_cache_busy: 10215661817
>> rx15_cache_waive: 0
>> rx15_congst_umr: 0
>> rx15_arfs_err: 0
>> rx15_xdp_tx_xmit: 0
>> rx15_xdp_tx_full: 0
>> rx15_xdp_tx_err: 0
>> rx15_xdp_tx_cqes: 0
>> rx16_packets: 16104078098
>> rx16_bytes: 21256361789679
>> rx16_csum_complete: 16104078098
>> rx16_csum_unnecessary: 0
>> rx16_csum_unnecessary_inner: 0
>> rx16_csum_none: 0
>> rx16_xdp_drop: 0
>> rx16_xdp_redirect: 0
>> rx16_lro_packets: 0
>> rx16_lro_bytes: 0
>> rx16_ecn_mark: 0
>> rx16_removed_vlan_packets: 16104078098
>> rx16_wqe_err: 0
>> rx16_mpwqe_filler_cqes: 0
>> rx16_mpwqe_filler_strides: 0
>> rx16_buff_alloc_err: 0
>> rx16_cqe_compress_blks: 352082054
>> rx16_cqe_compress_pkts: 787161670
>> rx16_page_reuse: 0
>> rx16_cache_reuse: 25912567
>> rx16_cache_full: 8026124051
>> rx16_cache_empty: 37376
>> rx16_cache_busy: 8026126465
>> rx16_cache_waive: 0
>> rx16_congst_umr: 0
>> rx16_arfs_err: 0
>> rx16_xdp_tx_xmit: 0
>> rx16_xdp_tx_full: 0
>> rx16_xdp_tx_err: 0
>> rx16_xdp_tx_cqes: 0
>> rx17_packets: 16314055017
>> rx17_bytes: 21589139030173
>> rx17_csum_complete: 16314055017
>> rx17_csum_unnecessary: 0
>> rx17_csum_unnecessary_inner: 0
>> rx17_csum_none: 0
>> rx17_xdp_drop: 0
>> rx17_xdp_redirect: 0
>> rx17_lro_packets: 0
>> rx17_lro_bytes: 0
>> rx17_ecn_mark: 0
>> rx17_removed_vlan_packets: 16314055017
>> rx17_wqe_err: 0
>> rx17_mpwqe_filler_cqes: 0
>> rx17_mpwqe_filler_strides: 0
>> rx17_buff_alloc_err: 0
>> rx17_cqe_compress_blks: 387834541
>> rx17_cqe_compress_pkts: 871851081
>> rx17_page_reuse: 0
>> rx17_cache_reuse: 24021313
>> rx17_cache_full: 8133003829
>> rx17_cache_empty: 37376
>> rx17_cache_busy: 8133006175
>> rx17_cache_waive: 0
>> rx17_congst_umr: 0
>> rx17_arfs_err: 0
>> rx17_xdp_tx_xmit: 0
>> rx17_xdp_tx_full: 0
>> rx17_xdp_tx_err: 0
>> rx17_xdp_tx_cqes: 0
>> rx18_packets: 16439016814
>> rx18_bytes: 21648651917475
>> rx18_csum_complete: 16439016814
>> rx18_csum_unnecessary: 0
>> rx18_csum_unnecessary_inner: 0
>> rx18_csum_none: 0
>> rx18_xdp_drop: 0
>> rx18_xdp_redirect: 0
>> rx18_lro_packets: 0
>> rx18_lro_bytes: 0
>> rx18_ecn_mark: 0
>> rx18_removed_vlan_packets: 16439016814
>> rx18_wqe_err: 0
>> rx18_mpwqe_filler_cqes: 0
>> rx18_mpwqe_filler_strides: 0
>> rx18_buff_alloc_err: 0
>> rx18_cqe_compress_blks: 375066666
>> rx18_cqe_compress_pkts: 843563974
>> rx18_page_reuse: 0
>> rx18_cache_reuse: 8151064266
>> rx18_cache_full: 68442025
>> rx18_cache_empty: 37376
>> rx18_cache_busy: 68444122
>> rx18_cache_waive: 0
>> rx18_congst_umr: 0
>> rx18_arfs_err: 0
>> rx18_xdp_tx_xmit: 0
>> rx18_xdp_tx_full: 0
>> rx18_xdp_tx_err: 0
>> rx18_xdp_tx_cqes: 0
>> rx19_packets: 16641223506
>> rx19_bytes: 21964749940935
>> rx19_csum_complete: 16641223506
>> rx19_csum_unnecessary: 0
>> rx19_csum_unnecessary_inner: 0
>> rx19_csum_none: 0
>> rx19_xdp_drop: 0
>> rx19_xdp_redirect: 0
>> rx19_lro_packets: 0
>> rx19_lro_bytes: 0
>> rx19_ecn_mark: 0
>> rx19_removed_vlan_packets: 16641223506
>> rx19_wqe_err: 0
>> rx19_mpwqe_filler_cqes: 0
>> rx19_mpwqe_filler_strides: 0
>> rx19_buff_alloc_err: 0
>> rx19_cqe_compress_blks: 387825932
>> rx19_cqe_compress_pkts: 872266355
>> rx19_page_reuse: 0
>> rx19_cache_reuse: 116433620
>> rx19_cache_full: 8204175954
>> rx19_cache_empty: 37376
>> rx19_cache_busy: 8204178120
>> rx19_cache_waive: 0
>> rx19_congst_umr: 0
>> rx19_arfs_err: 0
>> rx19_xdp_tx_xmit: 0
>> rx19_xdp_tx_full: 0
>> rx19_xdp_tx_err: 0
>> rx19_xdp_tx_cqes: 0
>> rx20_packets: 16206927741
>> rx20_bytes: 21387447038430
>> rx20_csum_complete: 16206927741
>> rx20_csum_unnecessary: 0
>> rx20_csum_unnecessary_inner: 0
>> rx20_csum_none: 0
>> rx20_xdp_drop: 0
>> rx20_xdp_redirect: 0
>> rx20_lro_packets: 0
>> rx20_lro_bytes: 0
>> rx20_ecn_mark: 0
>> rx20_removed_vlan_packets: 16206927741
>> rx20_wqe_err: 0
>> rx20_mpwqe_filler_cqes: 0
>> rx20_mpwqe_filler_strides: 0
>> rx20_buff_alloc_err: 0
>> rx20_cqe_compress_blks: 370144620
>> rx20_cqe_compress_pkts: 829122671
>> rx20_page_reuse: 0
>> rx20_cache_reuse: 8053733744
>> rx20_cache_full: 49728026
>> rx20_cache_empty: 37376
>> rx20_cache_busy: 49730116
>> rx20_cache_waive: 0
>> rx20_congst_umr: 0
>> rx20_arfs_err: 0
>> rx20_xdp_tx_xmit: 0
>> rx20_xdp_tx_full: 0
>> rx20_xdp_tx_err: 0
>> rx20_xdp_tx_cqes: 0
>> rx21_packets: 16562361314
>> rx21_bytes: 21856653284356
>> rx21_csum_complete: 16562361314
>> rx21_csum_unnecessary: 0
>> rx21_csum_unnecessary_inner: 0
>> rx21_csum_none: 0
>> rx21_xdp_drop: 0
>> rx21_xdp_redirect: 0
>> rx21_lro_packets: 0
>> rx21_lro_bytes: 0
>> rx21_ecn_mark: 0
>> rx21_removed_vlan_packets: 16562361314
>> rx21_wqe_err: 0
>> rx21_mpwqe_filler_cqes: 0
>> rx21_mpwqe_filler_strides: 0
>> rx21_buff_alloc_err: 0
>> rx21_cqe_compress_blks: 350790425
>> rx21_cqe_compress_pkts: 783850729
>> rx21_page_reuse: 0
>> rx21_cache_reuse: 28077493
>> rx21_cache_full: 8253100706
>> rx21_cache_empty: 37376
>> rx21_cache_busy: 8253103147
>> rx21_cache_waive: 0
>> rx21_congst_umr: 0
>> rx21_arfs_err: 0
>> rx21_xdp_tx_xmit: 0
>> rx21_xdp_tx_full: 0
>> rx21_xdp_tx_err: 0
>> rx21_xdp_tx_cqes: 0
>> rx22_packets: 16350307571
>> rx22_bytes: 21408575325592
>> rx22_csum_complete: 16350307571
>> rx22_csum_unnecessary: 0
>> rx22_csum_unnecessary_inner: 0
>> rx22_csum_none: 0
>> rx22_xdp_drop: 0
>> rx22_xdp_redirect: 0
>> rx22_lro_packets: 0
>> rx22_lro_bytes: 0
>> rx22_ecn_mark: 0
>> rx22_removed_vlan_packets: 16350307571
>> rx22_wqe_err: 0
>> rx22_mpwqe_filler_cqes: 0
>> rx22_mpwqe_filler_strides: 0
>> rx22_buff_alloc_err: 0
>> rx22_cqe_compress_blks: 353531065
>> rx22_cqe_compress_pkts: 790814415
>> rx22_page_reuse: 0
>> rx22_cache_reuse: 16934343
>> rx22_cache_full: 8158216889
>> rx22_cache_empty: 37376
>> rx22_cache_busy: 8158219417
>> rx22_cache_waive: 0
>> rx22_congst_umr: 0
>> rx22_arfs_err: 0
>> rx22_xdp_tx_xmit: 0
>> rx22_xdp_tx_full: 0
>> rx22_xdp_tx_err: 0
>> rx22_xdp_tx_cqes: 0
>> rx23_packets: 16019811764
>> rx23_bytes: 21137182570985
>> rx23_csum_complete: 16019811764
>> rx23_csum_unnecessary: 0
>> rx23_csum_unnecessary_inner: 0
>> rx23_csum_none: 0
>> rx23_xdp_drop: 0
>> rx23_xdp_redirect: 0
>> rx23_lro_packets: 0
>> rx23_lro_bytes: 0
>> rx23_ecn_mark: 0
>> rx23_removed_vlan_packets: 16019811764
>> rx23_wqe_err: 0
>> rx23_mpwqe_filler_cqes: 0
>> rx23_mpwqe_filler_strides: 0
>> rx23_buff_alloc_err: 0
>> rx23_cqe_compress_blks: 349733033
>> rx23_cqe_compress_pkts: 781248862
>> rx23_page_reuse: 0
>> rx23_cache_reuse: 33422343
>> rx23_cache_full: 7976481152
>> rx23_cache_empty: 37376
>> rx23_cache_busy: 7976483525
>> rx23_cache_waive: 0
>> rx23_congst_umr: 0
>> rx23_arfs_err: 0
>> rx23_xdp_tx_xmit: 0
>> rx23_xdp_tx_full: 0
>> rx23_xdp_tx_err: 0
>> rx23_xdp_tx_cqes: 0
>> rx24_packets: 16212040646
>> rx24_bytes: 21393399325700
>> rx24_csum_complete: 16212040646
>> rx24_csum_unnecessary: 0
>> rx24_csum_unnecessary_inner: 0
>> rx24_csum_none: 0
>> rx24_xdp_drop: 0
>> rx24_xdp_redirect: 0
>> rx24_lro_packets: 0
>> rx24_lro_bytes: 0
>> rx24_ecn_mark: 0
>> rx24_removed_vlan_packets: 16212040646
>> rx24_wqe_err: 0
>> rx24_mpwqe_filler_cqes: 0
>> rx24_mpwqe_filler_strides: 0
>> rx24_buff_alloc_err: 0
>> rx24_cqe_compress_blks: 379833752
>> rx24_cqe_compress_pkts: 852020179
>> rx24_page_reuse: 0
>> rx24_cache_reuse: 8033552512
>> rx24_cache_full: 72465843
>> rx24_cache_empty: 37376
>> rx24_cache_busy: 72467789
>> rx24_cache_waive: 0
>> rx24_congst_umr: 0
>> rx24_arfs_err: 0
>> rx24_xdp_tx_xmit: 0
>> rx24_xdp_tx_full: 0
>> rx24_xdp_tx_err: 0
>> rx24_xdp_tx_cqes: 0
>> rx25_packets: 16412186257
>> rx25_bytes: 21651198388407
>> rx25_csum_complete: 16412186257
>> rx25_csum_unnecessary: 0
>> rx25_csum_unnecessary_inner: 0
>> rx25_csum_none: 0
>> rx25_xdp_drop: 0
>> rx25_xdp_redirect: 0
>> rx25_lro_packets: 0
>> rx25_lro_bytes: 0
>> rx25_ecn_mark: 0
>> rx25_removed_vlan_packets: 16412186257
>> rx25_wqe_err: 0
>> rx25_mpwqe_filler_cqes: 0
>> rx25_mpwqe_filler_strides: 0
>> rx25_buff_alloc_err: 0
>> rx25_cqe_compress_blks: 383979685
>> rx25_cqe_compress_pkts: 861985772
>> rx25_page_reuse: 0
>> rx25_cache_reuse: 8129807841
>> rx25_cache_full: 76283342
>> rx25_cache_empty: 37376
>> rx25_cache_busy: 76285271
>> rx25_cache_waive: 0
>> rx25_congst_umr: 0
>> rx25_arfs_err: 0
>> rx25_xdp_tx_xmit: 0
>> rx25_xdp_tx_full: 0
>> rx25_xdp_tx_err: 0
>> rx25_xdp_tx_cqes: 0
>> rx26_packets: 16304310003
>> rx26_bytes: 21571217538721
>> rx26_csum_complete: 16304310003
>> rx26_csum_unnecessary: 0
>> rx26_csum_unnecessary_inner: 0
>> rx26_csum_none: 0
>> rx26_xdp_drop: 0
>> rx26_xdp_redirect: 0
>> rx26_lro_packets: 0
>> rx26_lro_bytes: 0
>> rx26_ecn_mark: 0
>> rx26_removed_vlan_packets: 16304310003
>> rx26_wqe_err: 0
>> rx26_mpwqe_filler_cqes: 0
>> rx26_mpwqe_filler_strides: 0
>> rx26_buff_alloc_err: 0
>> rx26_cqe_compress_blks: 353314041
>> rx26_cqe_compress_pkts: 788838424
>> rx26_page_reuse: 0
>> rx26_cache_reuse: 19673790
>> rx26_cache_full: 8132478659
>> rx26_cache_empty: 37376
>> rx26_cache_busy: 8132481198
>> rx26_cache_waive: 0
>> rx26_congst_umr: 0
>> rx26_arfs_err: 0
>> rx26_xdp_tx_xmit: 0
>> rx26_xdp_tx_full: 0
>> rx26_xdp_tx_err: 0
>> rx26_xdp_tx_cqes: 0
>> rx27_packets: 16171856079
>> rx27_bytes: 21376891736540
>> rx27_csum_complete: 16171856079
>> rx27_csum_unnecessary: 0
>> rx27_csum_unnecessary_inner: 0
>> rx27_csum_none: 0
>> rx27_xdp_drop: 0
>> rx27_xdp_redirect: 0
>> rx27_lro_packets: 0
>> rx27_lro_bytes: 0
>> rx27_ecn_mark: 0
>> rx27_removed_vlan_packets: 16171856079
>> rx27_wqe_err: 0
>> rx27_mpwqe_filler_cqes: 0
>> rx27_mpwqe_filler_strides: 0
>> rx27_buff_alloc_err: 0
>> rx27_cqe_compress_blks: 386632845
>> rx27_cqe_compress_pkts: 869362576
>> rx27_page_reuse: 0
>> rx27_cache_reuse: 10070560
>> rx27_cache_full: 8075854928
>> rx27_cache_empty: 37376
>> rx27_cache_busy: 8075857468
>> rx27_cache_waive: 0
>> rx27_congst_umr: 0
>> rx27_arfs_err: 0
>> rx27_xdp_tx_xmit: 0
>> rx27_xdp_tx_full: 0
>> rx27_xdp_tx_err: 0
>> rx27_xdp_tx_cqes: 0
>> rx28 .. rx55: all counters 0, except rxN_cache_empty: 2560 on each
>> queue (these queues are inactive; 28 RSS queues are in use)
>> tx0_packets: 24512439668
>> tx0_bytes: 15287569052791
>> tx0_tso_packets: 1536157106
>> tx0_tso_bytes: 8571753637944
>> tx0_tso_inner_packets: 0
>> tx0_tso_inner_bytes: 0
>> tx0_csum_partial: 2132156117
>> tx0_csum_partial_inner: 0
>> tx0_added_vlan_packets: 19906601448
>> tx0_nop: 308098536
>> tx0_csum_none: 17774445331
>> tx0_stopped: 19625
>> tx0_dropped: 0
>> tx0_xmit_more: 67864870
>> tx0_recover: 0
>> tx0_cqes: 19838744246
>> tx0_wake: 19624
>> tx0_cqe_err: 0
>> tx1_packets: 22598557053
>> tx1_bytes: 13568850145010
>> tx1_tso_packets: 1369529475
>> tx1_tso_bytes: 7661777265382
>> tx1_tso_inner_packets: 0
>> tx1_tso_inner_bytes: 0
>> tx1_csum_partial: 1884639496
>> tx1_csum_partial_inner: 0
>> tx1_added_vlan_packets: 18468333696
>> tx1_nop: 281301783
>> tx1_csum_none: 16583694200
>> tx1_stopped: 19457
>> tx1_dropped: 0
>> tx1_xmit_more: 55170875
>> tx1_recover: 0
>> tx1_cqes: 18413169824
>> tx1_wake: 19455
>> tx1_cqe_err: 0
>> tx2_packets: 22821611433
>> tx2_bytes: 13752535163683
>> tx2_tso_packets: 1396978825
>> tx2_tso_bytes: 7774704508463
>> tx2_tso_inner_packets: 0
>> tx2_tso_inner_bytes: 0
>> tx2_csum_partial: 1897834558
>> tx2_csum_partial_inner: 0
>> tx2_added_vlan_packets: 18641958085
>> tx2_nop: 286934891
>> tx2_csum_none: 16744123527
>> tx2_stopped: 13214
>> tx2_dropped: 0
>> tx2_xmit_more: 61749446
>> tx2_recover: 0
>> tx2_cqes: 18580215654
>> tx2_wake: 13214
>> tx2_cqe_err: 0
>> tx3_packets: 22580809948
>> tx3_bytes: 13730542936609
>> tx3_tso_packets: 1370434579
>> tx3_tso_bytes: 7605636711455
>> tx3_tso_inner_packets: 0
>> tx3_tso_inner_bytes: 0
>> tx3_csum_partial: 1865573748
>> tx3_csum_partial_inner: 0
>> tx3_added_vlan_packets: 18491873644
>> tx3_nop: 281195875
>> tx3_csum_none: 16626299896
>> tx3_stopped: 12542
>> tx3_dropped: 0
>> tx3_xmit_more: 57681647
>> tx3_recover: 0
>> tx3_cqes: 18434198757
>> tx3_wake: 12540
>> tx3_cqe_err: 0
>> tx4_packets: 27801801208
>> tx4_bytes: 17058453171137
>> tx4_tso_packets: 1740500105
>> tx4_tso_bytes: 9474905622036
>> tx4_tso_inner_packets: 0
>> tx4_tso_inner_bytes: 0
>> tx4_csum_partial: 2279225376
>> tx4_csum_partial_inner: 0
>> tx4_added_vlan_packets: 22744081633
>> tx4_nop: 349753979
>> tx4_csum_none: 20464856257
>> tx4_stopped: 14816
>> tx4_dropped: 0
>> tx4_xmit_more: 65469322
>> tx4_recover: 0
>> tx4_cqes: 22678618972
>> tx4_wake: 14816
>> tx4_cqe_err: 0
>> tx5_packets: 25099783024
>> tx5_bytes: 14917740698381
>> tx5_tso_packets: 1512988013
>> tx5_tso_bytes: 8571208921023
>> tx5_tso_inner_packets: 0
>> tx5_tso_inner_bytes: 0
>> tx5_csum_partial: 2078498561
>> tx5_csum_partial_inner: 0
>> tx5_added_vlan_packets: 20465533760
>> tx5_nop: 312614719
>> tx5_csum_none: 18387035199
>> tx5_stopped: 4605
>> tx5_dropped: 0
>> tx5_xmit_more: 64188936
>> tx5_recover: 0
>> tx5_cqes: 20401350718
>> tx5_wake: 4604
>> tx5_cqe_err: 0
>> tx6_packets: 25025504896
>> tx6_bytes: 14908021946070
>> tx6_tso_packets: 1515718977
>> tx6_tso_bytes: 8511442522461
>> tx6_tso_inner_packets: 0
>> tx6_tso_inner_bytes: 0
>> tx6_csum_partial: 2056378610
>> tx6_csum_partial_inner: 0
>> tx6_added_vlan_packets: 20434066400
>> tx6_nop: 310594020
>> tx6_csum_none: 18377687790
>> tx6_stopped: 15234
>> tx6_dropped: 0
>> tx6_xmit_more: 61130422
>> tx6_recover: 0
>> tx6_cqes: 20372943611
>> tx6_wake: 15234
>> tx6_cqe_err: 0
>> tx7_packets: 25457096169
>> tx7_bytes: 15456289446172
>> tx7_tso_packets: 1553342799
>> tx7_tso_bytes: 8764550988105
>> tx7_tso_inner_packets: 0
>> tx7_tso_inner_bytes: 0
>> tx7_csum_partial: 2105765233
>> tx7_csum_partial_inner: 0
>> tx7_added_vlan_packets: 20720382377
>> tx7_nop: 319044853
>> tx7_csum_none: 18614617145
>> tx7_stopped: 18745
>> tx7_dropped: 0
>> tx7_xmit_more: 57050107
>> tx7_recover: 0
>> tx7_cqes: 20663340775
>> tx7_wake: 18746
>> tx7_cqe_err: 0
>> tx8_packets: 25389771649
>> tx8_bytes: 15225503883962
>> tx8_tso_packets: 1563367648
>> tx8_tso_bytes: 8710384514258
>> tx8_tso_inner_packets: 0
>> tx8_tso_inner_bytes: 0
>> tx8_csum_partial: 2106586634
>> tx8_csum_partial_inner: 0
>> tx8_added_vlan_packets: 20704676274
>> tx8_nop: 318149261
>> tx8_csum_none: 18598089640
>> tx8_stopped: 4733
>> tx8_dropped: 0
>> tx8_xmit_more: 61014317
>> tx8_recover: 0
>> tx8_cqes: 20643667301
>> tx8_wake: 4735
>> tx8_cqe_err: 0
>> tx9_packets: 25521500166
>> tx9_bytes: 15302293145755
>> tx9_tso_packets: 1546316697
>> tx9_tso_bytes: 8770688145926
>> tx9_tso_inner_packets: 0
>> tx9_tso_inner_bytes: 0
>> tx9_csum_partial: 2097652880
>> tx9_csum_partial_inner: 0
>> tx9_added_vlan_packets: 20778408432
>> tx9_nop: 318538543
>> tx9_csum_none: 18680755556
>> tx9_stopped: 16118
>> tx9_dropped: 0
>> tx9_xmit_more: 68509728
>> tx9_recover: 0
>> tx9_cqes: 20709906498
>> tx9_wake: 16118
>> tx9_cqe_err: 0
>> tx10_packets: 25451605829
>> tx10_bytes: 15386896170792
>> tx10_tso_packets: 1576473520
>> tx10_tso_bytes: 8880888676383
>> tx10_tso_inner_packets: 0
>> tx10_tso_inner_bytes: 0
>> tx10_csum_partial: 2129796141
>> tx10_csum_partial_inner: 0
>> tx10_added_vlan_packets: 20659622590
>> tx10_nop: 319117433
>> tx10_csum_none: 18529826450
>> tx10_stopped: 20187
>> tx10_dropped: 0
>> tx10_xmit_more: 58892184
>> tx10_recover: 0
>> tx10_cqes: 20600737739
>> tx10_wake: 20188
>> tx10_cqe_err: 0
>> tx11_packets: 27008919793
>> tx11_bytes: 16587719213058
>> tx11_tso_packets: 1734884654
>> tx11_tso_bytes: 9475681471870
>> tx11_tso_inner_packets: 0
>> tx11_tso_inner_bytes: 0
>> tx11_csum_partial: 2296162292
>> tx11_csum_partial_inner: 0
>> tx11_added_vlan_packets: 21943096263
>> tx11_nop: 344188182
>> tx11_csum_none: 19646933971
>> tx11_stopped: 9703
>> tx11_dropped: 0
>> tx11_xmit_more: 66530718
>> tx11_recover: 0
>> tx11_cqes: 21876571667
>> tx11_wake: 9704
>> tx11_cqe_err: 0
>> tx12_packets: 25969493269
>> tx12_bytes: 15980767963416
>> tx12_tso_packets: 1671396456
>> tx12_tso_bytes: 9268973672821
>> tx12_tso_inner_packets: 0
>> tx12_tso_inner_bytes: 0
>> tx12_csum_partial: 2243809182
>> tx12_csum_partial_inner: 0
>> tx12_added_vlan_packets: 20980642456
>> tx12_nop: 330241007
>> tx12_csum_none: 18736833276
>> tx12_stopped: 10341
>> tx12_dropped: 0
>> tx12_xmit_more: 57834100
>> tx12_recover: 0
>> tx12_cqes: 20922815079
>> tx12_wake: 10342
>> tx12_cqe_err: 0
>> tx13_packets: 25332762261
>> tx13_bytes: 15353213283280
>> tx13_tso_packets: 1577433599
>> tx13_tso_bytes: 8785240284281
>> tx13_tso_inner_packets: 0
>> tx13_tso_inner_bytes: 0
>> tx13_csum_partial: 2110640515
>> tx13_csum_partial_inner: 0
>> tx13_added_vlan_packets: 20605670910
>> tx13_nop: 319805741
>> tx13_csum_none: 18495030395
>> tx13_stopped: 7006
>> tx13_dropped: 0
>> tx13_xmit_more: 58314402
>> tx13_recover: 0
>> tx13_cqes: 20547362770
>> tx13_wake: 7008
>> tx13_cqe_err: 0
>> tx14_packets: 26333743548
>> tx14_bytes: 16070719060573
>> tx14_tso_packets: 1677922970
>> tx14_tso_bytes: 9240299765487
>> tx14_tso_inner_packets: 0
>> tx14_tso_inner_bytes: 0
>> tx14_csum_partial: 2215668906
>> tx14_csum_partial_inner: 0
>> tx14_added_vlan_packets: 21384410786
>> tx14_nop: 332734939
>> tx14_csum_none: 19168741880
>> tx14_stopped: 13160
>> tx14_dropped: 0
>> tx14_xmit_more: 57650391
>> tx14_recover: 0
>> tx14_cqes: 21326767783
>> tx14_wake: 13161
>> tx14_cqe_err: 0
>> tx15_packets: 26824968971
>> tx15_bytes: 16687994233452
>> tx15_tso_packets: 1755745052
>> tx15_tso_bytes: 9533814012441
>> tx15_tso_inner_packets: 0
>> tx15_tso_inner_bytes: 0
>> tx15_csum_partial: 2304778064
>> tx15_csum_partial_inner: 0
>> tx15_added_vlan_packets: 21740906107
>> tx15_nop: 344143287
>> tx15_csum_none: 19436128058
>> tx15_stopped: 75
>> tx15_dropped: 0
>> tx15_xmit_more: 63325832
>> tx15_recover: 0
>> tx15_cqes: 21677585345
>> tx15_wake: 74
>> tx15_cqe_err: 0
>> tx16_packets: 24488158946
>> tx16_bytes: 15027415004570
>> tx16_tso_packets: 1559127391
>> tx16_tso_bytes: 8658691917845
>> tx16_tso_inner_packets: 0
>> tx16_tso_inner_bytes: 0
>> tx16_csum_partial: 2075856395
>> tx16_csum_partial_inner: 0
>> tx16_added_vlan_packets: 19835695731
>> tx16_nop: 308464189
>> tx16_csum_none: 17759839340
>> tx16_stopped: 4567
>> tx16_dropped: 0
>> tx16_xmit_more: 62631422
>> tx16_recover: 0
>> tx16_cqes: 19773070012
>> tx16_wake: 4568
>> tx16_cqe_err: 0
>> tx17_packets: 24700413784
>> tx17_bytes: 15216529713715
>> tx17_tso_packets: 1597555108
>> tx17_tso_bytes: 8773728661243
>> tx17_tso_inner_packets: 0
>> tx17_tso_inner_bytes: 0
>> tx17_csum_partial: 2127177297
>> tx17_csum_partial_inner: 0
>> tx17_added_vlan_packets: 20003144561
>> tx17_nop: 313356918
>> tx17_csum_none: 17875967264
>> tx17_stopped: 12572
>> tx17_dropped: 0
>> tx17_xmit_more: 62742980
>> tx17_recover: 0
>> tx17_cqes: 19940407615
>> tx17_wake: 12573
>> tx17_cqe_err: 0
>> tx18_packets: 24887710046
>> tx18_bytes: 15245034034664
>> tx18_tso_packets: 1582550520
>> tx18_tso_bytes: 8782692335483
>> tx18_tso_inner_packets: 0
>> tx18_tso_inner_bytes: 0
>> tx18_csum_partial: 2084514331
>> tx18_csum_partial_inner: 0
>> tx18_added_vlan_packets: 20173879181
>> tx18_nop: 314818702
>> tx18_csum_none: 18089364850
>> tx18_stopped: 21366
>> tx18_dropped: 0
>> tx18_xmit_more: 62485819
>> tx18_recover: 0
>> tx18_cqes: 20111400935
>> tx18_wake: 21366
>> tx18_cqe_err: 0
>> tx19_packets: 24831057648
>> tx19_bytes: 15164663890576
>> tx19_tso_packets: 1599135489
>> tx19_tso_bytes: 8756045449746
>> tx19_tso_inner_packets: 0
>> tx19_tso_inner_bytes: 0
>> tx19_csum_partial: 2119746608
>> tx19_csum_partial_inner: 0
>> tx19_added_vlan_packets: 20143573903
>> tx19_nop: 316966450
>> tx19_csum_none: 18023827295
>> tx19_stopped: 11431
>> tx19_dropped: 0
>> tx19_xmit_more: 57535904
>> tx19_recover: 0
>> tx19_cqes: 20086045325
>> tx19_wake: 11431
>> tx19_cqe_err: 0
>> tx20_packets: 21943735263
>> tx20_bytes: 13528749492187
>> tx20_tso_packets: 1390048103
>> tx20_tso_bytes: 7629058809637
>> tx20_tso_inner_packets: 0
>> tx20_tso_inner_bytes: 0
>> tx20_csum_partial: 1848533941
>> tx20_csum_partial_inner: 0
>> tx20_added_vlan_packets: 17861417651
>> tx20_nop: 276840365
>> tx20_csum_none: 16012883710
>> tx20_stopped: 38457
>> tx20_dropped: 0
>> tx20_xmit_more: 57042753
>> tx20_recover: 0
>> tx20_cqes: 17804384839
>> tx20_wake: 38457
>> tx20_cqe_err: 0
>> tx21_packets: 21476926958
>> tx21_bytes: 13096410597896
>> tx21_tso_packets: 1367724090
>> tx21_tso_bytes: 7568364585127
>> tx21_tso_inner_packets: 0
>> tx21_tso_inner_bytes: 0
>> tx21_csum_partial: 1830570727
>> tx21_csum_partial_inner: 0
>> tx21_added_vlan_packets: 17421087814
>> tx21_nop: 270611519
>> tx21_csum_none: 15590517087
>> tx21_stopped: 31213
>> tx21_dropped: 0
>> tx21_xmit_more: 60305389
>> tx21_recover: 0
>> tx21_cqes: 17360791205
>> tx21_wake: 31213
>> tx21_cqe_err: 0
>> tx22_packets: 21819106444
>> tx22_bytes: 13492871887100
>> tx22_tso_packets: 1387002018
>> tx22_tso_bytes: 7617705727669
>> tx22_tso_inner_packets: 0
>> tx22_tso_inner_bytes: 0
>> tx22_csum_partial: 1853632107
>> tx22_csum_partial_inner: 0
>> tx22_added_vlan_packets: 17743255447
>> tx22_nop: 274820992
>> tx22_csum_none: 15889623340
>> tx22_stopped: 24814
>> tx22_dropped: 0
>> tx22_xmit_more: 60811304
>> tx22_recover: 0
>> tx22_cqes: 17682451111
>> tx22_wake: 24815
>> tx22_cqe_err: 0
>> tx23_packets: 21830455800
>> tx23_bytes: 13427551902532
>> tx23_tso_packets: 1388556038
>> tx23_tso_bytes: 7604040587125
>> tx23_tso_inner_packets: 0
>> tx23_tso_inner_bytes: 0
>> tx23_csum_partial: 1850819694
>> tx23_csum_partial_inner: 0
>> tx23_added_vlan_packets: 17761271122
>> tx23_nop: 275142775
>> tx23_csum_none: 15910451428
>> tx23_stopped: 29899
>> tx23_dropped: 0
>> tx23_xmit_more: 58924909
>> tx23_recover: 0
>> tx23_cqes: 17702355187
>> tx23_wake: 29898
>> tx23_cqe_err: 0
>> tx24_packets: 21961484213
>> tx24_bytes: 13531373062497
>> tx24_tso_packets: 1394697504
>> tx24_tso_bytes: 7663866609308
>> tx24_tso_inner_packets: 0
>> tx24_tso_inner_bytes: 0
>> tx24_csum_partial: 1857072074
>> tx24_csum_partial_inner: 0
>> tx24_added_vlan_packets: 17856887568
>> tx24_nop: 276352855
>> tx24_csum_none: 15999815494
>> tx24_stopped: 33924
>> tx24_dropped: 0
>> tx24_xmit_more: 63992426
>> tx24_recover: 0
>> tx24_cqes: 17792905243
>> tx24_wake: 33923
>> tx24_cqe_err: 0
>> tx25_packets: 21853593838
>> tx25_bytes: 13357487830519
>> tx25_tso_packets: 1398822411
>> tx25_tso_bytes: 7691191518838
>> tx25_tso_inner_packets: 0
>> tx25_tso_inner_bytes: 0
>> tx25_csum_partial: 1869483109
>> tx25_csum_partial_inner: 0
>> tx25_added_vlan_packets: 17734634614
>> tx25_nop: 276327643
>> tx25_csum_none: 15865151505
>> tx25_stopped: 38651
>> tx25_dropped: 0
>> tx25_xmit_more: 56410535
>> tx25_recover: 0
>> tx25_cqes: 17678234537
>> tx25_wake: 38650
>> tx25_cqe_err: 0
>> tx26_packets: 21480261205
>> tx26_bytes: 13148973015935
>> tx26_tso_packets: 1348132284
>> tx26_tso_bytes: 7523489481775
>> tx26_tso_inner_packets: 0
>> tx26_tso_inner_bytes: 0
>> tx26_csum_partial: 1839740745
>> tx26_csum_partial_inner: 0
>> tx26_added_vlan_packets: 17430592911
>> tx26_nop: 270367836
>> tx26_csum_none: 15590852166
>> tx26_stopped: 34044
>> tx26_dropped: 0
>> tx26_xmit_more: 59870114
>> tx26_recover: 0
>> tx26_cqes: 17370736612
>> tx26_wake: 34043
>> tx26_cqe_err: 0
>> tx27_packets: 22694273108
>> tx27_bytes: 14135473431004
>> tx27_tso_packets: 1418371875
>> tx27_tso_bytes: 7784842263038
>> tx27_tso_inner_packets: 0
>> tx27_tso_inner_bytes: 0
>> tx27_csum_partial: 1919170584
>> tx27_csum_partial_inner: 0
>> tx27_added_vlan_packets: 18520826023
>> tx27_nop: 286296272
>> tx27_csum_none: 16601655439
>> tx27_stopped: 38125
>> tx27_dropped: 0
>> tx27_xmit_more: 72749775
>> tx27_recover: 0
>> tx27_cqes: 18448090270
>> tx27_wake: 38127
>> tx27_cqe_err: 0
>> tx28_packets: 0
>> tx28_bytes: 0
>> tx28_tso_packets: 0
>> tx28_tso_bytes: 0
>> tx28_tso_inner_packets: 0
>> tx28_tso_inner_bytes: 0
>> tx28_csum_partial: 0
>> tx28_csum_partial_inner: 0
>> tx28_added_vlan_packets: 0
>> tx28_nop: 0
>> tx28_csum_none: 0
>> tx28_stopped: 0
>> tx28_dropped: 0
>> tx28_xmit_more: 0
>> tx28_recover: 0
>> tx28_cqes: 0
>> tx28_wake: 0
>> tx28_cqe_err: 0
>> tx29_packets: 3
>> tx29_bytes: 266
>> tx29_tso_packets: 0
>> tx29_tso_bytes: 0
>> tx29_tso_inner_packets: 0
>> tx29_tso_inner_bytes: 0
>> tx29_csum_partial: 0
>> tx29_csum_partial_inner: 0
>> tx29_added_vlan_packets: 0
>> tx29_nop: 0
>> tx29_csum_none: 3
>> tx29_stopped: 0
>> tx29_dropped: 0
>> tx29_xmit_more: 1
>> tx29_recover: 0
>> tx29_cqes: 2
>> tx29_wake: 0
>> tx29_cqe_err: 0
>> tx30_packets: 0
>> tx30_bytes: 0
>> tx30_tso_packets: 0
>> tx30_tso_bytes: 0
>> tx30_tso_inner_packets: 0
>> tx30_tso_inner_bytes: 0
>> tx30_csum_partial: 0
>> tx30_csum_partial_inner: 0
>> tx30_added_vlan_packets: 0
>> tx30_nop: 0
>> tx30_csum_none: 0
>> tx30_stopped: 0
>> tx30_dropped: 0
>> tx30_xmit_more: 0
>> tx30_recover: 0
>> tx30_cqes: 0
>> tx30_wake: 0
>> tx30_cqe_err: 0
>> tx31_packets: 0
>> tx31_bytes: 0
>> tx31_tso_packets: 0
>> tx31_tso_bytes: 0
>> tx31_tso_inner_packets: 0
>> tx31_tso_inner_bytes: 0
>> tx31_csum_partial: 0
>> tx31_csum_partial_inner: 0
>> tx31_added_vlan_packets: 0
>> tx31_nop: 0
>> tx31_csum_none: 0
>> tx31_stopped: 0
>> tx31_dropped: 0
>> tx31_xmit_more: 0
>> tx31_recover: 0
>> tx31_cqes: 0
>> tx31_wake: 0
>> tx31_cqe_err: 0
>> tx32_packets: 0
>> tx32_bytes: 0
>> tx32_tso_packets: 0
>> tx32_tso_bytes: 0
>> tx32_tso_inner_packets: 0
>> tx32_tso_inner_bytes: 0
>> tx32_csum_partial: 0
>> tx32_csum_partial_inner: 0
>> tx32_added_vlan_packets: 0
>> tx32_nop: 0
>> tx32_csum_none: 0
>> tx32_stopped: 0
>> tx32_dropped: 0
>> tx32_xmit_more: 0
>> tx32_recover: 0
>> tx32_cqes: 0
>> tx32_wake: 0
>> tx32_cqe_err: 0
>> tx33_packets: 0
>> tx33_bytes: 0
>> tx33_tso_packets: 0
>> tx33_tso_bytes: 0
>> tx33_tso_inner_packets: 0
>> tx33_tso_inner_bytes: 0
>> tx33_csum_partial: 0
>> tx33_csum_partial_inner: 0
>> tx33_added_vlan_packets: 0
>> tx33_nop: 0
>> tx33_csum_none: 0
>> tx33_stopped: 0
>> tx33_dropped: 0
>> tx33_xmit_more: 0
>> tx33_recover: 0
>> tx33_cqes: 0
>> tx33_wake: 0
>> tx33_cqe_err: 0
>> tx34_packets: 0
>> tx34_bytes: 0
>> tx34_tso_packets: 0
>> tx34_tso_bytes: 0
>> tx34_tso_inner_packets: 0
>> tx34_tso_inner_bytes: 0
>> tx34_csum_partial: 0
>> tx34_csum_partial_inner: 0
>> tx34_added_vlan_packets: 0
>> tx34_nop: 0
>> tx34_csum_none: 0
>> tx34_stopped: 0
>> tx34_dropped: 0
>> tx34_xmit_more: 0
>> tx34_recover: 0
>> tx34_cqes: 0
>> tx34_wake: 0
>> tx34_cqe_err: 0
>> tx35_packets: 0
>> tx35_bytes: 0
>> tx35_tso_packets: 0
>> tx35_tso_bytes: 0
>> tx35_tso_inner_packets: 0
>> tx35_tso_inner_bytes: 0
>> tx35_csum_partial: 0
>> tx35_csum_partial_inner: 0
>> tx35_added_vlan_packets: 0
>> tx35_nop: 0
>> tx35_csum_none: 0
>> tx35_stopped: 0
>> tx35_dropped: 0
>> tx35_xmit_more: 0
>> tx35_recover: 0
>> tx35_cqes: 0
>> tx35_wake: 0
>> tx35_cqe_err: 0
>> tx36_packets: 0
>> tx36_bytes: 0
>> tx36_tso_packets: 0
>> tx36_tso_bytes: 0
>> tx36_tso_inner_packets: 0
>> tx36_tso_inner_bytes: 0
>> tx36_csum_partial: 0
>> tx36_csum_partial_inner: 0
>> tx36_added_vlan_packets: 0
>> tx36_nop: 0
>> tx36_csum_none: 0
>> tx36_stopped: 0
>> tx36_dropped: 0
>> tx36_xmit_more: 0
>> tx36_recover: 0
>> tx36_cqes: 0
>> tx36_wake: 0
>> tx36_cqe_err: 0
>> tx37_packets: 0
>> tx37_bytes: 0
>> tx37_tso_packets: 0
>> tx37_tso_bytes: 0
>> tx37_tso_inner_packets: 0
>> tx37_tso_inner_bytes: 0
>> tx37_csum_partial: 0
>> tx37_csum_partial_inner: 0
>> tx37_added_vlan_packets: 0
>> tx37_nop: 0
>> tx37_csum_none: 0
>> tx37_stopped: 0
>> tx37_dropped: 0
>> tx37_xmit_more: 0
>> tx37_recover: 0
>> tx37_cqes: 0
>> tx37_wake: 0
>> tx37_cqe_err: 0
>> tx38_packets: 0
>> tx38_bytes: 0
>> tx38_tso_packets: 0
>> tx38_tso_bytes: 0
>> tx38_tso_inner_packets: 0
>> tx38_tso_inner_bytes: 0
>> tx38_csum_partial: 0
>> tx38_csum_partial_inner: 0
>> tx38_added_vlan_packets: 0
>> tx38_nop: 0
>> tx38_csum_none: 0
>> tx38_stopped: 0
>> tx38_dropped: 0
>> tx38_xmit_more: 0
>> tx38_recover: 0
>> tx38_cqes: 0
>> tx38_wake: 0
>> tx38_cqe_err: 0
>> tx39_packets: 0
>> tx39_bytes: 0
>> tx39_tso_packets: 0
>> tx39_tso_bytes: 0
>> tx39_tso_inner_packets: 0
>> tx39_tso_inner_bytes: 0
>> tx39_csum_partial: 0
>> tx39_csum_partial_inner: 0
>> tx39_added_vlan_packets: 0
>> tx39_nop: 0
>> tx39_csum_none: 0
>> tx39_stopped: 0
>> tx39_dropped: 0
>> tx39_xmit_more: 0
>> tx39_recover: 0
>> tx39_cqes: 0
>> tx39_wake: 0
>> tx39_cqe_err: 0
>> tx40_packets: 0
>> tx40_bytes: 0
>> tx40_tso_packets: 0
>> tx40_tso_bytes: 0
>> tx40_tso_inner_packets: 0
>> tx40_tso_inner_bytes: 0
>> tx40_csum_partial: 0
>> tx40_csum_partial_inner: 0
>> tx40_added_vlan_packets: 0
>> tx40_nop: 0
>> tx40_csum_none: 0
>> tx40_stopped: 0
>> tx40_dropped: 0
>> tx40_xmit_more: 0
>> tx40_recover: 0
>> tx40_cqes: 0
>> tx40_wake: 0
>> tx40_cqe_err: 0
>> tx41_packets: 0
>> tx41_bytes: 0
>> tx41_tso_packets: 0
>> tx41_tso_bytes: 0
>> tx41_tso_inner_packets: 0
>> tx41_tso_inner_bytes: 0
>> tx41_csum_partial: 0
>> tx41_csum_partial_inner: 0
>> tx41_added_vlan_packets: 0
>> tx41_nop: 0
>> tx41_csum_none: 0
>> tx41_stopped: 0
>> tx41_dropped: 0
>> tx41_xmit_more: 0
>> tx41_recover: 0
>> tx41_cqes: 0
>> tx41_wake: 0
>> tx41_cqe_err: 0
>> tx42_packets: 0
>> tx42_bytes: 0
>> tx42_tso_packets: 0
>> tx42_tso_bytes: 0
>> tx42_tso_inner_packets: 0
>> tx42_tso_inner_bytes: 0
>> tx42_csum_partial: 0
>> tx42_csum_partial_inner: 0
>> tx42_added_vlan_packets: 0
>> tx42_nop: 0
>> tx42_csum_none: 0
>> tx42_stopped: 0
>> tx42_dropped: 0
>> tx42_xmit_more: 0
>> tx42_recover: 0
>> tx42_cqes: 0
>> tx42_wake: 0
>> tx42_cqe_err: 0
>> tx43_packets: 0
>> tx43_bytes: 0
>> tx43_tso_packets: 0
>> tx43_tso_bytes: 0
>> tx43_tso_inner_packets: 0
>> tx43_tso_inner_bytes: 0
>> tx43_csum_partial: 0
>> tx43_csum_partial_inner: 0
>> tx43_added_vlan_packets: 0
>> tx43_nop: 0
>> tx43_csum_none: 0
>> tx43_stopped: 0
>> tx43_dropped: 0
>> tx43_xmit_more: 0
>> tx43_recover: 0
>> tx43_cqes: 0
>> tx43_wake: 0
>> tx43_cqe_err: 0
>> tx44_packets: 0
>> tx44_bytes: 0
>> tx44_tso_packets: 0
>> tx44_tso_bytes: 0
>> tx44_tso_inner_packets: 0
>> tx44_tso_inner_bytes: 0
>> tx44_csum_partial: 0
>> tx44_csum_partial_inner: 0
>> tx44_added_vlan_packets: 0
>> tx44_nop: 0
>> tx44_csum_none: 0
>> tx44_stopped: 0
>> tx44_dropped: 0
>> tx44_xmit_more: 0
>> tx44_recover: 0
>> tx44_cqes: 0
>> tx44_wake: 0
>> tx44_cqe_err: 0
>> tx45_packets: 0
>> tx45_bytes: 0
>> tx45_tso_packets: 0
>> tx45_tso_bytes: 0
>> tx45_tso_inner_packets: 0
>> tx45_tso_inner_bytes: 0
>> tx45_csum_partial: 0
>> tx45_csum_partial_inner: 0
>> tx45_added_vlan_packets: 0
>> tx45_nop: 0
>> tx45_csum_none: 0
>> tx45_stopped: 0
>> tx45_dropped: 0
>> tx45_xmit_more: 0
>> tx45_recover: 0
>> tx45_cqes: 0
>> tx45_wake: 0
>> tx45_cqe_err: 0
>> tx46_packets: 0
>> tx46_bytes: 0
>> tx46_tso_packets: 0
>> tx46_tso_bytes: 0
>> tx46_tso_inner_packets: 0
>> tx46_tso_inner_bytes: 0
>> tx46_csum_partial: 0
>> tx46_csum_partial_inner: 0
>> tx46_added_vlan_packets: 0
>> tx46_nop: 0
>> tx46_csum_none: 0
>> tx46_stopped: 0
>> tx46_dropped: 0
>> tx46_xmit_more: 0
>> tx46_recover: 0
>> tx46_cqes: 0
>> tx46_wake: 0
>> tx46_cqe_err: 0
>> tx47_packets: 0
>> tx47_bytes: 0
>> tx47_tso_packets: 0
>> tx47_tso_bytes: 0
>> tx47_tso_inner_packets: 0
>> tx47_tso_inner_bytes: 0
>> tx47_csum_partial: 0
>> tx47_csum_partial_inner: 0
>> tx47_added_vlan_packets: 0
>> tx47_nop: 0
>> tx47_csum_none: 0
>> tx47_stopped: 0
>> tx47_dropped: 0
>> tx47_xmit_more: 0
>> tx47_recover: 0
>> tx47_cqes: 0
>> tx47_wake: 0
>> tx47_cqe_err: 0
>> tx48_packets: 0
>> tx48_bytes: 0
>> tx48_tso_packets: 0
>> tx48_tso_bytes: 0
>> tx48_tso_inner_packets: 0
>> tx48_tso_inner_bytes: 0
>> tx48_csum_partial: 0
>> tx48_csum_partial_inner: 0
>> tx48_added_vlan_packets: 0
>> tx48_nop: 0
>> tx48_csum_none: 0
>> tx48_stopped: 0
>> tx48_dropped: 0
>> tx48_xmit_more: 0
>> tx48_recover: 0
>> tx48_cqes: 0
>> tx48_wake: 0
>> tx48_cqe_err: 0
>> tx49_packets: 0
>> tx49_bytes: 0
>> tx49_tso_packets: 0
>> tx49_tso_bytes: 0
>> tx49_tso_inner_packets: 0
>> tx49_tso_inner_bytes: 0
>> tx49_csum_partial: 0
>> tx49_csum_partial_inner: 0
>> tx49_added_vlan_packets: 0
>> tx49_nop: 0
>> tx49_csum_none: 0
>> tx49_stopped: 0
>> tx49_dropped: 0
>> tx49_xmit_more: 0
>> tx49_recover: 0
>> tx49_cqes: 0
>> tx49_wake: 0
>> tx49_cqe_err: 0
>> tx50_packets: 0
>> tx50_bytes: 0
>> tx50_tso_packets: 0
>> tx50_tso_bytes: 0
>> tx50_tso_inner_packets: 0
>> tx50_tso_inner_bytes: 0
>> tx50_csum_partial: 0
>> tx50_csum_partial_inner: 0
>> tx50_added_vlan_packets: 0
>> tx50_nop: 0
>> tx50_csum_none: 0
>> tx50_stopped: 0
>> tx50_dropped: 0
>> tx50_xmit_more: 0
>> tx50_recover: 0
>> tx50_cqes: 0
>> tx50_wake: 0
>> tx50_cqe_err: 0
>> tx51_packets: 0
>> tx51_bytes: 0
>> tx51_tso_packets: 0
>> tx51_tso_bytes: 0
>> tx51_tso_inner_packets: 0
>> tx51_tso_inner_bytes: 0
>> tx51_csum_partial: 0
>> tx51_csum_partial_inner: 0
>> tx51_added_vlan_packets: 0
>> tx51_nop: 0
>> tx51_csum_none: 0
>> tx51_stopped: 0
>> tx51_dropped: 0
>> tx51_xmit_more: 0
>> tx51_recover: 0
>> tx51_cqes: 0
>> tx51_wake: 0
>> tx51_cqe_err: 0
>> tx52_packets: 0
>> tx52_bytes: 0
>> tx52_tso_packets: 0
>> tx52_tso_bytes: 0
>> tx52_tso_inner_packets: 0
>> tx52_tso_inner_bytes: 0
>> tx52_csum_partial: 0
>> tx52_csum_partial_inner: 0
>> tx52_added_vlan_packets: 0
>> tx52_nop: 0
>> tx52_csum_none: 0
>> tx52_stopped: 0
>> tx52_dropped: 0
>> tx52_xmit_more: 0
>> tx52_recover: 0
>> tx52_cqes: 0
>> tx52_wake: 0
>> tx52_cqe_err: 0
>> tx53_packets: 0
>> tx53_bytes: 0
>> tx53_tso_packets: 0
>> tx53_tso_bytes: 0
>> tx53_tso_inner_packets: 0
>> tx53_tso_inner_bytes: 0
>> tx53_csum_partial: 0
>> tx53_csum_partial_inner: 0
>> tx53_added_vlan_packets: 0
>> tx53_nop: 0
>> tx53_csum_none: 0
>> tx53_stopped: 0
>> tx53_dropped: 0
>> tx53_xmit_more: 0
>> tx53_recover: 0
>> tx53_cqes: 0
>> tx53_wake: 0
>> tx53_cqe_err: 0
>> tx54_packets: 0
>> tx54_bytes: 0
>> tx54_tso_packets: 0
>> tx54_tso_bytes: 0
>> tx54_tso_inner_packets: 0
>> tx54_tso_inner_bytes: 0
>> tx54_csum_partial: 0
>> tx54_csum_partial_inner: 0
>> tx54_added_vlan_packets: 0
>> tx54_nop: 0
>> tx54_csum_none: 0
>> tx54_stopped: 0
>> tx54_dropped: 0
>> tx54_xmit_more: 0
>> tx54_recover: 0
>> tx54_cqes: 0
>> tx54_wake: 0
>> tx54_cqe_err: 0
>> tx55_packets: 0
>> tx55_bytes: 0
>> tx55_tso_packets: 0
>> tx55_tso_bytes: 0
>> tx55_tso_inner_packets: 0
>> tx55_tso_inner_bytes: 0
>> tx55_csum_partial: 0
>> tx55_csum_partial_inner: 0
>> tx55_added_vlan_packets: 0
>> tx55_nop: 0
>> tx55_csum_none: 0
>> tx55_stopped: 0
>> tx55_dropped: 0
>> tx55_xmit_more: 0
>> tx55_recover: 0
>> tx55_cqes: 0
>> tx55_wake: 0
>> tx55_cqe_err: 0
>> tx0_xdp_xmit: 0
>> tx0_xdp_full: 0
>> tx0_xdp_err: 0
>> tx0_xdp_cqes: 0
>> tx1_xdp_xmit: 0
>> tx1_xdp_full: 0
>> tx1_xdp_err: 0
>> tx1_xdp_cqes: 0
>> tx2_xdp_xmit: 0
>> tx2_xdp_full: 0
>> tx2_xdp_err: 0
>> tx2_xdp_cqes: 0
>> tx3_xdp_xmit: 0
>> tx3_xdp_full: 0
>> tx3_xdp_err: 0
>> tx3_xdp_cqes: 0
>> tx4_xdp_xmit: 0
>> tx4_xdp_full: 0
>> tx4_xdp_err: 0
>> tx4_xdp_cqes: 0
>> tx5_xdp_xmit: 0
>> tx5_xdp_full: 0
>> tx5_xdp_err: 0
>> tx5_xdp_cqes: 0
>> tx6_xdp_xmit: 0
>> tx6_xdp_full: 0
>> tx6_xdp_err: 0
>> tx6_xdp_cqes: 0
>> tx7_xdp_xmit: 0
>> tx7_xdp_full: 0
>> tx7_xdp_err: 0
>> tx7_xdp_cqes: 0
>> tx8_xdp_xmit: 0
>> tx8_xdp_full: 0
>> tx8_xdp_err: 0
>> tx8_xdp_cqes: 0
>> tx9_xdp_xmit: 0
>> tx9_xdp_full: 0
>> tx9_xdp_err: 0
>> tx9_xdp_cqes: 0
>> tx10_xdp_xmit: 0
>> tx10_xdp_full: 0
>> tx10_xdp_err: 0
>> tx10_xdp_cqes: 0
>> tx11_xdp_xmit: 0
>> tx11_xdp_full: 0
>> tx11_xdp_err: 0
>> tx11_xdp_cqes: 0
>> tx12_xdp_xmit: 0
>> tx12_xdp_full: 0
>> tx12_xdp_err: 0
>> tx12_xdp_cqes: 0
>> tx13_xdp_xmit: 0
>> tx13_xdp_full: 0
>> tx13_xdp_err: 0
>> tx13_xdp_cqes: 0
>> tx14_xdp_xmit: 0
>> tx14_xdp_full: 0
>> tx14_xdp_err: 0
>> tx14_xdp_cqes: 0
>> tx15_xdp_xmit: 0
>> tx15_xdp_full: 0
>> tx15_xdp_err: 0
>> tx15_xdp_cqes: 0
>> tx16_xdp_xmit: 0
>> tx16_xdp_full: 0
>> tx16_xdp_err: 0
>> tx16_xdp_cqes: 0
>> tx17_xdp_xmit: 0
>> tx17_xdp_full: 0
>> tx17_xdp_err: 0
>> tx17_xdp_cqes: 0
>> tx18_xdp_xmit: 0
>> tx18_xdp_full: 0
>> tx18_xdp_err: 0
>> tx18_xdp_cqes: 0
>> tx19_xdp_xmit: 0
>> tx19_xdp_full: 0
>> tx19_xdp_err: 0
>> tx19_xdp_cqes: 0
>> tx20_xdp_xmit: 0
>> tx20_xdp_full: 0
>> tx20_xdp_err: 0
>> tx20_xdp_cqes: 0
>> tx21_xdp_xmit: 0
>> tx21_xdp_full: 0
>> tx21_xdp_err: 0
>> tx21_xdp_cqes: 0
>> tx22_xdp_xmit: 0
>> tx22_xdp_full: 0
>> tx22_xdp_err: 0
>> tx22_xdp_cqes: 0
>> tx23_xdp_xmit: 0
>> tx23_xdp_full: 0
>> tx23_xdp_err: 0
>> tx23_xdp_cqes: 0
>> tx24_xdp_xmit: 0
>> tx24_xdp_full: 0
>> tx24_xdp_err: 0
>> tx24_xdp_cqes: 0
>> tx25_xdp_xmit: 0
>> tx25_xdp_full: 0
>> tx25_xdp_err: 0
>> tx25_xdp_cqes: 0
>> tx26_xdp_xmit: 0
>> tx26_xdp_full: 0
>> tx26_xdp_err: 0
>> tx26_xdp_cqes: 0
>> tx27_xdp_xmit: 0
>> tx27_xdp_full: 0
>> tx27_xdp_err: 0
>> tx27_xdp_cqes: 0
>> tx28_xdp_xmit: 0
>> tx28_xdp_full: 0
>> tx28_xdp_err: 0
>> tx28_xdp_cqes: 0
>> tx29_xdp_xmit: 0
>> tx29_xdp_full: 0
>> tx29_xdp_err: 0
>> tx29_xdp_cqes: 0
>> tx30_xdp_xmit: 0
>> tx30_xdp_full: 0
>> tx30_xdp_err: 0
>> tx30_xdp_cqes: 0
>> tx31_xdp_xmit: 0
>> tx31_xdp_full: 0
>> tx31_xdp_err: 0
>> tx31_xdp_cqes: 0
>> tx32_xdp_xmit: 0
>> tx32_xdp_full: 0
>> tx32_xdp_err: 0
>> tx32_xdp_cqes: 0
>> tx33_xdp_xmit: 0
>> tx33_xdp_full: 0
>> tx33_xdp_err: 0
>> tx33_xdp_cqes: 0
>> tx34_xdp_xmit: 0
>> tx34_xdp_full: 0
>> tx34_xdp_err: 0
>> tx34_xdp_cqes: 0
>> tx35_xdp_xmit: 0
>> tx35_xdp_full: 0
>> tx35_xdp_err: 0
>> tx35_xdp_cqes: 0
>> tx36_xdp_xmit: 0
>> tx36_xdp_full: 0
>> tx36_xdp_err: 0
>> tx36_xdp_cqes: 0
>> tx37_xdp_xmit: 0
>> tx37_xdp_full: 0
>> tx37_xdp_err: 0
>> tx37_xdp_cqes: 0
>> tx38_xdp_xmit: 0
>> tx38_xdp_full: 0
>> tx38_xdp_err: 0
>> tx38_xdp_cqes: 0
>> tx39_xdp_xmit: 0
>> tx39_xdp_full: 0
>> tx39_xdp_err: 0
>> tx39_xdp_cqes: 0
>> tx40_xdp_xmit: 0
>> tx40_xdp_full: 0
>> tx40_xdp_err: 0
>> tx40_xdp_cqes: 0
>> tx41_xdp_xmit: 0
>> tx41_xdp_full: 0
>> tx41_xdp_err: 0
>> tx41_xdp_cqes: 0
>> tx42_xdp_xmit: 0
>> tx42_xdp_full: 0
>> tx42_xdp_err: 0
>> tx42_xdp_cqes: 0
>> tx43_xdp_xmit: 0
>> tx43_xdp_full: 0
>> tx43_xdp_err: 0
>> tx43_xdp_cqes: 0
>> tx44_xdp_xmit: 0
>> tx44_xdp_full: 0
>> tx44_xdp_err: 0
>> tx44_xdp_cqes: 0
>> tx45_xdp_xmit: 0
>> tx45_xdp_full: 0
>> tx45_xdp_err: 0
>> tx45_xdp_cqes: 0
>> tx46_xdp_xmit: 0
>> tx46_xdp_full: 0
>> tx46_xdp_err: 0
>> tx46_xdp_cqes: 0
>> tx47_xdp_xmit: 0
>> tx47_xdp_full: 0
>> tx47_xdp_err: 0
>> tx47_xdp_cqes: 0
>> tx48_xdp_xmit: 0
>> tx48_xdp_full: 0
>> tx48_xdp_err: 0
>> tx48_xdp_cqes: 0
>> tx49_xdp_xmit: 0
>> tx49_xdp_full: 0
>> tx49_xdp_err: 0
>> tx49_xdp_cqes: 0
>> tx50_xdp_xmit: 0
>> tx50_xdp_full: 0
>> tx50_xdp_err: 0
>> tx50_xdp_cqes: 0
>> tx51_xdp_xmit: 0
>> tx51_xdp_full: 0
>> tx51_xdp_err: 0
>> tx51_xdp_cqes: 0
>> tx52_xdp_xmit: 0
>> tx52_xdp_full: 0
>> tx52_xdp_err: 0
>> tx52_xdp_cqes: 0
>> tx53_xdp_xmit: 0
>> tx53_xdp_full: 0
>> tx53_xdp_err: 0
>> tx53_xdp_cqes: 0
>> tx54_xdp_xmit: 0
>> tx54_xdp_full: 0
>> tx54_xdp_err: 0
>> tx54_xdp_cqes: 0
>> tx55_xdp_xmit: 0
>> tx55_xdp_full: 0
>> tx55_xdp_err: 0
>> tx55_xdp_cqes: 0
>>
>>
>>> [...]
>>>
>>>>>> ethtool -S enp175s0f0
>>>>>> NIC statistics:
>>>>>> rx_packets: 141574897253
>>>>>> rx_bytes: 184445040406258
>>>>>> tx_packets: 172569543894
>>>>>> tx_bytes: 99486882076365
>>>>>> tx_tso_packets: 9367664195
>>>>>> tx_tso_bytes: 56435233992948
>>>>>> tx_tso_inner_packets: 0
>>>>>> tx_tso_inner_bytes: 0
>>>>>> tx_added_vlan_packets: 141297671626
>>>>>> tx_nop: 2102916272
>>>>>> rx_lro_packets: 0
>>>>>> rx_lro_bytes: 0
>>>>>> rx_ecn_mark: 0
>>>>>> rx_removed_vlan_packets: 141574897252
>>>>>> rx_csum_unnecessary: 0
>>>>>> rx_csum_none: 23135854
>>>>>> rx_csum_complete: 141551761398
>>>>>> rx_csum_unnecessary_inner: 0
>>>>>> rx_xdp_drop: 0
>>>>>> rx_xdp_redirect: 0
>>>>>> rx_xdp_tx_xmit: 0
>>>>>> rx_xdp_tx_full: 0
>>>>>> rx_xdp_tx_err: 0
>>>>>> rx_xdp_tx_cqe: 0
>>>>>> tx_csum_none: 127934791664
>>>>> It is a good idea to look into this: tx is not requesting hw tx
>>>>> csumming for a lot of packets, so maybe you are wasting a lot of
>>>>> cpu calculating csums, or maybe this is just the rx csum complete
>>>>> path..
>>>>>
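To put that comment in numbers, a quick check (sketch; the counter values are copied from the ethtool output quoted in this message) of how many TX packets skip hardware checksumming:

```python
# Share of TX packets sent without hardware checksum offload, from the
# tx_csum_none / tx_csum_partial counters quoted in this message.
tx_csum_none = 127_934_791_664
tx_csum_partial = 13_362_879_974
total = tx_csum_none + tx_csum_partial
share = 100.0 * tx_csum_none / total
print(f"{share:.1f}% of TX packets without hw csum")  # ~90.5%
```

For a pure forwarding workload this ratio is arguably expected: routed packets keep their original L4 checksum (only the IP header checksum is updated incrementally), so CHECKSUM_NONE on TX does not necessarily mean the CPU recomputed anything.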
>>>>>> tx_csum_partial: 13362879974
>>>>>> tx_csum_partial_inner: 0
>>>>>> tx_queue_stopped: 232561
>>>>> TX queues are stalling, could be an indication of the pcie
>>>>> bottleneck.
>>>>>
>>>>>> tx_queue_dropped: 0
>>>>>> tx_xmit_more: 1266021946
>>>>>> tx_recover: 0
>>>>>> tx_cqes: 140031716469
>>>>>> tx_queue_wake: 232561
>>>>>> tx_udp_seg_rem: 0
>>>>>> tx_cqe_err: 0
>>>>>> tx_xdp_xmit: 0
>>>>>> tx_xdp_full: 0
>>>>>> tx_xdp_err: 0
>>>>>> tx_xdp_cqes: 0
>>>>>> rx_wqe_err: 0
>>>>>> rx_mpwqe_filler_cqes: 0
>>>>>> rx_mpwqe_filler_strides: 0
>>>>>> rx_buff_alloc_err: 0
>>>>>> rx_cqe_compress_blks: 0
>>>>>> rx_cqe_compress_pkts: 0
>>>>>> rx_page_reuse: 0
>>>>>> rx_cache_reuse: 16625975793
>>>>>> rx_cache_full: 54161465914
>>>>>> rx_cache_empty: 258048
>>>>>> rx_cache_busy: 54161472735
>>>>>> rx_cache_waive: 0
>>>>>> rx_congst_umr: 0
>>>>>> rx_arfs_err: 0
>>>>>> ch_events: 40572621887
>>>>>> ch_poll: 40885650979
>>>>>> ch_arm: 40429276692
>>>>>> ch_aff_change: 0
>>>>>> ch_eq_rearm: 0
>>>>>> rx_out_of_buffer: 2791690
>>>>>> rx_if_down_packets: 74
>>>>>> rx_vport_unicast_packets: 141843476308
>>>>>> rx_vport_unicast_bytes: 185421265403318
>>>>>> tx_vport_unicast_packets: 172569484005
>>>>>> tx_vport_unicast_bytes: 100019940094298
>>>>>> rx_vport_multicast_packets: 85122935
>>>>>> rx_vport_multicast_bytes: 5761316431
>>>>>> tx_vport_multicast_packets: 6452
>>>>>> tx_vport_multicast_bytes: 643540
>>>>>> rx_vport_broadcast_packets: 22423624
>>>>>> rx_vport_broadcast_bytes: 1390127090
>>>>>> tx_vport_broadcast_packets: 22024
>>>>>> tx_vport_broadcast_bytes: 1321440
>>>>>> rx_vport_rdma_unicast_packets: 0
>>>>>> rx_vport_rdma_unicast_bytes: 0
>>>>>> tx_vport_rdma_unicast_packets: 0
>>>>>> tx_vport_rdma_unicast_bytes: 0
>>>>>> rx_vport_rdma_multicast_packets: 0
>>>>>> rx_vport_rdma_multicast_bytes: 0
>>>>>> tx_vport_rdma_multicast_packets: 0
>>>>>> tx_vport_rdma_multicast_bytes: 0
>>>>>> tx_packets_phy: 172569501577
>>>>>> rx_packets_phy: 142871314588
>>>>>> rx_crc_errors_phy: 0
>>>>>> tx_bytes_phy: 100710212814151
>>>>>> rx_bytes_phy: 187209224289564
>>>>>> tx_multicast_phy: 6452
>>>>>> tx_broadcast_phy: 22024
>>>>>> rx_multicast_phy: 85122933
>>>>>> rx_broadcast_phy: 22423623
>>>>>> rx_in_range_len_errors_phy: 2
>>>>>> rx_out_of_range_len_phy: 0
>>>>>> rx_oversize_pkts_phy: 0
>>>>>> rx_symbol_err_phy: 0
>>>>>> tx_mac_control_phy: 0
>>>>>> rx_mac_control_phy: 0
>>>>>> rx_unsupported_op_phy: 0
>>>>>> rx_pause_ctrl_phy: 0
>>>>>> tx_pause_ctrl_phy: 0
>>>>>> rx_discards_phy: 920161423
>>>>> Ok, this port seems to be suffering more; RX is congested, maybe
>>>>> due to the pcie bottleneck.
>>>> Yes, this side is receiving more traffic - the second port is doing
>>>> ~10G more tx
>>>>
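Since a PCIe x16 8GT/s bottleneck keeps coming up in this thread, a quick sanity check of the link numbers (sketch; the ~20% protocol-overhead figure is a rough assumption, real TLP overhead depends on payload size):

```python
# PCIe Gen3 x16: 8 GT/s per lane with 128b/130b line encoding.
lanes, gt_per_lane = 16, 8e9
raw_gbps = lanes * gt_per_lane * (128 / 130) / 1e9
usable_gbps = raw_gbps * 0.8  # rough TLP/DLLP/flow-control overhead
print(f"raw {raw_gbps:.1f} Gbit/s, ~{usable_gbps:.0f} Gbit/s usable")
```

At the observed ~67 Gbit/s combined RX+TX per port (plus descriptor and completion traffic), the link itself should still have headroom, which is consistent with pktgen pushing more through the same slot in the original post.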
>>> [...]
>>>
>>>
>>>>>> Average: 17 0.00 0.00 16.60 0.00 0.00 52.10
>>>>>> 0.00 0.00 0.00 31.30
>>>>>> Average: 18 0.00 0.00 13.90 0.00 0.00 61.20
>>>>>> 0.00 0.00 0.00 24.90
>>>>>> Average: 19 0.00 0.00 9.99 0.00 0.00 70.33
>>>>>> 0.00 0.00 0.00 19.68
>>>>>> Average: 20 0.00 0.00 9.00 0.00 0.00 73.00
>>>>>> 0.00 0.00 0.00 18.00
>>>>>> Average: 21 0.00 0.00 8.70 0.00 0.00 73.90
>>>>>> 0.00 0.00 0.00 17.40
>>>>>> Average: 22 0.00 0.00 15.42 0.00 0.00 58.56
>>>>>> 0.00 0.00 0.00 26.03
>>>>>> Average: 23 0.00 0.00 10.81 0.00 0.00 71.67
>>>>>> 0.00 0.00 0.00 17.52
>>>>>> Average: 24 0.00 0.00 10.00 0.00 0.00 71.80
>>>>>> 0.00 0.00 0.00 18.20
>>>>>> Average: 25 0.00 0.00 11.19 0.00 0.00 71.13
>>>>>> 0.00 0.00 0.00 17.68
>>>>>> Average: 26 0.00 0.00 11.00 0.00 0.00 70.80
>>>>>> 0.00 0.00 0.00 18.20
>>>>>> Average: 27 0.00 0.00 10.01 0.00 0.00 69.57
>>>>>> 0.00 0.00 0.00 20.42
>>>>> The numa cores are not at 100% util; you have around 20% idle on
>>>>> each one.
>>>> Yes - not 100% cpu - but the difference between 80% and 100% is like
>>>> pushing an additional 1-2Gbit/s
>>>>
>>> yes, but it doesn't look like the bottleneck is the cpu, although it
>>> is close to being :)..
>>>
>>>>>> Average: 28 0.00 0.00 0.00 0.00 0.00 0.00
>>>>>> 0.00
>>>>>> 0.00 0.00 100.00
>>>>>> Average: 29 0.00 0.00 0.00 0.00 0.00 0.00
>>>>>> 0.00
>>>>>> 0.00 0.00 100.00
>>>>>> Average: 30 0.00 0.00 0.00 0.00 0.00 0.00
>>>>>> 0.00
>>>>>> 0.00 0.00 100.00
>>>>>> Average: 31 0.00 0.00 0.00 0.00 0.00 0.00
>>>>>> 0.00
>>>>>> 0.00 0.00 100.00
>>>>>> Average: 32 0.00 0.00 0.00 0.00 0.00 0.00
>>>>>> 0.00
>>>>>> 0.00 0.00 100.00
>>>>>> Average: 33 0.00 0.00 3.90 0.00 0.00 0.00
>>>>>> 0.00
>>>>>> 0.00 0.00 96.10
>>>>>> Average: 34 0.00 0.00 0.00 0.00 0.00 0.00
>>>>>> 0.00
>>>>>> 0.00 0.00 100.00
>>>>>> Average: 35 0.00 0.00 0.00 0.00 0.00 0.00
>>>>>> 0.00
>>>>>> 0.00 0.00 100.00
>>>>>> Average: 36 0.10 0.00 0.20 0.00 0.00 0.00
>>>>>> 0.00
>>>>>> 0.00 0.00 99.70
>>>>>> Average: 37 0.20 0.00 0.30 0.00 0.00 0.00
>>>>>> 0.00
>>>>>> 0.00 0.00 99.50
>>>>>> Average: 38 0.00 0.00 0.00 0.00 0.00 0.00
>>>>>> 0.00
>>>>>> 0.00 0.00 100.00
>>>>>> Average: 39 0.00 0.00 2.60 0.00 0.00 0.00
>>>>>> 0.00
>>>>>> 0.00 0.00 97.40
>>>>>> Average: 40 0.00 0.00 0.90 0.00 0.00 0.00
>>>>>> 0.00
>>>>>> 0.00 0.00 99.10
>>>>>> Average: 41 0.10 0.00 0.50 0.00 0.00 0.00
>>>>>> 0.00
>>>>>> 0.00 0.00 99.40
>>>>>> Average: 42 0.00 0.00 9.91 0.00 0.00 70.67
>>>>>> 0.00 0.00 0.00 19.42
>>>>>> Average: 43 0.00 0.00 15.90 0.00 0.00 57.50
>>>>>> 0.00 0.00 0.00 26.60
>>>>>> Average: 44 0.00 0.00 12.20 0.00 0.00 66.20
>>>>>> 0.00 0.00 0.00 21.60
>>>>>> Average: 45 0.00 0.00 12.00 0.00 0.00 67.50
>>>>>> 0.00 0.00 0.00 20.50
>>>>>> Average: 46 0.00 0.00 12.90 0.00 0.00 65.50
>>>>>> 0.00 0.00 0.00 21.60
>>>>>> Average: 47 0.00 0.00 14.59 0.00 0.00 60.84
>>>>>> 0.00 0.00 0.00 24.58
>>>>>> Average: 48 0.00 0.00 13.59 0.00 0.00 61.74
>>>>>> 0.00 0.00 0.00 24.68
>>>>>> Average: 49 0.00 0.00 18.36 0.00 0.00 53.29
>>>>>> 0.00 0.00 0.00 28.34
>>>>>> Average: 50 0.00 0.00 15.32 0.00 0.00 58.86
>>>>>> 0.00 0.00 0.00 25.83
>>>>>> Average: 51 0.00 0.00 17.60 0.00 0.00 55.20
>>>>>> 0.00 0.00 0.00 27.20
>>>>>> Average: 52 0.00 0.00 15.92 0.00 0.00 56.06
>>>>>> 0.00 0.00 0.00 28.03
>>>>>> Average: 53 0.00 0.00 13.00 0.00 0.00 62.30
>>>>>> 0.00 0.00 0.00 24.70
>>>>>> Average: 54 0.00 0.00 13.20 0.00 0.00 61.50
>>>>>> 0.00 0.00 0.00 25.30
>>>>>> Average: 55 0.00 0.00 14.59 0.00 0.00 58.64
>>>>>> 0.00 0.00 0.00 26.77
>>>>>>
>>>>>>
>>>>>> ethtool -k enp175s0f0
>>>>>> Features for enp175s0f0:
>>>>>> rx-checksumming: on
>>>>>> tx-checksumming: on
>>>>>> tx-checksum-ipv4: on
>>>>>> tx-checksum-ip-generic: off [fixed]
>>>>>> tx-checksum-ipv6: on
>>>>>> tx-checksum-fcoe-crc: off [fixed]
>>>>>> tx-checksum-sctp: off [fixed]
>>>>>> scatter-gather: on
>>>>>> tx-scatter-gather: on
>>>>>> tx-scatter-gather-fraglist: off [fixed]
>>>>>> tcp-segmentation-offload: on
>>>>>> tx-tcp-segmentation: on
>>>>>> tx-tcp-ecn-segmentation: off [fixed]
>>>>>> tx-tcp-mangleid-segmentation: off
>>>>>> tx-tcp6-segmentation: on
>>>>>> udp-fragmentation-offload: off
>>>>>> generic-segmentation-offload: on
>>>>>> generic-receive-offload: on
>>>>>> large-receive-offload: off [fixed]
>>>>>> rx-vlan-offload: on
>>>>>> tx-vlan-offload: on
>>>>>> ntuple-filters: off
>>>>>> receive-hashing: on
>>>>>> highdma: on [fixed]
>>>>>> rx-vlan-filter: on
>>>>>> vlan-challenged: off [fixed]
>>>>>> tx-lockless: off [fixed]
>>>>>> netns-local: off [fixed]
>>>>>> tx-gso-robust: off [fixed]
>>>>>> tx-fcoe-segmentation: off [fixed]
>>>>>> tx-gre-segmentation: on
>>>>>> tx-gre-csum-segmentation: on
>>>>>> tx-ipxip4-segmentation: off [fixed]
>>>>>> tx-ipxip6-segmentation: off [fixed]
>>>>>> tx-udp_tnl-segmentation: on
>>>>>> tx-udp_tnl-csum-segmentation: on
>>>>>> tx-gso-partial: on
>>>>>> tx-sctp-segmentation: off [fixed]
>>>>>> tx-esp-segmentation: off [fixed]
>>>>>> tx-udp-segmentation: on
>>>>>> fcoe-mtu: off [fixed]
>>>>>> tx-nocache-copy: off
>>>>>> loopback: off [fixed]
>>>>>> rx-fcs: off
>>>>>> rx-all: off
>>>>>> tx-vlan-stag-hw-insert: on
>>>>>> rx-vlan-stag-hw-parse: off [fixed]
>>>>>> rx-vlan-stag-filter: on [fixed]
>>>>>> l2-fwd-offload: off [fixed]
>>>>>> hw-tc-offload: off
>>>>>> esp-hw-offload: off [fixed]
>>>>>> esp-tx-csum-hw-offload: off [fixed]
>>>>>> rx-udp_tunnel-port-offload: on
>>>>>> tls-hw-tx-offload: off [fixed]
>>>>>> tls-hw-rx-offload: off [fixed]
>>>>>> rx-gro-hw: off [fixed]
>>>>>> tls-hw-record: off [fixed]
>>>>>>
>>>>>> ethtool -c enp175s0f0
>>>>>> Coalesce parameters for enp175s0f0:
>>>>>> Adaptive RX: off TX: on
>>>>>> stats-block-usecs: 0
>>>>>> sample-interval: 0
>>>>>> pkt-rate-low: 0
>>>>>> pkt-rate-high: 0
>>>>>> dmac: 32703
>>>>>>
>>>>>> rx-usecs: 256
>>>>>> rx-frames: 128
>>>>>> rx-usecs-irq: 0
>>>>>> rx-frames-irq: 0
>>>>>>
>>>>>> tx-usecs: 8
>>>>>> tx-frames: 128
>>>>>> tx-usecs-irq: 0
>>>>>> tx-frames-irq: 0
>>>>>>
>>>>>> rx-usecs-low: 0
>>>>>> rx-frame-low: 0
>>>>>> tx-usecs-low: 0
>>>>>> tx-frame-low: 0
>>>>>>
>>>>>> rx-usecs-high: 0
>>>>>> rx-frame-high: 0
>>>>>> tx-usecs-high: 0
>>>>>> tx-frame-high: 0
>>>>>>
>>>>>> ethtool -g enp175s0f0
>>>>>> Ring parameters for enp175s0f0:
>>>>>> Pre-set maximums:
>>>>>> RX: 8192
>>>>>> RX Mini: 0
>>>>>> RX Jumbo: 0
>>>>>> TX: 8192
>>>>>> Current hardware settings:
>>>>>> RX: 4096
>>>>>> RX Mini: 0
>>>>>> RX Jumbo: 0
>>>>>> TX: 4096
>>>>>>
>>>> Also tuned the coalesce params a little - the best for this config are:
>>>> ethtool -c enp175s0f0
>>>> Coalesce parameters for enp175s0f0:
>>>> Adaptive RX: off TX: off
>>>> stats-block-usecs: 0
>>>> sample-interval: 0
>>>> pkt-rate-low: 0
>>>> pkt-rate-high: 0
>>>> dmac: 32573
>>>>
>>>> rx-usecs: 40
>>>> rx-frames: 128
>>>> rx-usecs-irq: 0
>>>> rx-frames-irq: 0
>>>>
>>>> tx-usecs: 8
>>>> tx-frames: 8
>>>> tx-usecs-irq: 0
>>>> tx-frames-irq: 0
>>>>
>>>> rx-usecs-low: 0
>>>> rx-frame-low: 0
>>>> tx-usecs-low: 0
>>>> tx-frame-low: 0
>>>>
>>>> rx-usecs-high: 0
>>>> rx-frame-high: 0
>>>> tx-usecs-high: 0
>>>> tx-frame-high: 0
>>>>
>>>>
>>>> Fewer drops on the RX side - and more pps forwarded overall.
>>>>
>>> how much improvement? maybe we can improve our adaptive rx coal to be
>>> efficient for this workload.
>>>
>>>
>> I can prepare more ethtool stats to compare - but normally I tested
>> with simple icmp forwarded from interface to interface
>> - before changing coalescence params:
>> adaptive-rx off rx-usecs 384 rx-frames 128
>> 3% loss for icmp
>> - after changing to:
>> adaptive-rx off rx-usecs 40 rx-frames 128 adaptive-tx off tx-usecs 8
>> tx-frames 8
>> 2% loss for icmp
>>
>> But yes - to know for sure I will need to compare rx/tx counters from
>> ethtool + /proc/net/dev
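Comparing counters over an interval, as suggested above, is easy to script; a minimal sketch (the interface names and the `/proc/net/dev` column layout are assumed, so verify against `proc(5)` on the target kernel):

```python
import time

# /proc/net/dev fields after the "iface:" prefix: rx bytes, packets,
# errs, drop, fifo, frame, compressed, multicast, then the tx columns
# starting at field index 8 (bytes, packets, ...).
def parse_counters(devtext, iface):
    for line in devtext.splitlines():
        line = line.strip()
        if line.startswith(iface + ":"):
            f = line.split(":", 1)[1].split()
            return {"rx_bytes": int(f[0]), "rx_pkts": int(f[1]),
                    "tx_bytes": int(f[8]), "tx_pkts": int(f[9])}
    raise ValueError(f"{iface} not found")

def rates(iface, interval=1.0):
    # Per-second deltas, same idea as what bwm-ng reports above.
    read = lambda: parse_counters(open("/proc/net/dev").read(), iface)
    a = read(); time.sleep(interval); b = read()
    return {k: (b[k] - a[k]) / interval for k in a}
```

Running `rates("enp175s0f0")` alongside `ethtool -S` snapshots before/after a coalescing change would give the drop/pps comparison in one place.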
>>
>>
>> Was trying to turn on adaptive-tx+rx - but hit 100% saturation at
>> 43Gbit/s RX / 43Gbit/s TX
>>
>>
>
>
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-01 20:23 ` Saeed Mahameed
@ 2018-11-02 5:23 ` Aaron Lu
2018-11-02 11:40 ` Jesper Dangaard Brouer
0 siblings, 1 reply; 77+ messages in thread
From: Aaron Lu @ 2018-11-02 5:23 UTC (permalink / raw)
To: Saeed Mahameed
Cc: brouer, pstaszewski, eric.dumazet, netdev, Tariq Toukan,
ilias.apalodimas, yoel, mgorman
On Thu, Nov 01, 2018 at 08:23:19PM +0000, Saeed Mahameed wrote:
> On Thu, 2018-11-01 at 23:27 +0800, Aaron Lu wrote:
> > On Thu, Nov 01, 2018 at 10:22:13AM +0100, Jesper Dangaard Brouer
> > wrote:
> > ... ...
> > > Section copied out:
> > >
> > > mlx5e_poll_tx_cq
> > > |
> > > --16.34%--napi_consume_skb
> > > |
> > > |--12.65%--__free_pages_ok
> > > | |
> > > | --11.86%--free_one_page
> > > | |
> > > | |--10.10%
> > > --queued_spin_lock_slowpath
> > > | |
> > > | --0.65%--_raw_spin_lock
> >
> > This callchain looks like it is freeing higher order pages than order
> > 0:
> > __free_pages_ok is only called for pages whose order are bigger than
> > 0.
>
> mlx5 rx uses only order 0 pages, so i don't know where these high order
> tx SKBs are coming from..
Perhaps here:
__netdev_alloc_skb(), __napi_alloc_skb(), __netdev_alloc_frag() and
__napi_alloc_frag() will all call page_frag_alloc(), which will use
__page_frag_cache_refill() to get an order-3 page if possible, or fall
back to an order-0 page if an order-3 page is not available.
I'm not sure if your workload will use the above code path though.
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-02 5:23 ` Aaron Lu
@ 2018-11-02 11:40 ` Jesper Dangaard Brouer
2018-11-02 14:20 ` Aaron Lu
0 siblings, 1 reply; 77+ messages in thread
From: Jesper Dangaard Brouer @ 2018-11-02 11:40 UTC (permalink / raw)
To: Aaron Lu
Cc: Saeed Mahameed, pstaszewski, eric.dumazet, netdev, Tariq Toukan,
ilias.apalodimas, yoel, mgorman, brouer
On Fri, 2 Nov 2018 13:23:56 +0800
Aaron Lu <aaron.lu@intel.com> wrote:
> On Thu, Nov 01, 2018 at 08:23:19PM +0000, Saeed Mahameed wrote:
> > On Thu, 2018-11-01 at 23:27 +0800, Aaron Lu wrote:
> > > On Thu, Nov 01, 2018 at 10:22:13AM +0100, Jesper Dangaard Brouer
> > > wrote:
> > > ... ...
> > > > Section copied out:
> > > >
> > > > mlx5e_poll_tx_cq
> > > > |
> > > > --16.34%--napi_consume_skb
> > > > |
> > > > |--12.65%--__free_pages_ok
> > > > | |
> > > > | --11.86%--free_one_page
> > > > | |
> > > > | |--10.10%
> > > > --queued_spin_lock_slowpath
> > > > | |
> > > > | --0.65%--_raw_spin_lock
> > >
> > > This callchain looks like it is freeing higher order pages than order
> > > 0:
> > > __free_pages_ok is only called for pages whose order are bigger than
> > > 0.
> >
> > mlx5 rx uses only order 0 pages, so i don't know where these high order
> > tx SKBs are coming from..
>
> Perhaps here:
> __netdev_alloc_skb(), __napi_alloc_skb(), __netdev_alloc_frag() and
> __napi_alloc_frag() will all call page_frag_alloc(), which will use
> __page_frag_cache_refill() to get an order 3 page if possible, or fall
> back to an order 0 page if order 3 page is not available.
>
> I'm not sure if your workload will use the above code path though.
TL;DR: these are order-0 pages (code walk-through proof below)
To Aaron, the network stack *can* call __free_pages_ok() with order-0
pages, via:
static void skb_free_head(struct sk_buff *skb)
{
unsigned char *head = skb->head;
if (skb->head_frag)
skb_free_frag(head);
else
kfree(head);
}
static inline void skb_free_frag(void *addr)
{
page_frag_free(addr);
}
/*
* Frees a page fragment allocated out of either a compound or order 0 page.
*/
void page_frag_free(void *addr)
{
struct page *page = virt_to_head_page(addr);
if (unlikely(put_page_testzero(page)))
__free_pages_ok(page, compound_order(page));
}
EXPORT_SYMBOL(page_frag_free);
Notice that the mlx5 driver supports several RX-memory models, so it
can be hard to follow, but from the perf report output we can see that
it uses mlx5e_skb_from_cqe_linear, which uses build_skb.
--13.63%--mlx5e_skb_from_cqe_linear
|
--5.02%--build_skb
|
--1.85%--__build_skb
|
--1.00%--kmem_cache_alloc
/* build_skb() is wrapper over __build_skb(), that specifically
* takes care of skb->head and skb->pfmemalloc
* This means that if @frag_size is not zero, then @data must be backed
* by a page fragment, not kmalloc() or vmalloc()
*/
struct sk_buff *build_skb(void *data, unsigned int frag_size)
{
struct sk_buff *skb = __build_skb(data, frag_size);
if (skb && frag_size) {
skb->head_frag = 1;
if (page_is_pfmemalloc(virt_to_head_page(data)))
skb->pfmemalloc = 1;
}
return skb;
}
EXPORT_SYMBOL(build_skb);
It still doesn't prove that the @data is backed by an order-0 page.
The mlx5 driver uses mlx5e_page_alloc_mapped ->
page_pool_dev_alloc_pages(), and I can see the perf report using
__page_pool_alloc_pages_slow().
The setup for page_pool in mlx5 uses order=0.
/* Create a page_pool and register it with rxq */
pp_params.order = 0;
pp_params.flags = 0; /* No-internal DMA mapping in page_pool */
pp_params.pool_size = pool_size;
pp_params.nid = cpu_to_node(c->cpu);
pp_params.dev = c->pdev;
pp_params.dma_dir = rq->buff.map_dir;
/* page_pool can be used even when there is no rq->xdp_prog,
* given page_pool does not handle DMA mapping there is no
* required state to clear. And page_pool gracefully handle
* elevated refcnt.
*/
rq->page_pool = page_pool_create(&pp_params);
if (IS_ERR(rq->page_pool)) {
err = PTR_ERR(rq->page_pool);
rq->page_pool = NULL;
goto err_free;
}
err = xdp_rxq_info_reg_mem_model(&rq->xdp_rxq,
MEM_TYPE_PAGE_POOL, rq->page_pool);
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-02 11:40 ` Jesper Dangaard Brouer
@ 2018-11-02 14:20 ` Aaron Lu
2018-11-02 19:02 ` Paweł Staszewski
2018-11-03 12:53 ` Jesper Dangaard Brouer
0 siblings, 2 replies; 77+ messages in thread
From: Aaron Lu @ 2018-11-02 14:20 UTC (permalink / raw)
To: Jesper Dangaard Brouer
Cc: Saeed Mahameed, pstaszewski, eric.dumazet, netdev, Tariq Toukan,
ilias.apalodimas, yoel, mgorman
On Fri, Nov 02, 2018 at 12:40:37PM +0100, Jesper Dangaard Brouer wrote:
> On Fri, 2 Nov 2018 13:23:56 +0800
> Aaron Lu <aaron.lu@intel.com> wrote:
>
> > On Thu, Nov 01, 2018 at 08:23:19PM +0000, Saeed Mahameed wrote:
> > > On Thu, 2018-11-01 at 23:27 +0800, Aaron Lu wrote:
> > > > On Thu, Nov 01, 2018 at 10:22:13AM +0100, Jesper Dangaard Brouer
> > > > wrote:
> > > > ... ...
> > > > > Section copied out:
> > > > >
> > > > > mlx5e_poll_tx_cq
> > > > > |
> > > > > --16.34%--napi_consume_skb
> > > > > |
> > > > > |--12.65%--__free_pages_ok
> > > > > | |
> > > > > | --11.86%--free_one_page
> > > > > | |
> > > > > | |--10.10%
> > > > > --queued_spin_lock_slowpath
> > > > > | |
> > > > > | --0.65%--_raw_spin_lock
> > > >
> > > > This callchain looks like it is freeing higher order pages than order
> > > > 0:
> > > > __free_pages_ok is only called for pages whose order are bigger than
> > > > 0.
> > >
> > > mlx5 rx uses only order 0 pages, so i don't know where these high order
> > > tx SKBs are coming from..
> >
> > Perhaps here:
> > __netdev_alloc_skb(), __napi_alloc_skb(), __netdev_alloc_frag() and
> > __napi_alloc_frag() will all call page_frag_alloc(), which will use
> > __page_frag_cache_refill() to get an order 3 page if possible, or fall
> > back to an order 0 page if order 3 page is not available.
> >
> > I'm not sure if your workload will use the above code path though.
>
> TL;DR: this is order-0 pages (code-walk trough proof below)
>
> To Aaron, the network stack *can* call __free_pages_ok() with order-0
> pages, via:
>
> static void skb_free_head(struct sk_buff *skb)
> {
> unsigned char *head = skb->head;
>
> if (skb->head_frag)
> skb_free_frag(head);
> else
> kfree(head);
> }
>
> static inline void skb_free_frag(void *addr)
> {
> page_frag_free(addr);
> }
>
> /*
> * Frees a page fragment allocated out of either a compound or order 0 page.
> */
> void page_frag_free(void *addr)
> {
> struct page *page = virt_to_head_page(addr);
>
> if (unlikely(put_page_testzero(page)))
> __free_pages_ok(page, compound_order(page));
> }
> EXPORT_SYMBOL(page_frag_free);
I think the problem is here - order-0 pages are freed directly to the
buddy allocator, bypassing the per-cpu-pages lists. This might be the
reason lock contention appeared on the free path. Can someone apply the
diff below and see if the lock contention is gone?
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e2ef1c17942f..65c0ae13215a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4554,8 +4554,14 @@ void page_frag_free(void *addr)
{
struct page *page = virt_to_head_page(addr);
- if (unlikely(put_page_testzero(page)))
- __free_pages_ok(page, compound_order(page));
+ if (unlikely(put_page_testzero(page))) {
+ unsigned int order = compound_order(page);
+
+ if (order == 0)
+ free_unref_page(page);
+ else
+ __free_pages_ok(page, order);
+ }
}
EXPORT_SYMBOL(page_frag_free);
> Notice for the mlx5 driver it support several RX-memory models, so it
> can be hard to follow, but from the perf report output we can see that
> is uses mlx5e_skb_from_cqe_linear, which use build_skb.
>
> --13.63%--mlx5e_skb_from_cqe_linear
> |
> --5.02%--build_skb
> |
> --1.85%--__build_skb
> |
> --1.00%--kmem_cache_alloc
>
> /* build_skb() is wrapper over __build_skb(), that specifically
> * takes care of skb->head and skb->pfmemalloc
> * This means that if @frag_size is not zero, then @data must be backed
> * by a page fragment, not kmalloc() or vmalloc()
> */
> struct sk_buff *build_skb(void *data, unsigned int frag_size)
> {
> struct sk_buff *skb = __build_skb(data, frag_size);
>
> if (skb && frag_size) {
> skb->head_frag = 1;
> if (page_is_pfmemalloc(virt_to_head_page(data)))
> skb->pfmemalloc = 1;
> }
> return skb;
> }
> EXPORT_SYMBOL(build_skb);
>
> It still doesn't prove, that the @data is backed by by a order-0 page.
> For the mlx5 driver is uses mlx5e_page_alloc_mapped ->
> page_pool_dev_alloc_pages(), and I can see perf report using
> __page_pool_alloc_pages_slow().
>
> The setup for page_pool in mlx5 uses order=0.
>
> /* Create a page_pool and register it with rxq */
> pp_params.order = 0;
> pp_params.flags = 0; /* No-internal DMA mapping in page_pool */
> pp_params.pool_size = pool_size;
> pp_params.nid = cpu_to_node(c->cpu);
> pp_params.dev = c->pdev;
> pp_params.dma_dir = rq->buff.map_dir;
>
> /* page_pool can be used even when there is no rq->xdp_prog,
> * given page_pool does not handle DMA mapping there is no
> * required state to clear. And page_pool gracefully handle
> * elevated refcnt.
> */
> rq->page_pool = page_pool_create(&pp_params);
> if (IS_ERR(rq->page_pool)) {
> err = PTR_ERR(rq->page_pool);
> rq->page_pool = NULL;
> goto err_free;
> }
> err = xdp_rxq_info_reg_mem_model(&rq->xdp_rxq,
> MEM_TYPE_PAGE_POOL, rq->page_pool);
Thanks for the detailed analysis, I'll need more time to understand the
whole picture :-)
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-02 14:20 ` Aaron Lu
@ 2018-11-02 19:02 ` Paweł Staszewski
2018-11-03 0:16 ` Paweł Staszewski
2018-11-03 12:53 ` Jesper Dangaard Brouer
1 sibling, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-02 19:02 UTC (permalink / raw)
To: Aaron Lu, Jesper Dangaard Brouer
Cc: Saeed Mahameed, eric.dumazet, netdev, Tariq Toukan,
ilias.apalodimas, yoel, mgorman
W dniu 02.11.2018 o 15:20, Aaron Lu pisze:
> On Fri, Nov 02, 2018 at 12:40:37PM +0100, Jesper Dangaard Brouer wrote:
>> On Fri, 2 Nov 2018 13:23:56 +0800
>> Aaron Lu <aaron.lu@intel.com> wrote:
>>
>>> On Thu, Nov 01, 2018 at 08:23:19PM +0000, Saeed Mahameed wrote:
>>>> On Thu, 2018-11-01 at 23:27 +0800, Aaron Lu wrote:
>>>>> On Thu, Nov 01, 2018 at 10:22:13AM +0100, Jesper Dangaard Brouer
>>>>> wrote:
>>>>> ... ...
>>>>>> Section copied out:
>>>>>>
>>>>>> mlx5e_poll_tx_cq
>>>>>> |
>>>>>> --16.34%--napi_consume_skb
>>>>>> |
>>>>>> |--12.65%--__free_pages_ok
>>>>>> | |
>>>>>> | --11.86%--free_one_page
>>>>>> | |
>>>>>> | |--10.10%
>>>>>> --queued_spin_lock_slowpath
>>>>>> | |
>>>>>> | --0.65%--_raw_spin_lock
>>>>> This callchain looks like it is freeing higher order pages than order
>>>>> 0:
>>>>> __free_pages_ok is only called for pages whose order are bigger than
>>>>> 0.
>>>> mlx5 rx uses only order 0 pages, so i don't know where these high order
>>>> tx SKBs are coming from..
>>> Perhaps here:
>>> __netdev_alloc_skb(), __napi_alloc_skb(), __netdev_alloc_frag() and
>>> __napi_alloc_frag() will all call page_frag_alloc(), which will use
>>> __page_frag_cache_refill() to get an order 3 page if possible, or fall
>>> back to an order 0 page if order 3 page is not available.
>>>
>>> I'm not sure if your workload will use the above code path though.
>> TL;DR: this is order-0 pages (code-walk trough proof below)
>>
>> To Aaron, the network stack *can* call __free_pages_ok() with order-0
>> pages, via:
>>
>> static void skb_free_head(struct sk_buff *skb)
>> {
>> unsigned char *head = skb->head;
>>
>> if (skb->head_frag)
>> skb_free_frag(head);
>> else
>> kfree(head);
>> }
>>
>> static inline void skb_free_frag(void *addr)
>> {
>> page_frag_free(addr);
>> }
>>
>> /*
>> * Frees a page fragment allocated out of either a compound or order 0 page.
>> */
>> void page_frag_free(void *addr)
>> {
>> struct page *page = virt_to_head_page(addr);
>>
>> if (unlikely(put_page_testzero(page)))
>> __free_pages_ok(page, compound_order(page));
>> }
>> EXPORT_SYMBOL(page_frag_free);
> I think here is a problem - order 0 pages are freed directly to buddy,
> bypassing per-cpu-pages. This might be the reason lock contention
> appeared on free path. Can someone apply below diff and see if lock
> contention is gone?
Will test it tonight
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index e2ef1c17942f..65c0ae13215a 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4554,8 +4554,14 @@ void page_frag_free(void *addr)
> {
> struct page *page = virt_to_head_page(addr);
>
> - if (unlikely(put_page_testzero(page)))
> - __free_pages_ok(page, compound_order(page));
> + if (unlikely(put_page_testzero(page))) {
> + unsigned int order = compound_order(page);
> +
> + if (order == 0)
> + free_unref_page(page);
> + else
> + __free_pages_ok(page, order);
> + }
> }
> EXPORT_SYMBOL(page_frag_free);
>
>> Notice for the mlx5 driver it support several RX-memory models, so it
>> can be hard to follow, but from the perf report output we can see that
>> is uses mlx5e_skb_from_cqe_linear, which use build_skb.
>>
>> --13.63%--mlx5e_skb_from_cqe_linear
>> |
>> --5.02%--build_skb
>> |
>> --1.85%--__build_skb
>> |
>> --1.00%--kmem_cache_alloc
>>
>> /* build_skb() is wrapper over __build_skb(), that specifically
>> * takes care of skb->head and skb->pfmemalloc
>> * This means that if @frag_size is not zero, then @data must be backed
>> * by a page fragment, not kmalloc() or vmalloc()
>> */
>> struct sk_buff *build_skb(void *data, unsigned int frag_size)
>> {
>> struct sk_buff *skb = __build_skb(data, frag_size);
>>
>> if (skb && frag_size) {
>> skb->head_frag = 1;
>> if (page_is_pfmemalloc(virt_to_head_page(data)))
>> skb->pfmemalloc = 1;
>> }
>> return skb;
>> }
>> EXPORT_SYMBOL(build_skb);
>>
>> It still doesn't prove, that the @data is backed by by a order-0 page.
>> For the mlx5 driver is uses mlx5e_page_alloc_mapped ->
>> page_pool_dev_alloc_pages(), and I can see perf report using
>> __page_pool_alloc_pages_slow().
>>
>> The setup for page_pool in mlx5 uses order=0.
>>
>> /* Create a page_pool and register it with rxq */
>> pp_params.order = 0;
>> pp_params.flags = 0; /* No-internal DMA mapping in page_pool */
>> pp_params.pool_size = pool_size;
>> pp_params.nid = cpu_to_node(c->cpu);
>> pp_params.dev = c->pdev;
>> pp_params.dma_dir = rq->buff.map_dir;
>>
>> /* page_pool can be used even when there is no rq->xdp_prog,
>> * given page_pool does not handle DMA mapping there is no
>> * required state to clear. And page_pool gracefully handle
>> * elevated refcnt.
>> */
>> rq->page_pool = page_pool_create(&pp_params);
>> if (IS_ERR(rq->page_pool)) {
>> err = PTR_ERR(rq->page_pool);
>> rq->page_pool = NULL;
>> goto err_free;
>> }
>> err = xdp_rxq_info_reg_mem_model(&rq->xdp_rxq,
>> MEM_TYPE_PAGE_POOL, rq->page_pool);
> Thanks for the detailed analysis, I'll need more time to understand the
> whole picture :-)
>
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-02 19:02 ` Paweł Staszewski
@ 2018-11-03 0:16 ` Paweł Staszewski
2018-11-03 12:01 ` Paweł Staszewski
2018-11-03 12:58 ` Jesper Dangaard Brouer
0 siblings, 2 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-03 0:16 UTC (permalink / raw)
To: Aaron Lu, Jesper Dangaard Brouer
Cc: Saeed Mahameed, eric.dumazet, netdev, Tariq Toukan,
ilias.apalodimas, yoel, mgorman
W dniu 02.11.2018 o 20:02, Paweł Staszewski pisze:
>
>
> W dniu 02.11.2018 o 15:20, Aaron Lu pisze:
>> On Fri, Nov 02, 2018 at 12:40:37PM +0100, Jesper Dangaard Brouer wrote:
>>> On Fri, 2 Nov 2018 13:23:56 +0800
>>> Aaron Lu <aaron.lu@intel.com> wrote:
>>>
>>>> On Thu, Nov 01, 2018 at 08:23:19PM +0000, Saeed Mahameed wrote:
>>>>> On Thu, 2018-11-01 at 23:27 +0800, Aaron Lu wrote:
>>>>>> On Thu, Nov 01, 2018 at 10:22:13AM +0100, Jesper Dangaard Brouer
>>>>>> wrote:
>>>>>> ... ...
>>>>>>> Section copied out:
>>>>>>>
>>>>>>> mlx5e_poll_tx_cq
>>>>>>> |
>>>>>>> --16.34%--napi_consume_skb
>>>>>>> |
>>>>>>> |--12.65%--__free_pages_ok
>>>>>>> | |
>>>>>>> | --11.86%--free_one_page
>>>>>>> | |
>>>>>>> | |--10.10%
>>>>>>> --queued_spin_lock_slowpath
>>>>>>> | |
>>>>>>> | --0.65%--_raw_spin_lock
>>>>>> This callchain looks like it is freeing higher order pages than
>>>>>> order
>>>>>> 0:
>>>>>> __free_pages_ok is only called for pages whose order are bigger than
>>>>>> 0.
>>>>> mlx5 rx uses only order 0 pages, so i don't know where these high
>>>>> order
>>>>> tx SKBs are coming from..
>>>> Perhaps here:
>>>> __netdev_alloc_skb(), __napi_alloc_skb(), __netdev_alloc_frag() and
>>>> __napi_alloc_frag() will all call page_frag_alloc(), which will use
>>>> __page_frag_cache_refill() to get an order 3 page if possible, or fall
>>>> back to an order 0 page if order 3 page is not available.
>>>>
>>>> I'm not sure if your workload will use the above code path though.
>>> TL;DR: this is order-0 pages (code-walk trough proof below)
>>>
>>> To Aaron, the network stack *can* call __free_pages_ok() with order-0
>>> pages, via:
>>>
>>> static void skb_free_head(struct sk_buff *skb)
>>> {
>>> unsigned char *head = skb->head;
>>>
>>> if (skb->head_frag)
>>> skb_free_frag(head);
>>> else
>>> kfree(head);
>>> }
>>>
>>> static inline void skb_free_frag(void *addr)
>>> {
>>> page_frag_free(addr);
>>> }
>>>
>>> /*
>>> * Frees a page fragment allocated out of either a compound or
>>> order 0 page.
>>> */
>>> void page_frag_free(void *addr)
>>> {
>>> struct page *page = virt_to_head_page(addr);
>>>
>>> if (unlikely(put_page_testzero(page)))
>>> __free_pages_ok(page, compound_order(page));
>>> }
>>> EXPORT_SYMBOL(page_frag_free);
>> I think here is a problem - order 0 pages are freed directly to buddy,
>> bypassing per-cpu-pages. This might be the reason lock contention
>> appeared on free path. Can someone apply below diff and see if lock
>> contention is gone?
> Will test it tonight
>
Patch applied.
perf report:
https://ufile.io/sytfh
But I also need to wait for more traffic - currently the CPUs are mostly idle.
>
>
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index e2ef1c17942f..65c0ae13215a 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -4554,8 +4554,14 @@ void page_frag_free(void *addr)
>> {
>> struct page *page = virt_to_head_page(addr);
>> - if (unlikely(put_page_testzero(page)))
>> - __free_pages_ok(page, compound_order(page));
>> + if (unlikely(put_page_testzero(page))) {
>> + unsigned int order = compound_order(page);
>> +
>> + if (order == 0)
>> + free_unref_page(page);
>> + else
>> + __free_pages_ok(page, order);
>> + }
>> }
>> EXPORT_SYMBOL(page_frag_free);
>>> Notice for the mlx5 driver it support several RX-memory models, so it
>>> can be hard to follow, but from the perf report output we can see that
>>> is uses mlx5e_skb_from_cqe_linear, which use build_skb.
>>>
>>> --13.63%--mlx5e_skb_from_cqe_linear
>>> |
>>> --5.02%--build_skb
>>> |
>>> --1.85%--__build_skb
>>> |
>>> --1.00%--kmem_cache_alloc
>>>
>>> /* build_skb() is wrapper over __build_skb(), that specifically
>>> * takes care of skb->head and skb->pfmemalloc
>>> * This means that if @frag_size is not zero, then @data must be
>>> backed
>>> * by a page fragment, not kmalloc() or vmalloc()
>>> */
>>> struct sk_buff *build_skb(void *data, unsigned int frag_size)
>>> {
>>> struct sk_buff *skb = __build_skb(data, frag_size);
>>>
>>> if (skb && frag_size) {
>>> skb->head_frag = 1;
>>> if (page_is_pfmemalloc(virt_to_head_page(data)))
>>> skb->pfmemalloc = 1;
>>> }
>>> return skb;
>>> }
>>> EXPORT_SYMBOL(build_skb);
>>>
>>> It still doesn't prove, that the @data is backed by by a order-0 page.
>>> For the mlx5 driver is uses mlx5e_page_alloc_mapped ->
>>> page_pool_dev_alloc_pages(), and I can see perf report using
>>> __page_pool_alloc_pages_slow().
>>>
>>> The setup for page_pool in mlx5 uses order=0.
>>>
>>> /* Create a page_pool and register it with rxq */
>>> pp_params.order = 0;
>>> pp_params.flags = 0; /* No-internal DMA mapping in page_pool */
>>> pp_params.pool_size = pool_size;
>>> pp_params.nid = cpu_to_node(c->cpu);
>>> pp_params.dev = c->pdev;
>>> pp_params.dma_dir = rq->buff.map_dir;
>>>
>>> /* page_pool can be used even when there is no rq->xdp_prog,
>>> * given page_pool does not handle DMA mapping there is no
>>> * required state to clear. And page_pool gracefully handle
>>> * elevated refcnt.
>>> */
>>> rq->page_pool = page_pool_create(&pp_params);
>>> if (IS_ERR(rq->page_pool)) {
>>> err = PTR_ERR(rq->page_pool);
>>> rq->page_pool = NULL;
>>> goto err_free;
>>> }
>>> err = xdp_rxq_info_reg_mem_model(&rq->xdp_rxq,
>>> MEM_TYPE_PAGE_POOL, rq->page_pool);
>> Thanks for the detailed analysis, I'll need more time to understand the
>> whole picture :-)
>>
>
>
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-01 20:37 ` Saeed Mahameed
2018-11-01 21:18 ` Paweł Staszewski
@ 2018-11-03 0:18 ` Paweł Staszewski
2018-11-08 19:12 ` Paweł Staszewski
1 sibling, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-03 0:18 UTC (permalink / raw)
To: Saeed Mahameed, netdev
W dniu 01.11.2018 o 21:37, Saeed Mahameed pisze:
> On Thu, 2018-11-01 at 12:09 +0100, Paweł Staszewski wrote:
>> W dniu 01.11.2018 o 10:50, Saeed Mahameed pisze:
>>> On Wed, 2018-10-31 at 22:57 +0100, Paweł Staszewski wrote:
>>>> Hi
>>>>
>>>> So maybee someone will be interested how linux kernel handles
>>>> normal
>>>> traffic (not pktgen :) )
>>>>
>>>>
>>>> Server HW configuration:
>>>>
>>>> CPU : Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
>>>>
>>>> NIC's: 2x 100G Mellanox ConnectX-4 (connected to x16 pcie 8GT)
>>>>
>>>>
>>>> Server software:
>>>>
>>>> FRR - as routing daemon
>>>>
>>>> enp175s0f0 (100G) - 16 vlans from upstreams (28 RSS binded to
>>>> local
>>>> numa
>>>> node)
>>>>
>>>> enp175s0f1 (100G) - 343 vlans to clients (28 RSS binded to local
>>>> numa
>>>> node)
>>>>
>>>>
>>>> Maximum traffic that server can handle:
>>>>
>>>> Bandwidth
>>>>
>>>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>>>> input: /proc/net/dev type: rate
>>>> \ iface Rx Tx Total
>>>> =================================================================
>>>> ====
>>>> =========
>>>> enp175s0f1: 28.51 Gb/s 37.24
>>>> Gb/s
>>>> 65.74 Gb/s
>>>> enp175s0f0: 38.07 Gb/s 28.44
>>>> Gb/s
>>>> 66.51 Gb/s
>>>> ---------------------------------------------------------------
>>>> ----
>>>> -----------
>>>> total: 66.58 Gb/s 65.67
>>>> Gb/s
>>>> 132.25 Gb/s
>>>>
>>>>
>>>> Packets per second:
>>>>
>>>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>>>> input: /proc/net/dev type: rate
>>>> - iface Rx Tx Total
>>>> =================================================================
>>>> ====
>>>> =========
>>>> enp175s0f1: 5248589.00 P/s 3486617.75 P/s
>>>> 8735207.00 P/s
>>>> enp175s0f0: 3557944.25 P/s 5232516.00 P/s
>>>> 8790460.00 P/s
>>>> ---------------------------------------------------------------
>>>> ----
>>>> -----------
>>>> total: 8806533.00 P/s 8719134.00 P/s
>>>> 17525668.00 P/s
>>>>
>>>>
>>>> After reaching that limits nics on the upstream side (more RX
>>>> traffic)
>>>> start to drop packets
>>>>
>>>>
>>>> I just dont understand that server can't handle more bandwidth
>>>> (~40Gbit/s is limit where all cpu's are 100% util) - where pps on
>>>> RX
>>>> side are increasing.
>>>>
>>> Where do you see 40 Gb/s ? you showed that both ports on the same
>>> NIC (
>>> same pcie link) are doing 66.58 Gb/s (RX) + 65.67 Gb/s (TX) =
>>> 132.25
>>> Gb/s which aligns with your pcie link limit, what am i missing ?
>> Hmm yes, that was my concern also - because I can't find information
>> anywhere
>> about whether that bandwidth is uni- or bidirectional - so if 126Gbit for x16
>> 8GT
>> is unidir - then bidir will be 126/2 ~68Gbit - which will fit total
>> bw
>> on both ports
> i think it is bidir
So yes - we are hitting another problem there, I think. PCIe is most
probably bidirectional: max BW 126Gbit per direction, so 126Gbit RX and at the
same time TX should be 126Gbit.
>> This can maybe also explain why the cpu load rises rapidly from
>> 120Gbit/s in total to 132Gbit (the counters of bwm-ng are from /proc/net,
>> so there can be some error in reading them when offloading (gro/gso/tso)
>> on the nic's is enabled).
>>
>>>> Was thinking that maybee reached some pcie x16 limit - but x16
>>>> 8GT
>>>> is
>>>> 126Gbit - and also when testing with pktgen i can reach more bw
>>>> and
>>>> pps
>>>> (like 4x more comparing to normal internet traffic)
>>>>
>>> Are you forwarding when using pktgen as well or you just testing
>>> the RX
>>> side pps ?
>> Yes pktgen was tested on single port RX
>> Can check also forwarding to eliminate pciex limits
>>
> So this explains why you have more RX pps, since tx is idle and pcie
> will be free to do only rx.
>
> [...]
>
>
>>>> ethtool -S enp175s0f1
>>>> NIC statistics:
>>>> rx_packets: 173730800927
>>>> rx_bytes: 99827422751332
>>>> tx_packets: 142532009512
>>>> tx_bytes: 184633045911222
>>>> tx_tso_packets: 25989113891
>>>> tx_tso_bytes: 132933363384458
>>>> tx_tso_inner_packets: 0
>>>> tx_tso_inner_bytes: 0
>>>> tx_added_vlan_packets: 74630239613
>>>> tx_nop: 2029817748
>>>> rx_lro_packets: 0
>>>> rx_lro_bytes: 0
>>>> rx_ecn_mark: 0
>>>> rx_removed_vlan_packets: 173730800927
>>>> rx_csum_unnecessary: 0
>>>> rx_csum_none: 434357
>>>> rx_csum_complete: 173730366570
>>>> rx_csum_unnecessary_inner: 0
>>>> rx_xdp_drop: 0
>>>> rx_xdp_redirect: 0
>>>> rx_xdp_tx_xmit: 0
>>>> rx_xdp_tx_full: 0
>>>> rx_xdp_tx_err: 0
>>>> rx_xdp_tx_cqe: 0
>>>> tx_csum_none: 38260960853
>>>> tx_csum_partial: 36369278774
>>>> tx_csum_partial_inner: 0
>>>> tx_queue_stopped: 1
>>>> tx_queue_dropped: 0
>>>> tx_xmit_more: 748638099
>>>> tx_recover: 0
>>>> tx_cqes: 73881645031
>>>> tx_queue_wake: 1
>>>> tx_udp_seg_rem: 0
>>>> tx_cqe_err: 0
>>>> tx_xdp_xmit: 0
>>>> tx_xdp_full: 0
>>>> tx_xdp_err: 0
>>>> tx_xdp_cqes: 0
>>>> rx_wqe_err: 0
>>>> rx_mpwqe_filler_cqes: 0
>>>> rx_mpwqe_filler_strides: 0
>>>> rx_buff_alloc_err: 0
>>>> rx_cqe_compress_blks: 0
>>>> rx_cqe_compress_pkts: 0
>>> If this is a pcie bottleneck it might be useful to enable CQE
>>> compression (to reduce PCIe completion descriptors transactions)
>>> you should see the above rx_cqe_compress_pkts increasing when
>>> enabled.
>>>
>>> $ ethtool --set-priv-flags enp175s0f1 rx_cqe_compress on
>>> $ ethtool --show-priv-flags enp175s0f1
>>> Private flags for p6p1:
>>> rx_cqe_moder : on
>>> cqe_moder : off
>>> rx_cqe_compress : on
>>> ...
>>>
>>> try this on both interfaces.
>> Done
>> ethtool --show-priv-flags enp175s0f1
>> Private flags for enp175s0f1:
>> rx_cqe_moder : on
>> tx_cqe_moder : off
>> rx_cqe_compress : on
>> rx_striding_rq : off
>> rx_no_csum_complete: off
>>
>> ethtool --show-priv-flags enp175s0f0
>> Private flags for enp175s0f0:
>> rx_cqe_moder : on
>> tx_cqe_moder : off
>> rx_cqe_compress : on
>> rx_striding_rq : off
>> rx_no_csum_complete: off
>>
> did it help reduce the load on the pcie ? do you see more pps ?
> what is the ratio between rx_cqe_compress_pkts and over all rx packets
> ?
>
> [...]
>
>>>> ethtool -S enp175s0f0
>>>> NIC statistics:
>>>> rx_packets: 141574897253
>>>> rx_bytes: 184445040406258
>>>> tx_packets: 172569543894
>>>> tx_bytes: 99486882076365
>>>> tx_tso_packets: 9367664195
>>>> tx_tso_bytes: 56435233992948
>>>> tx_tso_inner_packets: 0
>>>> tx_tso_inner_bytes: 0
>>>> tx_added_vlan_packets: 141297671626
>>>> tx_nop: 2102916272
>>>> rx_lro_packets: 0
>>>> rx_lro_bytes: 0
>>>> rx_ecn_mark: 0
>>>> rx_removed_vlan_packets: 141574897252
>>>> rx_csum_unnecessary: 0
>>>> rx_csum_none: 23135854
>>>> rx_csum_complete: 141551761398
>>>> rx_csum_unnecessary_inner: 0
>>>> rx_xdp_drop: 0
>>>> rx_xdp_redirect: 0
>>>> rx_xdp_tx_xmit: 0
>>>> rx_xdp_tx_full: 0
>>>> rx_xdp_tx_err: 0
>>>> rx_xdp_tx_cqe: 0
>>>> tx_csum_none: 127934791664
>>> It is a good idea to look into this, tx is not requesting hw tx
>>> csumming for a lot of packets, maybe you are wasting a lot of cpu
>>> on
>>> calculating csum, or maybe this is just the rx csum complete..
>>>
>>>> tx_csum_partial: 13362879974
>>>> tx_csum_partial_inner: 0
>>>> tx_queue_stopped: 232561
>>> TX queues are stalling, which could be an indication of the pcie
>>> bottleneck.
>>>
>>>> tx_queue_dropped: 0
>>>> tx_xmit_more: 1266021946
>>>> tx_recover: 0
>>>> tx_cqes: 140031716469
>>>> tx_queue_wake: 232561
>>>> tx_udp_seg_rem: 0
>>>> tx_cqe_err: 0
>>>> tx_xdp_xmit: 0
>>>> tx_xdp_full: 0
>>>> tx_xdp_err: 0
>>>> tx_xdp_cqes: 0
>>>> rx_wqe_err: 0
>>>> rx_mpwqe_filler_cqes: 0
>>>> rx_mpwqe_filler_strides: 0
>>>> rx_buff_alloc_err: 0
>>>> rx_cqe_compress_blks: 0
>>>> rx_cqe_compress_pkts: 0
>>>> rx_page_reuse: 0
>>>> rx_cache_reuse: 16625975793
>>>> rx_cache_full: 54161465914
>>>> rx_cache_empty: 258048
>>>> rx_cache_busy: 54161472735
>>>> rx_cache_waive: 0
>>>> rx_congst_umr: 0
>>>> rx_arfs_err: 0
>>>> ch_events: 40572621887
>>>> ch_poll: 40885650979
>>>> ch_arm: 40429276692
>>>> ch_aff_change: 0
>>>> ch_eq_rearm: 0
>>>> rx_out_of_buffer: 2791690
>>>> rx_if_down_packets: 74
>>>> rx_vport_unicast_packets: 141843476308
>>>> rx_vport_unicast_bytes: 185421265403318
>>>> tx_vport_unicast_packets: 172569484005
>>>> tx_vport_unicast_bytes: 100019940094298
>>>> rx_vport_multicast_packets: 85122935
>>>> rx_vport_multicast_bytes: 5761316431
>>>> tx_vport_multicast_packets: 6452
>>>> tx_vport_multicast_bytes: 643540
>>>> rx_vport_broadcast_packets: 22423624
>>>> rx_vport_broadcast_bytes: 1390127090
>>>> tx_vport_broadcast_packets: 22024
>>>> tx_vport_broadcast_bytes: 1321440
>>>> rx_vport_rdma_unicast_packets: 0
>>>> rx_vport_rdma_unicast_bytes: 0
>>>> tx_vport_rdma_unicast_packets: 0
>>>> tx_vport_rdma_unicast_bytes: 0
>>>> rx_vport_rdma_multicast_packets: 0
>>>> rx_vport_rdma_multicast_bytes: 0
>>>> tx_vport_rdma_multicast_packets: 0
>>>> tx_vport_rdma_multicast_bytes: 0
>>>> tx_packets_phy: 172569501577
>>>> rx_packets_phy: 142871314588
>>>> rx_crc_errors_phy: 0
>>>> tx_bytes_phy: 100710212814151
>>>> rx_bytes_phy: 187209224289564
>>>> tx_multicast_phy: 6452
>>>> tx_broadcast_phy: 22024
>>>> rx_multicast_phy: 85122933
>>>> rx_broadcast_phy: 22423623
>>>> rx_in_range_len_errors_phy: 2
>>>> rx_out_of_range_len_phy: 0
>>>> rx_oversize_pkts_phy: 0
>>>> rx_symbol_err_phy: 0
>>>> tx_mac_control_phy: 0
>>>> rx_mac_control_phy: 0
>>>> rx_unsupported_op_phy: 0
>>>> rx_pause_ctrl_phy: 0
>>>> tx_pause_ctrl_phy: 0
>>>> rx_discards_phy: 920161423
>>> Ok, this port seem to be suffering more, RX is congested, maybe due
>>> to
>>> the pcie bottleneck.
>> Yes this side is receiving more traffic - second port is +10G more tx
>>
> [...]
>
>
>>>> Average: 17 0.00 0.00 16.60 0.00 0.00 52.10
>>>> 0.00 0.00 0.00 31.30
>>>> Average: 18 0.00 0.00 13.90 0.00 0.00 61.20
>>>> 0.00 0.00 0.00 24.90
>>>> Average: 19 0.00 0.00 9.99 0.00 0.00 70.33
>>>> 0.00 0.00 0.00 19.68
>>>> Average: 20 0.00 0.00 9.00 0.00 0.00 73.00
>>>> 0.00 0.00 0.00 18.00
>>>> Average: 21 0.00 0.00 8.70 0.00 0.00 73.90
>>>> 0.00 0.00 0.00 17.40
>>>> Average: 22 0.00 0.00 15.42 0.00 0.00 58.56
>>>> 0.00 0.00 0.00 26.03
>>>> Average: 23 0.00 0.00 10.81 0.00 0.00 71.67
>>>> 0.00 0.00 0.00 17.52
>>>> Average: 24 0.00 0.00 10.00 0.00 0.00 71.80
>>>> 0.00 0.00 0.00 18.20
>>>> Average: 25 0.00 0.00 11.19 0.00 0.00 71.13
>>>> 0.00 0.00 0.00 17.68
>>>> Average: 26 0.00 0.00 11.00 0.00 0.00 70.80
>>>> 0.00 0.00 0.00 18.20
>>>> Average: 27 0.00 0.00 10.01 0.00 0.00 69.57
>>>> 0.00 0.00 0.00 20.42
>>> The numa cores are not at 100% utilization; you have around 20%
>>> idle on each one.
>> Yes - not 100% CPU - but the difference between 80% and 100% is like
>> pushing an additional 1-2 Gbit/s
>>
> yes, but it doesn't look like the bottleneck is the cpu, although it
> is close :)..
>
>>>> Average: 28 0.00 0.00 0.00 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 100.00
>>>> Average: 29 0.00 0.00 0.00 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 100.00
>>>> Average: 30 0.00 0.00 0.00 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 100.00
>>>> Average: 31 0.00 0.00 0.00 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 100.00
>>>> Average: 32 0.00 0.00 0.00 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 100.00
>>>> Average: 33 0.00 0.00 3.90 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 96.10
>>>> Average: 34 0.00 0.00 0.00 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 100.00
>>>> Average: 35 0.00 0.00 0.00 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 100.00
>>>> Average: 36 0.10 0.00 0.20 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 99.70
>>>> Average: 37 0.20 0.00 0.30 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 99.50
>>>> Average: 38 0.00 0.00 0.00 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 100.00
>>>> Average: 39 0.00 0.00 2.60 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 97.40
>>>> Average: 40 0.00 0.00 0.90 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 99.10
>>>> Average: 41 0.10 0.00 0.50 0.00 0.00 0.00
>>>> 0.00
>>>> 0.00 0.00 99.40
>>>> Average: 42 0.00 0.00 9.91 0.00 0.00 70.67
>>>> 0.00 0.00 0.00 19.42
>>>> Average: 43 0.00 0.00 15.90 0.00 0.00 57.50
>>>> 0.00 0.00 0.00 26.60
>>>> Average: 44 0.00 0.00 12.20 0.00 0.00 66.20
>>>> 0.00 0.00 0.00 21.60
>>>> Average: 45 0.00 0.00 12.00 0.00 0.00 67.50
>>>> 0.00 0.00 0.00 20.50
>>>> Average: 46 0.00 0.00 12.90 0.00 0.00 65.50
>>>> 0.00 0.00 0.00 21.60
>>>> Average: 47 0.00 0.00 14.59 0.00 0.00 60.84
>>>> 0.00 0.00 0.00 24.58
>>>> Average: 48 0.00 0.00 13.59 0.00 0.00 61.74
>>>> 0.00 0.00 0.00 24.68
>>>> Average: 49 0.00 0.00 18.36 0.00 0.00 53.29
>>>> 0.00 0.00 0.00 28.34
>>>> Average: 50 0.00 0.00 15.32 0.00 0.00 58.86
>>>> 0.00 0.00 0.00 25.83
>>>> Average: 51 0.00 0.00 17.60 0.00 0.00 55.20
>>>> 0.00 0.00 0.00 27.20
>>>> Average: 52 0.00 0.00 15.92 0.00 0.00 56.06
>>>> 0.00 0.00 0.00 28.03
>>>> Average: 53 0.00 0.00 13.00 0.00 0.00 62.30
>>>> 0.00 0.00 0.00 24.70
>>>> Average: 54 0.00 0.00 13.20 0.00 0.00 61.50
>>>> 0.00 0.00 0.00 25.30
>>>> Average: 55 0.00 0.00 14.59 0.00 0.00 58.64
>>>> 0.00 0.00 0.00 26.77
>>>>
>>>>
>>>> ethtool -k enp175s0f0
>>>> Features for enp175s0f0:
>>>> rx-checksumming: on
>>>> tx-checksumming: on
>>>> tx-checksum-ipv4: on
>>>> tx-checksum-ip-generic: off [fixed]
>>>> tx-checksum-ipv6: on
>>>> tx-checksum-fcoe-crc: off [fixed]
>>>> tx-checksum-sctp: off [fixed]
>>>> scatter-gather: on
>>>> tx-scatter-gather: on
>>>> tx-scatter-gather-fraglist: off [fixed]
>>>> tcp-segmentation-offload: on
>>>> tx-tcp-segmentation: on
>>>> tx-tcp-ecn-segmentation: off [fixed]
>>>> tx-tcp-mangleid-segmentation: off
>>>> tx-tcp6-segmentation: on
>>>> udp-fragmentation-offload: off
>>>> generic-segmentation-offload: on
>>>> generic-receive-offload: on
>>>> large-receive-offload: off [fixed]
>>>> rx-vlan-offload: on
>>>> tx-vlan-offload: on
>>>> ntuple-filters: off
>>>> receive-hashing: on
>>>> highdma: on [fixed]
>>>> rx-vlan-filter: on
>>>> vlan-challenged: off [fixed]
>>>> tx-lockless: off [fixed]
>>>> netns-local: off [fixed]
>>>> tx-gso-robust: off [fixed]
>>>> tx-fcoe-segmentation: off [fixed]
>>>> tx-gre-segmentation: on
>>>> tx-gre-csum-segmentation: on
>>>> tx-ipxip4-segmentation: off [fixed]
>>>> tx-ipxip6-segmentation: off [fixed]
>>>> tx-udp_tnl-segmentation: on
>>>> tx-udp_tnl-csum-segmentation: on
>>>> tx-gso-partial: on
>>>> tx-sctp-segmentation: off [fixed]
>>>> tx-esp-segmentation: off [fixed]
>>>> tx-udp-segmentation: on
>>>> fcoe-mtu: off [fixed]
>>>> tx-nocache-copy: off
>>>> loopback: off [fixed]
>>>> rx-fcs: off
>>>> rx-all: off
>>>> tx-vlan-stag-hw-insert: on
>>>> rx-vlan-stag-hw-parse: off [fixed]
>>>> rx-vlan-stag-filter: on [fixed]
>>>> l2-fwd-offload: off [fixed]
>>>> hw-tc-offload: off
>>>> esp-hw-offload: off [fixed]
>>>> esp-tx-csum-hw-offload: off [fixed]
>>>> rx-udp_tunnel-port-offload: on
>>>> tls-hw-tx-offload: off [fixed]
>>>> tls-hw-rx-offload: off [fixed]
>>>> rx-gro-hw: off [fixed]
>>>> tls-hw-record: off [fixed]
>>>>
>>>> ethtool -c enp175s0f0
>>>> Coalesce parameters for enp175s0f0:
>>>> Adaptive RX: off TX: on
>>>> stats-block-usecs: 0
>>>> sample-interval: 0
>>>> pkt-rate-low: 0
>>>> pkt-rate-high: 0
>>>> dmac: 32703
>>>>
>>>> rx-usecs: 256
>>>> rx-frames: 128
>>>> rx-usecs-irq: 0
>>>> rx-frames-irq: 0
>>>>
>>>> tx-usecs: 8
>>>> tx-frames: 128
>>>> tx-usecs-irq: 0
>>>> tx-frames-irq: 0
>>>>
>>>> rx-usecs-low: 0
>>>> rx-frame-low: 0
>>>> tx-usecs-low: 0
>>>> tx-frame-low: 0
>>>>
>>>> rx-usecs-high: 0
>>>> rx-frame-high: 0
>>>> tx-usecs-high: 0
>>>> tx-frame-high: 0
>>>>
>>>> ethtool -g enp175s0f0
>>>> Ring parameters for enp175s0f0:
>>>> Pre-set maximums:
>>>> RX: 8192
>>>> RX Mini: 0
>>>> RX Jumbo: 0
>>>> TX: 8192
>>>> Current hardware settings:
>>>> RX: 4096
>>>> RX Mini: 0
>>>> RX Jumbo: 0
>>>> TX: 4096
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>> Also changed the coalesce params a little - the best for this config are:
>> ethtool -c enp175s0f0
>> Coalesce parameters for enp175s0f0:
>> Adaptive RX: off TX: off
>> stats-block-usecs: 0
>> sample-interval: 0
>> pkt-rate-low: 0
>> pkt-rate-high: 0
>> dmac: 32573
>>
>> rx-usecs: 40
>> rx-frames: 128
>> rx-usecs-irq: 0
>> rx-frames-irq: 0
>>
>> tx-usecs: 8
>> tx-frames: 8
>> tx-usecs-irq: 0
>> tx-frames-irq: 0
>>
>> rx-usecs-low: 0
>> rx-frame-low: 0
>> tx-usecs-low: 0
>> tx-frame-low: 0
>>
>> rx-usecs-high: 0
>> rx-frame-high: 0
>> tx-usecs-high: 0
>> tx-frame-high: 0
>>
>>
>> Fewer drops on the RX side - and more pps forwarded overall.
>>
> How much improvement? Maybe we can improve our adaptive RX coalescing
> to be efficient for this workload.
>
>
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-03 0:16 ` Paweł Staszewski
@ 2018-11-03 12:01 ` Paweł Staszewski
2018-11-03 12:58 ` Jesper Dangaard Brouer
1 sibling, 0 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-03 12:01 UTC (permalink / raw)
To: Aaron Lu, Jesper Dangaard Brouer
Cc: Saeed Mahameed, eric.dumazet, netdev, Tariq Toukan,
ilias.apalodimas, yoel, mgorman
W dniu 03.11.2018 o 01:16, Paweł Staszewski pisze:
>
>
> W dniu 02.11.2018 o 20:02, Paweł Staszewski pisze:
>>
>>
>> W dniu 02.11.2018 o 15:20, Aaron Lu pisze:
>>> On Fri, Nov 02, 2018 at 12:40:37PM +0100, Jesper Dangaard Brouer wrote:
>>>> On Fri, 2 Nov 2018 13:23:56 +0800
>>>> Aaron Lu <aaron.lu@intel.com> wrote:
>>>>
>>>>> On Thu, Nov 01, 2018 at 08:23:19PM +0000, Saeed Mahameed wrote:
>>>>>> On Thu, 2018-11-01 at 23:27 +0800, Aaron Lu wrote:
>>>>>>> On Thu, Nov 01, 2018 at 10:22:13AM +0100, Jesper Dangaard Brouer
>>>>>>> wrote:
>>>>>>> ... ...
>>>>>>>> Section copied out:
>>>>>>>>
>>>>>>>> mlx5e_poll_tx_cq
>>>>>>>> |
>>>>>>>> --16.34%--napi_consume_skb
>>>>>>>> |
>>>>>>>> |--12.65%--__free_pages_ok
>>>>>>>> | |
>>>>>>>> | --11.86%--free_one_page
>>>>>>>> | |
>>>>>>>> | |--10.10%
>>>>>>>> --queued_spin_lock_slowpath
>>>>>>>> | |
>>>>>>>> | --0.65%--_raw_spin_lock
>>>>>>> This callchain looks like it is freeing higher order pages than
>>>>>>> order
>>>>>>> 0:
>>>>>>> __free_pages_ok is only called for pages whose order are bigger
>>>>>>> than
>>>>>>> 0.
>>>>>> mlx5 rx uses only order 0 pages, so i don't know where these high
>>>>>> order
>>>>>> tx SKBs are coming from..
>>>>> Perhaps here:
>>>>> __netdev_alloc_skb(), __napi_alloc_skb(), __netdev_alloc_frag() and
>>>>> __napi_alloc_frag() will all call page_frag_alloc(), which will use
>>>>> __page_frag_cache_refill() to get an order 3 page if possible, or
>>>>> fall
>>>>> back to an order 0 page if order 3 page is not available.
>>>>>
>>>>> I'm not sure if your workload will use the above code path though.
>>>> TL;DR: this is order-0 pages (code-walk trough proof below)
>>>>
>>>> To Aaron, the network stack *can* call __free_pages_ok() with order-0
>>>> pages, via:
>>>>
>>>> static void skb_free_head(struct sk_buff *skb)
>>>> {
>>>> unsigned char *head = skb->head;
>>>>
>>>> if (skb->head_frag)
>>>> skb_free_frag(head);
>>>> else
>>>> kfree(head);
>>>> }
>>>>
>>>> static inline void skb_free_frag(void *addr)
>>>> {
>>>> page_frag_free(addr);
>>>> }
>>>>
>>>> /*
>>>> * Frees a page fragment allocated out of either a compound or
>>>> order 0 page.
>>>> */
>>>> void page_frag_free(void *addr)
>>>> {
>>>> struct page *page = virt_to_head_page(addr);
>>>>
>>>> if (unlikely(put_page_testzero(page)))
>>>> __free_pages_ok(page, compound_order(page));
>>>> }
>>>> EXPORT_SYMBOL(page_frag_free);
>>> I think here is a problem - order 0 pages are freed directly to buddy,
>>> bypassing per-cpu-pages. This might be the reason lock contention
>>> appeared on free path. Can someone apply below diff and see if lock
>>> contention is gone?
>> Will test it tonight
>>
> Patch applied
> perf report:
> https://ufile.io/sytfh
>
>
>
> But I also need to wait for more traffic - currently the CPUs are sleeping
before patch:

 |--13.55%--mlx5e_poll_tx_cq
 |          |
 |           --10.32%--napi_consume_skb
 |                      |
 |                      |--8.52%--__free_pages_ok
 |                      |          |
 |                      |           --7.67%--free_one_page
 |                      |                     |
 |                      |                     |--6.05%--queued_spin_lock_slowpath
 |                      |                     |
 |                      |                      --0.64%--_raw_spin_lock
 |                      |
 |                      |--0.77%--skb_release_data
 |                      |
 |                       --0.72%--page_frag_free
after patch:

 |--3.75%--mlx5e_poll_tx_cq
 |          |
 |           --1.53%--napi_consume_skb
 |                      |
 |                       --0.54%--skb_release_data
 |
  --3.09%--mlx5e_post_rx_wqes
            |
             --1.21%--__page_pool_alloc_pages_slow
                       |
                        --1.16%--__alloc_pages_nodemask
                                  |
                                   --1.05%--get_page_from_freelist
Currently also waiting for more traffic.
>
>
>
>
>>
>>
>>>
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index e2ef1c17942f..65c0ae13215a 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -4554,8 +4554,14 @@ void page_frag_free(void *addr)
>>> {
>>> struct page *page = virt_to_head_page(addr);
>>> - if (unlikely(put_page_testzero(page)))
>>> - __free_pages_ok(page, compound_order(page));
>>> + if (unlikely(put_page_testzero(page))) {
>>> + unsigned int order = compound_order(page);
>>> +
>>> + if (order == 0)
>>> + free_unref_page(page);
>>> + else
>>> + __free_pages_ok(page, order);
>>> + }
>>> }
>>> EXPORT_SYMBOL(page_frag_free);
>>>> Notice for the mlx5 driver it support several RX-memory models, so it
>>>> can be hard to follow, but from the perf report output we can see that
>>>> is uses mlx5e_skb_from_cqe_linear, which use build_skb.
>>>>
>>>> --13.63%--mlx5e_skb_from_cqe_linear
>>>> |
>>>> --5.02%--build_skb
>>>> |
>>>> --1.85%--__build_skb
>>>> |
>>>> --1.00%--kmem_cache_alloc
>>>>
>>>> /* build_skb() is wrapper over __build_skb(), that specifically
>>>> * takes care of skb->head and skb->pfmemalloc
>>>> * This means that if @frag_size is not zero, then @data must be
>>>> backed
>>>> * by a page fragment, not kmalloc() or vmalloc()
>>>> */
>>>> struct sk_buff *build_skb(void *data, unsigned int frag_size)
>>>> {
>>>> struct sk_buff *skb = __build_skb(data, frag_size);
>>>>
>>>> if (skb && frag_size) {
>>>> skb->head_frag = 1;
>>>> if (page_is_pfmemalloc(virt_to_head_page(data)))
>>>> skb->pfmemalloc = 1;
>>>> }
>>>> return skb;
>>>> }
>>>> EXPORT_SYMBOL(build_skb);
>>>>
>>>> It still doesn't prove, that the @data is backed by by a order-0 page.
>>>> For the mlx5 driver is uses mlx5e_page_alloc_mapped ->
>>>> page_pool_dev_alloc_pages(), and I can see perf report using
>>>> __page_pool_alloc_pages_slow().
>>>>
>>>> The setup for page_pool in mlx5 uses order=0.
>>>>
>>>> /* Create a page_pool and register it with rxq */
>>>> pp_params.order = 0;
>>>> pp_params.flags = 0; /* No-internal DMA mapping in
>>>> page_pool */
>>>> pp_params.pool_size = pool_size;
>>>> pp_params.nid = cpu_to_node(c->cpu);
>>>> pp_params.dev = c->pdev;
>>>> pp_params.dma_dir = rq->buff.map_dir;
>>>>
>>>> /* page_pool can be used even when there is no rq->xdp_prog,
>>>> * given page_pool does not handle DMA mapping there is no
>>>> * required state to clear. And page_pool gracefully handle
>>>> * elevated refcnt.
>>>> */
>>>> rq->page_pool = page_pool_create(&pp_params);
>>>> if (IS_ERR(rq->page_pool)) {
>>>> err = PTR_ERR(rq->page_pool);
>>>> rq->page_pool = NULL;
>>>> goto err_free;
>>>> }
>>>> err = xdp_rxq_info_reg_mem_model(&rq->xdp_rxq,
>>>> MEM_TYPE_PAGE_POOL, rq->page_pool);
>>> Thanks for the detailed analysis, I'll need more time to understand the
>>> whole picture :-)
>>>
>>
>>
>
>
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-02 14:20 ` Aaron Lu
2018-11-02 19:02 ` Paweł Staszewski
@ 2018-11-03 12:53 ` Jesper Dangaard Brouer
2018-11-05 6:28 ` Aaron Lu
2018-11-05 8:42 ` Tariq Toukan
1 sibling, 2 replies; 77+ messages in thread
From: Jesper Dangaard Brouer @ 2018-11-03 12:53 UTC (permalink / raw)
To: Aaron Lu
Cc: Saeed Mahameed, pstaszewski, eric.dumazet, netdev, Tariq Toukan,
ilias.apalodimas, yoel, mgorman, brouer
On Fri, 2 Nov 2018 22:20:24 +0800 Aaron Lu <aaron.lu@intel.com> wrote:
> On Fri, Nov 02, 2018 at 12:40:37PM +0100, Jesper Dangaard Brouer wrote:
> > On Fri, 2 Nov 2018 13:23:56 +0800
> > Aaron Lu <aaron.lu@intel.com> wrote:
> >
> > > On Thu, Nov 01, 2018 at 08:23:19PM +0000, Saeed Mahameed wrote:
> > > > On Thu, 2018-11-01 at 23:27 +0800, Aaron Lu wrote:
> > > > > On Thu, Nov 01, 2018 at 10:22:13AM +0100, Jesper Dangaard Brouer
> > > > > wrote:
> > > > > ... ...
> > > > > > Section copied out:
> > > > > >
> > > > > > mlx5e_poll_tx_cq
> > > > > > |
> > > > > > --16.34%--napi_consume_skb
> > > > > > |
> > > > > > |--12.65%--__free_pages_ok
> > > > > > | |
> > > > > > | --11.86%--free_one_page
> > > > > > | |
> > > > > > | |--10.10%
> > > > > > --queued_spin_lock_slowpath
> > > > > > | |
> > > > > > | --0.65%--_raw_spin_lock
> > > > >
> > > > > This callchain looks like it is freeing higher order pages than order
> > > > > 0:
> > > > > __free_pages_ok is only called for pages whose order are bigger than
> > > > > 0.
> > > >
> > > > mlx5 rx uses only order 0 pages, so i don't know where these high order
> > > > tx SKBs are coming from..
> > >
> > > Perhaps here:
> > > __netdev_alloc_skb(), __napi_alloc_skb(), __netdev_alloc_frag() and
> > > __napi_alloc_frag() will all call page_frag_alloc(), which will use
> > > __page_frag_cache_refill() to get an order 3 page if possible, or fall
> > > back to an order 0 page if order 3 page is not available.
> > >
> > > I'm not sure if your workload will use the above code path though.
> >
> > TL;DR: this is order-0 pages (code-walk trough proof below)
> >
> > To Aaron, the network stack *can* call __free_pages_ok() with order-0
> > pages, via:
> >
> > static void skb_free_head(struct sk_buff *skb)
> > {
> > unsigned char *head = skb->head;
> >
> > if (skb->head_frag)
> > skb_free_frag(head);
> > else
> > kfree(head);
> > }
> >
> > static inline void skb_free_frag(void *addr)
> > {
> > page_frag_free(addr);
> > }
> >
> > /*
> > * Frees a page fragment allocated out of either a compound or order 0 page.
> > */
> > void page_frag_free(void *addr)
> > {
> > struct page *page = virt_to_head_page(addr);
> >
> > if (unlikely(put_page_testzero(page)))
> > __free_pages_ok(page, compound_order(page));
> > }
> > EXPORT_SYMBOL(page_frag_free);
>
> I think here is a problem - order 0 pages are freed directly to buddy,
> bypassing per-cpu-pages. This might be the reason lock contention
> appeared on free path.
OMG - you just found a significant issue with the network stack's
interaction with the page allocator! This explains why I could not get
the PCP (Per-Cpu-Pages) system to have good performance in my
performance networking benchmarks: we are basically only using the
alloc side of PCP, and not the free side.
We have spent years adding different driver-level recycle tricks to
avoid this code path getting activated, exactly because it is rather
slow and problematic that we hit this zone->lock.
> Can someone apply below diff and see if lock contention is gone?
I have also applied and tested this patch, and yes, the lock contention
is gone. As mentioned, it is rather difficult to hit this code path, as
the driver page recycle mechanism tries to hide/avoid it, but mlx5 +
page_pool + CPU-map recycling has a known weakness that bypasses the
driver page recycle scheme (which I've not fixed yet). I observed a 7%
speedup for this micro benchmark.
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index e2ef1c17942f..65c0ae13215a 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4554,8 +4554,14 @@ void page_frag_free(void *addr)
> {
> struct page *page = virt_to_head_page(addr);
>
> - if (unlikely(put_page_testzero(page)))
> - __free_pages_ok(page, compound_order(page));
> + if (unlikely(put_page_testzero(page))) {
> + unsigned int order = compound_order(page);
> +
> + if (order == 0)
> + free_unref_page(page);
> + else
> + __free_pages_ok(page, order);
> + }
> }
> EXPORT_SYMBOL(page_frag_free);
Thank you Aaron for spotting this!!!
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-03 0:16 ` Paweł Staszewski
2018-11-03 12:01 ` Paweł Staszewski
@ 2018-11-03 12:58 ` Jesper Dangaard Brouer
2018-11-03 15:23 ` Paweł Staszewski
1 sibling, 1 reply; 77+ messages in thread
From: Jesper Dangaard Brouer @ 2018-11-03 12:58 UTC (permalink / raw)
To: Paweł Staszewski
Cc: Aaron Lu, Saeed Mahameed, eric.dumazet, netdev, Tariq Toukan,
ilias.apalodimas, yoel, mgorman, brouer
On Sat, 3 Nov 2018 01:16:08 +0100
Paweł Staszewski <pstaszewski@itcare.pl> wrote:
> W dniu 02.11.2018 o 20:02, Paweł Staszewski pisze:
> >
> >
> > W dniu 02.11.2018 o 15:20, Aaron Lu pisze:
> >> On Fri, Nov 02, 2018 at 12:40:37PM +0100, Jesper Dangaard Brouer wrote:
> >>> On Fri, 2 Nov 2018 13:23:56 +0800
> >>> Aaron Lu <aaron.lu@intel.com> wrote:
> >>>
> >>>> On Thu, Nov 01, 2018 at 08:23:19PM +0000, Saeed Mahameed wrote:
> >>>>> On Thu, 2018-11-01 at 23:27 +0800, Aaron Lu wrote:
> >>>>>> On Thu, Nov 01, 2018 at 10:22:13AM +0100, Jesper Dangaard Brouer
> >>>>>> wrote:
> >>>>>> ... ...
[...]
> >>> TL;DR: this is order-0 pages (code-walk trough proof below)
> >>>
> >>> To Aaron, the network stack *can* call __free_pages_ok() with order-0
> >>> pages, via:
[...]
> >>
> >> I think here is a problem - order 0 pages are freed directly to buddy,
> >> bypassing per-cpu-pages. This might be the reason lock contention
> >> appeared on free path. Can someone apply below diff and see if lock
> >> contention is gone?
> >>
> > Will test it tonight
> >
> Patch applied
> perf report:
> https://ufile.io/sytfh
>
>
> But i need to wait also with more traffic currently cpu's are sleeping
>
Well, that would be the expected result, that the CPUs get more time to
sleep, if the lock contention is gone...
What is the measured bandwidth now?
Notice, you might still be limited by the PCIe bandwidth, but then your
CPUs might actually decide to sleep, as they are getting data fast
enough.
[...]
> >>
> >> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >> index e2ef1c17942f..65c0ae13215a 100644
> >> --- a/mm/page_alloc.c
> >> +++ b/mm/page_alloc.c
> >> @@ -4554,8 +4554,14 @@ void page_frag_free(void *addr)
> >> {
> >> struct page *page = virt_to_head_page(addr);
> >> - if (unlikely(put_page_testzero(page)))
> >> - __free_pages_ok(page, compound_order(page));
> >> + if (unlikely(put_page_testzero(page))) {
> >> + unsigned int order = compound_order(page);
> >> +
> >> + if (order == 0)
> >> + free_unref_page(page);
> >> + else
> >> + __free_pages_ok(page, order);
> >> + }
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-03 12:58 ` Jesper Dangaard Brouer
@ 2018-11-03 15:23 ` Paweł Staszewski
2018-11-03 15:43 ` Paweł Staszewski
0 siblings, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-03 15:23 UTC (permalink / raw)
To: Jesper Dangaard Brouer
Cc: Aaron Lu, Saeed Mahameed, eric.dumazet, netdev, Tariq Toukan,
ilias.apalodimas, yoel, mgorman
W dniu 03.11.2018 o 13:58, Jesper Dangaard Brouer pisze:
> On Sat, 3 Nov 2018 01:16:08 +0100
> Paweł Staszewski <pstaszewski@itcare.pl> wrote:
>
>> W dniu 02.11.2018 o 20:02, Paweł Staszewski pisze:
>>>
>>> W dniu 02.11.2018 o 15:20, Aaron Lu pisze:
>>>> On Fri, Nov 02, 2018 at 12:40:37PM +0100, Jesper Dangaard Brouer wrote:
>>>>> On Fri, 2 Nov 2018 13:23:56 +0800
>>>>> Aaron Lu <aaron.lu@intel.com> wrote:
>>>>>
>>>>>> On Thu, Nov 01, 2018 at 08:23:19PM +0000, Saeed Mahameed wrote:
>>>>>>> On Thu, 2018-11-01 at 23:27 +0800, Aaron Lu wrote:
>>>>>>>> On Thu, Nov 01, 2018 at 10:22:13AM +0100, Jesper Dangaard Brouer
>>>>>>>> wrote:
>>>>>>>> ... ...
> [...]
>>>>> TL;DR: this is order-0 pages (code-walk trough proof below)
>>>>>
>>>>> To Aaron, the network stack *can* call __free_pages_ok() with order-0
>>>>> pages, via:
> [...]
>>>>
>>>> I think here is a problem - order 0 pages are freed directly to buddy,
>>>> bypassing per-cpu-pages. This might be the reason lock contention
>>>> appeared on free path. Can someone apply below diff and see if lock
>>>> contention is gone?
>>>>
>>> Will test it tonight
>>>
>> Patch applied
>> perf report:
>> https://ufile.io/sytfh
>>
>>
>> But i need to wait also with more traffic currently cpu's are sleeping
>>
> Well, that would be the expected result, that the CPUs get more time to
> sleep, if the lock contention is gone...
>
> What is the measured bandwidth now?
30 RX /30 TX Gbit/s
>
> Notice, you might still be limited by the PCIe bandwidth, but then your
> CPUs might actually decide to sleep, as they are getting data fast
> enough.
Yes - I will replace the network controller with two separate NICs in
two separate x16 PCIe slots - but after Monday.
But I don't think I hit the PCIe limit there - PCIe x16 gen3
has 16GB/s RX and 16GB/s TX, so it is bidirectional
>
> [...]
>>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>>> index e2ef1c17942f..65c0ae13215a 100644
>>>> --- a/mm/page_alloc.c
>>>> +++ b/mm/page_alloc.c
>>>> @@ -4554,8 +4554,14 @@ void page_frag_free(void *addr)
>>>> {
>>>> struct page *page = virt_to_head_page(addr);
>>>> - if (unlikely(put_page_testzero(page)))
>>>> - __free_pages_ok(page, compound_order(page));
>>>> + if (unlikely(put_page_testzero(page))) {
>>>> + unsigned int order = compound_order(page);
>>>> +
>>>> + if (order == 0)
>>>> + free_unref_page(page);
>>>> + else
>>>> + __free_pages_ok(page, order);
>>>> + }
>
>
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-03 15:23 ` Paweł Staszewski
@ 2018-11-03 15:43 ` Paweł Staszewski
0 siblings, 0 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-03 15:43 UTC (permalink / raw)
To: Jesper Dangaard Brouer
Cc: Aaron Lu, Saeed Mahameed, eric.dumazet, netdev, Tariq Toukan,
ilias.apalodimas, yoel, mgorman
W dniu 03.11.2018 o 16:23, Paweł Staszewski pisze:
>
>
> W dniu 03.11.2018 o 13:58, Jesper Dangaard Brouer pisze:
>> On Sat, 3 Nov 2018 01:16:08 +0100
>> Paweł Staszewski <pstaszewski@itcare.pl> wrote:
>>
>>> W dniu 02.11.2018 o 20:02, Paweł Staszewski pisze:
>>>>
>>>> W dniu 02.11.2018 o 15:20, Aaron Lu pisze:
>>>>> On Fri, Nov 02, 2018 at 12:40:37PM +0100, Jesper Dangaard Brouer
>>>>> wrote:
>>>>>> On Fri, 2 Nov 2018 13:23:56 +0800
>>>>>> Aaron Lu <aaron.lu@intel.com> wrote:
>>>>>>> On Thu, Nov 01, 2018 at 08:23:19PM +0000, Saeed Mahameed wrote:
>>>>>>>> On Thu, 2018-11-01 at 23:27 +0800, Aaron Lu wrote:
>>>>>>>>> On Thu, Nov 01, 2018 at 10:22:13AM +0100, Jesper Dangaard Brouer
>>>>>>>>> wrote:
>>>>>>>>> ... ...
>> [...]
>>>>>> TL;DR: this is order-0 pages (code-walk trough proof below)
>>>>>>
>>>>>> To Aaron, the network stack *can* call __free_pages_ok() with
>>>>>> order-0
>>>>>> pages, via:
>> [...]
>>>>> I think here is a problem - order 0 pages are freed directly to
>>>>> buddy,
>>>>> bypassing per-cpu-pages. This might be the reason lock contention
>>>>> appeared on free path. Can someone apply below diff and see if lock
>>>>> contention is gone?
>>>>>
>>>> Will test it tonight
>>> Patch applied
>>> perf report:
>>> https://ufile.io/sytfh
>>>
>>>
>>> But i need to wait also with more traffic currently cpu's are sleeping
>>>
>> Well, that would be the expected result, that the CPUs get more time to
>> sleep, if the lock contention is gone...
>>
>> What is the measured bandwidth now?
> 30 RX /30 TX Gbit/s
>
>>
>> Notice, you might still be limited by the PCIe bandwidth, but then your
>> CPUs might actually decide to sleep, as they are getting data fast
>> enough.
> Yes - i will replace network controller to two separate nic's in two
> separate x16 pcie
> But after monday.
>
> But i dont think i hit pcie limit there - it looks like pcie x16 gen3
> have 16GB/s RX and 16GB/s TX so bidirectional
>
I was also thinking it could be a memory limit - but there are 4 channels
of DDR4-2666, so the total memory bandwidth (48GB/s) is bigger than what
100Gbit Ethernet needs
>
>>
>> [...]
>>>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>>>> index e2ef1c17942f..65c0ae13215a 100644
>>>>> --- a/mm/page_alloc.c
>>>>> +++ b/mm/page_alloc.c
>>>>> @@ -4554,8 +4554,14 @@ void page_frag_free(void *addr)
>>>>> {
>>>>> struct page *page = virt_to_head_page(addr);
>>>>> - if (unlikely(put_page_testzero(page)))
>>>>> - __free_pages_ok(page, compound_order(page));
>>>>> + if (unlikely(put_page_testzero(page))) {
>>>>> + unsigned int order = compound_order(page);
>>>>> +
>>>>> + if (order == 0)
>>>>> + free_unref_page(page);
>>>>> + else
>>>>> + __free_pages_ok(page, order);
>>>>> + }
>>
>>
>
>
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-01 17:30 ` Paweł Staszewski
@ 2018-11-03 17:32 ` David Ahern
2018-11-04 0:24 ` Paweł Staszewski
0 siblings, 1 reply; 77+ messages in thread
From: David Ahern @ 2018-11-03 17:32 UTC (permalink / raw)
To: Paweł Staszewski, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
On 11/1/18 11:30 AM, Paweł Staszewski wrote:
>>>>
>>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/samples/bpf/xdp_fwd_kern.c
>>>>
>>>>
>>> I can try some tests on same hw but testlab configuration - will give it
>>> a try :)
>>>
>> That version does not work with VLANs. I have patches for it but it
>> needs a bit more work before sending out. Perhaps I can get back to it
>> next week.
>>
> Will be nice - next week i will be able to replace network controller
> and install separate two 100Gbit nics into two pciex x16 slots - so can
> test without hitting pcie bandwidth limits.
>
>
Does your setup have any other device types besides physical ports with
VLANs (e.g., any macvlans or bonds)?
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-03 17:32 ` David Ahern
@ 2018-11-04 0:24 ` Paweł Staszewski
2018-11-05 20:17 ` Jesper Dangaard Brouer
2018-11-07 21:06 ` David Ahern
0 siblings, 2 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-04 0:24 UTC (permalink / raw)
To: David Ahern, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
W dniu 03.11.2018 o 18:32, David Ahern pisze:
> On 11/1/18 11:30 AM, Paweł Staszewski wrote:
>>>>>
>>>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/samples/bpf/xdp_fwd_kern.c
>>>>>
>>>>>
>>>> I can try some tests on same hw but testlab configuration - will give it
>>>> a try :)
>>>>
>>> That version does not work with VLANs. I have patches for it but it
>>> needs a bit more work before sending out. Perhaps I can get back to it
>>> next week.
>>>
>> Will be nice - next week i will be able to replace network controller
>> and install separate two 100Gbit nics into two pciex x16 slots - so can
>> test without hitting pcie bandwidth limits.
>>
>>
> Does your setup have any other device types besides physical ports with
> VLANs (e.g., any macvlans or bonds)?
>
>
No.
Just a phy(mlnx)->vlans-only config.
And today, after applying the patch for the page allocator, I again
reached 64/64 Gbit/s - with only 50-60% cpu load.
Today no slowpath hit for networking :)
But again dropped packets at 64 Gbit RX and 64 Gbit TX ....
And as it should not be a PCI Express limit, I think something more is
going on there - and it is hard to catch, because perf top doesn't change,
besides there being no queued slowpath hit now.
I also ordered Intel cards to compare - but 3 weeks ETA.
Faster - in 3 days - I will have Mellanox ConnectX-5, so I can
separate traffic onto two different x16 PCIe buses.
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-03 12:53 ` Jesper Dangaard Brouer
@ 2018-11-05 6:28 ` Aaron Lu
2018-11-05 9:10 ` Jesper Dangaard Brouer
2018-11-05 8:42 ` Tariq Toukan
1 sibling, 1 reply; 77+ messages in thread
From: Aaron Lu @ 2018-11-05 6:28 UTC (permalink / raw)
To: Jesper Dangaard Brouer
Cc: Saeed Mahameed, pstaszewski, eric.dumazet, netdev, Tariq Toukan,
ilias.apalodimas, yoel, mgorman
On Sat, Nov 03, 2018 at 01:53:25PM +0100, Jesper Dangaard Brouer wrote:
>
> On Fri, 2 Nov 2018 22:20:24 +0800 Aaron Lu <aaron.lu@intel.com> wrote:
>
> > On Fri, Nov 02, 2018 at 12:40:37PM +0100, Jesper Dangaard Brouer wrote:
> > > On Fri, 2 Nov 2018 13:23:56 +0800
> > > Aaron Lu <aaron.lu@intel.com> wrote:
> > >
> > > > On Thu, Nov 01, 2018 at 08:23:19PM +0000, Saeed Mahameed wrote:
> > > > > On Thu, 2018-11-01 at 23:27 +0800, Aaron Lu wrote:
> > > > > > On Thu, Nov 01, 2018 at 10:22:13AM +0100, Jesper Dangaard Brouer
> > > > > > wrote:
> > > > > > ... ...
> > > > > > > Section copied out:
> > > > > > >
> > > > > > > mlx5e_poll_tx_cq
> > > > > > > |
> > > > > > > --16.34%--napi_consume_skb
> > > > > > > |
> > > > > > > |--12.65%--__free_pages_ok
> > > > > > > | |
> > > > > > > | --11.86%--free_one_page
> > > > > > > | |
> > > > > > > | |--10.10%
> > > > > > > --queued_spin_lock_slowpath
> > > > > > > | |
> > > > > > > | --0.65%--_raw_spin_lock
> > > > > >
> > > > > > This callchain looks like it is freeing higher order pages than order
> > > > > > 0:
> > > > > > __free_pages_ok is only called for pages whose order are bigger than
> > > > > > 0.
> > > > >
> > > > > mlx5 rx uses only order 0 pages, so i don't know where these high order
> > > > > tx SKBs are coming from..
> > > >
> > > > Perhaps here:
> > > > __netdev_alloc_skb(), __napi_alloc_skb(), __netdev_alloc_frag() and
> > > > __napi_alloc_frag() will all call page_frag_alloc(), which will use
> > > > __page_frag_cache_refill() to get an order 3 page if possible, or fall
> > > > back to an order 0 page if order 3 page is not available.
> > > >
> > > > I'm not sure if your workload will use the above code path though.
> > >
> > > TL;DR: this is order-0 pages (code walk-through proof below)
> > >
> > > To Aaron, the network stack *can* call __free_pages_ok() with order-0
> > > pages, via:
> > >
> > > static void skb_free_head(struct sk_buff *skb)
> > > {
> > > unsigned char *head = skb->head;
> > >
> > > if (skb->head_frag)
> > > skb_free_frag(head);
> > > else
> > > kfree(head);
> > > }
> > >
> > > static inline void skb_free_frag(void *addr)
> > > {
> > > page_frag_free(addr);
> > > }
> > >
> > > /*
> > > * Frees a page fragment allocated out of either a compound or order 0 page.
> > > */
> > > void page_frag_free(void *addr)
> > > {
> > > struct page *page = virt_to_head_page(addr);
> > >
> > > if (unlikely(put_page_testzero(page)))
> > > __free_pages_ok(page, compound_order(page));
> > > }
> > > EXPORT_SYMBOL(page_frag_free);
> >
> > I think there is a problem here - order-0 pages are freed directly to
> > buddy, bypassing per-cpu-pages. This might be the reason lock contention
> > appeared on the free path.
>
> OMG - you just found a significant issue with the network stack's
> interaction with the page allocator! This explains why I could not get
> the PCP (Per-Cpu-Pages) system to have good performance in my
> performance networking benchmarks. As we are basically only using the
> alloc side of PCP, and not the free side.
Exactly.
> We have spent years adding different driver-level recycle tricks to
> avoid this code path getting activated, exactly because it is rather
> slow and problematic that we hit this zone->lock.
I can see that when this code path is hit, it causes unnecessary taking of
the zone lock for order-0 pages and causes lock contention.
>
> > Can someone apply below diff and see if lock contention is gone?
>
> I have also applied and tested this patch, and yes the lock contention
> is gone. As mentioned, it is rather difficult to hit this code path, as
> the driver page recycle mechanism tries to hide/avoid it, but mlx5 +
> page_pool + CPU-map recycling have a known weakness that bypasses the
> driver page recycle scheme (that I've not fixed yet). I observed a 7%
> speedup for this micro benchmark.
Good to know this, I will prepare a formal patch.
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index e2ef1c17942f..65c0ae13215a 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -4554,8 +4554,14 @@ void page_frag_free(void *addr)
> > {
> > struct page *page = virt_to_head_page(addr);
> >
> > - if (unlikely(put_page_testzero(page)))
> > - __free_pages_ok(page, compound_order(page));
> > + if (unlikely(put_page_testzero(page))) {
> > + unsigned int order = compound_order(page);
> > +
> > + if (order == 0)
> > + free_unref_page(page);
> > + else
> > + __free_pages_ok(page, order);
> > + }
> > }
> > EXPORT_SYMBOL(page_frag_free);
>
> Thank you Aaron for spotting this!!!
Which is impossible without your analysis :-)
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-03 12:53 ` Jesper Dangaard Brouer
2018-11-05 6:28 ` Aaron Lu
@ 2018-11-05 8:42 ` Tariq Toukan
2018-11-05 8:48 ` Aaron Lu
1 sibling, 1 reply; 77+ messages in thread
From: Tariq Toukan @ 2018-11-05 8:42 UTC (permalink / raw)
To: Jesper Dangaard Brouer, Aaron Lu
Cc: Saeed Mahameed, pstaszewski, eric.dumazet, netdev, Tariq Toukan,
ilias.apalodimas, yoel, mgorman
On 03/11/2018 2:53 PM, Jesper Dangaard Brouer wrote:
>
> On Fri, 2 Nov 2018 22:20:24 +0800 Aaron Lu <aaron.lu@intel.com> wrote:
>
>> On Fri, Nov 02, 2018 at 12:40:37PM +0100, Jesper Dangaard Brouer wrote:
>>> On Fri, 2 Nov 2018 13:23:56 +0800
>>> Aaron Lu <aaron.lu@intel.com> wrote:
>>>
>>>> On Thu, Nov 01, 2018 at 08:23:19PM +0000, Saeed Mahameed wrote:
>>>>> On Thu, 2018-11-01 at 23:27 +0800, Aaron Lu wrote:
>>>>>> On Thu, Nov 01, 2018 at 10:22:13AM +0100, Jesper Dangaard Brouer
>>>>>> wrote:
>>>>>> ... ...
>>>>>>> Section copied out:
>>>>>>>
>>>>>>> mlx5e_poll_tx_cq
>>>>>>> |
>>>>>>> --16.34%--napi_consume_skb
>>>>>>> |
>>>>>>> |--12.65%--__free_pages_ok
>>>>>>> | |
>>>>>>> | --11.86%--free_one_page
>>>>>>> | |
>>>>>>> | |--10.10%
>>>>>>> --queued_spin_lock_slowpath
>>>>>>> | |
>>>>>>> | --0.65%--_raw_spin_lock
>>>>>>
>>>>>> This callchain looks like it is freeing higher order pages than order
>>>>>> 0:
>>>>>> __free_pages_ok is only called for pages whose order are bigger than
>>>>>> 0.
>>>>>
>>>>> mlx5 rx uses only order 0 pages, so i don't know where these high order
>>>>> tx SKBs are coming from..
>>>>
>>>> Perhaps here:
>>>> __netdev_alloc_skb(), __napi_alloc_skb(), __netdev_alloc_frag() and
>>>> __napi_alloc_frag() will all call page_frag_alloc(), which will use
>>>> __page_frag_cache_refill() to get an order 3 page if possible, or fall
>>>> back to an order 0 page if order 3 page is not available.
>>>>
>>>> I'm not sure if your workload will use the above code path though.
>>>
>>> TL;DR: this is order-0 pages (code walk-through proof below)
>>>
>>> To Aaron, the network stack *can* call __free_pages_ok() with order-0
>>> pages, via:
>>>
>>> static void skb_free_head(struct sk_buff *skb)
>>> {
>>> unsigned char *head = skb->head;
>>>
>>> if (skb->head_frag)
>>> skb_free_frag(head);
>>> else
>>> kfree(head);
>>> }
>>>
>>> static inline void skb_free_frag(void *addr)
>>> {
>>> page_frag_free(addr);
>>> }
>>>
>>> /*
>>> * Frees a page fragment allocated out of either a compound or order 0 page.
>>> */
>>> void page_frag_free(void *addr)
>>> {
>>> struct page *page = virt_to_head_page(addr);
>>>
>>> if (unlikely(put_page_testzero(page)))
>>> __free_pages_ok(page, compound_order(page));
>>> }
>>> EXPORT_SYMBOL(page_frag_free);
>>
>> I think there is a problem here - order-0 pages are freed directly to
>> buddy, bypassing per-cpu-pages. This might be the reason lock contention
>> appeared on the free path.
>
> OMG - you just found a significant issue with the network stack's
> interaction with the page allocator! This explains why I could not get
> the PCP (Per-Cpu-Pages) system to have good performance in my
> performance networking benchmarks. As we are basically only using the
> alloc side of PCP, and not the free side.
> We have spent years adding different driver-level recycle tricks to
> avoid this code path getting activated, exactly because it is rather
> slow and problematic that we hit this zone->lock.
>
Oh! It has been behaving this way for too long.
Good catch!
>> Can someone apply below diff and see if lock contention is gone?
>
> I have also applied and tested this patch, and yes the lock contention
> is gone. As mentioned, it is rather difficult to hit this code path, as
> the driver page recycle mechanism tries to hide/avoid it, but mlx5 +
> page_pool + CPU-map recycling have a known weakness that bypasses the
> driver page recycle scheme (that I've not fixed yet). I observed a 7%
> speedup for this micro benchmark.
>
Great news. I also have a benchmark that uses order-0 pages and stresses
the zone-lock. I'll test your patch during this week.
>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index e2ef1c17942f..65c0ae13215a 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -4554,8 +4554,14 @@ void page_frag_free(void *addr)
>> {
>> struct page *page = virt_to_head_page(addr);
>>
>> - if (unlikely(put_page_testzero(page)))
>> - __free_pages_ok(page, compound_order(page));
>> + if (unlikely(put_page_testzero(page))) {
>> + unsigned int order = compound_order(page);
>> +
>> + if (order == 0)
>> + free_unref_page(page);
>> + else
>> + __free_pages_ok(page, order);
>> + }
>> }
>> EXPORT_SYMBOL(page_frag_free);
>
> Thank you Aaron for spotting this!!!
>
Thanks Aaron :) !!
Does it conflict with your recent work that optimizes order-0 allocation?
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-05 8:42 ` Tariq Toukan
@ 2018-11-05 8:48 ` Aaron Lu
0 siblings, 0 replies; 77+ messages in thread
From: Aaron Lu @ 2018-11-05 8:48 UTC (permalink / raw)
To: Tariq Toukan
Cc: Jesper Dangaard Brouer, Saeed Mahameed, pstaszewski,
eric.dumazet, netdev, ilias.apalodimas, yoel, mgorman
On Mon, Nov 05, 2018 at 08:42:33AM +0000, Tariq Toukan wrote:
>
> On 03/11/2018 2:53 PM, Jesper Dangaard Brouer wrote:
> >
> > On Fri, 2 Nov 2018 22:20:24 +0800 Aaron Lu <aaron.lu@intel.com> wrote:
> >>
> >> I think there is a problem here - order-0 pages are freed directly to
> >> buddy, bypassing per-cpu-pages. This might be the reason lock contention
> >> appeared on the free path.
> >
> > OMG - you just found a significant issue with the network stack's
> > interaction with the page allocator! This explains why I could not get
> > the PCP (Per-Cpu-Pages) system to have good performance in my
> > performance networking benchmarks. As we are basically only using the
> > alloc side of PCP, and not the free side.
> > We have spent years adding different driver-level recycle tricks to
> > avoid this code path getting activated, exactly because it is rather
> > slow and problematic that we hit this zone->lock.
> >
>
> Oh! It has been behaving this way for too long.
> Good catch!
Thanks.
> >> Can someone apply below diff and see if lock contention is gone?
> >
> > I have also applied and tested this patch, and yes the lock contention
> > is gone. As mentioned, it is rather difficult to hit this code path, as
> > the driver page recycle mechanism tries to hide/avoid it, but mlx5 +
> > page_pool + CPU-map recycling have a known weakness that bypasses the
> > driver page recycle scheme (that I've not fixed yet). I observed a 7%
> > speedup for this micro benchmark.
> >
>
> Great news. I also have a benchmark that uses order-0 pages and stresses
> the zone-lock. I'll test your patch during this week.
Note this patch only helps when order-0 pages are freed through
page_frag_free().
I'll send a formal patch later.
> >
> >> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >> index e2ef1c17942f..65c0ae13215a 100644
> >> --- a/mm/page_alloc.c
> >> +++ b/mm/page_alloc.c
> >> @@ -4554,8 +4554,14 @@ void page_frag_free(void *addr)
> >> {
> >> struct page *page = virt_to_head_page(addr);
> >>
> >> - if (unlikely(put_page_testzero(page)))
> >> - __free_pages_ok(page, compound_order(page));
> >> + if (unlikely(put_page_testzero(page))) {
> >> + unsigned int order = compound_order(page);
> >> +
> >> + if (order == 0)
> >> + free_unref_page(page);
> >> + else
> >> + __free_pages_ok(page, order);
> >> + }
> >> }
> >> EXPORT_SYMBOL(page_frag_free);
> >
> > Thank you Aaron for spotting this!!!
> >
> Thanks Aaron :) !!
>
> Does it conflict with your recent work that optimizes order-0 allocation?
No it doesn't. This patch optimizes code outside of the zone lock (by
reducing the need to take the zone lock) while my recent work optimizes
code inside the zone lock :-)
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-05 6:28 ` Aaron Lu
@ 2018-11-05 9:10 ` Jesper Dangaard Brouer
0 siblings, 0 replies; 77+ messages in thread
From: Jesper Dangaard Brouer @ 2018-11-05 9:10 UTC (permalink / raw)
To: Aaron Lu
Cc: Saeed Mahameed, pstaszewski, eric.dumazet, netdev, Tariq Toukan,
ilias.apalodimas, yoel, mgorman, brouer, Jérôme Glisse
On Mon, 5 Nov 2018 14:28:36 +0800
Aaron Lu <aaron.lu@intel.com> wrote:
> On Sat, Nov 03, 2018 at 01:53:25PM +0100, Jesper Dangaard Brouer wrote:
> >
> > On Fri, 2 Nov 2018 22:20:24 +0800 Aaron Lu <aaron.lu@intel.com> wrote:
> >
> > > On Fri, Nov 02, 2018 at 12:40:37PM +0100, Jesper Dangaard Brouer wrote:
> > > > On Fri, 2 Nov 2018 13:23:56 +0800
> > > > Aaron Lu <aaron.lu@intel.com> wrote:
> > > >
> > > > > On Thu, Nov 01, 2018 at 08:23:19PM +0000, Saeed Mahameed wrote:
> > > > > > On Thu, 2018-11-01 at 23:27 +0800, Aaron Lu wrote:
> > > > > > > On Thu, Nov 01, 2018 at 10:22:13AM +0100, Jesper Dangaard Brouer
> > > > > > > wrote:
> > > > > > > ... ...
> > > > > > > > Section copied out:
> > > > > > > >
> > > > > > > > mlx5e_poll_tx_cq
> > > > > > > > |
> > > > > > > > --16.34%--napi_consume_skb
> > > > > > > > |
> > > > > > > > |--12.65%--__free_pages_ok
> > > > > > > > | |
> > > > > > > > | --11.86%--free_one_page
> > > > > > > > | |
> > > > > > > > | |--10.10%
> > > > > > > > --queued_spin_lock_slowpath
> > > > > > > > | |
> > > > > > > > | --0.65%--_raw_spin_lock
> > > > > > >
> > > > > > > This callchain looks like it is freeing higher order pages than order
> > > > > > > 0:
> > > > > > > __free_pages_ok is only called for pages whose order are bigger than
> > > > > > > 0.
> > > > > >
> > > > > > mlx5 rx uses only order 0 pages, so i don't know where these high order
> > > > > > tx SKBs are coming from..
> > > > >
> > > > > Perhaps here:
> > > > > __netdev_alloc_skb(), __napi_alloc_skb(), __netdev_alloc_frag() and
> > > > > __napi_alloc_frag() will all call page_frag_alloc(), which will use
> > > > > __page_frag_cache_refill() to get an order 3 page if possible, or fall
> > > > > back to an order 0 page if order 3 page is not available.
> > > > >
> > > > > I'm not sure if your workload will use the above code path though.
> > > >
> > > > TL;DR: this is order-0 pages (code walk-through proof below)
> > > >
> > > > To Aaron, the network stack *can* call __free_pages_ok() with order-0
> > > > pages, via:
> > > >
> > > > static void skb_free_head(struct sk_buff *skb)
> > > > {
> > > > unsigned char *head = skb->head;
> > > >
> > > > if (skb->head_frag)
> > > > skb_free_frag(head);
> > > > else
> > > > kfree(head);
> > > > }
> > > >
> > > > static inline void skb_free_frag(void *addr)
> > > > {
> > > > page_frag_free(addr);
> > > > }
> > > >
> > > > /*
> > > > * Frees a page fragment allocated out of either a compound or order 0 page.
> > > > */
> > > > void page_frag_free(void *addr)
> > > > {
> > > > struct page *page = virt_to_head_page(addr);
> > > >
> > > > if (unlikely(put_page_testzero(page)))
> > > > __free_pages_ok(page, compound_order(page));
> > > > }
> > > > EXPORT_SYMBOL(page_frag_free);
> > >
> > > I think there is a problem here - order-0 pages are freed directly to
> > > buddy, bypassing per-cpu-pages. This might be the reason lock contention
> > > appeared on the free path.
> >
> > OMG - you just found a significant issue with the network stack's
> > interaction with the page allocator! This explains why I could not get
> > the PCP (Per-Cpu-Pages) system to have good performance in my
> > performance networking benchmarks. As we are basically only using the
> > alloc side of PCP, and not the free side.
>
> Exactly.
>
> > We have spent years adding different driver-level recycle tricks to
> > avoid this code path getting activated, exactly because it is rather
> > slow and problematic that we hit this zone->lock.
>
> I can see that when this code path is hit, it causes unnecessary taking of
> the zone lock for order-0 pages and causes lock contention.
>
> >
> > > Can someone apply below diff and see if lock contention is gone?
> >
> > I have also applied and tested this patch, and yes the lock contention
> > is gone. As mentioned, it is rather difficult to hit this code path, as
> > the driver page recycle mechanism tries to hide/avoid it, but mlx5 +
> > page_pool + CPU-map recycling have a known weakness that bypasses the
> > driver page recycle scheme (that I've not fixed yet). I observed a 7%
> > speedup for this micro benchmark.
>
> Good to know this, I will prepare a formal patch.
I wonder if this code is still missing something. I was looking at
using the put_devmap_managed_page() infrastructure, but I realized that
page_frag_free() is also skipping this code path. I guess I can add
it later once I show/prove (performance-wise) that this is a good idea
(as we currently don't have any users).
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index e2ef1c17942f..65c0ae13215a 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -4554,8 +4554,14 @@ void page_frag_free(void *addr)
> > > {
> > > struct page *page = virt_to_head_page(addr);
> > >
> > > - if (unlikely(put_page_testzero(page)))
> > > - __free_pages_ok(page, compound_order(page));
> > > + if (unlikely(put_page_testzero(page))) {
> > > + unsigned int order = compound_order(page);
> > > +
> > > + if (order == 0)
> > > + free_unref_page(page);
> > > + else
> > > + __free_pages_ok(page, order);
> > > + }
> > > }
> > > EXPORT_SYMBOL(page_frag_free);
> >
> > Thank you Aaron for spotting this!!!
>
> Which is impossible without your analysis :-)
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-04 0:24 ` Paweł Staszewski
@ 2018-11-05 20:17 ` Jesper Dangaard Brouer
2018-11-08 0:59 ` Paweł Staszewski
2018-11-07 21:06 ` David Ahern
1 sibling, 1 reply; 77+ messages in thread
From: Jesper Dangaard Brouer @ 2018-11-05 20:17 UTC (permalink / raw)
To: Paweł Staszewski; +Cc: David Ahern, netdev, Yoel Caspersen, brouer
On Sun, 4 Nov 2018 01:24:03 +0100 Paweł Staszewski <pstaszewski@itcare.pl> wrote:
> And today, again, after applying the patch for the page allocator - reached
> 64/64 Gbit/s again
>
> with only 50-60% cpu load
Great.
> today no slowpath hit for networking :)
>
> But again dropped packets at 64 Gbit RX and 64 TX ....
> And as it should not be a PCI Express limit - I think something more is
Well, this does sound like a PCIe bandwidth limit to me.
See the PCIe BW here: https://en.wikipedia.org/wiki/PCI_Express
You likely have PCIe v3, where 1 lane has 984.6 MBytes/s or 7.87 Gbit/s.
Thus, x16 lanes have 15.75 GBytes/s or 126 Gbit/s. It does say "in each
direction", but you are also forwarding this RX->TX on both ports of a
dual-port NIC that is sharing the same PCIe slot.
> going on there - and hard to catch - because perf top doesn't change
> besides there being no queued slowpath hit now
>
> I have now also ordered Intel cards to compare - but 3 weeks ETA
> Faster - in 3 days - I will have Mellanox ConnectX-5 - so I can
> separate traffic to two different x16 PCIe buses
I do think you need to separate traffic to two different x16 PCIe
slots. I have found that the ConnectX-5 has significantly better
packet-per-sec performance than the ConnectX-4, but that is not your
use-case (max BW). I've not tested these NICs for maximum
_bidirectional_ bandwidth limits, I've only made sure I can do 100G
unidirectional, which can hit some funny motherboard memory limits
(remember to equip motherboard with 4 RAM blocks for full memory BW).
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-04 0:24 ` Paweł Staszewski
2018-11-05 20:17 ` Jesper Dangaard Brouer
@ 2018-11-07 21:06 ` David Ahern
2018-11-08 13:33 ` Paweł Staszewski
1 sibling, 1 reply; 77+ messages in thread
From: David Ahern @ 2018-11-07 21:06 UTC (permalink / raw)
To: Paweł Staszewski, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
On 11/3/18 6:24 PM, Paweł Staszewski wrote:
>> Does your setup have any other device types besides physical ports with
>> VLANs (e.g., any macvlans or bonds)?
>>
>>
> no.
> just
> phy(mlnx)->vlans only config
VLAN and non-VLAN (and a mix) seem to work ok. Patches are here:
https://github.com/dsahern/linux.git bpf/kernel-tables-wip
I got lazy with the vlan exports; right now it requires 8021q to be
builtin (CONFIG_VLAN_8021Q=y)
You can use the xdp_fwd sample:
make O=kbuild -C samples/bpf -j 8
Copy samples/bpf/xdp_fwd_kern.o and samples/bpf/xdp_fwd to the server
and run:
./xdp_fwd <list of NIC ports>
e.g., in my testing I run:
xdp_fwd eth1 eth2 eth3 eth4
All of the relevant forwarding ports need to be on the same command
line. This version populates a second map to verify the egress port has
XDP enabled.
>
> And today, again, after applying the patch for the page allocator - reached
> 64/64 Gbit/s again
>
> with only 50-60% cpu load
You should see the CPU load drop considerably.
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-05 20:17 ` Jesper Dangaard Brouer
@ 2018-11-08 0:59 ` Paweł Staszewski
2018-11-08 1:13 ` Paweł Staszewski
2018-11-08 14:43 ` Paweł Staszewski
0 siblings, 2 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-08 0:59 UTC (permalink / raw)
To: Jesper Dangaard Brouer; +Cc: David Ahern, netdev, Yoel Caspersen
W dniu 05.11.2018 o 21:17, Jesper Dangaard Brouer pisze:
> On Sun, 4 Nov 2018 01:24:03 +0100 Paweł Staszewski <pstaszewski@itcare.pl> wrote:
>
>> And today, again, after applying the patch for the page allocator - reached
>> 64/64 Gbit/s again
>>
>> with only 50-60% cpu load
> Great.
>
>> today no slowpath hit for networking :)
>>
>> But again dropped packets at 64 Gbit RX and 64 TX ....
>> And as it should not be a PCI Express limit - I think something more is
> Well, this does sound like a PCIe bandwidth limit to me.
>
> See the PCIe BW here: https://en.wikipedia.org/wiki/PCI_Express
>
> You likely have PCIe v3, where 1 lane has 984.6 MBytes/s or 7.87 Gbit/s.
> Thus, x16 lanes have 15.75 GBytes/s or 126 Gbit/s. It does say "in each
> direction", but you are also forwarding this RX->TX on both ports of a
> dual-port NIC that is sharing the same PCIe slot.
Network controller changed from one 2-port 100G ConnectX-4 to 2 separate
100G ConnectX-5 cards
PerfTop: 92239 irqs/sec kernel:99.4% exact: 0.0% [4000Hz cycles], (all, 56 CPUs)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
6.65% [kernel] [k] irq_entries_start
5.57% [kernel] [k] tasklet_action_common.isra.21
4.60% [kernel] [k] mlx5_eq_int
4.04% [kernel] [k] mlx5e_skb_from_cqe_mpwrq_linear
3.66% [kernel] [k] _raw_spin_lock_irqsave
3.58% [kernel] [k] mlx5e_sq_xmit
2.66% [kernel] [k] fib_table_lookup
2.52% [kernel] [k] _raw_spin_lock
2.51% [kernel] [k] build_skb
2.50% [kernel] [k] _raw_spin_lock_irq
2.04% [kernel] [k] try_to_wake_up
1.83% [kernel] [k] queued_spin_lock_slowpath
1.81% [kernel] [k] mlx5e_poll_tx_cq
1.65% [kernel] [k] do_idle
1.50% [kernel] [k] mlx5e_poll_rx_cq
1.34% [kernel] [k] __sched_text_start
1.32% [kernel] [k] cmd_exec
1.30% [kernel] [k] cmd_work_handler
1.16% [kernel] [k] vlan_do_receive
1.15% [kernel] [k] memcpy_erms
1.15% [kernel] [k] __dev_queue_xmit
1.07% [kernel] [k] mlx5_cmd_comp_handler
1.06% [kernel] [k] sched_ttwu_pending
1.00% [kernel] [k] ipt_do_table
0.98% [kernel] [k] ip_finish_output2
0.92% [kernel] [k] pfifo_fast_dequeue
0.88% [kernel] [k] mlx5e_handle_rx_cqe_mpwrq
0.78% [kernel] [k] dev_gro_receive
0.78% [kernel] [k] mlx5e_napi_poll
0.76% [kernel] [k] mlx5e_post_rx_mpwqes
0.70% [kernel] [k] process_one_work
0.67% [kernel] [k] __netif_receive_skb_core
0.65% [kernel] [k] __build_skb
0.63% [kernel] [k] llist_add_batch
0.62% [kernel] [k] tcp_gro_receive
0.60% [kernel] [k] inet_gro_receive
0.59% [kernel] [k] ip_route_input_rcu
0.59% [kernel] [k] rcu_irq_exit
0.56% [kernel] [k] napi_complete_done
0.52% [kernel] [k] kmem_cache_alloc
0.48% [kernel] [k] __softirqentry_text_start
0.48% [kernel] [k] mlx5e_xmit
0.47% [kernel] [k] __queue_work
0.46% [kernel] [k] memset_erms
0.46% [kernel] [k] dev_hard_start_xmit
0.45% [kernel] [k] insert_work
0.45% [kernel] [k] enqueue_task_fair
0.44% [kernel] [k] __wake_up_common
0.43% [kernel] [k] finish_task_switch
0.43% [kernel] [k] kmem_cache_free_bulk
0.42% [kernel] [k] ip_forward
0.42% [kernel] [k] worker_thread
0.41% [kernel] [k] schedule
0.41% [kernel] [k] _raw_spin_unlock_irqrestore
0.40% [kernel] [k] netif_skb_features
0.40% [kernel] [k] queue_work_on
0.40% [kernel] [k] pfifo_fast_enqueue
0.39% [kernel] [k] vlan_dev_hard_start_xmit
0.39% [kernel] [k] page_frag_free
0.36% [kernel] [k] swiotlb_map_page
0.36% [kernel] [k] update_cfs_rq_h_load
0.35% [kernel] [k] validate_xmit_skb.isra.142
0.35% [kernel] [k] dev_ifconf
0.35% [kernel] [k] check_preempt_curr
0.34% [kernel] [k] _raw_spin_trylock
0.34% [kernel] [k] rcu_idle_exit
0.33% [kernel] [k] ip_rcv_core.isra.20.constprop.25
0.33% [kernel] [k] __qdisc_run
0.33% [kernel] [k] skb_release_data
0.32% [kernel] [k] native_sched_clock
0.30% [kernel] [k] add_interrupt_randomness
0.29% [kernel] [k] interrupt_entry
0.28% [kernel] [k] skb_gro_receive
0.26% [kernel] [k] read_tsc
0.26% [kernel] [k] __get_xps_queue_idx
0.26% [kernel] [k] inet_gifconf
0.26% [kernel] [k] skb_segment
0.25% [kernel] [k] __tasklet_schedule_common
0.25% [kernel] [k] smpboot_thread_fn
0.23% [kernel] [k] __update_load_avg_se
0.22% [kernel] [k] tcp4_gro_receive
Not much traffic now:
bwm-ng v0.6.1 (probing every 0.500s), press 'h' for help
input: /proc/net/dev type: rate
| iface Rx Tx Total
==============================================================================
enp175s0:           6.95 Gb/s            4.20 Gb/s           11.15 Gb/s
enp216s0:           4.23 Gb/s            6.98 Gb/s           11.21 Gb/s
------------------------------------------------------------------------------
total:             11.18 Gb/s           11.18 Gb/s           22.37 Gb/s
bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
input: /proc/net/dev type: rate
| iface Rx Tx Total
==============================================================================
enp175s0: 700264.50 P/s 923890.25 P/s 1624154.75 P/s
enp216s0: 932598.81 P/s 708771.50 P/s 1641370.25 P/s
------------------------------------------------------------------------------
total: 1632863.38 P/s 1632661.75 P/s 3265525.00 P/s
>
>
>> going on there - and hard to catch - because perf top doesn't change
>> besides there being no queued slowpath hit now
>>
>> I have now also ordered Intel cards to compare - but 3 weeks ETA
>> Faster - in 3 days - I will have Mellanox ConnectX-5 - so I can
>> separate traffic to two different x16 PCIe buses
> I do think you need to separate traffic to two different x16 PCIe
> slots. I have found that the ConnectX-5 has significantly better
> packet-per-sec performance than the ConnectX-4, but that is not your
> use-case (max BW). I've not tested these NICs for maximum
> _bidirectional_ bandwidth limits, I've only made sure I can do 100G
> unidirectional, which can hit some funny motherboard memory limits
> (remember to equip motherboard with 4 RAM blocks for full memory BW).
>
Yes, memory channels are separate and there are 4 modules per CPU :)
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-08 0:59 ` Paweł Staszewski
@ 2018-11-08 1:13 ` Paweł Staszewski
2018-11-08 14:43 ` Paweł Staszewski
1 sibling, 0 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-08 1:13 UTC (permalink / raw)
To: Jesper Dangaard Brouer; +Cc: David Ahern, netdev, Yoel Caspersen
W dniu 08.11.2018 o 01:59, Paweł Staszewski pisze:
>
>
> W dniu 05.11.2018 o 21:17, Jesper Dangaard Brouer pisze:
>> On Sun, 4 Nov 2018 01:24:03 +0100 Paweł Staszewski
>> <pstaszewski@itcare.pl> wrote:
>>
>>> And today, again, after applying the patch for the page allocator - reached
>>> 64/64 Gbit/s again
>>>
>>> with only 50-60% cpu load
>> Great.
>>
>>> today no slowpath hit for networking :)
>>>
>>> But again dropped packets at 64 Gbit RX and 64 TX ....
>>> And as it should not be a PCI Express limit - I think something more is
>> Well, this does sound like a PCIe bandwidth limit to me.
>>
>> See the PCIe BW here: https://en.wikipedia.org/wiki/PCI_Express
>>
>> You likely have PCIe v3, where 1 lane has 984.6 MBytes/s or 7.87 Gbit/s.
>> Thus, x16 lanes have 15.75 GBytes/s or 126 Gbit/s. It does say "in each
>> direction", but you are also forwarding this RX->TX on both ports of a
>> dual-port NIC that is sharing the same PCIe slot.
> Network controller changed from one 2-port 100G ConnectX-4 to 2 separate
> 100G ConnectX-5 cards
>
>
> PerfTop: 92239 irqs/sec kernel:99.4% exact: 0.0% [4000Hz cycles], (all, 56 CPUs)
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>
> 6.65% [kernel] [k] irq_entries_start
> 5.57% [kernel] [k] tasklet_action_common.isra.21
> 4.60% [kernel] [k] mlx5_eq_int
> 4.04% [kernel] [k] mlx5e_skb_from_cqe_mpwrq_linear
> 3.66% [kernel] [k] _raw_spin_lock_irqsave
> 3.58% [kernel] [k] mlx5e_sq_xmit
> 2.66% [kernel] [k] fib_table_lookup
> 2.52% [kernel] [k] _raw_spin_lock
> 2.51% [kernel] [k] build_skb
> 2.50% [kernel] [k] _raw_spin_lock_irq
> 2.04% [kernel] [k] try_to_wake_up
> 1.83% [kernel] [k] queued_spin_lock_slowpath
> 1.81% [kernel] [k] mlx5e_poll_tx_cq
> 1.65% [kernel] [k] do_idle
> 1.50% [kernel] [k] mlx5e_poll_rx_cq
> 1.34% [kernel] [k] __sched_text_start
> 1.32% [kernel] [k] cmd_exec
> 1.30% [kernel] [k] cmd_work_handler
> 1.16% [kernel] [k] vlan_do_receive
> 1.15% [kernel] [k] memcpy_erms
> 1.15% [kernel] [k] __dev_queue_xmit
> 1.07% [kernel] [k] mlx5_cmd_comp_handler
> 1.06% [kernel] [k] sched_ttwu_pending
> 1.00% [kernel] [k] ipt_do_table
> 0.98% [kernel] [k] ip_finish_output2
> 0.92% [kernel] [k] pfifo_fast_dequeue
> 0.88% [kernel] [k] mlx5e_handle_rx_cqe_mpwrq
> 0.78% [kernel] [k] dev_gro_receive
> 0.78% [kernel] [k] mlx5e_napi_poll
> 0.76% [kernel] [k] mlx5e_post_rx_mpwqes
> 0.70% [kernel] [k] process_one_work
> 0.67% [kernel] [k] __netif_receive_skb_core
> 0.65% [kernel] [k] __build_skb
> 0.63% [kernel] [k] llist_add_batch
> 0.62% [kernel] [k] tcp_gro_receive
> 0.60% [kernel] [k] inet_gro_receive
> 0.59% [kernel] [k] ip_route_input_rcu
> 0.59% [kernel] [k] rcu_irq_exit
> 0.56% [kernel] [k] napi_complete_done
> 0.52% [kernel] [k] kmem_cache_alloc
> 0.48% [kernel] [k] __softirqentry_text_start
> 0.48% [kernel] [k] mlx5e_xmit
> 0.47% [kernel] [k] __queue_work
> 0.46% [kernel] [k] memset_erms
> 0.46% [kernel] [k] dev_hard_start_xmit
> 0.45% [kernel] [k] insert_work
> 0.45% [kernel] [k] enqueue_task_fair
> 0.44% [kernel] [k] __wake_up_common
> 0.43% [kernel] [k] finish_task_switch
> 0.43% [kernel] [k] kmem_cache_free_bulk
> 0.42% [kernel] [k] ip_forward
> 0.42% [kernel] [k] worker_thread
> 0.41% [kernel] [k] schedule
> 0.41% [kernel] [k] _raw_spin_unlock_irqrestore
> 0.40% [kernel] [k] netif_skb_features
> 0.40% [kernel] [k] queue_work_on
> 0.40% [kernel] [k] pfifo_fast_enqueue
> 0.39% [kernel] [k] vlan_dev_hard_start_xmit
> 0.39% [kernel] [k] page_frag_free
> 0.36% [kernel] [k] swiotlb_map_page
> 0.36% [kernel] [k] update_cfs_rq_h_load
> 0.35% [kernel] [k] validate_xmit_skb.isra.142
> 0.35% [kernel] [k] dev_ifconf
> 0.35% [kernel] [k] check_preempt_curr
> 0.34% [kernel] [k] _raw_spin_trylock
> 0.34% [kernel] [k] rcu_idle_exit
> 0.33% [kernel] [k] ip_rcv_core.isra.20.constprop.25
> 0.33% [kernel] [k] __qdisc_run
> 0.33% [kernel] [k] skb_release_data
> 0.32% [kernel] [k] native_sched_clock
> 0.30% [kernel] [k] add_interrupt_randomness
> 0.29% [kernel] [k] interrupt_entry
> 0.28% [kernel] [k] skb_gro_receive
> 0.26% [kernel] [k] read_tsc
> 0.26% [kernel] [k] __get_xps_queue_idx
> 0.26% [kernel] [k] inet_gifconf
> 0.26% [kernel] [k] skb_segment
> 0.25% [kernel] [k] __tasklet_schedule_common
> 0.25% [kernel] [k] smpboot_thread_fn
> 0.23% [kernel] [k] __update_load_avg_se
> 0.22% [kernel] [k] tcp4_gro_receive
>
>
> Not much traffic now:
> bwm-ng v0.6.1 (probing every 0.500s), press 'h' for help
> input: /proc/net/dev type: rate
> | iface Rx Tx Total
> ==============================================================================
>
> enp175s0:           6.95 Gb/s            4.20 Gb/s           11.15 Gb/s
> enp216s0:           4.23 Gb/s            6.98 Gb/s           11.21 Gb/s
> ------------------------------------------------------------------------------
> total:             11.18 Gb/s           11.18 Gb/s           22.37 Gb/s
>
> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
> input: /proc/net/dev type: rate
> | iface Rx Tx Total
> ==============================================================================
> enp175s0: 700264.50 P/s 923890.25 P/s 1624154.75 P/s
> enp216s0: 932598.81 P/s 708771.50 P/s 1641370.25 P/s
> ------------------------------------------------------------------------------
> total: 1632863.38 P/s 1632661.75 P/s 3265525.00 P/s
>
>
>
>
Also, is it normal that some kworker processes take 10%+ of CPU?
top output below:
2913 root 20 0 0 0 0 I 10.3 0.0 6:58.29
kworker/u112:1-
7 root 20 0 0 0 0 I 8.6 0.0 6:17.18
kworker/u112:0-
10289 root 20 0 0 0 0 I 6.6 0.0 6:33.90
kworker/u112:4-
2939 root 20 0 0 0 0 R 3.6 0.0 7:37.68
kworker/u112:2-
4557 root 20 0 0 0 0 I 1.3 0.0 0:08.82
kworker/45:4-ev
6775 root 20 0 0 0 0 I 1.3 0.0 0:26.30
kworker/50:4-ev
6833 root 20 0 0 0 0 D 1.3 0.0 0:04.96
kworker/15:0+ev
6840 root 20 0 0 0 0 I 1.3 0.0 0:09.32
kworker/55:2-ev
6874 root 20 0 0 0 0 D 1.3 0.0 0:08.51
kworker/53:0+ev
7710 root 20 0 0 0 0 I 1.3 0.0 0:07.78
kworker/14:1-ev
12075 root 20 0 0 0 0 I 1.3 0.0 1:19.22
kworker/23:3-ev
31209 root 20 0 0 0 0 I 1.3 0.0 0:07.02
kworker/20:1-ev
32351 root 20 0 0 0 0 R 1.3 0.0 0:06.99
kworker/51:2+ev
39869 root 20 0 0 0 0 D 1.3 0.0 0:06.15
kworker/42:0+ev
39959 root 20 0 0 0 0 I 1.3 0.0 0:16.23
kworker/51:1-ev
42858 root 20 0 0 0 0 I 1.3 0.0 0:47.72
kworker/27:2-ev
43281 root 20 0 0 0 0 I 1.3 0.0 0:14.99
kworker/14:4-ev
43282 root 20 0 0 0 0 I 1.3 0.0 0:13.38
kworker/16:1-ev
43389 root 20 0 0 0 0 D 1.3 0.0 0:08.92
kworker/54:2+ev
45214 root 20 0 0 0 0 I 1.3 0.0 0:05.82
kworker/55:0-ev
46894 root 20 0 0 0 0 I 1.3 0.0 0:04.11
kworker/46:1-ev
47027 root 20 0 0 0 0 D 1.3 0.0 0:03.79
kworker/47:1+ev
47129 root 20 0 0 0 0 D 1.3 0.0 0:03.15
kworker/52:0+ev
47133 root 20 0 0 0 0 I 1.3 0.0 0:03.19
kworker/49:1-ev
47179 root 20 0 0 0 0 I 1.3 0.0 0:02.83
kworker/17:3-ev
48062 root 20 0 0 0 0 I 1.3 0.0 0:02.54
kworker/44:1-ev
48158 root 20 0 0 0 0 I 1.3 0.0 0:02.17
kworker/16:2-ev
48168 root 20 0 0 0 0 I 1.3 0.0 0:02.13
kworker/27:3-ev
48247 root 20 0 0 0 0 I 1.3 0.0 0:01.83
kworker/22:0-ev
48337 root 20 0 0 0 0 I 1.3 0.0 0:01.57
kworker/15:1-ev
48345 root 20 0 0 0 0 I 1.3 0.0 0:01.49
kworker/24:3-ev
49302 root 20 0 0 0 0 I 1.3 0.0 0:00.71
kworker/54:1-ev
49366 root 20 0 0 0 0 I 1.3 0.0 0:00.38
kworker/20:3-ev
49400 root 20 0 0 0 0 I 1.3 0.0 0:00.31
kworker/26:2-ev
49430 root 20 0 0 0 0 I 1.3 0.0 0:00.21
kworker/42:2-ev
49463 root 20 0 0 0 0 D 1.3 0.0 0:00.08
kworker/50:2+ev
51698 root 20 0 0 0 0 D 1.3 0.0 0:14.85
kworker/46:2+ev
54238 root 20 0 0 0 0 I 1.3 0.0 0:23.73
kworker/52:1-ev
2507 root 20 0 0 0 0 I 1.0 0.0 0:09.60
kworker/44:2-ev
4525 root 20 0 0 0 0 I 1.0 0.0 0:08.07
kworker/26:1-ev
4556 root 20 0 0 0 0 I 1.0 0.0 0:05.15
kworker/48:0-ev
4604 root 20 0 0 0 0 I 1.0 0.0 0:10.90
kworker/19:0-ev
5789 root 20 0 0 0 0 I 1.0 0.0 0:08.24
kworker/18:0-ev
6868 root 20 0 0 0 0 I 1.0 0.0 0:09.68
kworker/47:0-ev
6900 root 20 0 0 0 0 I 1.0 0.0 0:28.83
kworker/18:1-ev
7764 root 20 0 0 0 0 I 1.0 0.0 0:03.00
kworker/49:2-ev
12045 root 20 0 0 0 0 I 1.0 0.0 1:16.98
kworker/24:2-ev
32218 root 20 0 0 0 0 I 1.0 0.0 0:04.13
kworker/45:2-ev
34082 root 20 0 0 0 0 I 1.0 0.0 0:06.29
kworker/17:1-ev
39791 root 20 0 0 0 0 I 1.0 0.0 0:19.51
kworker/21:4-ev
39973 root 20 0 0 0 0 I 1.0 0.0 0:17.12
kworker/53:2-ev
43223 root 20 0 0 0 0 I 1.0 0.0 0:07.88
kworker/25:0-ev
43295 root 20 0 0 0 0 I 1.0 0.0 0:10.89
kworker/22:4-ev
46055 root 20 0 0 0 0 I 1.0 0.0 0:04.00
kworker/21:2-ev
46077 root 20 0 0 0 0 I 1.0 0.0 0:04.62
kworker/19:1-ev
47204 root 20 0 0 0 0 I 1.0 0.0 0:03.03
kworker/25:2-ev
47989 root 20 0 0 0 0 I 1.0 0.0 0:02.65
kworker/43:1-ev
49127 root 20 0 0 0 0 I 1.0 0.0 0:01.10
kworker/48:2-ev
49317 root 20 0 0 0 0 I 1.0 0.0 0:00.56
kworker/23:1-ev
54191 root 20 0 0 0 0 R 1.0 0.0 0:30.27
kworker/43:2+ev
81 root 20 0 0 0 0 S 0.7 0.0 0:50.27
ksoftirqd/14
87 root 20 0 0 0 0 S 0.7 0.0 1:02.92
ksoftirqd/15
102 root 20 0 0 0 0 S 0.7 0.0 0:29.78
ksoftirqd/18
117 root 20 0 0 0 0 S 0.7 0.0 0:30.73
ksoftirqd/21
127 root 20 0 0 0 0 S 0.7 0.0 0:24.45
ksoftirqd/23
137 root 20 0 0 0 0 S 0.7 0.0 0:24.94
ksoftirqd/25
142 root 20 0 0 0 0 S 0.7 0.0 0:21.74
ksoftirqd/26
222 root 20 0 0 0 0 S 0.7 0.0 0:27.83
ksoftirqd/42
227 root 20 0 0 0 0 S 0.7 0.0 0:25.35
ksoftirqd/43
242 root 20 0 0 0 0 S 0.7 0.0 0:21.40
ksoftirqd/46
267 root 20 0 0 0 0 S 0.7 0.0 0:08.62
ksoftirqd/51
5174 root 20 0 0 0 0 I 0.7 0.0 5:57.10
kworker/u112:3-
>
>
>>
>>
>>> going on there - and hard to catch - because perf top hasn't changed,
>>> besides there being no queued slowpath hit now
>>>
>>> I have now also ordered Intel cards to compare - but 3 weeks ETA.
>>> Faster - in 3 days - I will have Mellanox ConnectX-5 cards, so I can
>>> separate the traffic onto two different x16 PCIe buses.
>> I do think you need to separate traffic to two different x16 PCIe
>> slots. I have found that the ConnectX-5 is significantly faster
>> packet-per-sec performance than ConnectX-4, but that is not your
>> use-case (max BW). I've not tested these NICs for maximum
>> _bidirectional_ bandwidth limits, I've only made sure I can do 100G
>> unidirectional, which can hit some funny motherboard memory limits
>> (remember to equip motherboard with 4 RAM blocks for full memory BW).
>>
> Yes memory channels are separated and there are 4 modules per cpu :)
>
>
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-07 21:06 ` David Ahern
@ 2018-11-08 13:33 ` Paweł Staszewski
2018-11-08 16:06 ` David Ahern
0 siblings, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-08 13:33 UTC (permalink / raw)
To: David Ahern, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
W dniu 07.11.2018 o 22:06, David Ahern pisze:
> On 11/3/18 6:24 PM, Paweł Staszewski wrote:
>>> Does your setup have any other device types besides physical ports with
>>> VLANs (e.g., any macvlans or bonds)?
>>>
>>>
>> no.
>> just
>> phy(mlnx)->vlans only config
> VLAN and non-VLAN (and a mix) seem to work ok. Patches are here:
> https://github.com/dsahern/linux.git bpf/kernel-tables-wip
>
> I got lazy with the vlan exports; right now it requires 8021q to be
> builtin (CONFIG_VLAN_8021Q=y)
>
> You can use the xdp_fwd sample:
> make O=kbuild -C samples/bpf -j 8
>
> Copy samples/bpf/xdp_fwd_kern.o and samples/bpf/xdp_fwd to the server
> and run:
> ./xdp_fwd <list of NIC ports>
>
> e.g., in my testing I run:
> xdp_fwd eth1 eth2 eth3 eth4
>
> All of the relevant forwarding ports need to be on the same command
> line. This version populates a second map to verify the egress port has
> XDP enabled.
Installed today on a lab server with a Mellanox ConnectX-4.
Trying some simple static routing first - but after enabling the xdp
program, the receiver is not receiving frames.
The route table is as simple as possible for the tests :)
ICMP ping test sent from 192.168.22.237 to 172.16.0.2 - incoming
packets on vlan 4081.
ip r
default via 192.168.22.236 dev vlan4081
172.16.0.0/30 dev vlan1740 proto kernel scope link src 172.16.0.1
192.168.22.0/24 dev vlan4081 proto kernel scope link src 192.168.22.205
neigh table:
ip neigh ls
192.168.22.237 dev vlan4081 lladdr 00:25:90:fb:a6:8d REACHABLE
172.16.0.2 dev vlan1740 lladdr ac:1f:6b:2c:2e:5a REACHABLE
and interfaces:
4: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state
UP mode DEFAULT group default qlen 1000
link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state
UP mode DEFAULT group default qlen 1000
link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP mode DEFAULT group default qlen 1000
link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP mode DEFAULT group default qlen 1000
link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp/id:5 qdisc
mq state UP group default qlen 1000
link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
inet6 fe80::ae1f:6bff:fe07:c891/64 scope link
valid_lft forever preferred_lft forever
6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP group default qlen 1000
link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
inet 192.168.22.205/24 scope global vlan4081
valid_lft forever preferred_lft forever
inet6 fe80::ae1f:6bff:fe07:c890/64 scope link
valid_lft forever preferred_lft forever
7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP group default qlen 1000
link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
inet 172.16.0.1/30 scope global vlan1740
valid_lft forever preferred_lft forever
inet6 fe80::ae1f:6bff:fe07:c891/64 scope link
valid_lft forever preferred_lft forever
xdp program detached:
Receiving side tcpdump:
14:28:09.141233 IP 192.168.22.237 > 172.16.0.2: ICMP echo request, id
30227, seq 487, length 64
I can see the ICMP requests.
Enabling xdp:
./xdp_fwd enp175s0f1 enp175s0f0
4: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp qdisc mq
state UP mode DEFAULT group default qlen 1000
link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
prog/xdp id 5 tag 3c231ff1e5e77f3f
5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp qdisc mq
state UP mode DEFAULT group default qlen 1000
link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
prog/xdp id 5 tag 3c231ff1e5e77f3f
6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP mode DEFAULT group default qlen 1000
link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP mode DEFAULT group default qlen 1000
link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
Receiving side: no ICMP echo requests incoming on the interface.
And some ethtool stats for the XDP interface that receives the ICMP
requests to be forwarded from the sender:
ethtool -S enp175s0f0 | grep 'rx_xdp_redirect'
rx_xdp_redirect: 321
ethtool stats for the interface that should forward the ICMP requests
to the receiver on vlan id 1740:
ethtool -S enp175s0f1 | grep 'tx_xdp'
tx_xdp_xmit: 0
tx_xdp_full: 0
tx_xdp_err: 0
tx_xdp_cqes: 0
No frames tx-ed.
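To confirm whether redirected frames ever reach the egress ring, the XDP counters can be sampled over an interval. A minimal helper sketch - the `xdp_counter` and `rate` names are made up here, not part of any existing tool:

```shell
# Read one named counter from `ethtool -S` output.
xdp_counter() { ethtool -S "$1" | awk -v c="$2" '$1 == c":" { print $2 }'; }

# Per-second delta between two counter samples: rate OLD NEW SECONDS
rate() { echo $(( ($2 - $1) / $3 )); }

# Usage against a live NIC (interface name is an assumption):
#   a=$(xdp_counter enp175s0f1 tx_xdp_xmit); sleep 1
#   b=$(xdp_counter enp175s0f1 tx_xdp_xmit)
#   rate "$a" "$b" 1
rate 100 421 1    # prints 321
```

`rx_xdp_redirect` increasing on f0 while `tx_xdp_xmit` stays at 0 on f1 means the redirect itself succeeds but nothing makes it out the xmit path.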
>
>> And today, after applying the patch for the page allocator - again
>> reached 64/64 Gbit/s
>>
>> with only 50-60% cpu load
> you should see the cpu load drop considerably.
>
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-08 0:59 ` Paweł Staszewski
2018-11-08 1:13 ` Paweł Staszewski
@ 2018-11-08 14:43 ` Paweł Staszewski
1 sibling, 0 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-08 14:43 UTC (permalink / raw)
To: Jesper Dangaard Brouer; +Cc: David Ahern, netdev, Yoel Caspersen
W dniu 08.11.2018 o 01:59, Paweł Staszewski pisze:
>
>
> W dniu 05.11.2018 o 21:17, Jesper Dangaard Brouer pisze:
>> On Sun, 4 Nov 2018 01:24:03 +0100 Paweł Staszewski
>> <pstaszewski@itcare.pl> wrote:
>>
>>> And today, after applying the patch for the page allocator - again
>>> reached 64/64 Gbit/s
>>>
>>> with only 50-60% cpu load
>> Great.
>>
>>> today no slowpath hit for networking :)
>>>
>>> But again dropped packets at 64 Gbit RX and 64 Gbit TX ....
>>> And as it should not be a PCI Express limit - I think something more is
>> Well, this does sound like a PCIe bandwidth limit to me.
>>
>> See the PCIe BW here: https://en.wikipedia.org/wiki/PCI_Express
>>
>> You likely have PCIe v3, where 1-lane have 984.6 MBytes/s or 7.87 Gbit/s
>> Thus, x16 lanes have 15.75 GBytes/s or 126 Gbit/s. It does say "in each
>> direction", but you are also forwarding this RX->TX on both (dual) ports
>> NIC that is sharing the same PCIe slot.
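The arithmetic behind that ceiling can be sketched as a quick back-of-envelope check (not a measurement):

```shell
# PCIe v3 runs 8 GT/s per lane; after 128b/130b encoding that leaves
# roughly 984.6 MB/s of usable bandwidth per lane.
awk 'BEGIN {
    per_lane_mb = 984.6                  # MB/s per lane, PCIe v3
    gbit = per_lane_mb * 16 * 8 / 1000   # x16 slot, MB/s -> Gbit/s
    printf "x16 PCIe v3: %.1f Gbit/s per direction\n", gbit
}'
# -> x16 PCIe v3: 126.0 Gbit/s per direction
```

With a dual-port NIC forwarding RX->TX in a single slot, both ports share that per-direction budget, so roughly 63 Gbit/s per port is the practical ceiling - close to the 64/64 Gbit/s wall reported above.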
> Network controller changed from one 2-port 100G ConnectX-4 to 2
> separate 100G ConnectX-5 cards
>
>
> PerfTop: 92239 irqs/sec kernel:99.4% exact: 0.0% [4000Hz cycles], (all, 56 CPUs)
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>
> 6.65% [kernel] [k] irq_entries_start
> 5.57% [kernel] [k] tasklet_action_common.isra.21
> 4.60% [kernel] [k] mlx5_eq_int
> 4.04% [kernel] [k] mlx5e_skb_from_cqe_mpwrq_linear
> 3.66% [kernel] [k] _raw_spin_lock_irqsave
> 3.58% [kernel] [k] mlx5e_sq_xmit
> 2.66% [kernel] [k] fib_table_lookup
> 2.52% [kernel] [k] _raw_spin_lock
> 2.51% [kernel] [k] build_skb
> 2.50% [kernel] [k] _raw_spin_lock_irq
> 2.04% [kernel] [k] try_to_wake_up
> 1.83% [kernel] [k] queued_spin_lock_slowpath
> 1.81% [kernel] [k] mlx5e_poll_tx_cq
> 1.65% [kernel] [k] do_idle
> 1.50% [kernel] [k] mlx5e_poll_rx_cq
> 1.34% [kernel] [k] __sched_text_start
> 1.32% [kernel] [k] cmd_exec
> 1.30% [kernel] [k] cmd_work_handler
> 1.16% [kernel] [k] vlan_do_receive
> 1.15% [kernel] [k] memcpy_erms
> 1.15% [kernel] [k] __dev_queue_xmit
> 1.07% [kernel] [k] mlx5_cmd_comp_handler
> 1.06% [kernel] [k] sched_ttwu_pending
> 1.00% [kernel] [k] ipt_do_table
> 0.98% [kernel] [k] ip_finish_output2
> 0.92% [kernel] [k] pfifo_fast_dequeue
> 0.88% [kernel] [k] mlx5e_handle_rx_cqe_mpwrq
> 0.78% [kernel] [k] dev_gro_receive
> 0.78% [kernel] [k] mlx5e_napi_poll
> 0.76% [kernel] [k] mlx5e_post_rx_mpwqes
> 0.70% [kernel] [k] process_one_work
> 0.67% [kernel] [k] __netif_receive_skb_core
> 0.65% [kernel] [k] __build_skb
> 0.63% [kernel] [k] llist_add_batch
> 0.62% [kernel] [k] tcp_gro_receive
> 0.60% [kernel] [k] inet_gro_receive
> 0.59% [kernel] [k] ip_route_input_rcu
> 0.59% [kernel] [k] rcu_irq_exit
> 0.56% [kernel] [k] napi_complete_done
> 0.52% [kernel] [k] kmem_cache_alloc
> 0.48% [kernel] [k] __softirqentry_text_start
> 0.48% [kernel] [k] mlx5e_xmit
> 0.47% [kernel] [k] __queue_work
> 0.46% [kernel] [k] memset_erms
> 0.46% [kernel] [k] dev_hard_start_xmit
> 0.45% [kernel] [k] insert_work
> 0.45% [kernel] [k] enqueue_task_fair
> 0.44% [kernel] [k] __wake_up_common
> 0.43% [kernel] [k] finish_task_switch
> 0.43% [kernel] [k] kmem_cache_free_bulk
> 0.42% [kernel] [k] ip_forward
> 0.42% [kernel] [k] worker_thread
> 0.41% [kernel] [k] schedule
> 0.41% [kernel] [k] _raw_spin_unlock_irqrestore
> 0.40% [kernel] [k] netif_skb_features
> 0.40% [kernel] [k] queue_work_on
> 0.40% [kernel] [k] pfifo_fast_enqueue
> 0.39% [kernel] [k] vlan_dev_hard_start_xmit
> 0.39% [kernel] [k] page_frag_free
> 0.36% [kernel] [k] swiotlb_map_page
> 0.36% [kernel] [k] update_cfs_rq_h_load
> 0.35% [kernel] [k] validate_xmit_skb.isra.142
> 0.35% [kernel] [k] dev_ifconf
> 0.35% [kernel] [k] check_preempt_curr
> 0.34% [kernel] [k] _raw_spin_trylock
> 0.34% [kernel] [k] rcu_idle_exit
> 0.33% [kernel] [k] ip_rcv_core.isra.20.constprop.25
> 0.33% [kernel] [k] __qdisc_run
> 0.33% [kernel] [k] skb_release_data
> 0.32% [kernel] [k] native_sched_clock
> 0.30% [kernel] [k] add_interrupt_randomness
> 0.29% [kernel] [k] interrupt_entry
> 0.28% [kernel] [k] skb_gro_receive
> 0.26% [kernel] [k] read_tsc
> 0.26% [kernel] [k] __get_xps_queue_idx
> 0.26% [kernel] [k] inet_gifconf
> 0.26% [kernel] [k] skb_segment
> 0.25% [kernel] [k] __tasklet_schedule_common
> 0.25% [kernel] [k] smpboot_thread_fn
> 0.23% [kernel] [k] __update_load_avg_se
> 0.22% [kernel] [k] tcp4_gro_receive
>
>
> Not much traffic now:
> bwm-ng v0.6.1 (probing every 0.500s), press 'h' for help
> input: /proc/net/dev type: rate
> | iface Rx Tx Total
> ==============================================================================
> enp175s0: 6.95 Gb/s 4.20 Gb/s 11.15 Gb/s
> enp216s0: 4.23 Gb/s 6.98 Gb/s 11.21 Gb/s
> ------------------------------------------------------------------------------
> total: 11.18 Gb/s 11.18 Gb/s 22.37 Gb/s
>
> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
> input: /proc/net/dev type: rate
> | iface Rx Tx Total
> ==============================================================================
> enp175s0: 700264.50 P/s 923890.25 P/s 1624154.75 P/s
> enp216s0: 932598.81 P/s 708771.50 P/s 1641370.25 P/s
> ------------------------------------------------------------------------------
> total: 1632863.38 P/s 1632661.75 P/s 3265525.00 P/s
>
>
>
Updated perf top - more traffic now, 37 Gbit/s RX / 37 Gbit/s TX total:
bwm-ng v0.6.1 (probing every 0.500s), press 'h' for help
input: /proc/net/dev type: rate
/ iface Rx Tx Total
==============================================================================
enp175s0: 28.91 Gb/s 8.89 Gb/s 37.80 Gb/s
enp216s0: 8.91 Gb/s 28.95 Gb/s 37.86 Gb/s
------------------------------------------------------------------------------
total: 37.82 Gb/s 37.84 Gb/s 75.67 Gb/s
bwm-ng v0.6.1 (probing every 0.500s), press 'h' for help
input: /proc/net/dev type: rate
- iface Rx Tx Total
==============================================================================
enp175s0: 2721518.75 P/s 2460930.50 P/s 5182449.50 P/s
enp216s0: 2471451.25 P/s 2731946.25 P/s 5203397.50 P/s
------------------------------------------------------------------------------
total: 5192970.00 P/s 5192876.50 P/s 10385847.00 P/s
PerfTop: 56488 irqs/sec kernel:99.4% exact: 0.0% [4000Hz cycles], (all, 56 CPUs)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
10.41% [kernel] [k] mlx5e_skb_from_cqe_mpwrq_linear
7.73% [kernel] [k] mlx5e_sq_xmit
6.05% [kernel] [k] build_skb
5.63% [kernel] [k] fib_table_lookup
2.75% [kernel] [k] mlx5e_poll_rx_cq
2.74% [kernel] [k] memcpy_erms
2.33% [kernel] [k] vlan_do_receive
2.00% [kernel] [k] __dev_queue_xmit
1.81% [kernel] [k] ip_finish_output2
1.79% [kernel] [k] dev_gro_receive
1.78% [kernel] [k] ipt_do_table
1.78% [kernel] [k] mlx5e_handle_rx_cqe_mpwrq
1.76% [kernel] [k] pfifo_fast_dequeue
1.70% [kernel] [k] mlx5e_post_rx_mpwqes
1.52% [kernel] [k] mlx5e_poll_tx_cq
1.49% [kernel] [k] irq_entries_start
1.47% [kernel] [k] _raw_spin_lock
1.45% [kernel] [k] inet_gro_receive
1.42% [kernel] [k] __netif_receive_skb_core
1.39% [kernel] [k] mlx5_eq_int
1.39% [kernel] [k] tcp_gro_receive
1.23% [kernel] [k] __build_skb
1.14% [kernel] [k] ip_route_input_rcu
1.00% [kernel] [k] vlan_dev_hard_start_xmit
0.92% [kernel] [k] _raw_spin_lock_irqsave
0.89% [kernel] [k] kmem_cache_alloc
0.88% [kernel] [k] dev_hard_start_xmit
0.88% [kernel] [k] swiotlb_map_page
0.86% [kernel] [k] mlx5e_xmit
0.81% [kernel] [k] ip_forward
0.80% [kernel] [k] tasklet_action_common.isra.21
0.79% [kernel] [k] netif_skb_features
0.77% [kernel] [k] pfifo_fast_enqueue
0.66% [kernel] [k] validate_xmit_skb.isra.142
0.64% [kernel] [k] ip_rcv_core.isra.20.constprop.25
0.63% [kernel] [k] find_busiest_group
0.60% [kernel] [k] __qdisc_run
0.59% [kernel] [k] skb_release_data
0.59% [kernel] [k] skb_gro_receive
0.58% [kernel] [k] page_frag_free
0.53% [kernel] [k] skb_segment
0.52% [kernel] [k] try_to_wake_up
0.52% [kernel] [k] _raw_spin_lock_irq
0.50% [kernel] [k] tcp4_gro_receive
0.47% [kernel] [k] kmem_cache_free_bulk
0.45% [kernel] [k] mlx5e_page_release
0.43% [kernel] [k] _raw_spin_trylock
0.39% [kernel] [k] kmem_cache_free
0.38% [kernel] [k] __sched_text_start
0.38% [kernel] [k] sch_direct_xmit
0.38% [kernel] [k] do_idle
0.34% [kernel] [k] vlan_passthru_hard_header
0.34% [kernel] [k] cmd_exec
0.34% [kernel] [k] __local_bh_enable_ip
0.33% [kernel] [k] inet_lookup_ifaddr_rcu
0.33% [kernel] [k] skb_network_protocol
0.33% [kernel] [k] netdev_pick_tx
0.33% [kernel] [k] eth_type_trans
0.32% [kernel] [k] __get_xps_queue_idx
0.31% [kernel] [k] __slab_free.isra.79
0.29% [kernel] [k] mlx5e_xdp_handle
0.27% [kernel] [k] sched_ttwu_pending
0.26% [kernel] [k] cmd_work_handler
0.24% [kernel] [k] ip_finish_output
0.23% [kernel] [k] neigh_connected_output
0.23% [kernel] [k] napi_gro_receive
0.23% [kernel] [k] mlx5e_napi_poll
0.23% [kernel] [k] mlx5e_features_check
0.22% [kernel] [k] ip_output
0.21% [kernel] [k] ip_rcv_finish_core.isra.17
0.21% [kernel] [k] fib_validate_source
0.20% [kernel] [k] dev_ifconf
0.20% [kernel] [k] eth_header
0.20% [kernel] [k] __netdev_pick_tx
0.20% [kernel] [k] mlx5_cmd_comp_handler
0.19% [kernel] [k] memset_erms
0.18% [kernel] [k] __netif_receive_skb_one_core
0.18% [kernel] [k] __memcpy
0.18% [kernel] [k] queued_spin_lock_slowpath
0.18% [kernel] [k] nf_hook_slow
0.17% [kernel] [k] enqueue_task_fair
Also slightly modified the coalescing settings for the ConnectX-5
compared to the ConnectX-4:
ethtool -c enp175s0
Coalesce parameters for enp175s0:
Adaptive RX: off TX: on
stats-block-usecs: 0
sample-interval: 0
pkt-rate-low: 0
pkt-rate-high: 0
dmac: 32588
rx-usecs: 128
rx-frames: 128
rx-usecs-irq: 0
rx-frames-irq: 0
tx-usecs: 8
tx-frames: 128
tx-usecs-irq: 0
tx-frames-irq: 0
rx-usecs-low: 0
rx-frame-low: 0
tx-usecs-low: 0
tx-frame-low: 0
rx-usecs-high: 0
rx-frame-high: 0
tx-usecs-high: 0
tx-frame-high: 0
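For reference, the non-default values above can be reapplied in one place. A dry-run sketch: it only prints the `ethtool -C` commands (pipe to `sh` to apply); the interface name and the `emit_coalesce` helper are assumptions, not an existing script:

```shell
IFACE=${IFACE:-enp175s0}
emit_coalesce() {
    # Adaptive moderation: off on RX (fixed 128 usecs / 128 frames), on for TX.
    printf 'ethtool -C %s adaptive-rx off adaptive-tx on\n' "$IFACE"
    printf 'ethtool -C %s rx-usecs 128 rx-frames 128\n' "$IFACE"
    printf 'ethtool -C %s tx-usecs 8 tx-frames 128\n' "$IFACE"
}
emit_coalesce        # review the output, then: emit_coalesce | sh
```

Higher rx-usecs/rx-frames trades interrupt rate for latency, which is usually the right trade for a pure forwarding box.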
So far the CPU load looks better than in the previous configuration with
the 2-port 100G ConnectX-4:
Average: CPU %usr %nice %sys %iowait %irq %soft %steal
%guest %gnice %idle
Average: all 0.05 0.00 0.64 0.01 0.00 8.79 0.00
0.00 0.00 90.51
Average: 0 0.00 0.00 0.10 0.00 0.00 0.00 0.00
0.00 0.00 99.90
Average: 1 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 2 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 3 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 4 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 5 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 6 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 7 0.10 0.00 1.30 0.00 0.00 0.00 0.00
0.00 0.00 98.60
Average: 8 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 9 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 10 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 11 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 12 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 13 0.00 0.00 1.00 0.00 0.00 0.00 0.00
0.00 0.00 99.00
Average: 14 0.10 0.00 0.80 0.00 0.00 22.80 0.00
0.00 0.00 76.30
Average: 15 0.10 0.00 0.70 0.00 0.00 21.20 0.00
0.00 0.00 78.00
Average: 16 0.00 0.00 0.80 0.00 0.00 17.70 0.00
0.00 0.00 81.50
Average: 17 0.00 0.00 0.50 0.00 0.00 15.10 0.00
0.00 0.00 84.40
Average: 18 0.00 0.00 0.70 0.00 0.00 20.90 0.00
0.00 0.00 78.40
Average: 19 0.10 0.00 0.70 0.00 0.00 20.50 0.00
0.00 0.00 78.70
Average: 20 0.50 0.00 1.70 0.00 0.00 18.80 0.00
0.00 0.00 79.00
Average: 21 0.10 0.00 1.30 0.00 0.00 20.90 0.00
0.00 0.00 77.70
Average: 22 0.00 0.00 0.70 0.00 0.00 19.40 0.00
0.00 0.00 79.90
Average: 23 0.00 0.00 0.90 0.00 0.00 18.50 0.00
0.00 0.00 80.60
Average: 24 0.10 0.00 1.00 0.00 0.00 15.80 0.00
0.00 0.00 83.10
Average: 25 0.00 0.00 0.70 0.00 0.00 19.50 0.00
0.00 0.00 79.80
Average: 26 0.00 0.00 0.50 0.00 0.00 18.30 0.00
0.00 0.00 81.20
Average: 27 0.00 0.00 0.70 0.00 0.00 17.60 0.00
0.00 0.00 81.70
Average: 28 0.00 0.00 0.70 0.00 0.00 0.00 0.00
0.00 0.00 99.30
Average: 29 0.00 0.00 2.00 0.00 0.00 0.00 0.00
0.00 0.00 98.00
Average: 30 0.00 0.00 0.10 0.00 0.00 0.00 0.00
0.00 0.00 99.90
Average: 31 0.00 0.00 2.50 0.00 0.00 0.00 0.00
0.00 0.00 97.50
Average: 32 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 33 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 34 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 35 0.00 0.00 0.70 0.00 0.00 0.00 0.00
0.00 0.00 99.30
Average: 36 0.00 0.00 2.00 0.00 0.00 0.00 0.00
0.00 0.00 98.00
Average: 37 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 38 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 39 0.00 0.00 1.40 0.00 0.00 0.00 0.00
0.00 0.00 98.60
Average: 40 0.60 0.00 0.40 0.00 0.00 0.00 0.00
0.00 0.00 99.00
Average: 41 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 100.00
Average: 42 0.00 0.00 1.20 0.00 0.00 17.70 0.00
0.00 0.00 81.10
Average: 43 0.00 0.00 0.70 0.00 0.00 20.00 0.00
0.00 0.00 79.30
Average: 44 0.00 0.00 0.50 0.00 0.00 16.10 0.00
0.00 0.00 83.40
Average: 45 0.30 0.00 1.10 0.00 0.00 16.10 0.00
0.00 0.00 82.50
Average: 46 0.00 0.00 0.80 0.00 0.00 14.80 0.00
0.00 0.00 84.40
Average: 47 0.10 0.00 1.60 0.00 0.00 17.20 0.00
0.00 0.00 81.10
Average: 48 0.00 0.00 0.60 0.00 0.00 15.00 0.00
0.00 0.00 84.40
Average: 49 0.10 0.00 0.80 0.00 0.00 14.90 0.00
0.00 0.00 84.20
Average: 50 0.20 0.00 0.50 0.70 0.00 13.60 0.00
0.00 0.00 85.00
Average: 51 0.00 0.00 0.70 0.00 0.00 14.10 0.00
0.00 0.00 85.20
Average: 52 0.20 0.00 1.60 0.00 0.00 16.80 0.00
0.00 0.00 81.40
Average: 53 0.00 0.00 0.80 0.00 0.00 13.20 0.00
0.00 0.00 86.00
Average: 54 0.20 0.00 0.50 0.00 0.00 17.20 0.00
0.00 0.00 82.10
Average: 55 0.00 0.00 0.40 0.00 0.00 18.30 0.00
0.00 0.00 81.30
>
>
>
>>
>>
>>> going on there - and hard to catch - because perf top hasn't changed,
>>> besides there being no queued slowpath hit now
>>>
>>> I have now also ordered Intel cards to compare - but 3 weeks ETA.
>>> Faster - in 3 days - I will have Mellanox ConnectX-5 cards, so I can
>>> separate the traffic onto two different x16 PCIe buses.
>> I do think you need to separate traffic to two different x16 PCIe
>> slots. I have found that the ConnectX-5 is significantly faster
>> packet-per-sec performance than ConnectX-4, but that is not your
>> use-case (max BW). I've not tested these NICs for maximum
>> _bidirectional_ bandwidth limits, I've only made sure I can do 100G
>> unidirectional, which can hit some funny motherboard memory limits
>> (remember to equip motherboard with 4 RAM blocks for full memory BW).
>>
> Yes memory channels are separated and there are 4 modules per cpu :)
>
>
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-08 13:33 ` Paweł Staszewski
@ 2018-11-08 16:06 ` David Ahern
2018-11-08 16:25 ` Paweł Staszewski
2018-11-09 10:20 ` Paweł Staszewski
0 siblings, 2 replies; 77+ messages in thread
From: David Ahern @ 2018-11-08 16:06 UTC (permalink / raw)
To: Paweł Staszewski, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
On 11/8/18 6:33 AM, Paweł Staszewski wrote:
>
>
> W dniu 07.11.2018 o 22:06, David Ahern pisze:
>> On 11/3/18 6:24 PM, Paweł Staszewski wrote:
>>>> Does your setup have any other device types besides physical ports with
>>>> VLANs (e.g., any macvlans or bonds)?
>>>>
>>>>
>>> no.
>>> just
>>> phy(mlnx)->vlans only config
>> VLAN and non-VLAN (and a mix) seem to work ok. Patches are here:
>> https://github.com/dsahern/linux.git bpf/kernel-tables-wip
>>
>> I got lazy with the vlan exports; right now it requires 8021q to be
>> builtin (CONFIG_VLAN_8021Q=y)
>>
>> You can use the xdp_fwd sample:
>> make O=kbuild -C samples/bpf -j 8
>>
>> Copy samples/bpf/xdp_fwd_kern.o and samples/bpf/xdp_fwd to the server
>> and run:
>> ./xdp_fwd <list of NIC ports>
>>
>> e.g., in my testing I run:
>> xdp_fwd eth1 eth2 eth3 eth4
>>
>> All of the relevant forwarding ports need to be on the same command
>> line. This version populates a second map to verify the egress port has
>> XDP enabled.
> Installed today on a lab server with a Mellanox ConnectX-4.
>
> Trying some simple static routing first - but after enabling the xdp
> program, the receiver is not receiving frames.
>
> The route table is as simple as possible for the tests :)
>
> ICMP ping test sent from 192.168.22.237 to 172.16.0.2 - incoming
> packets on vlan 4081
>
> ip r
> default via 192.168.22.236 dev vlan4081
> 172.16.0.0/30 dev vlan1740 proto kernel scope link src 172.16.0.1
> 192.168.22.0/24 dev vlan4081 proto kernel scope link src 192.168.22.205
>
> neigh table:
> ip neigh ls
>
> 192.168.22.237 dev vlan4081 lladdr 00:25:90:fb:a6:8d REACHABLE
> 172.16.0.2 dev vlan1740 lladdr ac:1f:6b:2c:2e:5a REACHABLE
>
> and interfaces:
> 4: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state
> UP mode DEFAULT group default qlen 1000
> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
> 5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state
> UP mode DEFAULT group default qlen 1000
> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
> 6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> noqueue state UP mode DEFAULT group default qlen 1000
> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
> 7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> noqueue state UP mode DEFAULT group default qlen 1000
> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>
> 5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp/id:5 qdisc
> mq state UP group default qlen 1000
> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::ae1f:6bff:fe07:c891/64 scope link
> valid_lft forever preferred_lft forever
> 6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> noqueue state UP group default qlen 1000
> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
> inet 192.168.22.205/24 scope global vlan4081
> valid_lft forever preferred_lft forever
> inet6 fe80::ae1f:6bff:fe07:c890/64 scope link
> valid_lft forever preferred_lft forever
> 7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> noqueue state UP group default qlen 1000
> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
> inet 172.16.0.1/30 scope global vlan1740
> valid_lft forever preferred_lft forever
> inet6 fe80::ae1f:6bff:fe07:c891/64 scope link
> valid_lft forever preferred_lft forever
>
>
> xdp program detached:
> Receiving side tcpdump:
> 14:28:09.141233 IP 192.168.22.237 > 172.16.0.2: ICMP echo request, id
> 30227, seq 487, length 64
>
> I can see icmp requests
>
>
> enabling xdp
> ./xdp_fwd enp175s0f1 enp175s0f0
>
> 4: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp qdisc mq
> state UP mode DEFAULT group default qlen 1000
> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
> prog/xdp id 5 tag 3c231ff1e5e77f3f
> 5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp qdisc mq
> state UP mode DEFAULT group default qlen 1000
> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
> prog/xdp id 5 tag 3c231ff1e5e77f3f
> 6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> noqueue state UP mode DEFAULT group default qlen 1000
> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
> 7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> noqueue state UP mode DEFAULT group default qlen 1000
> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>
What hardware is this?
Start with:
echo 1 > /sys/kernel/debug/tracing/events/xdp/enable
cat /sys/kernel/debug/tracing/trace_pipe
From there, you can check the FIB lookups:
sysctl -w kernel.perf_event_max_stack=16
perf record -e fib:* -a -g -- sleep 5
perf script
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-08 16:06 ` David Ahern
@ 2018-11-08 16:25 ` Paweł Staszewski
2018-11-08 16:27 ` Paweł Staszewski
2018-11-09 10:20 ` Paweł Staszewski
1 sibling, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-08 16:25 UTC (permalink / raw)
To: David Ahern, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
W dniu 08.11.2018 o 17:06, David Ahern pisze:
> On 11/8/18 6:33 AM, Paweł Staszewski wrote:
>>
>> W dniu 07.11.2018 o 22:06, David Ahern pisze:
>>> On 11/3/18 6:24 PM, Paweł Staszewski wrote:
>>>>> Does your setup have any other device types besides physical ports with
>>>>> VLANs (e.g., any macvlans or bonds)?
>>>>>
>>>>>
>>>> no.
>>>> just
>>>> phy(mlnx)->vlans only config
>>> VLAN and non-VLAN (and a mix) seem to work ok. Patches are here:
>>> https://github.com/dsahern/linux.git bpf/kernel-tables-wip
>>>
>>> I got lazy with the vlan exports; right now it requires 8021q to be
>>> builtin (CONFIG_VLAN_8021Q=y)
>>>
>>> You can use the xdp_fwd sample:
>>> make O=kbuild -C samples/bpf -j 8
>>>
>>> Copy samples/bpf/xdp_fwd_kern.o and samples/bpf/xdp_fwd to the server
>>> and run:
>>> ./xdp_fwd <list of NIC ports>
>>>
>>> e.g., in my testing I run:
>>> xdp_fwd eth1 eth2 eth3 eth4
>>>
>>> All of the relevant forwarding ports need to be on the same command
>>> line. This version populates a second map to verify the egress port has
>>> XDP enabled.
>> Installed today on some lab server with mellanox connectx4
>>
>> And trying some simple static routing first - but after enabling xdp
>> program - receiver is not receiving frames
>>
>> Route table is simple as possible for tests :)
>>
>> icmp ping test send from 192.168.22.237 to 172.16.0.2 - incomming
>> packets on vlan 4081
>>
>> ip r
>> default via 192.168.22.236 dev vlan4081
>> 172.16.0.0/30 dev vlan1740 proto kernel scope link src 172.16.0.1
>> 192.168.22.0/24 dev vlan4081 proto kernel scope link src 192.168.22.205
>>
>> neigh table:
>> ip neigh ls
>>
>> 192.168.22.237 dev vlan4081 lladdr 00:25:90:fb:a6:8d REACHABLE
>> 172.16.0.2 dev vlan1740 lladdr ac:1f:6b:2c:2e:5a REACHABLE
>>
>> and interfaces:
>> 4: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state
>> UP mode DEFAULT group default qlen 1000
>> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>> 5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state
>> UP mode DEFAULT group default qlen 1000
>> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>> 6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue state UP mode DEFAULT group default qlen 1000
>> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>> 7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue state UP mode DEFAULT group default qlen 1000
>> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>>
>> 5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp/id:5 qdisc
>> mq state UP group default qlen 1000
>> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>> inet6 fe80::ae1f:6bff:fe07:c891/64 scope link
>> valid_lft forever preferred_lft forever
>> 6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue state UP group default qlen 1000
>> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>> inet 192.168.22.205/24 scope global vlan4081
>> valid_lft forever preferred_lft forever
>> inet6 fe80::ae1f:6bff:fe07:c890/64 scope link
>> valid_lft forever preferred_lft forever
>> 7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue state UP group default qlen 1000
>> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>> inet 172.16.0.1/30 scope global vlan1740
>> valid_lft forever preferred_lft forever
>> inet6 fe80::ae1f:6bff:fe07:c891/64 scope link
>> valid_lft forever preferred_lft forever
>>
>>
>> xdp program detached:
>> Receiving side tcpdump:
>> 14:28:09.141233 IP 192.168.22.237 > 172.16.0.2: ICMP echo request, id
>> 30227, seq 487, length 64
>>
>> I can see icmp requests
>>
>>
>> enabling xdp
>> ./xdp_fwd enp175s0f1 enp175s0f0
>>
>> 4: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp qdisc mq
>> state UP mode DEFAULT group default qlen 1000
>> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>> prog/xdp id 5 tag 3c231ff1e5e77f3f
>> 5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp qdisc mq
>> state UP mode DEFAULT group default qlen 1000
>> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>> prog/xdp id 5 tag 3c231ff1e5e77f3f
>> 6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue state UP mode DEFAULT group default qlen 1000
>> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>> 7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue state UP mode DEFAULT group default qlen 1000
>> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>>
> What hardware is this?
>
> Start with:
>
> echo 1 > /sys/kernel/debug/tracing/events/xdp/enable
> cat /sys/kernel/debug/tracing/trace_pipe
cat /sys/kernel/debug/tracing/trace_pipe
<idle>-0 [045] ..s. 68469.467752: xdp_devmap_xmit:
ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0 drops=1
from_ifindex=4 to_ifindex=5 err=-6
<idle>-0 [045] ..s. 68470.483836: xdp_redirect_map:
prog_id=30 action=REDIRECT ifindex=4 to_ifindex=5 err=0 map_id=32
map_index=5
<idle>-0 [045] ..s. 68470.483837: xdp_devmap_xmit:
ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0 drops=1
from_ifindex=4 to_ifindex=5 err=-6
<idle>-0 [045] ..s. 68471.503853: xdp_redirect_map:
prog_id=30 action=REDIRECT ifindex=4 to_ifindex=5 err=0 map_id=32
map_index=5
<idle>-0 [045] ..s. 68471.503853: xdp_devmap_xmit:
ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0 drops=1
from_ifindex=4 to_ifindex=5 err=-6
<idle>-0 [045] ..s. 68472.527871: xdp_redirect_map:
prog_id=30 action=REDIRECT ifindex=4 to_ifindex=5 err=0 map_id=32
map_index=5
<idle>-0 [045] ..s. 68472.527877: xdp_devmap_xmit:
ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0 drops=1
from_ifindex=4 to_ifindex=5 err=-6
<idle>-0 [045] ..s. 68473.551876: xdp_redirect_map:
prog_id=30 action=REDIRECT ifindex=4 to_ifindex=5 err=0 map_id=32
map_index=5
<idle>-0 [045] ..s. 68473.551880: xdp_devmap_xmit:
ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0 drops=1
from_ifindex=4 to_ifindex=5 err=-6
<idle>-0 [045] ..s. 68474.575893: xdp_redirect_map:
prog_id=30 action=REDIRECT ifindex=4 to_ifindex=5 err=0 map_id=32
map_index=5
<idle>-0 [045] ..s. 68474.575897: xdp_devmap_xmit:
ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0 drops=1
from_ifindex=4 to_ifindex=5 err=-6
<idle>-0 [045] ..s. 68475.599909: xdp_redirect_map:
prog_id=30 action=REDIRECT ifindex=4 to_ifindex=5 err=0 map_id=32
map_index=5
<idle>-0 [045] ..s. 68475.599912: xdp_devmap_xmit:
ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0 drops=1
from_ifindex=4 to_ifindex=5 err=-6
>
> From there, you can check the FIB lookups:
> sysctl -w kernel.perf_event_max_stack=16
> perf record -e fib:* -a -g -- sleep 5
> perf script
>
swapper 0 [045] 68493.746274: fib:fib_table_lookup: table 254 oif 0
iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0 ==>
dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
swapper 0 [045] 68494.770287: fib:fib_table_lookup: table 254 oif 0
iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0 ==>
dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
swapper 0 [045] 68495.794304: fib:fib_table_lookup: table 254 oif 0
iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0 ==>
dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
swapper 0 [045] 68496.818308: fib:fib_table_lookup: table 254 oif 0
iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0 ==>
dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
swapper 0 [045] 68497.842313: fib:fib_table_lookup: table 254 oif 0
iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0 ==>
dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-08 16:25 ` Paweł Staszewski
@ 2018-11-08 16:27 ` Paweł Staszewski
2018-11-08 16:32 ` David Ahern
0 siblings, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-08 16:27 UTC (permalink / raw)
To: David Ahern, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
W dniu 08.11.2018 o 17:25, Paweł Staszewski pisze:
>
>
> W dniu 08.11.2018 o 17:06, David Ahern pisze:
>> On 11/8/18 6:33 AM, Paweł Staszewski wrote:
>>>
>>> W dniu 07.11.2018 o 22:06, David Ahern pisze:
>>>> On 11/3/18 6:24 PM, Paweł Staszewski wrote:
>>>>>> Does your setup have any other device types besides physical
>>>>>> ports with
>>>>>> VLANs (e.g., any macvlans or bonds)?
>>>>>>
>>>>>>
>>>>> no.
>>>>> just
>>>>> phy(mlnx)->vlans only config
>>>> VLAN and non-VLAN (and a mix) seem to work ok. Patches are here:
>>>> https://github.com/dsahern/linux.git bpf/kernel-tables-wip
>>>>
>>>> I got lazy with the vlan exports; right now it requires 8021q to be
>>>> builtin (CONFIG_VLAN_8021Q=y)
>>>>
>>>> You can use the xdp_fwd sample:
>>>> make O=kbuild -C samples/bpf -j 8
>>>>
>>>> Copy samples/bpf/xdp_fwd_kern.o and samples/bpf/xdp_fwd to the server
>>>> and run:
>>>> ./xdp_fwd <list of NIC ports>
>>>>
>>>> e.g., in my testing I run:
>>>> xdp_fwd eth1 eth2 eth3 eth4
>>>>
>>>> All of the relevant forwarding ports need to be on the same command
>>>> line. This version populates a second map to verify the egress port
>>>> has
>>>> XDP enabled.
>>> Installed today on some lab server with mellanox connectx4
>>>
>>> And trying some simple static routing first - but after enabling xdp
>>> program - receiver is not receiving frames
>>>
>>> Route table is simple as possible for tests :)
>>>
>>> icmp ping test send from 192.168.22.237 to 172.16.0.2 - incomming
>>> packets on vlan 4081
>>>
>>> ip r
>>> default via 192.168.22.236 dev vlan4081
>>> 172.16.0.0/30 dev vlan1740 proto kernel scope link src 172.16.0.1
>>> 192.168.22.0/24 dev vlan4081 proto kernel scope link src 192.168.22.205
>>>
>>> neigh table:
>>> ip neigh ls
>>>
>>> 192.168.22.237 dev vlan4081 lladdr 00:25:90:fb:a6:8d REACHABLE
>>> 172.16.0.2 dev vlan1740 lladdr ac:1f:6b:2c:2e:5a REACHABLE
>>>
>>> and interfaces:
>>> 4: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq
>>> state
>>> UP mode DEFAULT group default qlen 1000
>>> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>>> 5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq
>>> state
>>> UP mode DEFAULT group default qlen 1000
>>> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>>> 6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
>>> qdisc
>>> noqueue state UP mode DEFAULT group default qlen 1000
>>> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>>> 7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
>>> qdisc
>>> noqueue state UP mode DEFAULT group default qlen 1000
>>> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>>>
>>> 5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp/id:5
>>> qdisc
>>> mq state UP group default qlen 1000
>>> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>>> inet6 fe80::ae1f:6bff:fe07:c891/64 scope link
>>> valid_lft forever preferred_lft forever
>>> 6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
>>> qdisc
>>> noqueue state UP group default qlen 1000
>>> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>>> inet 192.168.22.205/24 scope global vlan4081
>>> valid_lft forever preferred_lft forever
>>> inet6 fe80::ae1f:6bff:fe07:c890/64 scope link
>>> valid_lft forever preferred_lft forever
>>> 7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
>>> qdisc
>>> noqueue state UP group default qlen 1000
>>> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>>> inet 172.16.0.1/30 scope global vlan1740
>>> valid_lft forever preferred_lft forever
>>> inet6 fe80::ae1f:6bff:fe07:c891/64 scope link
>>> valid_lft forever preferred_lft forever
>>>
>>>
>>> xdp program detached:
>>> Receiving side tcpdump:
>>> 14:28:09.141233 IP 192.168.22.237 > 172.16.0.2: ICMP echo request, id
>>> 30227, seq 487, length 64
>>>
>>> I can see icmp requests
>>>
>>>
>>> enabling xdp
>>> ./xdp_fwd enp175s0f1 enp175s0f0
>>>
>>> 4: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp qdisc mq
>>> state UP mode DEFAULT group default qlen 1000
>>> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>>> prog/xdp id 5 tag 3c231ff1e5e77f3f
>>> 5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp qdisc mq
>>> state UP mode DEFAULT group default qlen 1000
>>> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>>> prog/xdp id 5 tag 3c231ff1e5e77f3f
>>> 6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
>>> qdisc
>>> noqueue state UP mode DEFAULT group default qlen 1000
>>> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>>> 7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
>>> qdisc
>>> noqueue state UP mode DEFAULT group default qlen 1000
>>> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>>>
>> What hardware is this?
>>
Mellanox ConnectX-4
ethtool -i enp175s0f0
driver: mlx5_core
version: 5.0-0
firmware-version: 12.21.1000 (SM_2001000001033)
expansion-rom-version:
bus-info: 0000:af:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes
ethtool -i enp175s0f1
driver: mlx5_core
version: 5.0-0
firmware-version: 12.21.1000 (SM_2001000001033)
expansion-rom-version:
bus-info: 0000:af:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes
>> Start with:
>>
>> echo 1 > /sys/kernel/debug/tracing/events/xdp/enable
>> cat /sys/kernel/debug/tracing/trace_pipe
> cat /sys/kernel/debug/tracing/trace_pipe
> <idle>-0 [045] ..s. 68469.467752: xdp_devmap_xmit:
> ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0 drops=1
> from_ifindex=4 to_ifindex=5 err=-6
> <idle>-0 [045] ..s. 68470.483836: xdp_redirect_map:
> prog_id=30 action=REDIRECT ifindex=4 to_ifindex=5 err=0 map_id=32
> map_index=5
> <idle>-0 [045] ..s. 68470.483837: xdp_devmap_xmit:
> ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0 drops=1
> from_ifindex=4 to_ifindex=5 err=-6
> <idle>-0 [045] ..s. 68471.503853: xdp_redirect_map:
> prog_id=30 action=REDIRECT ifindex=4 to_ifindex=5 err=0 map_id=32
> map_index=5
> <idle>-0 [045] ..s. 68471.503853: xdp_devmap_xmit:
> ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0 drops=1
> from_ifindex=4 to_ifindex=5 err=-6
> <idle>-0 [045] ..s. 68472.527871: xdp_redirect_map:
> prog_id=30 action=REDIRECT ifindex=4 to_ifindex=5 err=0 map_id=32
> map_index=5
> <idle>-0 [045] ..s. 68472.527877: xdp_devmap_xmit:
> ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0 drops=1
> from_ifindex=4 to_ifindex=5 err=-6
> <idle>-0 [045] ..s. 68473.551876: xdp_redirect_map:
> prog_id=30 action=REDIRECT ifindex=4 to_ifindex=5 err=0 map_id=32
> map_index=5
> <idle>-0 [045] ..s. 68473.551880: xdp_devmap_xmit:
> ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0 drops=1
> from_ifindex=4 to_ifindex=5 err=-6
> <idle>-0 [045] ..s. 68474.575893: xdp_redirect_map:
> prog_id=30 action=REDIRECT ifindex=4 to_ifindex=5 err=0 map_id=32
> map_index=5
> <idle>-0 [045] ..s. 68474.575897: xdp_devmap_xmit:
> ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0 drops=1
> from_ifindex=4 to_ifindex=5 err=-6
> <idle>-0 [045] ..s. 68475.599909: xdp_redirect_map:
> prog_id=30 action=REDIRECT ifindex=4 to_ifindex=5 err=0 map_id=32
> map_index=5
> <idle>-0 [045] ..s. 68475.599912: xdp_devmap_xmit:
> ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0 drops=1
> from_ifindex=4 to_ifindex=5 err=-6
>
>
>
>>
>> From there, you can check the FIB lookups:
>> sysctl -w kernel.perf_event_max_stack=16
>> perf record -e fib:* -a -g -- sleep 5
>> perf script
>>
> swapper 0 [045] 68493.746274: fib:fib_table_lookup: table 254 oif
> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
>
> swapper 0 [045] 68494.770287: fib:fib_table_lookup: table 254 oif
> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
>
> swapper 0 [045] 68495.794304: fib:fib_table_lookup: table 254 oif
> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
>
> swapper 0 [045] 68496.818308: fib:fib_table_lookup: table 254 oif
> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
>
> swapper 0 [045] 68497.842313: fib:fib_table_lookup: table 254 oif
> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
>
>
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-08 16:27 ` Paweł Staszewski
@ 2018-11-08 16:32 ` David Ahern
2018-11-08 17:30 ` Paweł Staszewski
2018-11-09 0:40 ` Paweł Staszewski
0 siblings, 2 replies; 77+ messages in thread
From: David Ahern @ 2018-11-08 16:32 UTC (permalink / raw)
To: Paweł Staszewski, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
On 11/8/18 9:27 AM, Paweł Staszewski wrote:
>>> What hardware is this?
>>>
> mellanox connectx 4
> ethtool -i enp175s0f0
> driver: mlx5_core
> version: 5.0-0
> firmware-version: 12.21.1000 (SM_2001000001033)
> expansion-rom-version:
> bus-info: 0000:af:00.0
> supports-statistics: yes
> supports-test: yes
> supports-eeprom-access: no
> supports-register-dump: no
> supports-priv-flags: yes
>
> ethtool -i enp175s0f1
> driver: mlx5_core
> version: 5.0-0
> firmware-version: 12.21.1000 (SM_2001000001033)
> expansion-rom-version:
> bus-info: 0000:af:00.1
> supports-statistics: yes
> supports-test: yes
> supports-eeprom-access: no
> supports-register-dump: no
> supports-priv-flags: yes
>
>>> Start with:
>>>
>>> echo 1 > /sys/kernel/debug/tracing/events/xdp/enable
>>> cat /sys/kernel/debug/tracing/trace_pipe
>> cat /sys/kernel/debug/tracing/trace_pipe
>> <idle>-0 [045] ..s. 68469.467752: xdp_devmap_xmit:
>> ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0 drops=1
>> from_ifindex=4 to_ifindex=5 err=-6
FIB lookup is good, the redirect is happening, but the mlx5 driver does
not like it.
I think the -6 is coming from the mlx5 driver and the packet is getting
dropped. Perhaps this check in mlx5e_xdp_xmit:
if (unlikely(sq_num >= priv->channels.num))
return -ENXIO;
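David's suspicion can be illustrated with a toy model (Python here for illustration, not the actual mlx5 C code): -ENXIO is errno 6, which matches the err=-6 in the trace above, and the quoted check fires whenever the queue index of the CPU doing the redirect has no matching egress channel on the target port.

```python
import errno

def mlx5e_xdp_xmit(sq_num, num_channels):
    """Toy model of the driver check quoted above: the redirect is
    rejected when the sending queue index has no matching egress
    channel (e.g. the XDP program runs on more RX queues than the
    egress port has TX channels configured)."""
    if sq_num >= num_channels:
        return -errno.ENXIO  # -6, matching err=-6 in the trace
    return 0

# The trace shows CPU 45 handling the packet; suppose (an assumption
# for illustration) the egress port only has channels 0..27:
print(mlx5e_xdp_xmit(sq_num=45, num_channels=28))  # -6
print(mlx5e_xdp_xmit(sq_num=5, num_channels=28))   # 0
```

This would explain why sent=0 drops=1 appears on every xdp_devmap_xmit event while the FIB lookup itself succeeds.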
>> swapper 0 [045] 68493.746274: fib:fib_table_lookup: table 254 oif
>> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
>> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
>> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
>>
>> swapper 0 [045] 68494.770287: fib:fib_table_lookup: table 254 oif
>> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
>> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
>> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
>>
>> swapper 0 [045] 68495.794304: fib:fib_table_lookup: table 254 oif
>> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
>> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
>> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
>>
>> swapper 0 [045] 68496.818308: fib:fib_table_lookup: table 254 oif
>> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
>> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
>> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
>>
>> swapper 0 [045] 68497.842313: fib:fib_table_lookup: table 254 oif
>> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
>> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
>> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-08 16:32 ` David Ahern
@ 2018-11-08 17:30 ` Paweł Staszewski
2018-11-08 18:05 ` David Ahern
2018-11-09 0:40 ` Paweł Staszewski
1 sibling, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-08 17:30 UTC (permalink / raw)
To: David Ahern, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
W dniu 08.11.2018 o 17:32, David Ahern pisze:
> On 11/8/18 9:27 AM, Paweł Staszewski wrote:
>>>> What hardware is this?
>>>>
>> mellanox connectx 4
>> ethtool -i enp175s0f0
>> driver: mlx5_core
>> version: 5.0-0
>> firmware-version: 12.21.1000 (SM_2001000001033)
>> expansion-rom-version:
>> bus-info: 0000:af:00.0
>> supports-statistics: yes
>> supports-test: yes
>> supports-eeprom-access: no
>> supports-register-dump: no
>> supports-priv-flags: yes
>>
>> ethtool -i enp175s0f1
>> driver: mlx5_core
>> version: 5.0-0
>> firmware-version: 12.21.1000 (SM_2001000001033)
>> expansion-rom-version:
>> bus-info: 0000:af:00.1
>> supports-statistics: yes
>> supports-test: yes
>> supports-eeprom-access: no
>> supports-register-dump: no
>> supports-priv-flags: yes
>>
>>>> Start with:
>>>>
>>>> echo 1 > /sys/kernel/debug/tracing/events/xdp/enable
>>>> cat /sys/kernel/debug/tracing/trace_pipe
>>> cat /sys/kernel/debug/tracing/trace_pipe
>>> <idle>-0 [045] ..s. 68469.467752: xdp_devmap_xmit:
>>> ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0 drops=1
>>> from_ifindex=4 to_ifindex=5 err=-6
> FIB lookup is good, the redirect is happening, but the mlx5 driver does
> not like it.
>
> I think the -6 is coming from the mlx5 driver and the packet is getting
> dropped. Perhaps this check in mlx5e_xdp_xmit:
>
> if (unlikely(sq_num >= priv->channels.num))
> return -ENXIO;
>
Wondering about this:
swapper 0 [045] 68494.770287: fib:fib_table_lookup: table 254 oif 0
iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0 ==>
dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
oif 0?
Is that correct here?
>
>>> swapper 0 [045] 68493.746274: fib:fib_table_lookup: table 254 oif
>>> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
>>> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
>>> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
>>>
>>> swapper 0 [045] 68494.770287: fib:fib_table_lookup: table 254 oif
>>> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
>>> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
>>> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
>>>
>>> swapper 0 [045] 68495.794304: fib:fib_table_lookup: table 254 oif
>>> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
>>> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
>>> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
>>>
>>> swapper 0 [045] 68496.818308: fib:fib_table_lookup: table 254 oif
>>> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
>>> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
>>> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
>>>
>>> swapper 0 [045] 68497.842313: fib:fib_table_lookup: table 254 oif
>>> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
>>> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
>>> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-08 17:30 ` Paweł Staszewski
@ 2018-11-08 18:05 ` David Ahern
0 siblings, 0 replies; 77+ messages in thread
From: David Ahern @ 2018-11-08 18:05 UTC (permalink / raw)
To: Paweł Staszewski, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
On 11/8/18 10:30 AM, Paweł Staszewski wrote:
> Wondering about this:
> swapper 0 [045] 68494.770287: fib:fib_table_lookup: table 254 oif 0
> iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0 ==>
> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
>
> oif 0 ?
>
> Is that correct here ?
Ingress path, so iif is set to the vlan device and oif is 0.
Egress lookups (e.g., locally generated traffic) have a non-0 oif.
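That distinction can be sketched as a toy model (an illustration of the explanation above, not the kernel's actual flow key handling):

```python
def trace_fields(ingress_ifindex=None, egress_ifindex=None):
    """Toy model of how the fib:fib_table_lookup trace fields get
    filled. Forwarded packets are looked up keyed on the ingress
    device, so oif stays 0; locally generated traffic carries the
    egress device instead."""
    if ingress_ifindex is not None:
        # forwarding path: iif = ingress device, no oif yet
        return {"iif": ingress_ifindex, "oif": 0}
    # locally generated traffic: oif is set (iif shown as 0 here,
    # though in practice it may be the loopback index)
    return {"iif": 0, "oif": egress_ifindex}

# vlan4081 has ifindex 6 in the ip link dumps above, hence the
# "oif 0 iif 6" seen in every trace line:
print(trace_fields(ingress_ifindex=6))  # {'iif': 6, 'oif': 0}
```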
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-03 0:18 ` Paweł Staszewski
@ 2018-11-08 19:12 ` Paweł Staszewski
2018-11-09 22:20 ` Paweł Staszewski
0 siblings, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-08 19:12 UTC (permalink / raw)
To: Saeed Mahameed, netdev
W dniu 03.11.2018 o 01:18, Paweł Staszewski pisze:
>
>
> W dniu 01.11.2018 o 21:37, Saeed Mahameed pisze:
>> On Thu, 2018-11-01 at 12:09 +0100, Paweł Staszewski wrote:
>>> W dniu 01.11.2018 o 10:50, Saeed Mahameed pisze:
>>>> On Wed, 2018-10-31 at 22:57 +0100, Paweł Staszewski wrote:
>>>>> Hi
>>>>>
>>>>> So maybee someone will be interested how linux kernel handles
>>>>> normal
>>>>> traffic (not pktgen :) )
>>>>>
>>>>>
>>>>> Server HW configuration:
>>>>>
>>>>> CPU : Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
>>>>>
>>>>> NIC's: 2x 100G Mellanox ConnectX-4 (connected to x16 pcie 8GT)
>>>>>
>>>>>
>>>>> Server software:
>>>>>
>>>>> FRR - as routing daemon
>>>>>
>>>>> enp175s0f0 (100G) - 16 vlans from upstreams (28 RSS binded to
>>>>> local
>>>>> numa
>>>>> node)
>>>>>
>>>>> enp175s0f1 (100G) - 343 vlans to clients (28 RSS binded to local
>>>>> numa
>>>>> node)
>>>>>
>>>>>
>>>>> Maximum traffic that server can handle:
>>>>>
>>>>> Bandwidth
>>>>>
>>>>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>>>>> input: /proc/net/dev type: rate
>>>>> \ iface Rx Tx Total
>>>>> =================================================================
>>>>> ====
>>>>> =========
>>>>> enp175s0f1: 28.51 Gb/s 37.24
>>>>> Gb/s
>>>>> 65.74 Gb/s
>>>>> enp175s0f0: 38.07 Gb/s 28.44
>>>>> Gb/s
>>>>> 66.51 Gb/s
>>>>> ---------------------------------------------------------------
>>>>> ----
>>>>> -----------
>>>>> total: 66.58 Gb/s 65.67
>>>>> Gb/s
>>>>> 132.25 Gb/s
>>>>>
>>>>>
>>>>> Packets per second:
>>>>>
>>>>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>>>>> input: /proc/net/dev type: rate
>>>>> - iface Rx Tx Total
>>>>> =================================================================
>>>>> ====
>>>>> =========
>>>>> enp175s0f1: 5248589.00 P/s 3486617.75 P/s
>>>>> 8735207.00 P/s
>>>>> enp175s0f0: 3557944.25 P/s 5232516.00 P/s
>>>>> 8790460.00 P/s
>>>>> ---------------------------------------------------------------
>>>>> ----
>>>>> -----------
>>>>> total: 8806533.00 P/s 8719134.00 P/s
>>>>> 17525668.00 P/s
>>>>>
>>>>>
>>>>> After reaching that limits nics on the upstream side (more RX
>>>>> traffic)
>>>>> start to drop packets
>>>>>
>>>>>
>>>>> I just dont understand that server can't handle more bandwidth
>>>>> (~40Gbit/s is limit where all cpu's are 100% util) - where pps on
>>>>> RX
>>>>> side are increasing.
>>>>>
>>>> Where do you see 40 Gb/s ? you showed that both ports on the same
>>>> NIC (
>>>> same pcie link) are doing 66.58 Gb/s (RX) + 65.67 Gb/s (TX) =
>>>> 132.25
>>>> Gb/s which aligns with your pcie link limit, what am i missing ?
>>> hmm yes that was my concern also - cause cant find anywhere
>>> informations
>>> about that bandwidth is uni or bidirectional - so if 126Gbit for x16
>>> 8GT
>>> is unidir - then bidir will be 126/2 ~68Gbit - which will fit total
>>> bw
>>> on both ports
>> i think it is bidir
> So yes - we are hitting another problem there. I think PCIe max BW of
> 126Gbit is most probably bidirectional, so RX 126Gbit and at the same
> time TX should be 126Gbit
>
>
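The raw numbers behind this back-and-forth can be sketched with a back-of-the-envelope estimate (raw line rate only; real TLP/DLLP framing, flow control and descriptor DMA cost several percent more):

```python
def pcie_gbps(transfers_gt_s, lanes, encoding=128 / 130):
    """Rough raw per-direction PCIe bandwidth in Gbit/s.
    Assumes only Gen3's 128b/130b line encoding as overhead;
    protocol overhead reduces the usable figure further."""
    return transfers_gt_s * lanes * encoding

per_dir = pcie_gbps(8, 16)  # x16 Gen3, as reported by lspci LnkSta
print(round(per_dir))       # ~126 Gbit/s, each direction
```

Since a 2-port card forwarding traffic must DMA every packet across the slot once on RX and once on TX, splitting the ports across two x16 slots removes the shared-slot bottleneck regardless of which interpretation of the 126 Gbit figure is right.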
So the one 2-port 100G ConnectX-4 card was replaced with two separate
ConnectX-5 cards placed in two different PCIe x16 gen 3.0 slots:
lspci -vvv -s af:00.0
af:00.0 Ethernet controller: Mellanox Technologies MT27800 Family
[ConnectX-5]
Subsystem: Mellanox Technologies MT27800 Family [ConnectX-5]
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop-
ParErr- Stepping- SERR+ FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
<TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 32 bytes
Interrupt: pin A routed to IRQ 90
NUMA node: 1
Region 0: Memory at 39bffe000000 (64-bit, prefetchable) [size=32M]
Expansion ROM at ee600000 [disabled] [size=1M]
Capabilities: [60] Express (v2) Endpoint, MSI 00
DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s
unlimited, L1 unlimited
ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
SlotPowerLimit 0.000W
DevCtl: Report errors: Correctable- Non-Fatal- Fatal-
Unsupported-
RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
FLReset-
MaxPayload 256 bytes, MaxReadReq 4096 bytes
DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+
AuxPwr- TransPend-
LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM not
supported, Exit Latency L0s unlimited, L1 unlimited
ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 8GT/s, Width x16, TrErr- Train- SlotClk+
DLActive- BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Range ABCD, TimeoutDis+,
LTR-, OBFF Not Supported
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-,
LTR-, OBFF Disabled
LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance-
SpeedDis-
Transmit Margin: Normal Operating Range,
EnterModifiedCompliance- ComplianceSOS-
Compliance De-emphasis: -6dB
LnkSta2: Current De-emphasis Level: -6dB,
EqualizationComplete+, EqualizationPhase1+
EqualizationPhase2+, EqualizationPhase3+,
LinkEqualizationRequest-
Capabilities: [48] Vital Product Data
Product Name: CX515A - ConnectX-5 QSFP28
Read-only fields:
[PN] Part number: MCX515A-CCAT
[EC] Engineering changes: A6
[V2] Vendor specific: MCX515A-CCAT
[SN] Serial number: MT1831J00221
[V3] Vendor specific:
14a5c73bee92e811800098039b1ee5f0
[VA] Vendor specific:
MLX:MODL=CX515A:MN=MLNX:CSKU=V2:UUID=V3:PCI=V0
[V0] Vendor specific: PCIeGen3 x16
[RV] Reserved: checksum good, 2 byte(s) reserved
End
Capabilities: [9c] MSI-X: Enable+ Count=64 Masked-
Vector table: BAR=0 offset=00002000
PBA: BAR=0 offset=00003000
Capabilities: [c0] Vendor Specific Information: Len=18 <?>
Capabilities: [40] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA
PME(D0-,D1-,D2-,D3hot-,D3cold+)
Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [100 v1] Advanced Error Reporting
UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt-
UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt-
UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt-
UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout-
NonFatalErr+
CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout-
NonFatalErr+
AERCap: First Error Pointer: 04, GenCap+ CGenEn-
ChkCap+ ChkEn-
Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)
ARICap: MFVC- ACS-, Next Function: 0
ARICtl: MFVC- ACS-, Function Group: 0
Capabilities: [180 v1] Single Root I/O Virtualization (SR-IOV)
IOVCap: Migration-, Interrupt Message Number: 000
IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy-
IOVSta: Migration-
Initial VFs: 0, Total VFs: 0, Number of VFs: 0,
Function Dependency Link: 00
VF offset: 1, stride: 1, Device ID: 1018
Supported Page Size: 000007ff, System Page Size: 00000001
Region 0: Memory at 0000000000000000 (64-bit, prefetchable)
VF Migration: offset: 00000000, BIR: 0
Capabilities: [1c0 v1] #19
Kernel driver in use: mlx5_core
d8:00.0 Ethernet controller: Mellanox Technologies MT27800 Family
[ConnectX-5]
Subsystem: Mellanox Technologies MT27800 Family [ConnectX-5]
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop-
ParErr- Stepping- SERR+ FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
<TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 32 bytes
Interrupt: pin A routed to IRQ 159
NUMA node: 1
Region 0: Memory at 39fffe000000 (64-bit, prefetchable) [size=32M]
Expansion ROM at fbe00000 [disabled] [size=1M]
Capabilities: [60] Express (v2) Endpoint, MSI 00
DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s
unlimited, L1 unlimited
ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
SlotPowerLimit 0.000W
DevCtl: Report errors: Correctable- Non-Fatal- Fatal-
Unsupported-
RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
FLReset-
MaxPayload 256 bytes, MaxReadReq 4096 bytes
DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+
AuxPwr- TransPend-
LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM not
supported, Exit Latency L0s unlimited, L1 unlimited
ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 8GT/s, Width x16, TrErr- Train- SlotClk+
DLActive- BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Range ABCD, TimeoutDis+,
LTR-, OBFF Not Supported
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-,
LTR-, OBFF Disabled
LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance-
SpeedDis-
Transmit Margin: Normal Operating Range,
EnterModifiedCompliance- ComplianceSOS-
Compliance De-emphasis: -6dB
LnkSta2: Current De-emphasis Level: -6dB,
EqualizationComplete+, EqualizationPhase1+
EqualizationPhase2+, EqualizationPhase3+,
LinkEqualizationRequest-
Capabilities: [48] Vital Product Data
Product Name: CX515A - ConnectX-5 QSFP28
Read-only fields:
[PN] Part number: MCX515A-CCAT
[EC] Engineering changes: A6
[V2] Vendor specific: MCX515A-CCAT
[SN] Serial number: MT1831J00169
[V3] Vendor specific:
c06757e6e092e811800098039b1ee520
[VA] Vendor specific:
MLX:MODL=CX515A:MN=MLNX:CSKU=V2:UUID=V3:PCI=V0
[V0] Vendor specific: PCIeGen3 x16
[RV] Reserved: checksum good, 2 byte(s) reserved
End
Capabilities: [9c] MSI-X: Enable+ Count=64 Masked-
Vector table: BAR=0 offset=00002000
PBA: BAR=0 offset=00003000
Capabilities: [c0] Vendor Specific Information: Len=18 <?>
Capabilities: [40] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA
PME(D0-,D1-,D2-,D3hot-,D3cold+)
Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [100 v1] Advanced Error Reporting
UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt-
UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt-
UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt-
UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout-
NonFatalErr+
CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout-
NonFatalErr+
AERCap: First Error Pointer: 04, GenCap+ CGenEn-
ChkCap+ ChkEn-
Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)
ARICap: MFVC- ACS-, Next Function: 0
ARICtl: MFVC- ACS-, Function Group: 0
Capabilities: [180 v1] Single Root I/O Virtualization (SR-IOV)
IOVCap: Migration-, Interrupt Message Number: 000
IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy-
IOVSta: Migration-
Initial VFs: 0, Total VFs: 0, Number of VFs: 0,
Function Dependency Link: 00
VF offset: 1, stride: 1, Device ID: 1018
Supported Page Size: 000007ff, System Page Size: 00000001
Region 0: Memory at 0000000000000000 (64-bit, prefetchable)
VF Migration: offset: 00000000, BIR: 0
Capabilities: [1c0 v1] #19
Kernel driver in use: mlx5_core
CPU load is lower than with the ConnectX-4 - but it looks like the
bandwidth limit is the same :)
But again, after reaching 60Gbit/60Gbit:
bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
input: /proc/net/dev type: rate
- iface Rx Tx Total
==============================================================================
  enp175s0:     45.09 Gb/s     15.09 Gb/s     60.18 Gb/s
  enp216s0:     15.14 Gb/s     45.19 Gb/s     60.33 Gb/s
------------------------------------------------------------------------------
      total:     60.45 Gb/s     60.48 Gb/s    120.93 Gb/s
NICs start to drop packets (discards on the NIC that receives more RX traffic):
ethtool -S enp175s0 |grep 'disc'
rx_discards_phy: 47265611
After 20 seconds:
ethtool -S enp175s0 |grep 'disc'
rx_discards_phy: 49434472
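The two counter samples above translate to a per-second drop rate; a quick sketch (values copied from the readings above):

```python
# Estimate the physical RX drop rate from two rx_discards_phy samples
# taken ~20 seconds apart (values from the ethtool -S output above).
before = 47265611      # rx_discards_phy at t0
after = 49434472       # rx_discards_phy at t0 + 20s
interval_s = 20

drop_rate_pps = (after - before) / interval_s
print(f"~{drop_rate_pps:,.0f} packets/s discarded at the PHY")
```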
Current coalescing parameters:
ethtool -c enp175s0
Coalesce parameters for enp175s0:
Adaptive RX: off TX: on
stats-block-usecs: 0
sample-interval: 0
pkt-rate-low: 0
pkt-rate-high: 0
dmac: 32651
rx-usecs: 128
rx-frames: 128
rx-usecs-irq: 0
rx-frames-irq: 0
tx-usecs: 8
tx-frames: 128
tx-usecs-irq: 0
tx-frames-irq: 0
rx-usecs-low: 0
rx-frame-low: 0
tx-usecs-low: 0
tx-frame-low: 0
rx-usecs-high: 0
rx-frame-high: 0
tx-usecs-high: 0
tx-frame-high: 0
and perf top:
PerfTop: 86898 irqs/sec kernel:99.5% exact: 0.0% [4000Hz
cycles], (all, 56 CPUs)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
12.76% [kernel] [k] mlx5e_skb_from_cqe_mpwrq_linear
8.68% [kernel] [k] mlx5e_sq_xmit
6.47% [kernel] [k] build_skb
4.78% [kernel] [k] fib_table_lookup
4.58% [kernel] [k] memcpy_erms
3.47% [kernel] [k] mlx5e_poll_rx_cq
2.59% [kernel] [k] mlx5e_handle_rx_cqe_mpwrq
2.37% [kernel] [k] mlx5e_post_rx_mpwqes
2.33% [kernel] [k] vlan_do_receive
1.94% [kernel] [k] __dev_queue_xmit
1.89% [kernel] [k] mlx5e_poll_tx_cq
1.74% [kernel] [k] ip_finish_output2
1.67% [kernel] [k] dev_gro_receive
1.64% [kernel] [k] ipt_do_table
1.58% [kernel] [k] tcp_gro_receive
1.49% [kernel] [k] pfifo_fast_dequeue
1.28% [kernel] [k] mlx5_eq_int
1.26% [kernel] [k] inet_gro_receive
1.26% [kernel] [k] _raw_spin_lock
1.20% [kernel] [k] __netif_receive_skb_core
1.19% [kernel] [k] irq_entries_start
1.17% [kernel] [k] swiotlb_map_page
1.13% [kernel] [k] vlan_dev_hard_start_xmit
1.12% [kernel] [k] ip_route_input_rcu
0.97% [kernel] [k] __build_skb
0.84% [kernel] [k] _raw_spin_lock_irqsave
0.78% [kernel] [k] kmem_cache_alloc
0.77% [kernel] [k] mlx5e_xmit
0.77% [kernel] [k] dev_hard_start_xmit
0.76% [kernel] [k] ip_forward
0.73% [kernel] [k] netif_skb_features
0.70% [kernel] [k] tasklet_action_common.isra.21
0.58% [kernel] [k] validate_xmit_skb.isra.142
0.55% [kernel] [k] ip_rcv_core.isra.20.constprop.25
0.55% [kernel] [k] mlx5e_page_release
0.55% [kernel] [k] __qdisc_run
0.51% [kernel] [k] __memcpy
0.48% [kernel] [k] kmem_cache_free_bulk
0.48% [kernel] [k] page_frag_free
0.47% [kernel] [k] inet_lookup_ifaddr_rcu
0.47% [kernel] [k] queued_spin_lock_slowpath
0.46% [kernel] [k] pfifo_fast_enqueue
0.43% [kernel] [k] tcp4_gro_receive
0.40% [kernel] [k] skb_gro_receive
0.39% [kernel] [k] skb_release_data
0.38% [kernel] [k] find_busiest_group
0.36% [kernel] [k] _raw_spin_trylock
0.36% [kernel] [k] skb_segment
0.33% [kernel] [k] eth_type_trans
0.32% [kernel] [k] __sched_text_start
0.32% [kernel] [k] __netif_schedule
0.32% [kernel] [k] try_to_wake_up
0.31% [kernel] [k] _raw_spin_lock_irq
0.31% [kernel] [k] __local_bh_enable_ip
Also mpstat:
Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
Average:     all    0.06    0.00    1.00    0.02    0.00   21.61    0.00    0.00    0.00   77.32
Average:       0    0.00    0.00    0.60    0.00    0.00    0.00    0.00    0.00    0.00   99.40
Average:       1    0.10    0.00    1.30    0.00    0.00    0.00    0.00    0.00    0.00   98.60
Average:       2    0.00    0.00    0.20    0.00    0.00    0.00    0.00    0.00    0.00   99.80
Average:       3    0.00    0.00    1.60    0.00    0.00    0.00    0.00    0.00    0.00   98.40
Average:       4    0.00    0.00    1.00    0.00    0.00    0.00    0.00    0.00    0.00   99.00
Average:       5    0.20    0.00    4.60    0.00    0.00    0.00    0.00    0.00    0.00   95.20
Average:       6    0.00    0.00    0.20    0.00    0.00    0.00    0.00    0.00    0.00   99.80
Average:       7    0.60    0.00    3.00    0.00    0.00    0.00    0.00    0.00    0.00   96.40
Average:       8    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:       9    0.70    0.00    0.30    0.00    0.00    0.00    0.00    0.00    0.00   99.00
Average:      10    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      11    0.00    0.00    2.00    0.00    0.00    0.00    0.00    0.00    0.00   98.00
Average:      12    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      13    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      14    0.00    0.00    1.00    0.00    0.00   50.40    0.00    0.00    0.00   48.60
Average:      15    0.00    0.00    1.30    0.00    0.00   47.90    0.00    0.00    0.00   50.80
Average:      16    0.00    0.00    2.00    0.00    0.00   47.80    0.00    0.00    0.00   50.20
Average:      17    0.00    0.00    1.30    0.00    0.00   50.20    0.00    0.00    0.00   48.50
Average:      18    0.10    0.00    1.10    0.00    0.00   42.40    0.00    0.00    0.00   56.40
Average:      19    0.00    0.00    1.50    0.00    0.00   44.40    0.00    0.00    0.00   54.10
Average:      20    0.00    0.00    1.40    0.00    0.00   45.90    0.00    0.00    0.00   52.70
Average:      21    0.00    0.00    0.70    0.00    0.00   44.50    0.00    0.00    0.00   54.80
Average:      22    0.10    0.00    1.40    0.00    0.00   47.00    0.00    0.00    0.00   51.50
Average:      23    0.00    0.00    0.30    0.00    0.00   45.50    0.00    0.00    0.00   54.20
Average:      24    0.00    0.00    1.60    0.00    0.00   50.00    0.00    0.00    0.00   48.40
Average:      25    0.10    0.00    0.70    0.00    0.00   47.00    0.00    0.00    0.00   52.20
Average:      26    0.00    0.00    1.80    0.00    0.00   48.70    0.00    0.00    0.00   49.50
Average:      27    0.00    0.00    1.10    0.00    0.00   44.80    0.00    0.00    0.00   54.10
Average:      28    0.30    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00   99.70
Average:      29    0.10    0.00    0.60    0.00    0.00    0.00    0.00    0.00    0.00   99.30
Average:      30    0.00    0.00    0.20    0.00    0.00    0.00    0.00    0.00    0.00   99.80
Average:      31    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      32    0.00    0.00    1.20    0.00    0.00    0.00    0.00    0.00    0.00   98.80
Average:      33    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      34    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      35    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      36    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      37    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      38    0.20    0.00    0.80    0.00    0.00    0.00    0.00    0.00    0.00   99.00
Average:      39    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      40    0.00    0.00    3.30    0.00    0.00    0.00    0.00    0.00    0.00   96.70
Average:      41    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      42    0.00    0.00    0.80    0.00    0.00   45.00    0.00    0.00    0.00   54.20
Average:      43    0.00    0.00    1.60    0.00    0.00   48.30    0.00    0.00    0.00   50.10
Average:      44    0.00    0.00    1.60    0.00    0.00   37.90    0.00    0.00    0.00   60.50
Average:      45    0.30    0.00    1.40    0.00    0.00   32.90    0.00    0.00    0.00   65.40
Average:      46    0.00    0.00    1.50    0.90    0.00   37.60    0.00    0.00    0.00   60.00
Average:      47    0.10    0.00    0.40    0.00    0.00   41.40    0.00    0.00    0.00   58.10
Average:      48    0.20    0.00    1.70    0.00    0.00   38.20    0.00    0.00    0.00   59.90
Average:      49    0.00    0.00    1.40    0.00    0.00   37.20    0.00    0.00    0.00   61.40
Average:      50    0.00    0.00    1.30    0.00    0.00   38.10    0.00    0.00    0.00   60.60
Average:      51    0.00    0.00    0.80    0.00    0.00   39.40    0.00    0.00    0.00   59.80
Average:      52    0.00    0.00    1.70    0.00    0.00   39.50    0.00    0.00    0.00   58.80
Average:      53    0.10    0.00    0.90    0.00    0.00   38.20    0.00    0.00    0.00   60.80
Average:      54    0.00    0.00    1.30    0.00    0.00   42.10    0.00    0.00    0.00   56.60
Average:      55    0.00    0.00    1.60    0.00    0.00   37.70    0.00    0.00    0.00   60.70
So it looks like there was no problem with the PCIe x16 link previously either.
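The "x16 8GT is ~126Gbit" figure quoted in this thread can be sanity-checked with a back-of-the-envelope calculation; a rough sketch (the 85% usable-throughput factor is an assumption, not a measured value):

```python
# Theoretical PCIe Gen3 x16 link bandwidth.
lanes = 16
gt_per_s = 8e9            # 8 GT/s per lane (PCIe Gen3)
encoding = 128 / 130      # Gen3 uses 128b/130b line encoding

raw_gbps = lanes * gt_per_s * encoding / 1e9
print(f"raw link bandwidth: {raw_gbps:.2f} Gbit/s")

# TLP headers, flow control credits, and completion traffic eat into
# the raw figure; usable DMA throughput is typically lower. The 0.85
# factor here is only an illustrative assumption.
usable_gbps = raw_gbps * 0.85
print(f"assumed usable DMA throughput: ~{usable_gbps:.0f} Gbit/s")
```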
>
>
>
>>> This can maybe also explain why CPU load rises rapidly from
>>> 120Gbit/s total to 132Gbit/s (the bwm-ng counters come from
>>> /proc/net, so there can be some error in reading them when
>>> offloading (GRO/GSO/TSO) is enabled on the NICs).
>>>
>>>>> Was thinking that maybe I reached some PCIe x16 limit - but x16
>>>>> 8GT/s is ~126Gbit - and also when testing with pktgen I can
>>>>> reach more bandwidth and pps (like 4x more compared to normal
>>>>> internet traffic)
>>>>>
>>>> Are you forwarding when using pktgen as well, or just testing the
>>>> RX-side pps?
>>> Yes, pktgen was tested on a single port, RX only.
>>> I can also check forwarding to rule out PCIe limits.
>>>
>> So this explains why you get more RX pps: TX is idle, so the PCIe
>> link is free to do only RX.
>>
>> [...]
>>
>>
>>>>> ethtool -S enp175s0f1
>>>>> NIC statistics:
>>>>> rx_packets: 173730800927
>>>>> rx_bytes: 99827422751332
>>>>> tx_packets: 142532009512
>>>>> tx_bytes: 184633045911222
>>>>> tx_tso_packets: 25989113891
>>>>> tx_tso_bytes: 132933363384458
>>>>> tx_tso_inner_packets: 0
>>>>> tx_tso_inner_bytes: 0
>>>>> tx_added_vlan_packets: 74630239613
>>>>> tx_nop: 2029817748
>>>>> rx_lro_packets: 0
>>>>> rx_lro_bytes: 0
>>>>> rx_ecn_mark: 0
>>>>> rx_removed_vlan_packets: 173730800927
>>>>> rx_csum_unnecessary: 0
>>>>> rx_csum_none: 434357
>>>>> rx_csum_complete: 173730366570
>>>>> rx_csum_unnecessary_inner: 0
>>>>> rx_xdp_drop: 0
>>>>> rx_xdp_redirect: 0
>>>>> rx_xdp_tx_xmit: 0
>>>>> rx_xdp_tx_full: 0
>>>>> rx_xdp_tx_err: 0
>>>>> rx_xdp_tx_cqe: 0
>>>>> tx_csum_none: 38260960853
>>>>> tx_csum_partial: 36369278774
>>>>> tx_csum_partial_inner: 0
>>>>> tx_queue_stopped: 1
>>>>> tx_queue_dropped: 0
>>>>> tx_xmit_more: 748638099
>>>>> tx_recover: 0
>>>>> tx_cqes: 73881645031
>>>>> tx_queue_wake: 1
>>>>> tx_udp_seg_rem: 0
>>>>> tx_cqe_err: 0
>>>>> tx_xdp_xmit: 0
>>>>> tx_xdp_full: 0
>>>>> tx_xdp_err: 0
>>>>> tx_xdp_cqes: 0
>>>>> rx_wqe_err: 0
>>>>> rx_mpwqe_filler_cqes: 0
>>>>> rx_mpwqe_filler_strides: 0
>>>>> rx_buff_alloc_err: 0
>>>>> rx_cqe_compress_blks: 0
>>>>> rx_cqe_compress_pkts: 0
>>>> If this is a PCIe bottleneck it might be useful to enable CQE
>>>> compression (to reduce PCIe completion descriptor transactions);
>>>> you should see rx_cqe_compress_pkts above increasing when
>>>> enabled.
>>>>
>>>> $ ethtool --set-priv-flags enp175s0f1 rx_cqe_compress on
>>>> $ ethtool --show-priv-flags enp175s0f1
>>>> Private flags for p6p1:
>>>> rx_cqe_moder : on
>>>> cqe_moder : off
>>>> rx_cqe_compress : on
>>>> ...
>>>>
>>>> try this on both interfaces.
>>> Done
>>> ethtool --show-priv-flags enp175s0f1
>>> Private flags for enp175s0f1:
>>> rx_cqe_moder : on
>>> tx_cqe_moder : off
>>> rx_cqe_compress : on
>>> rx_striding_rq : off
>>> rx_no_csum_complete: off
>>>
>>> ethtool --show-priv-flags enp175s0f0
>>> Private flags for enp175s0f0:
>>> rx_cqe_moder : on
>>> tx_cqe_moder : off
>>> rx_cqe_compress : on
>>> rx_striding_rq : off
>>> rx_no_csum_complete: off
>>>
>> Did it help reduce the load on the PCIe? Do you see more pps?
>> What is the ratio between rx_cqe_compress_pkts and overall RX
>> packets?
>>
>> [...]
>>
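The ratio asked about above can be computed from two `ethtool -S` counters; a minimal sketch (the counter names are the real mlx5 ones, but the sample values below are made up for illustration):

```python
# Compute the CQE-compression ratio from `ethtool -S <iface>` output.
# Shown against a pasted sample; in practice feed it the real output,
# e.g. subprocess.run(["ethtool", "-S", iface], capture_output=True).
sample = """
     rx_packets: 173730800927
     rx_cqe_compress_pkts: 86865400463
"""

stats = {}
for line in sample.strip().splitlines():
    name, _, value = line.strip().partition(": ")
    stats[name] = int(value)

ratio = stats["rx_cqe_compress_pkts"] / stats["rx_packets"]
print(f"{ratio:.1%} of RX packets arrived via compressed CQEs")
```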
>>>>> ethtool -S enp175s0f0
>>>>> NIC statistics:
>>>>> rx_packets: 141574897253
>>>>> rx_bytes: 184445040406258
>>>>> tx_packets: 172569543894
>>>>> tx_bytes: 99486882076365
>>>>> tx_tso_packets: 9367664195
>>>>> tx_tso_bytes: 56435233992948
>>>>> tx_tso_inner_packets: 0
>>>>> tx_tso_inner_bytes: 0
>>>>> tx_added_vlan_packets: 141297671626
>>>>> tx_nop: 2102916272
>>>>> rx_lro_packets: 0
>>>>> rx_lro_bytes: 0
>>>>> rx_ecn_mark: 0
>>>>> rx_removed_vlan_packets: 141574897252
>>>>> rx_csum_unnecessary: 0
>>>>> rx_csum_none: 23135854
>>>>> rx_csum_complete: 141551761398
>>>>> rx_csum_unnecessary_inner: 0
>>>>> rx_xdp_drop: 0
>>>>> rx_xdp_redirect: 0
>>>>> rx_xdp_tx_xmit: 0
>>>>> rx_xdp_tx_full: 0
>>>>> rx_xdp_tx_err: 0
>>>>> rx_xdp_tx_cqe: 0
>>>>> tx_csum_none: 127934791664
>>>> It is a good idea to look into this: TX is not requesting hardware
>>>> TX checksumming for a lot of packets, so maybe you are wasting a
>>>> lot of CPU on calculating checksums - or maybe this is just the
>>>> RX csum complete..
>>>>
>>>>> tx_csum_partial: 13362879974
>>>>> tx_csum_partial_inner: 0
>>>>> tx_queue_stopped: 232561
>>>> TX queues are stalling, which could be an indication of the PCIe
>>>> bottleneck.
>>>>
>>>>> tx_queue_dropped: 0
>>>>> tx_xmit_more: 1266021946
>>>>> tx_recover: 0
>>>>> tx_cqes: 140031716469
>>>>> tx_queue_wake: 232561
>>>>> tx_udp_seg_rem: 0
>>>>> tx_cqe_err: 0
>>>>> tx_xdp_xmit: 0
>>>>> tx_xdp_full: 0
>>>>> tx_xdp_err: 0
>>>>> tx_xdp_cqes: 0
>>>>> rx_wqe_err: 0
>>>>> rx_mpwqe_filler_cqes: 0
>>>>> rx_mpwqe_filler_strides: 0
>>>>> rx_buff_alloc_err: 0
>>>>> rx_cqe_compress_blks: 0
>>>>> rx_cqe_compress_pkts: 0
>>>>> rx_page_reuse: 0
>>>>> rx_cache_reuse: 16625975793
>>>>> rx_cache_full: 54161465914
>>>>> rx_cache_empty: 258048
>>>>> rx_cache_busy: 54161472735
>>>>> rx_cache_waive: 0
>>>>> rx_congst_umr: 0
>>>>> rx_arfs_err: 0
>>>>> ch_events: 40572621887
>>>>> ch_poll: 40885650979
>>>>> ch_arm: 40429276692
>>>>> ch_aff_change: 0
>>>>> ch_eq_rearm: 0
>>>>> rx_out_of_buffer: 2791690
>>>>> rx_if_down_packets: 74
>>>>> rx_vport_unicast_packets: 141843476308
>>>>> rx_vport_unicast_bytes: 185421265403318
>>>>> tx_vport_unicast_packets: 172569484005
>>>>> tx_vport_unicast_bytes: 100019940094298
>>>>> rx_vport_multicast_packets: 85122935
>>>>> rx_vport_multicast_bytes: 5761316431
>>>>> tx_vport_multicast_packets: 6452
>>>>> tx_vport_multicast_bytes: 643540
>>>>> rx_vport_broadcast_packets: 22423624
>>>>> rx_vport_broadcast_bytes: 1390127090
>>>>> tx_vport_broadcast_packets: 22024
>>>>> tx_vport_broadcast_bytes: 1321440
>>>>> rx_vport_rdma_unicast_packets: 0
>>>>> rx_vport_rdma_unicast_bytes: 0
>>>>> tx_vport_rdma_unicast_packets: 0
>>>>> tx_vport_rdma_unicast_bytes: 0
>>>>> rx_vport_rdma_multicast_packets: 0
>>>>> rx_vport_rdma_multicast_bytes: 0
>>>>> tx_vport_rdma_multicast_packets: 0
>>>>> tx_vport_rdma_multicast_bytes: 0
>>>>> tx_packets_phy: 172569501577
>>>>> rx_packets_phy: 142871314588
>>>>> rx_crc_errors_phy: 0
>>>>> tx_bytes_phy: 100710212814151
>>>>> rx_bytes_phy: 187209224289564
>>>>> tx_multicast_phy: 6452
>>>>> tx_broadcast_phy: 22024
>>>>> rx_multicast_phy: 85122933
>>>>> rx_broadcast_phy: 22423623
>>>>> rx_in_range_len_errors_phy: 2
>>>>> rx_out_of_range_len_phy: 0
>>>>> rx_oversize_pkts_phy: 0
>>>>> rx_symbol_err_phy: 0
>>>>> tx_mac_control_phy: 0
>>>>> rx_mac_control_phy: 0
>>>>> rx_unsupported_op_phy: 0
>>>>> rx_pause_ctrl_phy: 0
>>>>> tx_pause_ctrl_phy: 0
>>>>> rx_discards_phy: 920161423
>>>> OK, this port seems to be suffering more; RX is congested, maybe
>>>> due to the PCIe bottleneck.
>>> Yes, this side receives more traffic - the second port has ~10G more TX.
>>>
>> [...]
>>
>>
>>>>> Average: 17 0.00 0.00 16.60 0.00 0.00 52.10
>>>>> 0.00 0.00 0.00 31.30
>>>>> Average: 18 0.00 0.00 13.90 0.00 0.00 61.20
>>>>> 0.00 0.00 0.00 24.90
>>>>> Average: 19 0.00 0.00 9.99 0.00 0.00 70.33
>>>>> 0.00 0.00 0.00 19.68
>>>>> Average: 20 0.00 0.00 9.00 0.00 0.00 73.00
>>>>> 0.00 0.00 0.00 18.00
>>>>> Average: 21 0.00 0.00 8.70 0.00 0.00 73.90
>>>>> 0.00 0.00 0.00 17.40
>>>>> Average: 22 0.00 0.00 15.42 0.00 0.00 58.56
>>>>> 0.00 0.00 0.00 26.03
>>>>> Average: 23 0.00 0.00 10.81 0.00 0.00 71.67
>>>>> 0.00 0.00 0.00 17.52
>>>>> Average: 24 0.00 0.00 10.00 0.00 0.00 71.80
>>>>> 0.00 0.00 0.00 18.20
>>>>> Average: 25 0.00 0.00 11.19 0.00 0.00 71.13
>>>>> 0.00 0.00 0.00 17.68
>>>>> Average: 26 0.00 0.00 11.00 0.00 0.00 70.80
>>>>> 0.00 0.00 0.00 18.20
>>>>> Average: 27 0.00 0.00 10.01 0.00 0.00 69.57
>>>>> 0.00 0.00 0.00 20.42
>>>> The NUMA-local cores are not at 100% utilization; you have around
>>>> 20% idle on each one.
>>> Yes - no 100% CPU - but the difference between 80% and 100% is
>>> like pushing an additional 1-2Gbit/s
>>>
>> Yes, but it doesn't look like the bottleneck is the CPU, although
>> it is close to it :)
>>
>>>>> Average: 28 0.00 0.00 0.00 0.00 0.00 0.00
>>>>> 0.00
>>>>> 0.00 0.00 100.00
>>>>> Average: 29 0.00 0.00 0.00 0.00 0.00 0.00
>>>>> 0.00
>>>>> 0.00 0.00 100.00
>>>>> Average: 30 0.00 0.00 0.00 0.00 0.00 0.00
>>>>> 0.00
>>>>> 0.00 0.00 100.00
>>>>> Average: 31 0.00 0.00 0.00 0.00 0.00 0.00
>>>>> 0.00
>>>>> 0.00 0.00 100.00
>>>>> Average: 32 0.00 0.00 0.00 0.00 0.00 0.00
>>>>> 0.00
>>>>> 0.00 0.00 100.00
>>>>> Average: 33 0.00 0.00 3.90 0.00 0.00 0.00
>>>>> 0.00
>>>>> 0.00 0.00 96.10
>>>>> Average: 34 0.00 0.00 0.00 0.00 0.00 0.00
>>>>> 0.00
>>>>> 0.00 0.00 100.00
>>>>> Average: 35 0.00 0.00 0.00 0.00 0.00 0.00
>>>>> 0.00
>>>>> 0.00 0.00 100.00
>>>>> Average: 36 0.10 0.00 0.20 0.00 0.00 0.00
>>>>> 0.00
>>>>> 0.00 0.00 99.70
>>>>> Average: 37 0.20 0.00 0.30 0.00 0.00 0.00
>>>>> 0.00
>>>>> 0.00 0.00 99.50
>>>>> Average: 38 0.00 0.00 0.00 0.00 0.00 0.00
>>>>> 0.00
>>>>> 0.00 0.00 100.00
>>>>> Average: 39 0.00 0.00 2.60 0.00 0.00 0.00
>>>>> 0.00
>>>>> 0.00 0.00 97.40
>>>>> Average: 40 0.00 0.00 0.90 0.00 0.00 0.00
>>>>> 0.00
>>>>> 0.00 0.00 99.10
>>>>> Average: 41 0.10 0.00 0.50 0.00 0.00 0.00
>>>>> 0.00
>>>>> 0.00 0.00 99.40
>>>>> Average: 42 0.00 0.00 9.91 0.00 0.00 70.67
>>>>> 0.00 0.00 0.00 19.42
>>>>> Average: 43 0.00 0.00 15.90 0.00 0.00 57.50
>>>>> 0.00 0.00 0.00 26.60
>>>>> Average: 44 0.00 0.00 12.20 0.00 0.00 66.20
>>>>> 0.00 0.00 0.00 21.60
>>>>> Average: 45 0.00 0.00 12.00 0.00 0.00 67.50
>>>>> 0.00 0.00 0.00 20.50
>>>>> Average: 46 0.00 0.00 12.90 0.00 0.00 65.50
>>>>> 0.00 0.00 0.00 21.60
>>>>> Average: 47 0.00 0.00 14.59 0.00 0.00 60.84
>>>>> 0.00 0.00 0.00 24.58
>>>>> Average: 48 0.00 0.00 13.59 0.00 0.00 61.74
>>>>> 0.00 0.00 0.00 24.68
>>>>> Average: 49 0.00 0.00 18.36 0.00 0.00 53.29
>>>>> 0.00 0.00 0.00 28.34
>>>>> Average: 50 0.00 0.00 15.32 0.00 0.00 58.86
>>>>> 0.00 0.00 0.00 25.83
>>>>> Average: 51 0.00 0.00 17.60 0.00 0.00 55.20
>>>>> 0.00 0.00 0.00 27.20
>>>>> Average: 52 0.00 0.00 15.92 0.00 0.00 56.06
>>>>> 0.00 0.00 0.00 28.03
>>>>> Average: 53 0.00 0.00 13.00 0.00 0.00 62.30
>>>>> 0.00 0.00 0.00 24.70
>>>>> Average: 54 0.00 0.00 13.20 0.00 0.00 61.50
>>>>> 0.00 0.00 0.00 25.30
>>>>> Average: 55 0.00 0.00 14.59 0.00 0.00 58.64
>>>>> 0.00 0.00 0.00 26.77
>>>>>
>>>>>
>>>>> ethtool -k enp175s0f0
>>>>> Features for enp175s0f0:
>>>>> rx-checksumming: on
>>>>> tx-checksumming: on
>>>>> tx-checksum-ipv4: on
>>>>> tx-checksum-ip-generic: off [fixed]
>>>>> tx-checksum-ipv6: on
>>>>> tx-checksum-fcoe-crc: off [fixed]
>>>>> tx-checksum-sctp: off [fixed]
>>>>> scatter-gather: on
>>>>> tx-scatter-gather: on
>>>>> tx-scatter-gather-fraglist: off [fixed]
>>>>> tcp-segmentation-offload: on
>>>>> tx-tcp-segmentation: on
>>>>> tx-tcp-ecn-segmentation: off [fixed]
>>>>> tx-tcp-mangleid-segmentation: off
>>>>> tx-tcp6-segmentation: on
>>>>> udp-fragmentation-offload: off
>>>>> generic-segmentation-offload: on
>>>>> generic-receive-offload: on
>>>>> large-receive-offload: off [fixed]
>>>>> rx-vlan-offload: on
>>>>> tx-vlan-offload: on
>>>>> ntuple-filters: off
>>>>> receive-hashing: on
>>>>> highdma: on [fixed]
>>>>> rx-vlan-filter: on
>>>>> vlan-challenged: off [fixed]
>>>>> tx-lockless: off [fixed]
>>>>> netns-local: off [fixed]
>>>>> tx-gso-robust: off [fixed]
>>>>> tx-fcoe-segmentation: off [fixed]
>>>>> tx-gre-segmentation: on
>>>>> tx-gre-csum-segmentation: on
>>>>> tx-ipxip4-segmentation: off [fixed]
>>>>> tx-ipxip6-segmentation: off [fixed]
>>>>> tx-udp_tnl-segmentation: on
>>>>> tx-udp_tnl-csum-segmentation: on
>>>>> tx-gso-partial: on
>>>>> tx-sctp-segmentation: off [fixed]
>>>>> tx-esp-segmentation: off [fixed]
>>>>> tx-udp-segmentation: on
>>>>> fcoe-mtu: off [fixed]
>>>>> tx-nocache-copy: off
>>>>> loopback: off [fixed]
>>>>> rx-fcs: off
>>>>> rx-all: off
>>>>> tx-vlan-stag-hw-insert: on
>>>>> rx-vlan-stag-hw-parse: off [fixed]
>>>>> rx-vlan-stag-filter: on [fixed]
>>>>> l2-fwd-offload: off [fixed]
>>>>> hw-tc-offload: off
>>>>> esp-hw-offload: off [fixed]
>>>>> esp-tx-csum-hw-offload: off [fixed]
>>>>> rx-udp_tunnel-port-offload: on
>>>>> tls-hw-tx-offload: off [fixed]
>>>>> tls-hw-rx-offload: off [fixed]
>>>>> rx-gro-hw: off [fixed]
>>>>> tls-hw-record: off [fixed]
>>>>>
>>>>> ethtool -c enp175s0f0
>>>>> Coalesce parameters for enp175s0f0:
>>>>> Adaptive RX: off TX: on
>>>>> stats-block-usecs: 0
>>>>> sample-interval: 0
>>>>> pkt-rate-low: 0
>>>>> pkt-rate-high: 0
>>>>> dmac: 32703
>>>>>
>>>>> rx-usecs: 256
>>>>> rx-frames: 128
>>>>> rx-usecs-irq: 0
>>>>> rx-frames-irq: 0
>>>>>
>>>>> tx-usecs: 8
>>>>> tx-frames: 128
>>>>> tx-usecs-irq: 0
>>>>> tx-frames-irq: 0
>>>>>
>>>>> rx-usecs-low: 0
>>>>> rx-frame-low: 0
>>>>> tx-usecs-low: 0
>>>>> tx-frame-low: 0
>>>>>
>>>>> rx-usecs-high: 0
>>>>> rx-frame-high: 0
>>>>> tx-usecs-high: 0
>>>>> tx-frame-high: 0
>>>>>
>>>>> ethtool -g enp175s0f0
>>>>> Ring parameters for enp175s0f0:
>>>>> Pre-set maximums:
>>>>> RX: 8192
>>>>> RX Mini: 0
>>>>> RX Jumbo: 0
>>>>> TX: 8192
>>>>> Current hardware settings:
>>>>> RX: 4096
>>>>> RX Mini: 0
>>>>> RX Jumbo: 0
>>>>> TX: 4096
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>> Also tweaked the coalesce params a little - the best for this config are:
>>> ethtool -c enp175s0f0
>>> Coalesce parameters for enp175s0f0:
>>> Adaptive RX: off TX: off
>>> stats-block-usecs: 0
>>> sample-interval: 0
>>> pkt-rate-low: 0
>>> pkt-rate-high: 0
>>> dmac: 32573
>>>
>>> rx-usecs: 40
>>> rx-frames: 128
>>> rx-usecs-irq: 0
>>> rx-frames-irq: 0
>>>
>>> tx-usecs: 8
>>> tx-frames: 8
>>> tx-usecs-irq: 0
>>> tx-frames-irq: 0
>>>
>>> rx-usecs-low: 0
>>> rx-frame-low: 0
>>> tx-usecs-low: 0
>>> tx-frame-low: 0
>>>
>>> rx-usecs-high: 0
>>> rx-frame-high: 0
>>> tx-usecs-high: 0
>>> tx-frame-high: 0
>>>
>>>
>>> Fewer drops on the RX side - and more pps forwarded overall.
>>>
>> How much improvement? Maybe we can improve our adaptive RX
>> coalescing to be efficient for this workload.
>>
>>
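For intuition on what these knobs trade off: rx-usecs caps how long the NIC waits before raising an interrupt, which bounds the per-queue IRQ rate. A rough sketch of the implied ceiling (an illustrative simplification - it ignores the rx-frames trigger, which can fire the interrupt earlier):

```python
# Upper bound on IRQs/s per RX queue implied by interrupt moderation:
# with adaptive coalescing off, an interrupt fires at most once every
# rx-usecs microseconds (rx-frames may trigger it sooner under load).
def max_irqs_per_sec(rx_usecs):
    return 1_000_000 / rx_usecs

for usecs in (256, 128, 40):   # the three rx-usecs values seen in this thread
    print(f"rx-usecs={usecs:>3} -> up to {max_irqs_per_sec(usecs):,.0f} IRQs/s per queue")
```

Lowering rx-usecs from 256 to 40, as done above, lets each queue interrupt far more often, trading CPU for lower latency and fewer ring overruns.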
>
>
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-08 16:32 ` David Ahern
2018-11-08 17:30 ` Paweł Staszewski
@ 2018-11-09 0:40 ` Paweł Staszewski
2018-11-09 0:42 ` David Ahern
1 sibling, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-09 0:40 UTC (permalink / raw)
To: David Ahern, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
W dniu 08.11.2018 o 17:32, David Ahern pisze:
> On 11/8/18 9:27 AM, Paweł Staszewski wrote:
>>>> What hardware is this?
>>>>
>> mellanox connectx 4
>> ethtool -i enp175s0f0
>> driver: mlx5_core
>> version: 5.0-0
>> firmware-version: 12.21.1000 (SM_2001000001033)
>> expansion-rom-version:
>> bus-info: 0000:af:00.0
>> supports-statistics: yes
>> supports-test: yes
>> supports-eeprom-access: no
>> supports-register-dump: no
>> supports-priv-flags: yes
>>
>> ethtool -i enp175s0f1
>> driver: mlx5_core
>> version: 5.0-0
>> firmware-version: 12.21.1000 (SM_2001000001033)
>> expansion-rom-version:
>> bus-info: 0000:af:00.1
>> supports-statistics: yes
>> supports-test: yes
>> supports-eeprom-access: no
>> supports-register-dump: no
>> supports-priv-flags: yes
>>
>>>> Start with:
>>>>
>>>> echo 1 > /sys/kernel/debug/tracing/events/xdp/enable
>>>> cat /sys/kernel/debug/tracing/trace_pipe
>>> cat /sys/kernel/debug/tracing/trace_pipe
>>> <idle>-0 [045] ..s. 68469.467752: xdp_devmap_xmit:
>>> ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0 drops=1
>>> from_ifindex=4 to_ifindex=5 err=-6
> FIB lookup is good, the redirect is happening, but the mlx5 driver does
> not like it.
>
> I think the -6 is coming from the mlx5 driver and the packet is getting
> dropped. Perhaps this check in mlx5e_xdp_xmit:
>
> if (unlikely(sq_num >= priv->channels.num))
> return -ENXIO;
I removed that part and recompiled - but now after running xdp_fwd I
have a kernel panic :)
>
>
>>> swapper 0 [045] 68493.746274: fib:fib_table_lookup: table 254 oif
>>> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
>>> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
>>> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
>>>
>>> swapper 0 [045] 68494.770287: fib:fib_table_lookup: table 254 oif
>>> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
>>> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
>>> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
>>>
>>> swapper 0 [045] 68495.794304: fib:fib_table_lookup: table 254 oif
>>> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
>>> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
>>> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
>>>
>>> swapper 0 [045] 68496.818308: fib:fib_table_lookup: table 254 oif
>>> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
>>> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
>>> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
>>>
>>> swapper 0 [045] 68497.842313: fib:fib_table_lookup: table 254 oif
>>> 0 iif 6 proto 1 192.168.22.237/0 -> 172.16.0.2/0 tos 0 scope 0 flags 0
>>> ==> dev vlan1740 gw 0.0.0.0 src 172.16.0.1 err 0
>>> 7fff818c13b5 fib_table_lookup ([kernel.kallsyms])
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-09 0:40 ` Paweł Staszewski
@ 2018-11-09 0:42 ` David Ahern
2018-11-09 4:52 ` Saeed Mahameed
0 siblings, 1 reply; 77+ messages in thread
From: David Ahern @ 2018-11-09 0:42 UTC (permalink / raw)
To: Paweł Staszewski, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
On 11/8/18 5:40 PM, Paweł Staszewski wrote:
>
>
> W dniu 08.11.2018 o 17:32, David Ahern pisze:
>> On 11/8/18 9:27 AM, Paweł Staszewski wrote:
>>>>> What hardware is this?
>>>>>
>>> mellanox connectx 4
>>> ethtool -i enp175s0f0
>>> driver: mlx5_core
>>> version: 5.0-0
>>> firmware-version: 12.21.1000 (SM_2001000001033)
>>> expansion-rom-version:
>>> bus-info: 0000:af:00.0
>>> supports-statistics: yes
>>> supports-test: yes
>>> supports-eeprom-access: no
>>> supports-register-dump: no
>>> supports-priv-flags: yes
>>>
>>> ethtool -i enp175s0f1
>>> driver: mlx5_core
>>> version: 5.0-0
>>> firmware-version: 12.21.1000 (SM_2001000001033)
>>> expansion-rom-version:
>>> bus-info: 0000:af:00.1
>>> supports-statistics: yes
>>> supports-test: yes
>>> supports-eeprom-access: no
>>> supports-register-dump: no
>>> supports-priv-flags: yes
>>>
>>>>> Start with:
>>>>>
>>>>> echo 1 > /sys/kernel/debug/tracing/events/xdp/enable
>>>>> cat /sys/kernel/debug/tracing/trace_pipe
>>>> cat /sys/kernel/debug/tracing/trace_pipe
>>>> <idle>-0 [045] ..s. 68469.467752: xdp_devmap_xmit:
>>>> ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0 drops=1
>>>> from_ifindex=4 to_ifindex=5 err=-6
>> FIB lookup is good, the redirect is happening, but the mlx5 driver does
>> not like it.
>>
>> I think the -6 is coming from the mlx5 driver and the packet is getting
>> dropped. Perhaps this check in mlx5e_xdp_xmit:
>>
>> if (unlikely(sq_num >= priv->channels.num))
>> return -ENXIO;
> I removed that part and recompiled - but now after running xdp_fwd I
> have a kernel panic :)
Jesper or one of the Mellanox folks needs to respond about the config
needed to run XDP with this NIC. I don't have a 40G or 100G card to play
with.
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-09 0:42 ` David Ahern
@ 2018-11-09 4:52 ` Saeed Mahameed
2018-11-09 7:52 ` Jesper Dangaard Brouer
2018-11-09 9:56 ` Paweł Staszewski
0 siblings, 2 replies; 77+ messages in thread
From: Saeed Mahameed @ 2018-11-09 4:52 UTC (permalink / raw)
To: dsahern, pstaszewski, brouer; +Cc: netdev, yoel
On Thu, 2018-11-08 at 17:42 -0700, David Ahern wrote:
> On 11/8/18 5:40 PM, Paweł Staszewski wrote:
> >
> > W dniu 08.11.2018 o 17:32, David Ahern pisze:
> > > On 11/8/18 9:27 AM, Paweł Staszewski wrote:
> > > > > > What hardware is this?
> > > > > >
> > > > mellanox connectx 4
> > > > ethtool -i enp175s0f0
> > > > driver: mlx5_core
> > > > version: 5.0-0
> > > > firmware-version: 12.21.1000 (SM_2001000001033)
> > > > expansion-rom-version:
> > > > bus-info: 0000:af:00.0
> > > > supports-statistics: yes
> > > > supports-test: yes
> > > > supports-eeprom-access: no
> > > > supports-register-dump: no
> > > > supports-priv-flags: yes
> > > >
> > > > ethtool -i enp175s0f1
> > > > driver: mlx5_core
> > > > version: 5.0-0
> > > > firmware-version: 12.21.1000 (SM_2001000001033)
> > > > expansion-rom-version:
> > > > bus-info: 0000:af:00.1
> > > > supports-statistics: yes
> > > > supports-test: yes
> > > > supports-eeprom-access: no
> > > > supports-register-dump: no
> > > > supports-priv-flags: yes
> > > >
> > > > > > Start with:
> > > > > >
> > > > > > echo 1 > /sys/kernel/debug/tracing/events/xdp/enable
> > > > > > cat /sys/kernel/debug/tracing/trace_pipe
> > > > > cat /sys/kernel/debug/tracing/trace_pipe
> > > > > <idle>-0 [045] ..s. 68469.467752:
> > > > > xdp_devmap_xmit:
> > > > > ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0
> > > > > drops=1
> > > > > from_ifindex=4 to_ifindex=5 err=-6
> > > FIB lookup is good, the redirect is happening, but the mlx5
> > > driver does
> > > not like it.
> > >
> > > I think the -6 is coming from the mlx5 driver and the packet is
> > > getting
> > > dropped. Perhaps this check in mlx5e_xdp_xmit:
> > >
> > > if (unlikely(sq_num >= priv->channels.num))
> > > return -ENXIO;
> > I removed that part and recompiled - but now after running xdp_fwd
> > I have a kernel panic :)
>
Heh, no, please don't do such a thing :)
It must be because the TX netdev has fewer TX queues than the RX
netdev, or the RX netdev rings are bound to high CPU indexes.
Anyway, best practice is to open #cores RX/TX queues on both sides:
ethtool -L enp175s0f0 combined $(nproc)
ethtool -L enp175s0f1 combined $(nproc)
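The failure mode described above can be sketched in a few lines: XDP_REDIRECT picks a TX queue on the egress device indexed by the RX CPU, and the mlx5 check quoted earlier rejects the send when that index exceeds the egress device's channel count (this is a simplified model of the logic, not the real driver code):

```python
# Model of the mlx5e_xdp_xmit bounds check discussed in this thread.
ENXIO = 6

def mlx5e_xdp_xmit(sq_num, num_channels):
    # sq_num is derived from the CPU the RX ring is bound to; if the
    # egress device opened fewer channels, the send is refused.
    if sq_num >= num_channels:
        return -ENXIO          # the err=-6 seen in trace_pipe
    return 0                   # packet would be queued for transmit

# e.g. RX ring bound to CPU 45, but the egress side has only 28 channels:
assert mlx5e_xdp_xmit(45, 28) == -6
assert mlx5e_xdp_xmit(10, 28) == 0
```

Opening `$(nproc)` combined channels on both NICs, as suggested above, makes every possible RX CPU index land on a valid egress queue.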
> Jesper or one of the Mellanox folks needs to respond about the config
> needed to run XDP with this NIC. I don't have a 40G or 100G card to
> play
> with.
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-09 4:52 ` Saeed Mahameed
@ 2018-11-09 7:52 ` Jesper Dangaard Brouer
2018-11-09 9:56 ` Paweł Staszewski
1 sibling, 0 replies; 77+ messages in thread
From: Jesper Dangaard Brouer @ 2018-11-09 7:52 UTC (permalink / raw)
To: Saeed Mahameed
Cc: dsahern, pstaszewski, netdev, yoel, brouer, John Fastabend,
Tariq Toukan, Toke Høiland-Jørgensen
On Fri, 9 Nov 2018 04:52:01 +0000
Saeed Mahameed <saeedm@mellanox.com> wrote:
> On Thu, 2018-11-08 at 17:42 -0700, David Ahern wrote:
> > On 11/8/18 5:40 PM, Paweł Staszewski wrote:
> > >
> > > > On 08.11.2018 at 17:32, David Ahern wrote:
> > > > On 11/8/18 9:27 AM, Paweł Staszewski wrote:
> > > > > > > What hardware is this?
> > > > > > >
> > > > > mellanox connectx 4
> > > > > ethtool -i enp175s0f0
> > > > > driver: mlx5_core
> > > > > version: 5.0-0
> > > > > firmware-version: 12.21.1000 (SM_2001000001033)
> > > > > expansion-rom-version:
> > > > > bus-info: 0000:af:00.0
> > > > > supports-statistics: yes
> > > > > supports-test: yes
> > > > > supports-eeprom-access: no
> > > > > supports-register-dump: no
> > > > > supports-priv-flags: yes
> > > > >
> > > > > ethtool -i enp175s0f1
> > > > > driver: mlx5_core
> > > > > version: 5.0-0
> > > > > firmware-version: 12.21.1000 (SM_2001000001033)
> > > > > expansion-rom-version:
> > > > > bus-info: 0000:af:00.1
> > > > > supports-statistics: yes
> > > > > supports-test: yes
> > > > > supports-eeprom-access: no
> > > > > supports-register-dump: no
> > > > > supports-priv-flags: yes
> > > > >
> > > > > > > Start with:
> > > > > > >
> > > > > > > echo 1 > /sys/kernel/debug/tracing/events/xdp/enable
> > > > > > > cat /sys/kernel/debug/tracing/trace_pipe
> > > > > > cat /sys/kernel/debug/tracing/trace_pipe
> > > > > > <idle>-0 [045] ..s. 68469.467752:
> > > > > > xdp_devmap_xmit:
> > > > > > ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0
> > > > > > drops=1
> > > > > > from_ifindex=4 to_ifindex=5 err=-6
> > > > FIB lookup is good, the redirect is happening, but the mlx5
> > > > driver does
> > > > not like it.
> > > >
> > > > I think the -6 is coming from the mlx5 driver and the packet is
> > > > getting
> > > > dropped. Perhaps this check in mlx5e_xdp_xmit:
> > > >
> > > > if (unlikely(sq_num >= priv->channels.num))
> > > > return -ENXIO;
> > > I removed that part and recompiled - but now after running xdp_fwd
> > > I have a kernel panic :)
> >
>
> heh, no, please don't do such a thing :)
>
> It must be because the tx netdev has fewer tx queues than the rx netdev,
> or the rx netdev rings are bound to high cpu indexes.
>
> anyway, best practice is to open #cores RX/TX queues on both netdevs:
>
> ethtool -L enp175s0f0 combined $(nproc)
> ethtool -L enp175s0f1 combined $(nproc)
>
> > Jesper or one of the Mellanox folks needs to respond about the config
> > needed to run XDP with this NIC. I don't have a 40G or 100G card to
> > play with.
Saeed already answered with a solution... you need to increase the
number of RX/TX queues to be equal to the number of CPUs.
IMHO this again shows that the resource allocations around ndo_xdp_xmit
need a better API. The implicit requirement is that once ndo_xdp_xmit
is enabled, the driver MUST allocate a dedicated TX queue for XDP on
each CPU. It seems for mlx5 that this is a manual process. And as Pawel
discovered, it is hard to troubleshoot and only visible via tracepoints.
I think we need to do better in this area, both regarding usability and
more graceful handling when the HW doesn't have the resources. The
original requirement of an XDP-TX queue per CPU was necessary because
ndo_xdp_xmit was only sending one packet at a time. After my
recent changes, ndo_xdp_xmit can now send in bulks. Thus,
performance wise it is feasible to use an (array of) locks, e.g. if the
HW cannot allocate more TX-HW queues, or e.g. allow the sysadmin to set
the mode of operation (if the system as a whole has issues allocating TX
completion IRQs for all these queues).
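The implicit per-CPU TX queue requirement can be illustrated with a tiny model. This is a sketch under the assumption that the driver indexes its XDP send queues by the redirecting CPU's id, as the `sq_num >= priv->channels.num` check quoted earlier suggests; it is not actual driver code:

```python
ENXIO = 6  # the errno behind the err=-6 seen in the xdp_devmap_xmit trace

def xdp_xmit(cpu: int, num_channels: int) -> int:
    """Model of the mlx5e_xdp_xmit queue check: sq selection by the
    sending CPU's id (simplified), and rejection of the whole bulk
    when that queue was never allocated."""
    sq_num = cpu
    if sq_num >= num_channels:
        return -ENXIO
    return 0

# RX ring steered to CPU 45 (as in the trace: "[045]") with 28 channels:
assert xdp_xmit(45, 28) == -6
# After `ethtool -L ... combined $(nproc)` on a 56-CPU machine:
assert xdp_xmit(45, 56) == 0
```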
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-09 4:52 ` Saeed Mahameed
2018-11-09 7:52 ` Jesper Dangaard Brouer
@ 2018-11-09 9:56 ` Paweł Staszewski
1 sibling, 0 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-09 9:56 UTC (permalink / raw)
To: Saeed Mahameed, dsahern, brouer; +Cc: netdev, yoel
On 09.11.2018 at 05:52, Saeed Mahameed wrote:
> On Thu, 2018-11-08 at 17:42 -0700, David Ahern wrote:
>> On 11/8/18 5:40 PM, Paweł Staszewski wrote:
>>> On 08.11.2018 at 17:32, David Ahern wrote:
>>>> On 11/8/18 9:27 AM, Paweł Staszewski wrote:
>>>>>>> What hardware is this?
>>>>>>>
>>>>> mellanox connectx 4
>>>>> ethtool -i enp175s0f0
>>>>> driver: mlx5_core
>>>>> version: 5.0-0
>>>>> firmware-version: 12.21.1000 (SM_2001000001033)
>>>>> expansion-rom-version:
>>>>> bus-info: 0000:af:00.0
>>>>> supports-statistics: yes
>>>>> supports-test: yes
>>>>> supports-eeprom-access: no
>>>>> supports-register-dump: no
>>>>> supports-priv-flags: yes
>>>>>
>>>>> ethtool -i enp175s0f1
>>>>> driver: mlx5_core
>>>>> version: 5.0-0
>>>>> firmware-version: 12.21.1000 (SM_2001000001033)
>>>>> expansion-rom-version:
>>>>> bus-info: 0000:af:00.1
>>>>> supports-statistics: yes
>>>>> supports-test: yes
>>>>> supports-eeprom-access: no
>>>>> supports-register-dump: no
>>>>> supports-priv-flags: yes
>>>>>
>>>>>>> Start with:
>>>>>>>
>>>>>>> echo 1 > /sys/kernel/debug/tracing/events/xdp/enable
>>>>>>> cat /sys/kernel/debug/tracing/trace_pipe
>>>>>> cat /sys/kernel/debug/tracing/trace_pipe
>>>>>> <idle>-0 [045] ..s. 68469.467752:
>>>>>> xdp_devmap_xmit:
>>>>>> ndo_xdp_xmit map_id=32 map_index=5 action=REDIRECT sent=0
>>>>>> drops=1
>>>>>> from_ifindex=4 to_ifindex=5 err=-6
>>>> FIB lookup is good, the redirect is happening, but the mlx5
>>>> driver does
>>>> not like it.
>>>>
>>>> I think the -6 is coming from the mlx5 driver and the packet is
>>>> getting
>>>> dropped. Perhaps this check in mlx5e_xdp_xmit:
>>>>
>>>> if (unlikely(sq_num >= priv->channels.num))
>>>> return -ENXIO;
>>> I removed that part and recompiled - but now after running xdp_fwd
>>> I have a kernel panic :)
> heh, no, please don't do such a thing :)
yes - a dirty "try" :)
The code is back in place :)
>
> It must be because the tx netdev has fewer tx queues than the rx netdev,
> or the rx netdev rings are bound to high cpu indexes.
>
> anyway, best practice is to open #cores RX/TX queues on both netdevs:
>
> ethtool -L enp175s0f0 combined $(nproc)
> ethtool -L enp175s0f1 combined $(nproc)
Ok now it is working.
Time for some tests :)
Thanks
>> Jesper or one of the Mellanox folks needs to respond about the config
>> needed to run XDP with this NIC. I don't have a 40G or 100G card to
>> play
>> with.
>
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-08 16:06 ` David Ahern
2018-11-08 16:25 ` Paweł Staszewski
@ 2018-11-09 10:20 ` Paweł Staszewski
2018-11-09 16:21 ` David Ahern
1 sibling, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-09 10:20 UTC (permalink / raw)
To: David Ahern, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
On 08.11.2018 at 17:06, David Ahern wrote:
> On 11/8/18 6:33 AM, Paweł Staszewski wrote:
>>
>> On 07.11.2018 at 22:06, David Ahern wrote:
>>> On 11/3/18 6:24 PM, Paweł Staszewski wrote:
>>>>> Does your setup have any other device types besides physical ports with
>>>>> VLANs (e.g., any macvlans or bonds)?
>>>>>
>>>>>
>>>> no.
>>>> just
>>>> phy(mlnx)->vlans only config
>>> VLAN and non-VLAN (and a mix) seem to work ok. Patches are here:
>>> https://github.com/dsahern/linux.git bpf/kernel-tables-wip
>>>
>>> I got lazy with the vlan exports; right now it requires 8021q to be
>>> builtin (CONFIG_VLAN_8021Q=y)
>>>
>>> You can use the xdp_fwd sample:
>>> make O=kbuild -C samples/bpf -j 8
>>>
>>> Copy samples/bpf/xdp_fwd_kern.o and samples/bpf/xdp_fwd to the server
>>> and run:
>>> ./xdp_fwd <list of NIC ports>
>>>
>>> e.g., in my testing I run:
>>> xdp_fwd eth1 eth2 eth3 eth4
>>>
>>> All of the relevant forwarding ports need to be on the same command
>>> line. This version populates a second map to verify the egress port has
>>> XDP enabled.
>> Installed it today on a lab server with a mellanox connectx4
>>
>> Trying some simple static routing first - but after enabling the xdp
>> program the receiver is not receiving frames
>>
>> Route table is simple as possible for tests :)
>>
>> icmp ping test sent from 192.168.22.237 to 172.16.0.2 - incoming
>> packets on vlan 4081
>>
>> ip r
>> default via 192.168.22.236 dev vlan4081
>> 172.16.0.0/30 dev vlan1740 proto kernel scope link src 172.16.0.1
>> 192.168.22.0/24 dev vlan4081 proto kernel scope link src 192.168.22.205
>>
>> neigh table:
>> ip neigh ls
>>
>> 192.168.22.237 dev vlan4081 lladdr 00:25:90:fb:a6:8d REACHABLE
>> 172.16.0.2 dev vlan1740 lladdr ac:1f:6b:2c:2e:5a REACHABLE
>>
>> and interfaces:
>> 4: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state
>> UP mode DEFAULT group default qlen 1000
>> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>> 5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state
>> UP mode DEFAULT group default qlen 1000
>> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>> 6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue state UP mode DEFAULT group default qlen 1000
>> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>> 7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue state UP mode DEFAULT group default qlen 1000
>> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>>
>> 5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp/id:5 qdisc
>> mq state UP group default qlen 1000
>> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>> inet6 fe80::ae1f:6bff:fe07:c891/64 scope link
>> valid_lft forever preferred_lft forever
>> 6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue state UP group default qlen 1000
>> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>> inet 192.168.22.205/24 scope global vlan4081
>> valid_lft forever preferred_lft forever
>> inet6 fe80::ae1f:6bff:fe07:c890/64 scope link
>> valid_lft forever preferred_lft forever
>> 7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue state UP group default qlen 1000
>> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>> inet 172.16.0.1/30 scope global vlan1740
>> valid_lft forever preferred_lft forever
>> inet6 fe80::ae1f:6bff:fe07:c891/64 scope link
>> valid_lft forever preferred_lft forever
>>
>>
>> xdp program detached:
>> Receiving side tcpdump:
>> 14:28:09.141233 IP 192.168.22.237 > 172.16.0.2: ICMP echo request, id
>> 30227, seq 487, length 64
>>
>> I can see icmp requests
>>
>>
>> enabling xdp
>> ./xdp_fwd enp175s0f1 enp175s0f0
>>
>> 4: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp qdisc mq
>> state UP mode DEFAULT group default qlen 1000
>> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>> prog/xdp id 5 tag 3c231ff1e5e77f3f
>> 5: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdp qdisc mq
>> state UP mode DEFAULT group default qlen 1000
>> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>> prog/xdp id 5 tag 3c231ff1e5e77f3f
>> 6: vlan4081@enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue state UP mode DEFAULT group default qlen 1000
>> link/ether ac:1f:6b:07:c8:90 brd ff:ff:ff:ff:ff:ff
>> 7: vlan1740@enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue state UP mode DEFAULT group default qlen 1000
>> link/ether ac:1f:6b:07:c8:91 brd ff:ff:ff:ff:ff:ff
>>
> What hardware is this?
>
> Start with:
>
> echo 1 > /sys/kernel/debug/tracing/events/xdp/enable
> cat /sys/kernel/debug/tracing/trace_pipe
>
> From there, you can check the FIB lookups:
> sysctl -w kernel.perf_event_max_stack=16
> perf record -e fib:* -a -g -- sleep 5
> perf script
>
I just caught some weird behavior :)
All was working fine for about 20k packets
Then xdp started forwarding only every 10th packet
ping 172.16.0.2 -i 0.1
PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
64 bytes from 172.16.0.2: icmp_seq=1 ttl=64 time=5.12 ms
64 bytes from 172.16.0.2: icmp_seq=9 ttl=64 time=5.20 ms
64 bytes from 172.16.0.2: icmp_seq=19 ttl=64 time=4.85 ms
64 bytes from 172.16.0.2: icmp_seq=29 ttl=64 time=4.91 ms
64 bytes from 172.16.0.2: icmp_seq=38 ttl=64 time=4.85 ms
64 bytes from 172.16.0.2: icmp_seq=48 ttl=64 time=5.00 ms
^C
--- 172.16.0.2 ping statistics ---
55 packets transmitted, 6 received, 89% packet loss, time 5655ms
rtt min/avg/max/mdev = 4.850/4.992/5.203/0.145 ms
And again after some time back to normal
ping 172.16.0.2 -i 0.1
PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
64 bytes from 172.16.0.2: icmp_seq=1 ttl=64 time=5.02 ms
64 bytes from 172.16.0.2: icmp_seq=2 ttl=64 time=5.06 ms
64 bytes from 172.16.0.2: icmp_seq=3 ttl=64 time=5.19 ms
64 bytes from 172.16.0.2: icmp_seq=4 ttl=64 time=5.07 ms
64 bytes from 172.16.0.2: icmp_seq=5 ttl=64 time=5.08 ms
64 bytes from 172.16.0.2: icmp_seq=6 ttl=64 time=5.14 ms
64 bytes from 172.16.0.2: icmp_seq=7 ttl=64 time=5.08 ms
64 bytes from 172.16.0.2: icmp_seq=8 ttl=64 time=5.17 ms
64 bytes from 172.16.0.2: icmp_seq=9 ttl=64 time=5.04 ms
64 bytes from 172.16.0.2: icmp_seq=10 ttl=64 time=5.10 ms
64 bytes from 172.16.0.2: icmp_seq=11 ttl=64 time=5.11 ms
64 bytes from 172.16.0.2: icmp_seq=12 ttl=64 time=5.13 ms
64 bytes from 172.16.0.2: icmp_seq=13 ttl=64 time=5.12 ms
64 bytes from 172.16.0.2: icmp_seq=14 ttl=64 time=5.15 ms
64 bytes from 172.16.0.2: icmp_seq=15 ttl=64 time=5.13 ms
64 bytes from 172.16.0.2: icmp_seq=16 ttl=64 time=5.04 ms
64 bytes from 172.16.0.2: icmp_seq=17 ttl=64 time=5.12 ms
64 bytes from 172.16.0.2: icmp_seq=18 ttl=64 time=5.07 ms
64 bytes from 172.16.0.2: icmp_seq=19 ttl=64 time=5.06 ms
64 bytes from 172.16.0.2: icmp_seq=20 ttl=64 time=5.12 ms
64 bytes from 172.16.0.2: icmp_seq=21 ttl=64 time=5.21 ms
64 bytes from 172.16.0.2: icmp_seq=22 ttl=64 time=4.98 ms
^C
--- 172.16.0.2 ping statistics ---
22 packets transmitted, 22 received, 0% packet loss, time 2105ms
rtt min/avg/max/mdev = 4.988/5.104/5.210/0.089 ms
I will try to catch this with debug enabled
Also wondering - since xdp now bypasses the vlan counters and other stuff
like tcpdump:
Is it possible to add counters for vlans from xdp?
This will help me in testing.
And also - for a non-lab scenario it should be possible to sniff on an
interface sometimes :)
So wondering if I need to attach another xdp program to the interface, or
if all this can be done by one
I think this is the time where I will need to learn more about xdp :)
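For reference, the bookkeeping such per-VLAN counters need is small. Below is a hypothetical plain-Python model of what an XDP program could keep in a BPF array map indexed by VLAN ID (and later dump with bpftool); the packet sizes and VLAN IDs are illustrative, and this is a sketch of the accounting only, not BPF code:

```python
MAX_VLAN = 4096  # valid VLAN IDs fit in an array map of this size

# One {packets, bytes} slot per VLAN ID, like a BPF array map value.
counters = [{"packets": 0, "bytes": 0} for _ in range(MAX_VLAN)]

def account(vlan_id: int, pkt_len: int) -> None:
    """What the XDP program would do per packet before the redirect."""
    if 0 <= vlan_id < MAX_VLAN:
        entry = counters[vlan_id]
        entry["packets"] += 1
        entry["bytes"] += pkt_len

# Two ICMP echoes in on vlan4081, one out on vlan1740 (sizes illustrative):
account(4081, 98)
account(4081, 98)
account(1740, 98)

assert counters[4081] == {"packets": 2, "bytes": 196}
assert counters[1740] == {"packets": 1, "bytes": 98}
```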
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-09 10:20 ` Paweł Staszewski
@ 2018-11-09 16:21 ` David Ahern
2018-11-09 19:59 ` Paweł Staszewski
2018-11-10 0:06 ` David Ahern
0 siblings, 2 replies; 77+ messages in thread
From: David Ahern @ 2018-11-09 16:21 UTC (permalink / raw)
To: Paweł Staszewski, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
On 11/9/18 3:20 AM, Paweł Staszewski wrote:
>
> I just caught some weird behavior :)
> All was working fine for about 20k packets
>
> Then xdp started forwarding only every 10th packet
Interesting. Any counter showing drops?
> ping 172.16.0.2 -i 0.1
> PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
> 64 bytes from 172.16.0.2: icmp_seq=1 ttl=64 time=5.12 ms
> 64 bytes from 172.16.0.2: icmp_seq=9 ttl=64 time=5.20 ms
> 64 bytes from 172.16.0.2: icmp_seq=19 ttl=64 time=4.85 ms
> 64 bytes from 172.16.0.2: icmp_seq=29 ttl=64 time=4.91 ms
> 64 bytes from 172.16.0.2: icmp_seq=38 ttl=64 time=4.85 ms
> 64 bytes from 172.16.0.2: icmp_seq=48 ttl=64 time=5.00 ms
> ^C
> --- 172.16.0.2 ping statistics ---
> 55 packets transmitted, 6 received, 89% packet loss, time 5655ms
> rtt min/avg/max/mdev = 4.850/4.992/5.203/0.145 ms
>
>
> And again after some time back to normal
>
> ping 172.16.0.2 -i 0.1
> PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
> 64 bytes from 172.16.0.2: icmp_seq=1 ttl=64 time=5.02 ms
> 64 bytes from 172.16.0.2: icmp_seq=2 ttl=64 time=5.06 ms
> 64 bytes from 172.16.0.2: icmp_seq=3 ttl=64 time=5.19 ms
> 64 bytes from 172.16.0.2: icmp_seq=4 ttl=64 time=5.07 ms
> 64 bytes from 172.16.0.2: icmp_seq=5 ttl=64 time=5.08 ms
> 64 bytes from 172.16.0.2: icmp_seq=6 ttl=64 time=5.14 ms
> 64 bytes from 172.16.0.2: icmp_seq=7 ttl=64 time=5.08 ms
> 64 bytes from 172.16.0.2: icmp_seq=8 ttl=64 time=5.17 ms
> 64 bytes from 172.16.0.2: icmp_seq=9 ttl=64 time=5.04 ms
> 64 bytes from 172.16.0.2: icmp_seq=10 ttl=64 time=5.10 ms
> 64 bytes from 172.16.0.2: icmp_seq=11 ttl=64 time=5.11 ms
> 64 bytes from 172.16.0.2: icmp_seq=12 ttl=64 time=5.13 ms
> 64 bytes from 172.16.0.2: icmp_seq=13 ttl=64 time=5.12 ms
> 64 bytes from 172.16.0.2: icmp_seq=14 ttl=64 time=5.15 ms
> 64 bytes from 172.16.0.2: icmp_seq=15 ttl=64 time=5.13 ms
> 64 bytes from 172.16.0.2: icmp_seq=16 ttl=64 time=5.04 ms
> 64 bytes from 172.16.0.2: icmp_seq=17 ttl=64 time=5.12 ms
> 64 bytes from 172.16.0.2: icmp_seq=18 ttl=64 time=5.07 ms
> 64 bytes from 172.16.0.2: icmp_seq=19 ttl=64 time=5.06 ms
> 64 bytes from 172.16.0.2: icmp_seq=20 ttl=64 time=5.12 ms
> 64 bytes from 172.16.0.2: icmp_seq=21 ttl=64 time=5.21 ms
> 64 bytes from 172.16.0.2: icmp_seq=22 ttl=64 time=4.98 ms
> ^C
> --- 172.16.0.2 ping statistics ---
> 22 packets transmitted, 22 received, 0% packet loss, time 2105ms
> rtt min/avg/max/mdev = 4.988/5.104/5.210/0.089 ms
>
>
> I will try to catch this with debug enabled
>
>
>
>
>
> Also wondering - since xdp now bypasses the vlan counters and other stuff
> like tcpdump
yes, xdp runs before tcpdump-based sockets.
And the counters (vlan just being the current example) are another
problem to be solved. The vlan net_device never sees the packet, and you
can not arbitrarily bump the counters just because the device lookups
reference them.
>
> Is it possible to add counters for vlans from xdp?
> This will help me in testing.
I will take a look today at adding counters that you can dump using
bpftool. It will be a temporary solution for this xdp program only.
>
>
> And also - for a non-lab scenario it should be possible to sniff on an
> interface sometimes :)
Yes, sampling is another problem.
> So wondering if I need to attach another xdp program to the interface, or
> if all this can be done by one
>
> I think this is the time where I will need to learn more about xdp :)
>
>
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-09 16:21 ` David Ahern
@ 2018-11-09 19:59 ` Paweł Staszewski
2018-11-10 0:06 ` David Ahern
1 sibling, 0 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-09 19:59 UTC (permalink / raw)
To: David Ahern, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
On 09.11.2018 at 17:21, David Ahern wrote:
> On 11/9/18 3:20 AM, Paweł Staszewski wrote:
>> I just caught some weird behavior :)
>> All was working fine for about 20k packets
>>
>> Then xdp started forwarding only every 10th packet
> Interesting. Any counter showing drops?
nothing that fits
NIC statistics:
rx_packets: 187041
rx_bytes: 10600954
tx_packets: 40316
tx_bytes: 16526844
tx_tso_packets: 797
tx_tso_bytes: 3876084
tx_tso_inner_packets: 0
tx_tso_inner_bytes: 0
tx_added_vlan_packets: 38391
tx_nop: 2
rx_lro_packets: 0
rx_lro_bytes: 0
rx_ecn_mark: 0
rx_removed_vlan_packets: 187041
rx_csum_unnecessary: 0
rx_csum_none: 150011
rx_csum_complete: 37030
rx_csum_unnecessary_inner: 0
rx_xdp_drop: 0
rx_xdp_redirect: 64893
rx_xdp_tx_xmit: 0
rx_xdp_tx_full: 0
rx_xdp_tx_err: 0
rx_xdp_tx_cqe: 0
tx_csum_none: 2468
tx_csum_partial: 35955
tx_csum_partial_inner: 0
tx_queue_stopped: 0
tx_queue_dropped: 0
tx_xmit_more: 0
tx_recover: 0
tx_cqes: 38423
tx_queue_wake: 0
tx_udp_seg_rem: 0
tx_cqe_err: 0
tx_xdp_xmit: 0
tx_xdp_full: 0
tx_xdp_err: 0
tx_xdp_cqes: 0
rx_wqe_err: 0
rx_mpwqe_filler_cqes: 0
rx_mpwqe_filler_strides: 0
rx_buff_alloc_err: 0
rx_cqe_compress_blks: 0
rx_cqe_compress_pkts: 0
rx_page_reuse: 0
rx_cache_reuse: 186302
rx_cache_full: 0
rx_cache_empty: 666768
rx_cache_busy: 174
rx_cache_waive: 0
rx_congst_umr: 0
rx_arfs_err: 0
ch_events: 249320
ch_poll: 249321
ch_arm: 249001
ch_aff_change: 0
ch_eq_rearm: 0
rx_out_of_buffer: 0
rx_if_down_packets: 57
rx_vport_unicast_packets: 142659
rx_vport_unicast_bytes: 42706914
tx_vport_unicast_packets: 40167
tx_vport_unicast_bytes: 16668096
rx_vport_multicast_packets: 39188170
rx_vport_multicast_bytes: 3466527450
tx_vport_multicast_packets: 58
tx_vport_multicast_bytes: 4556
rx_vport_broadcast_packets: 16343520
rx_vport_broadcast_bytes: 1031334602
tx_vport_broadcast_packets: 91
tx_vport_broadcast_bytes: 5460
rx_vport_rdma_unicast_packets: 0
rx_vport_rdma_unicast_bytes: 0
tx_vport_rdma_unicast_packets: 0
tx_vport_rdma_unicast_bytes: 0
rx_vport_rdma_multicast_packets: 0
rx_vport_rdma_multicast_bytes: 0
tx_vport_rdma_multicast_packets: 0
tx_vport_rdma_multicast_bytes: 0
tx_packets_phy: 40316
rx_packets_phy: 55674361
rx_crc_errors_phy: 0
tx_bytes_phy: 16839376
rx_bytes_phy: 4763267396
tx_multicast_phy: 58
tx_broadcast_phy: 91
rx_multicast_phy: 39188180
rx_broadcast_phy: 16343521
rx_in_range_len_errors_phy: 0
rx_out_of_range_len_phy: 0
rx_oversize_pkts_phy: 0
rx_symbol_err_phy: 0
tx_mac_control_phy: 0
rx_mac_control_phy: 0
rx_unsupported_op_phy: 0
rx_pause_ctrl_phy: 0
tx_pause_ctrl_phy: 0
rx_discards_phy: 1
tx_discards_phy: 0
tx_errors_phy: 0
rx_undersize_pkts_phy: 0
rx_fragments_phy: 0
rx_jabbers_phy: 0
rx_64_bytes_phy: 3792455
rx_65_to_127_bytes_phy: 51821620
rx_128_to_255_bytes_phy: 37669
rx_256_to_511_bytes_phy: 1481
rx_512_to_1023_bytes_phy: 434
rx_1024_to_1518_bytes_phy: 694
rx_1519_to_2047_bytes_phy: 20008
rx_2048_to_4095_bytes_phy: 0
rx_4096_to_8191_bytes_phy: 0
rx_8192_to_10239_bytes_phy: 0
link_down_events_phy: 0
rx_pcs_symbol_err_phy: 0
rx_corrected_bits_phy: 6
rx_err_lane_0_phy: 0
rx_err_lane_1_phy: 0
rx_err_lane_2_phy: 0
rx_err_lane_3_phy: 6
rx_buffer_passed_thres_phy: 0
rx_pci_signal_integrity: 0
tx_pci_signal_integrity: 82
outbound_pci_stalled_rd: 0
outbound_pci_stalled_wr: 0
outbound_pci_stalled_rd_events: 0
outbound_pci_stalled_wr_events: 0
rx_prio0_bytes: 4144920388
rx_prio0_packets: 48310037
tx_prio0_bytes: 16839376
tx_prio0_packets: 40316
rx_prio1_bytes: 481032
rx_prio1_packets: 7074
tx_prio1_bytes: 0
tx_prio1_packets: 0
rx_prio2_bytes: 9074194
rx_prio2_packets: 106207
tx_prio2_bytes: 0
tx_prio2_packets: 0
rx_prio3_bytes: 0
rx_prio3_packets: 0
tx_prio3_bytes: 0
tx_prio3_packets: 0
rx_prio4_bytes: 0
rx_prio4_packets: 0
tx_prio4_bytes: 0
tx_prio4_packets: 0
rx_prio5_bytes: 0
rx_prio5_packets: 0
tx_prio5_bytes: 0
tx_prio5_packets: 0
rx_prio6_bytes: 371961810
rx_prio6_packets: 4006281
tx_prio6_bytes: 0
tx_prio6_packets: 0
rx_prio7_bytes: 236830040
rx_prio7_packets: 3244761
tx_prio7_bytes: 0
tx_prio7_packets: 0
tx_pause_storm_warning_events : 0
tx_pause_storm_error_events: 0
module_unplug: 0
module_bus_stuck: 0
module_high_temp: 0
module_bad_shorted: 0
NIC statistics:
rx_packets: 843
rx_bytes: 58889
tx_packets: 324
tx_bytes: 23324
tx_tso_packets: 0
tx_tso_bytes: 0
tx_tso_inner_packets: 0
tx_tso_inner_bytes: 0
tx_added_vlan_packets: 293
tx_nop: 0
rx_lro_packets: 0
rx_lro_bytes: 0
rx_ecn_mark: 0
rx_removed_vlan_packets: 843
rx_csum_unnecessary: 0
rx_csum_none: 190
rx_csum_complete: 653
rx_csum_unnecessary_inner: 0
rx_xdp_drop: 0
rx_xdp_redirect: 0
rx_xdp_tx_xmit: 0
rx_xdp_tx_full: 0
rx_xdp_tx_err: 0
rx_xdp_tx_cqe: 0
tx_csum_none: 324
tx_csum_partial: 0
tx_csum_partial_inner: 0
tx_queue_stopped: 0
tx_queue_dropped: 0
tx_xmit_more: 1
tx_recover: 0
tx_cqes: 323
tx_queue_wake: 0
tx_udp_seg_rem: 0
tx_cqe_err: 0
tx_xdp_xmit: 64926
tx_xdp_full: 0
tx_xdp_err: 0
tx_xdp_cqes: 47958
rx_wqe_err: 0
rx_mpwqe_filler_cqes: 0
rx_mpwqe_filler_strides: 0
rx_buff_alloc_err: 0
rx_cqe_compress_blks: 0
rx_cqe_compress_pkts: 0
rx_page_reuse: 0
rx_cache_reuse: 648
rx_cache_full: 0
rx_cache_empty: 602112
rx_cache_busy: 0
rx_cache_waive: 0
rx_congst_umr: 0
rx_arfs_err: 0
ch_events: 49628
ch_poll: 49628
ch_arm: 49626
ch_aff_change: 0
ch_eq_rearm: 0
rx_out_of_buffer: 0
rx_if_down_packets: 46
rx_vport_unicast_packets: 5953
rx_vport_unicast_bytes: 4927049
tx_vport_unicast_packets: 65194
tx_vport_unicast_bytes: 31820150
rx_vport_multicast_packets: 37085249
rx_vport_multicast_bytes: 2449620421
tx_vport_multicast_packets: 55
tx_vport_multicast_bytes: 4278
rx_vport_broadcast_packets: 434654
rx_vport_broadcast_bytes: 31881063
tx_vport_broadcast_packets: 1
tx_vport_broadcast_bytes: 60
rx_vport_rdma_unicast_packets: 0
rx_vport_rdma_unicast_bytes: 0
tx_vport_rdma_unicast_packets: 0
tx_vport_rdma_unicast_bytes: 0
rx_vport_rdma_multicast_packets: 0
rx_vport_rdma_multicast_bytes: 0
tx_vport_rdma_multicast_packets: 0
tx_vport_rdma_multicast_bytes: 0
tx_packets_phy: 65250
rx_packets_phy: 37525857
rx_crc_errors_phy: 0
tx_bytes_phy: 32085488
rx_bytes_phy: 2636532027
tx_multicast_phy: 55
tx_broadcast_phy: 1
rx_multicast_phy: 37085250
rx_broadcast_phy: 434654
rx_in_range_len_errors_phy: 0
rx_out_of_range_len_phy: 0
rx_oversize_pkts_phy: 0
rx_symbol_err_phy: 0
tx_mac_control_phy: 0
rx_mac_control_phy: 0
rx_unsupported_op_phy: 0
rx_pause_ctrl_phy: 0
tx_pause_ctrl_phy: 0
rx_discards_phy: 0
tx_discards_phy: 0
tx_errors_phy: 0
rx_undersize_pkts_phy: 0
rx_fragments_phy: 0
rx_jabbers_phy: 0
rx_64_bytes_phy: 63346
rx_65_to_127_bytes_phy: 37434768
rx_128_to_255_bytes_phy: 14088
rx_256_to_511_bytes_phy: 10461
rx_512_to_1023_bytes_phy: 96
rx_1024_to_1518_bytes_phy: 1933
rx_1519_to_2047_bytes_phy: 1165
rx_2048_to_4095_bytes_phy: 0
rx_4096_to_8191_bytes_phy: 0
rx_8192_to_10239_bytes_phy: 0
link_down_events_phy: 0
rx_pcs_symbol_err_phy: 0
rx_corrected_bits_phy: 5
rx_err_lane_0_phy: 1
rx_err_lane_1_phy: 0
rx_err_lane_2_phy: 0
rx_err_lane_3_phy: 4
rx_buffer_passed_thres_phy: 0
rx_pci_signal_integrity: 0
tx_pci_signal_integrity: 82
outbound_pci_stalled_rd: 0
outbound_pci_stalled_wr: 0
outbound_pci_stalled_rd_events: 0
outbound_pci_stalled_wr_events: 0
rx_prio0_bytes: 23157221
rx_prio0_packets: 195789
tx_prio0_bytes: 32085488
tx_prio0_packets: 65250
rx_prio1_bytes: 0
rx_prio1_packets: 0
tx_prio1_bytes: 0
tx_prio1_packets: 0
rx_prio2_bytes: 0
rx_prio2_packets: 0
tx_prio2_bytes: 0
tx_prio2_packets: 0
rx_prio3_bytes: 23397578
rx_prio3_packets: 343182
tx_prio3_bytes: 0
tx_prio3_packets: 0
rx_prio4_bytes: 0
rx_prio4_packets: 0
tx_prio4_bytes: 0
tx_prio4_packets: 0
rx_prio5_bytes: 0
rx_prio5_packets: 0
tx_prio5_bytes: 0
tx_prio5_packets: 0
rx_prio6_bytes: 14643472
rx_prio6_packets: 203589
tx_prio6_bytes: 0
tx_prio6_packets: 0
rx_prio7_bytes: 2575333474
rx_prio7_packets: 36783293
tx_prio7_bytes: 0
tx_prio7_packets: 0
tx_pause_storm_warning_events : 0
tx_pause_storm_error_events: 0
module_unplug: 0
module_bus_stuck: 0
module_high_temp: 0
module_bad_shorted: 0
But wondering if any offload can now do some things that we don't want
for xdp.
Currently all offloads are enabled:
ethtool -k enp175s0f0
Features for enp175s0f0:
rx-checksumming: on
tx-checksumming: on
tx-checksum-ipv4: on
tx-checksum-ip-generic: off [fixed]
tx-checksum-ipv6: on
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
tx-tcp-segmentation: on
tx-tcp-ecn-segmentation: off [fixed]
tx-tcp-mangleid-segmentation: off
tx-tcp6-segmentation: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
highdma: on [fixed]
rx-vlan-filter: on
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: on
tx-gre-csum-segmentation: on
tx-ipxip4-segmentation: off [fixed]
tx-ipxip6-segmentation: off [fixed]
tx-udp_tnl-segmentation: on
tx-udp_tnl-csum-segmentation: on
tx-gso-partial: on
tx-sctp-segmentation: off [fixed]
tx-esp-segmentation: off [fixed]
tx-udp-segmentation: on
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off
rx-all: off
tx-vlan-stag-hw-insert: on
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: on [fixed]
l2-fwd-offload: off [fixed]
hw-tc-offload: off
esp-hw-offload: off [fixed]
esp-tx-csum-hw-offload: off [fixed]
rx-udp_tunnel-port-offload: on
tls-hw-tx-offload: off [fixed]
tls-hw-rx-offload: off [fixed]
rx-gro-hw: off [fixed]
tls-hw-record: off [fixed]
Also, at the time when xdp is forwarding 1 in 10 frames, the same problem
occurs with local input/output traffic - the testing server also responds
to only 1 in 10 icmp requests
>
>
>> ping 172.16.0.2 -i 0.1
>> PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
>> 64 bytes from 172.16.0.2: icmp_seq=1 ttl=64 time=5.12 ms
>> 64 bytes from 172.16.0.2: icmp_seq=9 ttl=64 time=5.20 ms
>> 64 bytes from 172.16.0.2: icmp_seq=19 ttl=64 time=4.85 ms
>> 64 bytes from 172.16.0.2: icmp_seq=29 ttl=64 time=4.91 ms
>> 64 bytes from 172.16.0.2: icmp_seq=38 ttl=64 time=4.85 ms
>> 64 bytes from 172.16.0.2: icmp_seq=48 ttl=64 time=5.00 ms
>> ^C
>> --- 172.16.0.2 ping statistics ---
>> 55 packets transmitted, 6 received, 89% packet loss, time 5655ms
>> rtt min/avg/max/mdev = 4.850/4.992/5.203/0.145 ms
>>
>>
>> And again after some time back to normal
>>
>> ping 172.16.0.2 -i 0.1
>> PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
>> 64 bytes from 172.16.0.2: icmp_seq=1 ttl=64 time=5.02 ms
>> 64 bytes from 172.16.0.2: icmp_seq=2 ttl=64 time=5.06 ms
>> 64 bytes from 172.16.0.2: icmp_seq=3 ttl=64 time=5.19 ms
>> 64 bytes from 172.16.0.2: icmp_seq=4 ttl=64 time=5.07 ms
>> 64 bytes from 172.16.0.2: icmp_seq=5 ttl=64 time=5.08 ms
>> 64 bytes from 172.16.0.2: icmp_seq=6 ttl=64 time=5.14 ms
>> 64 bytes from 172.16.0.2: icmp_seq=7 ttl=64 time=5.08 ms
>> 64 bytes from 172.16.0.2: icmp_seq=8 ttl=64 time=5.17 ms
>> 64 bytes from 172.16.0.2: icmp_seq=9 ttl=64 time=5.04 ms
>> 64 bytes from 172.16.0.2: icmp_seq=10 ttl=64 time=5.10 ms
>> 64 bytes from 172.16.0.2: icmp_seq=11 ttl=64 time=5.11 ms
>> 64 bytes from 172.16.0.2: icmp_seq=12 ttl=64 time=5.13 ms
>> 64 bytes from 172.16.0.2: icmp_seq=13 ttl=64 time=5.12 ms
>> 64 bytes from 172.16.0.2: icmp_seq=14 ttl=64 time=5.15 ms
>> 64 bytes from 172.16.0.2: icmp_seq=15 ttl=64 time=5.13 ms
>> 64 bytes from 172.16.0.2: icmp_seq=16 ttl=64 time=5.04 ms
>> 64 bytes from 172.16.0.2: icmp_seq=17 ttl=64 time=5.12 ms
>> 64 bytes from 172.16.0.2: icmp_seq=18 ttl=64 time=5.07 ms
>> 64 bytes from 172.16.0.2: icmp_seq=19 ttl=64 time=5.06 ms
>> 64 bytes from 172.16.0.2: icmp_seq=20 ttl=64 time=5.12 ms
>> 64 bytes from 172.16.0.2: icmp_seq=21 ttl=64 time=5.21 ms
>> 64 bytes from 172.16.0.2: icmp_seq=22 ttl=64 time=4.98 ms
>> ^C
>> --- 172.16.0.2 ping statistics ---
>> 22 packets transmitted, 22 received, 0% packet loss, time 2105ms
>> rtt min/avg/max/mdev = 4.988/5.104/5.210/0.089 ms
>>
>>
>> I will try to catch this with debug enabled
>>
>>
>>
>>
>>
>> Also wondering - since xdp now bypasses the vlan counters and other stuff
>> like tcpdump
> yes, xdp runs before tcpdump-based sockets.
>
> And the counters (vlan just being the current example) are another
> problem to be solved. The vlan net_device never sees the packet, and you
> can not arbitrarily bump the counters just because the device lookups
> reference them.
Ok.
>> Is it possible to add counters for vlans from xdp?
>> This will help me in testing.
> I will take a look today at adding counters that you can dump using
> bpftool. It will be a temporary solution for this xdp program only.
Yes, anything that can give me counters to check traffic levels
>>
>> And also - for a non-lab scenario it should be possible to sniff on an
>> interface sometimes :)
> Yes, sampling is another problem.
>
>
>> So wondering if I need to attach another xdp program to the interface, or
>> if all this can be done by one
>>
>> I think this is the time where I will need to learn more about xdp :)
>>
>>
>
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-08 19:12 ` Paweł Staszewski
@ 2018-11-09 22:20 ` Paweł Staszewski
2018-11-10 19:34 ` Jesper Dangaard Brouer
0 siblings, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-09 22:20 UTC (permalink / raw)
To: Saeed Mahameed, netdev, Jesper Dangaard Brouer
On 08.11.2018 20:12, Paweł Staszewski wrote:
> CPU load is lower than for connectx4 - but it looks like bandwidth
> limit is the same :)
> But also after reaching 60Gbit/60Gbit
>
> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
> input: /proc/net/dev type: rate
> - iface Rx Tx Total
> ==============================================================================
>
> enp175s0: 45.09 Gb/s 15.09 Gb/s
> 60.18 Gb/s
> enp216s0: 15.14 Gb/s 45.19 Gb/s
> 60.33 Gb/s
> ------------------------------------------------------------------------------
>
> total: 60.45 Gb/s 60.48 Gb/s 120.93 Gb/s
Today I reached 65/65 Gbit/s.
But starting from 60 Gbit/s RX / 60 Gbit/s TX the nics start to drop packets
(with 50% CPU on all 28 cores) - so there is still cpu power to use :).
So I checked other stats.
softnet_stat shows an average of 1k squeezed per sec:
cpu total dropped squeezed collision rps flow_limit
0 18554 0 1 0 0 0
1 16728 0 1 0 0 0
2 18033 0 1 0 0 0
3 17757 0 1 0 0 0
4 18861 0 0 0 0 0
5 0 0 1 0 0 0
6 2 0 1 0 0 0
7 0 0 1 0 0 0
8 0 0 0 0 0 0
9 0 0 1 0 0 0
10 0 0 0 0 0 0
11 0 0 1 0 0 0
12 50 0 1 0 0 0
13 257 0 0 0 0 0
14 3629115363 0 3353259 0 0 0
15 255167835 0 3138271 0 0 0
16 4240101961 0 3036130 0 0 0
17 599810018 0 3072169 0 0 0
18 432796524 0 3034191 0 0 0
19 41803906 0 3037405 0 0 0
20 900382666 0 3112294 0 0 0
21 620926085 0 3086009 0 0 0
22 41861198 0 3023142 0 0 0
23 4090425574 0 2990412 0 0 0
24 4264870218 0 3010272 0 0 0
25 141401811 0 3027153 0 0 0
26 104155188 0 3051251 0 0 0
27 4261258691 0 3039765 0 0 0
28 4 0 1 0 0 0
29 4 0 0 0 0 0
30 0 0 1 0 0 0
31 0 0 0 0 0 0
32 3 0 1 0 0 0
33 1 0 1 0 0 0
34 0 0 1 0 0 0
35 0 0 0 0 0 0
36 0 0 1 0 0 0
37 0 0 1 0 0 0
38 0 0 1 0 0 0
39 0 0 1 0 0 0
40 0 0 0 0 0 0
41 0 0 1 0 0 0
42 299758202 0 3139693 0 0 0
43 4254727979 0 3103577 0 0 0
44 1959555543 0 2554885 0 0 0
45 1675702723 0 2513481 0 0 0
46 1908435503 0 2519698 0 0 0
47 1877799710 0 2537768 0 0 0
48 2384274076 0 2584673 0 0 0
49 2598104878 0 2593616 0 0 0
50 1897566829 0 2530857 0 0 0
51 1712741629 0 2489089 0 0 0
52 1704033648 0 2495892 0 0 0
53 1636781820 0 2499783 0 0 0
54 1861997734 0 2541060 0 0 0
55 2113521616 0 2555673 0 0 0
So I raised the netdev backlog and budget to really high values:
524288 for netdev_budget and the same for the backlog.
This raised softirqs from about 600k/sec to 800k/sec for NET_TX/NET_RX.
But after these changes I see fewer packet drops.
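For reference, these are the knobs in sysctl form (the 524288 values are the experiment settings from this message, not tuning recommendations; the stock defaults are far smaller, e.g. netdev_budget=300):

```
# experiment values from this thread, not tuning advice
net.core.netdev_budget      = 524288
net.core.netdev_max_backlog = 524288
```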
Below perf top from max traffic reached:
PerfTop: 72230 irqs/sec kernel:99.4% exact: 0.0% [4000Hz
cycles], (all, 56 CPUs)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
12.62% [kernel] [k] mlx5e_skb_from_cqe_mpwrq_linear
8.44% [kernel] [k] mlx5e_sq_xmit
6.69% [kernel] [k] build_skb
5.21% [kernel] [k] fib_table_lookup
3.54% [kernel] [k] memcpy_erms
3.20% [kernel] [k] mlx5e_poll_rx_cq
2.25% [kernel] [k] vlan_do_receive
2.20% [kernel] [k] mlx5e_post_rx_mpwqes
2.02% [kernel] [k] mlx5e_handle_rx_cqe_mpwrq
1.95% [kernel] [k] __dev_queue_xmit
1.83% [kernel] [k] dev_gro_receive
1.79% [kernel] [k] tcp_gro_receive
1.73% [kernel] [k] ip_finish_output2
1.63% [kernel] [k] mlx5e_poll_tx_cq
1.49% [kernel] [k] ipt_do_table
1.38% [kernel] [k] inet_gro_receive
1.31% [kernel] [k] __netif_receive_skb_core
1.30% [kernel] [k] _raw_spin_lock
1.28% [kernel] [k] mlx5_eq_int
1.24% [kernel] [k] irq_entries_start
1.19% [kernel] [k] __build_skb
1.15% [kernel] [k] swiotlb_map_page
1.02% [kernel] [k] vlan_dev_hard_start_xmit
0.94% [kernel] [k] pfifo_fast_dequeue
0.92% [kernel] [k] ip_route_input_rcu
0.86% [kernel] [k] kmem_cache_alloc
0.80% [kernel] [k] mlx5e_xmit
0.79% [kernel] [k] dev_hard_start_xmit
0.78% [kernel] [k] _raw_spin_lock_irqsave
0.74% [kernel] [k] ip_forward
0.72% [kernel] [k] tasklet_action_common.isra.21
0.68% [kernel] [k] pfifo_fast_enqueue
0.67% [kernel] [k] netif_skb_features
0.66% [kernel] [k] skb_segment
0.60% [kernel] [k] skb_gro_receive
0.56% [kernel] [k] validate_xmit_skb.isra.142
0.53% [kernel] [k] skb_release_data
0.51% [kernel] [k] mlx5e_page_release
0.51% [kernel] [k] ip_rcv_core.isra.20.constprop.25
0.51% [kernel] [k] __qdisc_run
0.50% [kernel] [k] tcp4_gro_receive
0.49% [kernel] [k] page_frag_free
0.46% [kernel] [k] kmem_cache_free_bulk
0.43% [kernel] [k] kmem_cache_free
0.42% [kernel] [k] try_to_wake_up
0.39% [kernel] [k] _raw_spin_lock_irq
0.39% [kernel] [k] find_busiest_group
0.37% [kernel] [k] __memcpy
Remember, these tests are now on two separate ConnectX-5 NICs connected to
two separate PCIe x16 gen 3.0 slots.
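As a back-of-envelope check on the PCIe headroom (assuming Gen3 x16 with 128b/130b encoding; real throughput is lower still once TLP/DLLP overhead is counted):

```latex
\text{BW}_{\text{per direction}}
  = 16 \text{ lanes} \times 8\,\text{GT/s} \times \tfrac{128}{130}
  \approx 126\,\text{Gbit/s}
```

So on paper one x16 slot carries ~126 Gbit/s each way, well above the observed ~60-65 Gbit/s per direction.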
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-09 16:21 ` David Ahern
2018-11-09 19:59 ` Paweł Staszewski
@ 2018-11-10 0:06 ` David Ahern
2018-11-10 13:18 ` Paweł Staszewski
2018-11-19 21:59 ` David Ahern
1 sibling, 2 replies; 77+ messages in thread
From: David Ahern @ 2018-11-10 0:06 UTC (permalink / raw)
To: Paweł Staszewski, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
On 11/9/18 9:21 AM, David Ahern wrote:
>> Is there possible to add only counters from xdp for vlans ?
>> This will help me in testing.
> I will take a look today at adding counters that you can dump using
> bpftool. It will be a temporary solution for this xdp program only.
>
Same tree, kernel-tables-wip-02 branch. Compile kernel and install.
Compile samples as before.
If you give the userspace program a -t arg, it loops, showing stats.
Ctrl-C to break. The xdp programs are not detached on exit.
Example:
./xdp_fwd -t 5 eth1 eth2 eth3 eth4
15:59:32: rx tx dropped skipped l3_dev fib_dev
index 3: 901158 901158 0 18 0 0
index 4: 901159 901158 0 20 0 901139
index 10: 0 0 0 0 19 19
index 11: 0 0 0 0 901139 901139
index 15: 0 0 0 0 19 19
index 16: 0 0 0 0 901139 0
Rx and Tx counters are for the physical port.
VLANs show up as l3_dev (ingress) and fib_dev (egress).
dropped counts any time the xdp program returns XDP_DROP (e.g., an invalid
packet); skipped counts any time the program returns XDP_PASS (e.g., not
ipv4 or ipv6, local traffic, or traffic that needs full stack assist).
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-10 0:06 ` David Ahern
@ 2018-11-10 13:18 ` Paweł Staszewski
2018-11-10 14:56 ` David Ahern
2018-11-19 21:59 ` David Ahern
1 sibling, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-10 13:18 UTC (permalink / raw)
To: David Ahern, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
On 10.11.2018 01:06, David Ahern wrote:
> On 11/9/18 9:21 AM, David Ahern wrote:
>>> Is there possible to add only counters from xdp for vlans ?
>>> This will help me in testing.
>> I will take a look today at adding counters that you can dump using
>> bpftool. It will be a temporary solution for this xdp program only.
>>
> Same tree, kernel-tables-wip-02 branch. Compile kernel and install.
> Compile samples as before.
>
> If you give the userspace program a -t arg, it loop showing stats.
> Ctrl-C to break. The xdp programs are not detached on exit.
>
> Example:
>
> ./xdp_fwd -t 5 eth1 eth2 eth3 eth4
>
> 15:59:32: rx tx dropped skipped l3_dev fib_dev
> index 3: 901158 901158 0 18 0 0
> index 4: 901159 901158 0 20 0 901139
> index 10: 0 0 0 0 19 19
> index 11: 0 0 0 0 901139 901139
> index 15: 0 0 0 0 19 19
> index 16: 0 0 0 0 901139 0
>
> Rx and Tx counters are for the physical port.
>
> VLANs show up as l3_dev (ingress) and fib_dev (egress).
>
> dropped is anytime the xdp program returns XDP_DROP (e.g., invalid packet)
>
> skipped is anytime the program returns XDP_PASS (e.g., not ipv4 or ipv6,
> local traffic, or needs full stack assist).
>
I recompiled the new version, but:
./xdp_fwd enp175s0f0 enp175s0f1
libbpf: failed to create map (name: 'stats_map'): Operation not permitted
libbpf: failed to load object './xdp_fwd_kern.o'
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-10 13:18 ` Paweł Staszewski
@ 2018-11-10 14:56 ` David Ahern
0 siblings, 0 replies; 77+ messages in thread
From: David Ahern @ 2018-11-10 14:56 UTC (permalink / raw)
To: Paweł Staszewski, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
On 11/10/18 6:18 AM, Paweł Staszewski wrote:
>
> ./xdp_fwd enp175s0f0 enp175s0f1
> libbpf: failed to create map (name: 'stats_map'): Operation not permitted
> libbpf: failed to load object './xdp_fwd_kern.o'
Forgot I had increased locked memory:
ulimit -l unlimited
./xdp_fwd enp175s0f0 enp175s0f1
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-09 22:20 ` Paweł Staszewski
@ 2018-11-10 19:34 ` Jesper Dangaard Brouer
2018-11-10 19:49 ` Paweł Staszewski
2018-11-10 20:02 ` Paweł Staszewski
0 siblings, 2 replies; 77+ messages in thread
From: Jesper Dangaard Brouer @ 2018-11-10 19:34 UTC (permalink / raw)
To: Paweł Staszewski; +Cc: Saeed Mahameed, netdev, brouer
On Fri, 9 Nov 2018 23:20:38 +0100 Paweł Staszewski <pstaszewski@itcare.pl> wrote:
> On 08.11.2018 20:12, Paweł Staszewski wrote:
> > CPU load is lower than for connectx4 - but it looks like bandwidth
> > limit is the same :)
> > But also after reaching 60Gbit/60Gbit
> >
> > bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
> > input: /proc/net/dev type: rate
> > - iface Rx Tx Total
> > ==========================================================================
> >
> > enp175s0: 45.09 Gb/s 15.09 Gb/s 60.18 Gb/s
> > enp216s0: 15.14 Gb/s 45.19 Gb/s 60.33 Gb/s
> > --------------------------------------------------------------------------
> >
> > total: 60.45 Gb/s 60.48 Gb/s 120.93 Gb/s
>
> Today reached 65/65Gbit/s
>
> But starting from 60Gbit/s RX / 60Gbit TX nics start to drop packets
> (with 50%CPU on all 28cores) - so still there is cpu power to use :).
This is weird!
How do you see / measure these drops?
> So checked other stats.
> softnet_stats shows average 1k squeezed per sec:
Is the output below the raw counters, not per sec?
It would be valuable to see the per-sec stats instead...
I use this tool:
https://github.com/netoptimizer/network-testing/blob/master/bin/softnet_stat.pl
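A minimal sketch of reading the raw counters directly (my own illustration, not Jesper's softnet_stat.pl, which additionally computes per-second deltas; it assumes the third hex column of /proc/net/softnet_stat is time_squeeze, as it is on 4.19):

```shell
#!/bin/sh
# Print a raw per-CPU snapshot of the "squeezed" (time_squeeze) counter.
# /proc/net/softnet_stat is one hex row per CPU; column 3 is time_squeeze.
squeeze_snapshot() {
    i=0
    while read -r _total _dropped squeezed _rest; do
        # printf interprets the 0x prefix as hexadecimal
        printf 'CPU:%02d squeezed=%d\n' "$i" "0x$squeezed"
        i=$((i + 1))
    done
}

if [ -r /proc/net/softnet_stat ]; then
    squeeze_snapshot < /proc/net/softnet_stat
fi
```

To get per-second numbers, run two snapshots a second apart and subtract, which is essentially what the perl tool does.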
> cpu total dropped squeezed collision rps flow_limit
> 0 18554 0 1 0 0 0
> 1 16728 0 1 0 0 0
> 2 18033 0 1 0 0 0
> 3 17757 0 1 0 0 0
> 4 18861 0 0 0 0 0
> 5 0 0 1 0 0 0
> 6 2 0 1 0 0 0
> 7 0 0 1 0 0 0
> 8 0 0 0 0 0 0
> 9 0 0 1 0 0 0
> 10 0 0 0 0 0 0
> 11 0 0 1 0 0 0
> 12 50 0 1 0 0 0
> 13 257 0 0 0 0 0
> 14 3629115363 0 3353259 0 0 0
> 15 255167835 0 3138271 0 0 0
> 16 4240101961 0 3036130 0 0 0
> 17 599810018 0 3072169 0 0 0
> 18 432796524 0 3034191 0 0 0
> 19 41803906 0 3037405 0 0 0
> 20 900382666 0 3112294 0 0 0
> 21 620926085 0 3086009 0 0 0
> 22 41861198 0 3023142 0 0 0
> 23 4090425574 0 2990412 0 0 0
> 24 4264870218 0 3010272 0 0 0
> 25 141401811 0 3027153 0 0 0
> 26 104155188 0 3051251 0 0 0
> 27 4261258691 0 3039765 0 0 0
> 28 4 0 1 0 0 0
> 29 4 0 0 0 0 0
> 30 0 0 1 0 0 0
> 31 0 0 0 0 0 0
> 32 3 0 1 0 0 0
> 33 1 0 1 0 0 0
> 34 0 0 1 0 0 0
> 35 0 0 0 0 0 0
> 36 0 0 1 0 0 0
> 37 0 0 1 0 0 0
> 38 0 0 1 0 0 0
> 39 0 0 1 0 0 0
> 40 0 0 0 0 0 0
> 41 0 0 1 0 0 0
> 42 299758202 0 3139693 0 0 0
> 43 4254727979 0 3103577 0 0 0
> 44 1959555543 0 2554885 0 0 0
> 45 1675702723 0 2513481 0 0 0
> 46 1908435503 0 2519698 0 0 0
> 47 1877799710 0 2537768 0 0 0
> 48 2384274076 0 2584673 0 0 0
> 49 2598104878 0 2593616 0 0 0
> 50 1897566829 0 2530857 0 0 0
> 51 1712741629 0 2489089 0 0 0
> 52 1704033648 0 2495892 0 0 0
> 53 1636781820 0 2499783 0 0 0
> 54 1861997734 0 2541060 0 0 0
> 55 2113521616 0 2555673 0 0 0
>
>
> So i rised netdev backlog and budged to rly high values
> 524288 for netdev_budget and same for backlog
Does it affect the squeezed counters?
Notice, this (crazy) huge netdev_budget limit will also be limited
by /proc/sys/net/core/netdev_budget_usecs.
> This rised sortirqs from about 600k/sec to 800k/sec for NET_TX/NET_RX
Hmmm, this could indicate that not enough NAPI bulking is occurring.
I have a BPF tool that can give you some insight into NAPI bulking and
softirq idle/kthread starting, called 'napi_monitor'. Could you try to
run it, so we can try to understand this? You find the tool here:
https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/samples/bpf/
https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/samples/bpf/napi_monitor_user.c
https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/samples/bpf/napi_monitor_kern.c
> But after this changes i have less packets drops.
>
>
> Below perf top from max traffic reached:
> PerfTop: 72230 irqs/sec kernel:99.4% exact: 0.0% [4000Hz
> cycles], (all, 56 CPUs)
> ------------------------------------------------------------------------------------------
>
> 12.62% [kernel] [k] mlx5e_skb_from_cqe_mpwrq_linear
> 8.44% [kernel] [k] mlx5e_sq_xmit
> 6.69% [kernel] [k] build_skb
> 5.21% [kernel] [k] fib_table_lookup
> 3.54% [kernel] [k] memcpy_erms
> 3.20% [kernel] [k] mlx5e_poll_rx_cq
> 2.25% [kernel] [k] vlan_do_receive
> 2.20% [kernel] [k] mlx5e_post_rx_mpwqes
> 2.02% [kernel] [k] mlx5e_handle_rx_cqe_mpwrq
> 1.95% [kernel] [k] __dev_queue_xmit
> 1.83% [kernel] [k] dev_gro_receive
> 1.79% [kernel] [k] tcp_gro_receive
> 1.73% [kernel] [k] ip_finish_output2
> 1.63% [kernel] [k] mlx5e_poll_tx_cq
> 1.49% [kernel] [k] ipt_do_table
> 1.38% [kernel] [k] inet_gro_receive
> 1.31% [kernel] [k] __netif_receive_skb_core
> 1.30% [kernel] [k] _raw_spin_lock
> 1.28% [kernel] [k] mlx5_eq_int
> 1.24% [kernel] [k] irq_entries_start
> 1.19% [kernel] [k] __build_skb
> 1.15% [kernel] [k] swiotlb_map_page
> 1.02% [kernel] [k] vlan_dev_hard_start_xmit
> 0.94% [kernel] [k] pfifo_fast_dequeue
> 0.92% [kernel] [k] ip_route_input_rcu
> 0.86% [kernel] [k] kmem_cache_alloc
> 0.80% [kernel] [k] mlx5e_xmit
> 0.79% [kernel] [k] dev_hard_start_xmit
> 0.78% [kernel] [k] _raw_spin_lock_irqsave
> 0.74% [kernel] [k] ip_forward
> 0.72% [kernel] [k] tasklet_action_common.isra.21
> 0.68% [kernel] [k] pfifo_fast_enqueue
> 0.67% [kernel] [k] netif_skb_features
> 0.66% [kernel] [k] skb_segment
> 0.60% [kernel] [k] skb_gro_receive
> 0.56% [kernel] [k] validate_xmit_skb.isra.142
> 0.53% [kernel] [k] skb_release_data
> 0.51% [kernel] [k] mlx5e_page_release
> 0.51% [kernel] [k] ip_rcv_core.isra.20.constprop.25
> 0.51% [kernel] [k] __qdisc_run
> 0.50% [kernel] [k] tcp4_gro_receive
> 0.49% [kernel] [k] page_frag_free
> 0.46% [kernel] [k] kmem_cache_free_bulk
> 0.43% [kernel] [k] kmem_cache_free
> 0.42% [kernel] [k] try_to_wake_up
> 0.39% [kernel] [k] _raw_spin_lock_irq
> 0.39% [kernel] [k] find_busiest_group
> 0.37% [kernel] [k] __memcpy
>
>
>
> Remember those tests are now on two separate connectx5 connected to
> two separate pcie x16 gen 3.0
That is strange... I still suspect some HW NIC issue, can you provide
ethtool stats info via tool:
https://github.com/netoptimizer/network-testing/blob/master/bin/ethtool_stats.pl
$ ethtool_stats.pl --dev enp175s0 --dev enp216s0
The tool removes zero-stat counters and reports per-sec stats. That makes
it easier to spot what is relevant for the given workload.
Can you give the output from:
$ ethtool --show-priv-flag DEVICE
I want you to experiment with:
ethtool --set-priv-flags DEVICE rx_striding_rq off
I think you already have played with 'rx_cqe_compress', right.
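To script a before/after comparison of these flags, a tiny parser for ethtool's "flag : on/off" output lines could look like this (the `priv_flag` helper and its parsing are my own illustrative assumptions, not part of ethtool):

```shell
#!/bin/sh
# Pull one flag's state out of `ethtool --show-priv-flags DEV` output,
# where lines look like "rx_striding_rq : on".
priv_flag() {
    sed -n "s/^[[:space:]]*$1[[:space:]]*:[[:space:]]*//p"
}

# Real usage would be, e.g.:
#   ethtool --show-priv-flags enp175s0 | priv_flag rx_striding_rq
```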
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-10 19:34 ` Jesper Dangaard Brouer
@ 2018-11-10 19:49 ` Paweł Staszewski
2018-11-10 19:56 ` Paweł Staszewski
2018-11-10 20:02 ` Paweł Staszewski
1 sibling, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-10 19:49 UTC (permalink / raw)
To: Jesper Dangaard Brouer; +Cc: Saeed Mahameed, netdev
On 10.11.2018 20:34, Jesper Dangaard Brouer wrote:
> On Fri, 9 Nov 2018 23:20:38 +0100 Paweł Staszewski <pstaszewski@itcare.pl> wrote:
>
>> On 08.11.2018 20:12, Paweł Staszewski wrote:
>>> CPU load is lower than for connectx4 - but it looks like bandwidth
>>> limit is the same :)
>>> But also after reaching 60Gbit/60Gbit
>>>
>>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>>> input: /proc/net/dev type: rate
>>> - iface Rx Tx Total
>>> ==========================================================================
>>>
>>> enp175s0: 45.09 Gb/s 15.09 Gb/s 60.18 Gb/s
>>> enp216s0: 15.14 Gb/s 45.19 Gb/s 60.33 Gb/s
>>> --------------------------------------------------------------------------
>>>
>>> total: 60.45 Gb/s 60.48 Gb/s 120.93 Gb/s
>> Today reached 65/65Gbit/s
>>
>> But starting from 60Gbit/s RX / 60Gbit TX nics start to drop packets
>> (with 50%CPU on all 28cores) - so still there is cpu power to use :).
> This is weird!
>
> How do you see / measure these drops?
A simple icmp test like ping -i 0.1.
I am pinging the icmp management ip address on a vlan that is attached
to one NIC (the side that is more stressed with RX).
And another icmp test is forwarded through this router - to a host behind it.
Both measurements show the same loss ratio of 0.1 to 0.5% after reaching
~45Gbit/s on the RX side - depending on how hard the RX side is pushed,
drops vary between 0.1 and 0.5 - even 0.6% :)
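For the record, the loss ratio can be pulled straight out of ping's summary line; a small sketch (assuming iputils ping's "% packet loss" wording; the `ping_loss` helper is my own illustration):

```shell
#!/bin/sh
# Extract the packet-loss percentage from a ping summary line, e.g.
# "22 packets transmitted, 22 received, 0% packet loss, time 2105ms"
ping_loss() {
    sed -n 's/.*, \([0-9.]*\)% packet loss.*/\1/p'
}

# Real usage would be, e.g.:
#   ping -i 0.1 -c 100 172.16.0.2 | ping_loss
```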
>
>
>> So checked other stats.
>> softnet_stats shows average 1k squeezed per sec:
> Is below output the raw counters? not per sec?
>
> It would be valuable to see the per sec stats instead...
> I use this tool:
> https://github.com/netoptimizer/network-testing/blob/master/bin/softnet_stat.pl
>
>> cpu total dropped squeezed collision rps flow_limit
>> 0 18554 0 1 0 0 0
>> 1 16728 0 1 0 0 0
>> 2 18033 0 1 0 0 0
>> 3 17757 0 1 0 0 0
>> 4 18861 0 0 0 0 0
>> 5 0 0 1 0 0 0
>> 6 2 0 1 0 0 0
>> 7 0 0 1 0 0 0
>> 8 0 0 0 0 0 0
>> 9 0 0 1 0 0 0
>> 10 0 0 0 0 0 0
>> 11 0 0 1 0 0 0
>> 12 50 0 1 0 0 0
>> 13 257 0 0 0 0 0
>> 14 3629115363 0 3353259 0 0 0
>> 15 255167835 0 3138271 0 0 0
>> 16 4240101961 0 3036130 0 0 0
>> 17 599810018 0 3072169 0 0 0
>> 18 432796524 0 3034191 0 0 0
>> 19 41803906 0 3037405 0 0 0
>> 20 900382666 0 3112294 0 0 0
>> 21 620926085 0 3086009 0 0 0
>> 22 41861198 0 3023142 0 0 0
>> 23 4090425574 0 2990412 0 0 0
>> 24 4264870218 0 3010272 0 0 0
>> 25 141401811 0 3027153 0 0 0
>> 26 104155188 0 3051251 0 0 0
>> 27 4261258691 0 3039765 0 0 0
>> 28 4 0 1 0 0 0
>> 29 4 0 0 0 0 0
>> 30 0 0 1 0 0 0
>> 31 0 0 0 0 0 0
>> 32 3 0 1 0 0 0
>> 33 1 0 1 0 0 0
>> 34 0 0 1 0 0 0
>> 35 0 0 0 0 0 0
>> 36 0 0 1 0 0 0
>> 37 0 0 1 0 0 0
>> 38 0 0 1 0 0 0
>> 39 0 0 1 0 0 0
>> 40 0 0 0 0 0 0
>> 41 0 0 1 0 0 0
>> 42 299758202 0 3139693 0 0 0
>> 43 4254727979 0 3103577 0 0 0
>> 44 1959555543 0 2554885 0 0 0
>> 45 1675702723 0 2513481 0 0 0
>> 46 1908435503 0 2519698 0 0 0
>> 47 1877799710 0 2537768 0 0 0
>> 48 2384274076 0 2584673 0 0 0
>> 49 2598104878 0 2593616 0 0 0
>> 50 1897566829 0 2530857 0 0 0
>> 51 1712741629 0 2489089 0 0 0
>> 52 1704033648 0 2495892 0 0 0
>> 53 1636781820 0 2499783 0 0 0
>> 54 1861997734 0 2541060 0 0 0
>> 55 2113521616 0 2555673 0 0 0
>>
>>
>> So i rised netdev backlog and budged to rly high values
>> 524288 for netdev_budget and same for backlog
> Does it affect the squeezed counters?
A little - but not much.
After changing the budget from 65536 to 524k, the number of squeezed
counters for all cpus changed from 1.5k per second to 0.9-1k per second -
but increasing it further, above 524k, changes nothing - same 0.9 to 1k/s
squeezed.
>
> Notice, this (crazy) huge netdev_budget limit will also be limited
> by /proc/sys/net/core/netdev_budget_usecs.
Yes, I changed that as well, to 1000 / 2000 / 3000 / 4000 - not much
difference in squeezed, I can't even see the difference.
>
>> This rised sortirqs from about 600k/sec to 800k/sec for NET_TX/NET_RX
> Hmmm, this could indicated not enough NAPI bulking is occurring.
>
> I have a BPF tool, that can give you some insight into NAPI bulking and
> softirq idle/kthread starting. Called 'napi_monitor', could you try to
> run this, so can try to understand this? You find the tool here:
>
> https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/samples/bpf/
> https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/samples/bpf/napi_monitor_user.c
> https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/samples/bpf/napi_monitor_kern.c
yes will try it
>
>> But after this changes i have less packets drops.
>>
>>
>> Below perf top from max traffic reached:
>> PerfTop: 72230 irqs/sec kernel:99.4% exact: 0.0% [4000Hz
>> cycles], (all, 56 CPUs)
>> ------------------------------------------------------------------------------------------
>>
>> 12.62% [kernel] [k] mlx5e_skb_from_cqe_mpwrq_linear
>> 8.44% [kernel] [k] mlx5e_sq_xmit
>> 6.69% [kernel] [k] build_skb
>> 5.21% [kernel] [k] fib_table_lookup
>> 3.54% [kernel] [k] memcpy_erms
>> 3.20% [kernel] [k] mlx5e_poll_rx_cq
>> 2.25% [kernel] [k] vlan_do_receive
>> 2.20% [kernel] [k] mlx5e_post_rx_mpwqes
>> 2.02% [kernel] [k] mlx5e_handle_rx_cqe_mpwrq
>> 1.95% [kernel] [k] __dev_queue_xmit
>> 1.83% [kernel] [k] dev_gro_receive
>> 1.79% [kernel] [k] tcp_gro_receive
>> 1.73% [kernel] [k] ip_finish_output2
>> 1.63% [kernel] [k] mlx5e_poll_tx_cq
>> 1.49% [kernel] [k] ipt_do_table
>> 1.38% [kernel] [k] inet_gro_receive
>> 1.31% [kernel] [k] __netif_receive_skb_core
>> 1.30% [kernel] [k] _raw_spin_lock
>> 1.28% [kernel] [k] mlx5_eq_int
>> 1.24% [kernel] [k] irq_entries_start
>> 1.19% [kernel] [k] __build_skb
>> 1.15% [kernel] [k] swiotlb_map_page
>> 1.02% [kernel] [k] vlan_dev_hard_start_xmit
>> 0.94% [kernel] [k] pfifo_fast_dequeue
>> 0.92% [kernel] [k] ip_route_input_rcu
>> 0.86% [kernel] [k] kmem_cache_alloc
>> 0.80% [kernel] [k] mlx5e_xmit
>> 0.79% [kernel] [k] dev_hard_start_xmit
>> 0.78% [kernel] [k] _raw_spin_lock_irqsave
>> 0.74% [kernel] [k] ip_forward
>> 0.72% [kernel] [k] tasklet_action_common.isra.21
>> 0.68% [kernel] [k] pfifo_fast_enqueue
>> 0.67% [kernel] [k] netif_skb_features
>> 0.66% [kernel] [k] skb_segment
>> 0.60% [kernel] [k] skb_gro_receive
>> 0.56% [kernel] [k] validate_xmit_skb.isra.142
>> 0.53% [kernel] [k] skb_release_data
>> 0.51% [kernel] [k] mlx5e_page_release
>> 0.51% [kernel] [k] ip_rcv_core.isra.20.constprop.25
>> 0.51% [kernel] [k] __qdisc_run
>> 0.50% [kernel] [k] tcp4_gro_receive
>> 0.49% [kernel] [k] page_frag_free
>> 0.46% [kernel] [k] kmem_cache_free_bulk
>> 0.43% [kernel] [k] kmem_cache_free
>> 0.42% [kernel] [k] try_to_wake_up
>> 0.39% [kernel] [k] _raw_spin_lock_irq
>> 0.39% [kernel] [k] find_busiest_group
>> 0.37% [kernel] [k] __memcpy
>>
>>
>>
>> Remember those tests are now on two separate connectx5 connected to
>> two separate pcie x16 gen 3.0
>
> That is strange... I still suspect some HW NIC issue, can you provide
> ethtool stats info via tool:
>
> https://github.com/netoptimizer/network-testing/blob/master/bin/ethtool_stats.pl
>
> $ ethtool_stats.pl --dev enp175s0 --dev enp216s0
>
> The tool remove zero-stats counters and report per sec stats. It makes
> it easier to spot that is relevant for the given workload.
yes, mlnx just has too many counters that are always 0 in my case :)
I will try this also.
>
> Can you give output put from:
> $ ethtool --show-priv-flag DEVICE
>
> I want you to experiment with:
ethtool --show-priv-flags enp175s0
Private flags for enp175s0:
rx_cqe_moder : on
tx_cqe_moder : off
rx_cqe_compress : off
rx_striding_rq : on
rx_no_csum_complete: off
>
> ethtool --set-priv-flags DEVICE rx_striding_rq off
ok, I will first check on a test server whether this resets my interface
and does not produce a kernel panic :)
>
> I think you already have played with 'rx_cqe_compress', right.
yes - and compression increases the number of irq's but does not do much
for bandwidth - same limit of 60-64Gbit/s total RX+TX on one 100G port.
And what is weird - that limit is symmetric overall - because if, for
example, the 100G port is receiving 42G of traffic and transmitting 20G,
and I flood the rx side with pktgen or other traffic (for example icmp)
at 1/2/3/4/5G - then the receiving side increases by 1/2/3/4/5Gbit of
traffic, but the transmitting side goes down by the same levels.
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-10 19:49 ` Paweł Staszewski
@ 2018-11-10 19:56 ` Paweł Staszewski
2018-11-10 22:06 ` Jesper Dangaard Brouer
0 siblings, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-10 19:56 UTC (permalink / raw)
To: Jesper Dangaard Brouer; +Cc: Saeed Mahameed, netdev
On 10.11.2018 20:49, Paweł Staszewski wrote:
>
>
> On 10.11.2018 20:34, Jesper Dangaard Brouer wrote:
>> On Fri, 9 Nov 2018 23:20:38 +0100 Paweł Staszewski
>> <pstaszewski@itcare.pl> wrote:
>>
>>> On 08.11.2018 20:12, Paweł Staszewski wrote:
>>>> CPU load is lower than for connectx4 - but it looks like bandwidth
>>>> limit is the same :)
>>>> But also after reaching 60Gbit/60Gbit
>>>>
>>>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>>>> input: /proc/net/dev type: rate
>>>> - iface Rx Tx Total
>>>> ==========================================================================
>>>>
>>>>
>>>> enp175s0: 45.09 Gb/s 15.09 Gb/s
>>>> 60.18 Gb/s
>>>> enp216s0: 15.14 Gb/s 45.19 Gb/s
>>>> 60.33 Gb/s
>>>> --------------------------------------------------------------------------
>>>>
>>>>
>>>> total: 60.45 Gb/s 60.48 Gb/s 120.93
>>>> Gb/s
>>> Today reached 65/65Gbit/s
>>>
>>> But starting from 60Gbit/s RX / 60Gbit TX nics start to drop packets
>>> (with 50%CPU on all 28cores) - so still there is cpu power to use :).
>> This is weird!
>>
>> How do you see / measure these drops?
> Simple icmp test like ping -i 0.1
> And im testing by icmp management ip address on vlan that is attacked
> to one NIC (the side that is more stressed with RX)
> And another icmp test is forward thru this router - host behind it
>
> Both measurements shows same loss ratio from 0.1 to 0.5% after
> reaching ~45Gbit/s RX side - depends how much RX side is pushed drops
> vary between 0.1 to 0.5 - even 0.6%:)
>
>
>>
>>
>>> So checked other stats.
>>> softnet_stats shows average 1k squeezed per sec:
>> Is below output the raw counters? not per sec?
>>
>> It would be valuable to see the per sec stats instead...
>> I use this tool:
>> https://github.com/netoptimizer/network-testing/blob/master/bin/softnet_stat.pl
CPU      total/sec  dropped/sec  squeezed/sec  collision/sec  rx_rps/sec  flow_limit/sec
CPU:00           0            0             0              0           0               0
CPU:01           0            0             0              0           0               0
CPU:02           0            0             0              0           0               0
CPU:03           0            0             0              0           0               0
CPU:04           0            0             0              0           0               0
CPU:05           0            0             0              0           0               0
CPU:06           0            0             0              0           0               0
CPU:07           0            0             0              0           0               0
CPU:08           0            0             0              0           0               0
CPU:09           0            0             0              0           0               0
CPU:10           0            0             0              0           0               0
CPU:11           0            0             0              0           0               0
CPU:12           0            0             0              0           0               0
CPU:13           0            0             0              0           0               0
CPU:14      485538            0            43              0           0               0
CPU:15      474794            0            51              0           0               0
CPU:16      449322            0            41              0           0               0
CPU:17      476420            0            46              0           0               0
CPU:18      440436            0            38              0           0               0
CPU:19      501499            0            49              0           0               0
CPU:20      459468            0            49              0           0               0
CPU:21      438928            0            47              0           0               0
CPU:22      468983            0            40              0           0               0
CPU:23      446253            0            47              0           0               0
CPU:24      451909            0            46              0           0               0
CPU:25      479373            0            55              0           0               0
CPU:26      467848            0            49              0           0               0
CPU:27      453153            0            51              0           0               0
CPU:28           0            0             0              0           0               0
CPU:29           0            0             0              0           0               0
CPU:30           0            0             0              0           0               0
CPU:31           0            0             0              0           0               0
CPU:32           0            0             0              0           0               0
CPU:33           0            0             0              0           0               0
CPU:34           0            0             0              0           0               0
CPU:35           0            0             0              0           0               0
CPU:36           0            0             0              0           0               0
CPU:37           0            0             0              0           0               0
CPU:38           0            0             0              0           0               0
CPU:39           0            0             0              0           0               0
CPU:40           0            0             0              0           0               0
CPU:41           0            0             0              0           0               0
CPU:42      466853            0            43              0           0               0
CPU:43      453059            0            54              0           0               0
CPU:44      363219            0            34              0           0               0
CPU:45      353632            0            38              0           0               0
CPU:46      371618            0            40              0           0               0
CPU:47      350518            0            46              0           0               0
CPU:48      397544            0            40              0           0               0
CPU:49      364873            0            38              0           0               0
CPU:50      383630            0            38              0           0               0
CPU:51      358771            0            39              0           0               0
CPU:52      372547            0            38              0           0               0
CPU:53      372882            0            36              0           0               0
CPU:54      366244            0            43              0           0               0
CPU:55      365886            0            39              0           0               0
Summed    11835201            0          1217              0           0               0
>>
>>> cpu total dropped squeezed collision rps flow_limit
>>> 0 18554 0 1 0 0 0
>>> 1 16728 0 1 0 0 0
>>> 2 18033 0 1 0 0 0
>>> 3 17757 0 1 0 0 0
>>> 4 18861 0 0 0 0 0
>>> 5 0 0 1 0 0 0
>>> 6 2 0 1 0 0 0
>>> 7 0 0 1 0 0 0
>>> 8 0 0 0 0 0 0
>>> 9 0 0 1 0 0 0
>>> 10 0 0 0 0 0 0
>>> 11 0 0 1 0 0 0
>>> 12 50 0 1 0 0 0
>>> 13 257 0 0 0 0 0
>>> 14 3629115363 0 3353259 0 0 0
>>> 15 255167835 0 3138271 0 0 0
>>> 16 4240101961 0 3036130 0 0 0
>>> 17 599810018 0 3072169 0 0 0
>>> 18 432796524 0 3034191 0 0 0
>>> 19 41803906 0 3037405 0 0 0
>>> 20 900382666 0 3112294 0 0 0
>>> 21 620926085 0 3086009 0 0 0
>>> 22 41861198 0 3023142 0 0 0
>>> 23 4090425574 0 2990412 0 0 0
>>> 24 4264870218 0 3010272 0 0 0
>>> 25 141401811 0 3027153 0 0 0
>>> 26 104155188 0 3051251 0 0 0
>>> 27 4261258691 0 3039765 0 0 0
>>> 28 4 0 1 0 0 0
>>> 29 4 0 0 0 0 0
>>> 30 0 0 1 0 0 0
>>> 31 0 0 0 0 0 0
>>> 32 3 0 1 0 0 0
>>> 33 1 0 1 0 0 0
>>> 34 0 0 1 0 0 0
>>> 35 0 0 0 0 0 0
>>> 36 0 0 1 0 0 0
>>> 37 0 0 1 0 0 0
>>> 38 0 0 1 0 0 0
>>> 39 0 0 1 0 0 0
>>> 40 0 0 0 0 0 0
>>> 41 0 0 1 0 0 0
>>> 42 299758202 0 3139693 0 0 0
>>> 43 4254727979 0 3103577 0 0 0
>>> 44 1959555543 0 2554885 0 0 0
>>> 45 1675702723 0 2513481 0 0 0
>>> 46 1908435503 0 2519698 0 0 0
>>> 47 1877799710 0 2537768 0 0 0
>>> 48 2384274076 0 2584673 0 0 0
>>> 49 2598104878 0 2593616 0 0 0
>>> 50 1897566829 0 2530857 0 0 0
>>> 51 1712741629 0 2489089 0 0 0
>>> 52 1704033648 0 2495892 0 0 0
>>> 53 1636781820 0 2499783 0 0 0
>>> 54 1861997734 0 2541060 0 0 0
>>> 55 2113521616 0 2555673 0 0 0
>>>
>>>
>>> So i rised netdev backlog and budged to rly high values
>>> 524288 for netdev_budget and same for backlog
>> Does it affect the squeezed counters?
> a little - but not much
> After change budget from 65536 to to 524k - number of squeezed
> counters for all cpus changed from 1.5k per second to 0.9-1k per
> second - but increasing it more like above 524k change nothing - same
> 0.9 to 1k/s squeezed
>>
>> Notice, this (crazy) huge netdev_budget limit will also be limited
>> by /proc/sys/net/core/netdev_budget_usecs.
> Yes changed that also to 1000 / 2000 / 3000 / 4000 not much
> difference on squeezed - even cant see the difference
>
>>
>>> This rised sortirqs from about 600k/sec to 800k/sec for NET_TX/NET_RX
>> Hmmm, this could indicated not enough NAPI bulking is occurring.
>>
>> I have a BPF tool, that can give you some insight into NAPI bulking and
>> softirq idle/kthread starting. Called 'napi_monitor', could you try to
>> run this, so can try to understand this? You find the tool here:
>>
>> https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/samples/bpf/
>> https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/samples/bpf/napi_monitor_user.c
>> https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/samples/bpf/napi_monitor_kern.c
> yes will try it
>
>>> But after this changes i have less packets drops.
>>>
>>>
>>> Below perf top from max traffic reached:
>>> PerfTop: 72230 irqs/sec kernel:99.4% exact: 0.0% [4000Hz
>>> cycles], (all, 56 CPUs)
>>> ------------------------------------------------------------------------------------------
>>>
>>>
>>> 12.62% [kernel] [k] mlx5e_skb_from_cqe_mpwrq_linear
>>> 8.44% [kernel] [k] mlx5e_sq_xmit
>>> 6.69% [kernel] [k] build_skb
>>> 5.21% [kernel] [k] fib_table_lookup
>>> 3.54% [kernel] [k] memcpy_erms
>>> 3.20% [kernel] [k] mlx5e_poll_rx_cq
>>> 2.25% [kernel] [k] vlan_do_receive
>>> 2.20% [kernel] [k] mlx5e_post_rx_mpwqes
>>> 2.02% [kernel] [k] mlx5e_handle_rx_cqe_mpwrq
>>> 1.95% [kernel] [k] __dev_queue_xmit
>>> 1.83% [kernel] [k] dev_gro_receive
>>> 1.79% [kernel] [k] tcp_gro_receive
>>> 1.73% [kernel] [k] ip_finish_output2
>>> 1.63% [kernel] [k] mlx5e_poll_tx_cq
>>> 1.49% [kernel] [k] ipt_do_table
>>> 1.38% [kernel] [k] inet_gro_receive
>>> 1.31% [kernel] [k] __netif_receive_skb_core
>>> 1.30% [kernel] [k] _raw_spin_lock
>>> 1.28% [kernel] [k] mlx5_eq_int
>>> 1.24% [kernel] [k] irq_entries_start
>>> 1.19% [kernel] [k] __build_skb
>>> 1.15% [kernel] [k] swiotlb_map_page
>>> 1.02% [kernel] [k] vlan_dev_hard_start_xmit
>>> 0.94% [kernel] [k] pfifo_fast_dequeue
>>> 0.92% [kernel] [k] ip_route_input_rcu
>>> 0.86% [kernel] [k] kmem_cache_alloc
>>> 0.80% [kernel] [k] mlx5e_xmit
>>> 0.79% [kernel] [k] dev_hard_start_xmit
>>> 0.78% [kernel] [k] _raw_spin_lock_irqsave
>>> 0.74% [kernel] [k] ip_forward
>>> 0.72% [kernel] [k] tasklet_action_common.isra.21
>>> 0.68% [kernel] [k] pfifo_fast_enqueue
>>> 0.67% [kernel] [k] netif_skb_features
>>> 0.66% [kernel] [k] skb_segment
>>> 0.60% [kernel] [k] skb_gro_receive
>>> 0.56% [kernel] [k] validate_xmit_skb.isra.142
>>> 0.53% [kernel] [k] skb_release_data
>>> 0.51% [kernel] [k] mlx5e_page_release
>>> 0.51% [kernel] [k] ip_rcv_core.isra.20.constprop.25
>>> 0.51% [kernel] [k] __qdisc_run
>>> 0.50% [kernel] [k] tcp4_gro_receive
>>> 0.49% [kernel] [k] page_frag_free
>>> 0.46% [kernel] [k] kmem_cache_free_bulk
>>> 0.43% [kernel] [k] kmem_cache_free
>>> 0.42% [kernel] [k] try_to_wake_up
>>> 0.39% [kernel] [k] _raw_spin_lock_irq
>>> 0.39% [kernel] [k] find_busiest_group
>>> 0.37% [kernel] [k] __memcpy
>>>
>>>
>>>
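(One way to read the profile above is to group samples by subsystem; a sketch using a few of the listed symbols, with the percentages copied from the perf top output:)

```python
# Group a subset of the perf-top symbols above by rough subsystem.
profile = {
    "mlx5e_skb_from_cqe_mpwrq_linear": 12.62,
    "mlx5e_sq_xmit": 8.44,
    "mlx5e_poll_rx_cq": 3.20,
    "mlx5e_post_rx_mpwqes": 2.20,
    "mlx5e_handle_rx_cqe_mpwrq": 2.02,
    "fib_table_lookup": 5.21,
    "ip_route_input_rcu": 0.92,
    "ip_finish_output2": 1.73,
    "ip_forward": 0.74,
}
driver = sum(v for k, v in profile.items() if k.startswith("mlx5"))
routing = sum(v for k, v in profile.items() if not k.startswith("mlx5"))
print(f"mlx5 driver: {driver:.2f}%  routing/forwarding: {routing:.2f}%")
```

On this subset the mlx5 RX/TX paths dominate the routing/forwarding work by a factor of three or so, which points at driver/DMA cost rather than FIB lookup as the main cycle consumer.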
>>> Remember, those tests are now on two separate ConnectX-5 NICs connected
>>> to two separate PCIe x16 gen 3.0 slots
>> That is strange... I still suspect some HW NIC issue. Can you provide
>> ethtool stats info via this tool:
>>
>> https://github.com/netoptimizer/network-testing/blob/master/bin/ethtool_stats.pl
>>
>> $ ethtool_stats.pl --dev enp175s0 --dev enp216s0
>>
>> The tool removes zero-stat counters and reports per-second stats. That
>> makes it easier to spot what is relevant for the given workload.
> yes, mlnx has just too many counters that are always 0 in my case :)
> Will try this also
>
But still a lot of non-zero counters:
Show adapter(s) (enp175s0 enp216s0) statistics (ONLY that changed!)
Ethtool(enp175s0) stat: 8891 ( 8,891) <= ch0_arm /sec
Ethtool(enp175s0) stat: 10265 ( 10,265) <= ch0_events /sec
Ethtool(enp175s0) stat: 11072 ( 11,072) <= ch0_poll /sec
Ethtool(enp175s0) stat: 9003 ( 9,003) <= ch10_arm /sec
Ethtool(enp175s0) stat: 10476 ( 10,476) <= ch10_events /sec
Ethtool(enp175s0) stat: 11284 ( 11,284) <= ch10_poll /sec
Ethtool(enp175s0) stat: 11211 ( 11,211) <= ch11_arm /sec
Ethtool(enp175s0) stat: 12645 ( 12,645) <= ch11_events /sec
Ethtool(enp175s0) stat: 13450 ( 13,450) <= ch11_poll /sec
Ethtool(enp175s0) stat: 9012 ( 9,012) <= ch12_arm /sec
Ethtool(enp175s0) stat: 10366 ( 10,366) <= ch12_events /sec
Ethtool(enp175s0) stat: 11074 ( 11,074) <= ch12_poll /sec
Ethtool(enp175s0) stat: 8810 ( 8,810) <= ch13_arm /sec
Ethtool(enp175s0) stat: 10177 ( 10,177) <= ch13_events /sec
Ethtool(enp175s0) stat: 10886 ( 10,886) <= ch13_poll /sec
Ethtool(enp175s0) stat: 9794 ( 9,794) <= ch14_arm /sec
Ethtool(enp175s0) stat: 11159 ( 11,159) <= ch14_events /sec
Ethtool(enp175s0) stat: 11932 ( 11,932) <= ch14_poll /sec
Ethtool(enp175s0) stat: 8703 ( 8,703) <= ch15_arm /sec
Ethtool(enp175s0) stat: 10052 ( 10,052) <= ch15_events /sec
Ethtool(enp175s0) stat: 10774 ( 10,774) <= ch15_poll /sec
Ethtool(enp175s0) stat: 6429 ( 6,429) <= ch16_arm /sec
Ethtool(enp175s0) stat: 7591 ( 7,591) <= ch16_events /sec
Ethtool(enp175s0) stat: 8223 ( 8,223) <= ch16_poll /sec
Ethtool(enp175s0) stat: 8981 ( 8,981) <= ch17_arm /sec
Ethtool(enp175s0) stat: 10229 ( 10,229) <= ch17_events /sec
Ethtool(enp175s0) stat: 10887 ( 10,887) <= ch17_poll /sec
Ethtool(enp175s0) stat: 6786 ( 6,786) <= ch18_arm /sec
Ethtool(enp175s0) stat: 7887 ( 7,887) <= ch18_events /sec
Ethtool(enp175s0) stat: 8484 ( 8,484) <= ch18_poll /sec
Ethtool(enp175s0) stat: 6080 ( 6,080) <= ch19_arm /sec
Ethtool(enp175s0) stat: 7377 ( 7,377) <= ch19_events /sec
Ethtool(enp175s0) stat: 8124 ( 8,124) <= ch19_poll /sec
Ethtool(enp175s0) stat: 7715 ( 7,715) <= ch1_arm /sec
Ethtool(enp175s0) stat: 9109 ( 9,109) <= ch1_events /sec
Ethtool(enp175s0) stat: 9923 ( 9,923) <= ch1_poll /sec
Ethtool(enp175s0) stat: 7303 ( 7,303) <= ch20_arm /sec
Ethtool(enp175s0) stat: 8514 ( 8,514) <= ch20_events /sec
Ethtool(enp175s0) stat: 9169 ( 9,169) <= ch20_poll /sec
Ethtool(enp175s0) stat: 8972 ( 8,972) <= ch21_arm /sec
Ethtool(enp175s0) stat: 10060 ( 10,060) <= ch21_events /sec
Ethtool(enp175s0) stat: 10647 ( 10,647) <= ch21_poll /sec
Ethtool(enp175s0) stat: 7729 ( 7,729) <= ch22_arm /sec
Ethtool(enp175s0) stat: 8932 ( 8,932) <= ch22_events /sec
Ethtool(enp175s0) stat: 9585 ( 9,585) <= ch22_poll /sec
Ethtool(enp175s0) stat: 8125 ( 8,125) <= ch23_arm /sec
Ethtool(enp175s0) stat: 9218 ( 9,218) <= ch23_events /sec
Ethtool(enp175s0) stat: 9805 ( 9,805) <= ch23_poll /sec
Ethtool(enp175s0) stat: 7212 ( 7,212) <= ch24_arm /sec
Ethtool(enp175s0) stat: 8369 ( 8,369) <= ch24_events /sec
Ethtool(enp175s0) stat: 8993 ( 8,993) <= ch24_poll /sec
Ethtool(enp175s0) stat: 6328 ( 6,328) <= ch25_arm /sec
Ethtool(enp175s0) stat: 7567 ( 7,567) <= ch25_events /sec
Ethtool(enp175s0) stat: 8274 ( 8,274) <= ch25_poll /sec
Ethtool(enp175s0) stat: 6210 ( 6,210) <= ch26_arm /sec
Ethtool(enp175s0) stat: 7409 ( 7,409) <= ch26_events /sec
Ethtool(enp175s0) stat: 8062 ( 8,062) <= ch26_poll /sec
Ethtool(enp175s0) stat: 7035 ( 7,035) <= ch27_arm /sec
Ethtool(enp175s0) stat: 8203 ( 8,203) <= ch27_events /sec
Ethtool(enp175s0) stat: 8840 ( 8,840) <= ch27_poll /sec
Ethtool(enp175s0) stat: 11278 ( 11,278) <= ch2_arm /sec
Ethtool(enp175s0) stat: 12632 ( 12,632) <= ch2_events /sec
Ethtool(enp175s0) stat: 13348 ( 13,348) <= ch2_poll /sec
Ethtool(enp175s0) stat: 10612 ( 10,612) <= ch3_arm /sec
Ethtool(enp175s0) stat: 11900 ( 11,900) <= ch3_events /sec
Ethtool(enp175s0) stat: 12567 ( 12,567) <= ch3_poll /sec
Ethtool(enp175s0) stat: 8936 ( 8,936) <= ch4_arm /sec
Ethtool(enp175s0) stat: 10248 ( 10,248) <= ch4_events /sec
Ethtool(enp175s0) stat: 10962 ( 10,962) <= ch4_poll /sec
Ethtool(enp175s0) stat: 11631 ( 11,631) <= ch5_arm /sec
Ethtool(enp175s0) stat: 12953 ( 12,953) <= ch5_events /sec
Ethtool(enp175s0) stat: 13629 ( 13,629) <= ch5_poll /sec
Ethtool(enp175s0) stat: 9877 ( 9,877) <= ch6_arm /sec
Ethtool(enp175s0) stat: 11114 ( 11,114) <= ch6_events /sec
Ethtool(enp175s0) stat: 11800 ( 11,800) <= ch6_poll /sec
Ethtool(enp175s0) stat: 8228 ( 8,228) <= ch7_arm /sec
Ethtool(enp175s0) stat: 9577 ( 9,577) <= ch7_events /sec
Ethtool(enp175s0) stat: 10320 ( 10,320) <= ch7_poll /sec
Ethtool(enp175s0) stat: 11808 ( 11,808) <= ch8_arm /sec
Ethtool(enp175s0) stat: 13135 ( 13,135) <= ch8_events /sec
Ethtool(enp175s0) stat: 13828 ( 13,828) <= ch8_poll /sec
Ethtool(enp175s0) stat: 10566 ( 10,566) <= ch9_arm /sec
Ethtool(enp175s0) stat: 11904 ( 11,904) <= ch9_events /sec
Ethtool(enp175s0) stat: 12634 ( 12,634) <= ch9_poll /sec
Ethtool(enp175s0) stat: 243256 ( 243,256) <= ch_arm /sec
Ethtool(enp175s0) stat: 279057 ( 279,057) <= ch_events /sec
Ethtool(enp175s0) stat: 298563 ( 298,563) <= ch_poll /sec
Ethtool(enp175s0) stat: 186677525 ( 186,677,525) <= rx0_bytes /sec
Ethtool(enp175s0) stat: 72870 ( 72,870) <=
rx0_cache_reuse /sec
Ethtool(enp175s0) stat: 145627 ( 145,627) <=
rx0_csum_complete /sec
Ethtool(enp175s0) stat: 88 ( 88) <= rx0_csum_none /sec
Ethtool(enp175s0) stat: 145715 ( 145,715) <= rx0_packets /sec
Ethtool(enp175s0) stat: 145715 ( 145,715) <=
rx0_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 198552827 ( 198,552,827) <= rx10_bytes /sec
Ethtool(enp175s0) stat: 75553 ( 75,553) <=
rx10_cache_reuse /sec
Ethtool(enp175s0) stat: 151021 ( 151,021) <=
rx10_csum_complete /sec
Ethtool(enp175s0) stat: 151021 ( 151,021) <= rx10_packets /sec
Ethtool(enp175s0) stat: 151021 ( 151,021) <=
rx10_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 200924148 ( 200,924,148) <= rx11_bytes /sec
Ethtool(enp175s0) stat: 76589 ( 76,589) <=
rx11_cache_reuse /sec
Ethtool(enp175s0) stat: 153221 ( 153,221) <=
rx11_csum_complete /sec
Ethtool(enp175s0) stat: 153221 ( 153,221) <= rx11_packets /sec
Ethtool(enp175s0) stat: 153221 ( 153,221) <=
rx11_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 186259790 ( 186,259,790) <= rx12_bytes /sec
Ethtool(enp175s0) stat: 70675 ( 70,675) <=
rx12_cache_reuse /sec
Ethtool(enp175s0) stat: 141440 ( 141,440) <=
rx12_csum_complete /sec
Ethtool(enp175s0) stat: 141440 ( 141,440) <= rx12_packets /sec
Ethtool(enp175s0) stat: 141440 ( 141,440) <=
rx12_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 189627451 ( 189,627,451) <= rx13_bytes /sec
Ethtool(enp175s0) stat: 72626 ( 72,626) <=
rx13_cache_reuse /sec
Ethtool(enp175s0) stat: 145327 ( 145,327) <=
rx13_csum_complete /sec
Ethtool(enp175s0) stat: 145327 ( 145,327) <= rx13_packets /sec
Ethtool(enp175s0) stat: 145327 ( 145,327) <=
rx13_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 199246096 ( 199,246,096) <= rx14_bytes /sec
Ethtool(enp175s0) stat: 77992 ( 77,992) <=
rx14_cache_reuse /sec
Ethtool(enp175s0) stat: 156043 ( 156,043) <=
rx14_csum_complete /sec
Ethtool(enp175s0) stat: 156043 ( 156,043) <= rx14_packets /sec
Ethtool(enp175s0) stat: 156043 ( 156,043) <=
rx14_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 189698176 ( 189,698,176) <= rx15_bytes /sec
Ethtool(enp175s0) stat: 72382 ( 72,382) <=
rx15_cache_reuse /sec
Ethtool(enp175s0) stat: 144658 ( 144,658) <=
rx15_csum_complete /sec
Ethtool(enp175s0) stat: 144658 ( 144,658) <= rx15_packets /sec
Ethtool(enp175s0) stat: 144658 ( 144,658) <=
rx15_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 143896232 ( 143,896,232) <= rx16_bytes /sec
Ethtool(enp175s0) stat: 55369 ( 55,369) <=
rx16_cache_reuse /sec
Ethtool(enp175s0) stat: 110745 ( 110,745) <=
rx16_csum_complete /sec
Ethtool(enp175s0) stat: 110745 ( 110,745) <= rx16_packets /sec
Ethtool(enp175s0) stat: 110745 ( 110,745) <=
rx16_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 171449483 ( 171,449,483) <= rx17_bytes /sec
Ethtool(enp175s0) stat: 65308 ( 65,308) <=
rx17_cache_reuse /sec
Ethtool(enp175s0) stat: 130563 ( 130,563) <=
rx17_csum_complete /sec
Ethtool(enp175s0) stat: 130563 ( 130,563) <= rx17_packets /sec
Ethtool(enp175s0) stat: 130563 ( 130,563) <=
rx17_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 141033264 ( 141,033,264) <= rx18_bytes /sec
Ethtool(enp175s0) stat: 54515 ( 54,515) <=
rx18_cache_reuse /sec
Ethtool(enp175s0) stat: 108966 ( 108,966) <=
rx18_csum_complete /sec
Ethtool(enp175s0) stat: 108966 ( 108,966) <= rx18_packets /sec
Ethtool(enp175s0) stat: 108966 ( 108,966) <=
rx18_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 163097410 ( 163,097,410) <= rx19_bytes /sec
Ethtool(enp175s0) stat: 61894 ( 61,894) <=
rx19_cache_reuse /sec
Ethtool(enp175s0) stat: 123773 ( 123,773) <=
rx19_csum_complete /sec
Ethtool(enp175s0) stat: 123773 ( 123,773) <= rx19_packets /sec
Ethtool(enp175s0) stat: 123773 ( 123,773) <=
rx19_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 181707418 ( 181,707,418) <= rx1_bytes /sec
Ethtool(enp175s0) stat: 71223 ( 71,223) <=
rx1_cache_reuse /sec
Ethtool(enp175s0) stat: 142445 ( 142,445) <=
rx1_csum_complete /sec
Ethtool(enp175s0) stat: 142445 ( 142,445) <= rx1_packets /sec
Ethtool(enp175s0) stat: 142445 ( 142,445) <=
rx1_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 161626368 ( 161,626,368) <= rx20_bytes /sec
Ethtool(enp175s0) stat: 61345 ( 61,345) <=
rx20_cache_reuse /sec
Ethtool(enp175s0) stat: 122724 ( 122,724) <=
rx20_csum_complete /sec
Ethtool(enp175s0) stat: 122724 ( 122,724) <= rx20_packets /sec
Ethtool(enp175s0) stat: 122724 ( 122,724) <=
rx20_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 138593554 ( 138,593,554) <= rx21_bytes /sec
Ethtool(enp175s0) stat: 53478 ( 53,478) <=
rx21_cache_reuse /sec
Ethtool(enp175s0) stat: 106949 ( 106,949) <=
rx21_csum_complete /sec
Ethtool(enp175s0) stat: 106949 ( 106,949) <= rx21_packets /sec
Ethtool(enp175s0) stat: 106949 ( 106,949) <=
rx21_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 149217722 ( 149,217,722) <= rx22_bytes /sec
Ethtool(enp175s0) stat: 58174 ( 58,174) <=
rx22_cache_reuse /sec
Ethtool(enp175s0) stat: 116342 ( 116,342) <=
rx22_csum_complete /sec
Ethtool(enp175s0) stat: 116342 ( 116,342) <= rx22_packets /sec
Ethtool(enp175s0) stat: 116342 ( 116,342) <=
rx22_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 147968086 ( 147,968,086) <= rx23_bytes /sec
Ethtool(enp175s0) stat: 55979 ( 55,979) <=
rx23_cache_reuse /sec
Ethtool(enp175s0) stat: 111901 ( 111,901) <=
rx23_csum_complete /sec
Ethtool(enp175s0) stat: 111901 ( 111,901) <= rx23_packets /sec
Ethtool(enp175s0) stat: 111901 ( 111,901) <=
rx23_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 145955524 ( 145,955,524) <= rx24_bytes /sec
Ethtool(enp175s0) stat: 55491 ( 55,491) <=
rx24_cache_reuse /sec
Ethtool(enp175s0) stat: 110980 ( 110,980) <=
rx24_csum_complete /sec
Ethtool(enp175s0) stat: 110980 ( 110,980) <= rx24_packets /sec
Ethtool(enp175s0) stat: 110980 ( 110,980) <=
rx24_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 155552699 ( 155,552,699) <= rx25_bytes /sec
Ethtool(enp175s0) stat: 59028 ( 59,028) <=
rx25_cache_reuse /sec
Ethtool(enp175s0) stat: 118074 ( 118,074) <=
rx25_csum_complete /sec
Ethtool(enp175s0) stat: 118074 ( 118,074) <= rx25_packets /sec
Ethtool(enp175s0) stat: 118074 ( 118,074) <=
rx25_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 144880442 ( 144,880,442) <= rx26_bytes /sec
Ethtool(enp175s0) stat: 56223 ( 56,223) <=
rx26_cache_reuse /sec
Ethtool(enp175s0) stat: 112334 ( 112,334) <=
rx26_csum_complete /sec
Ethtool(enp175s0) stat: 112334 ( 112,334) <= rx26_packets /sec
Ethtool(enp175s0) stat: 112334 ( 112,334) <=
rx26_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 154545288 ( 154,545,288) <= rx27_bytes /sec
Ethtool(enp175s0) stat: 58784 ( 58,784) <=
rx27_cache_reuse /sec
Ethtool(enp175s0) stat: 117627 ( 117,627) <=
rx27_csum_complete /sec
Ethtool(enp175s0) stat: 117627 ( 117,627) <= rx27_packets /sec
Ethtool(enp175s0) stat: 117627 ( 117,627) <=
rx27_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 182425129 ( 182,425,129) <= rx2_bytes /sec
Ethtool(enp175s0) stat: 71406 ( 71,406) <=
rx2_cache_reuse /sec
Ethtool(enp175s0) stat: 142872 ( 142,872) <=
rx2_csum_complete /sec
Ethtool(enp175s0) stat: 142872 ( 142,872) <= rx2_packets /sec
Ethtool(enp175s0) stat: 142872 ( 142,872) <=
rx2_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 188368405 ( 188,368,405) <= rx3_bytes /sec
Ethtool(enp175s0) stat: 72138 ( 72,138) <=
rx3_cache_reuse /sec
Ethtool(enp175s0) stat: 144259 ( 144,259) <=
rx3_csum_complete /sec
Ethtool(enp175s0) stat: 144259 ( 144,259) <= rx3_packets /sec
Ethtool(enp175s0) stat: 144259 ( 144,259) <=
rx3_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 186009984 ( 186,009,984) <= rx4_bytes /sec
Ethtool(enp175s0) stat: 70004 ( 70,004) <=
rx4_cache_reuse /sec
Ethtool(enp175s0) stat: 139939 ( 139,939) <=
rx4_csum_complete /sec
Ethtool(enp175s0) stat: 139939 ( 139,939) <= rx4_packets /sec
Ethtool(enp175s0) stat: 139939 ( 139,939) <=
rx4_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 198040550 ( 198,040,550) <= rx5_bytes /sec
Ethtool(enp175s0) stat: 75492 ( 75,492) <=
rx5_cache_reuse /sec
Ethtool(enp175s0) stat: 150950 ( 150,950) <=
rx5_csum_complete /sec
Ethtool(enp175s0) stat: 150950 ( 150,950) <= rx5_packets /sec
Ethtool(enp175s0) stat: 150950 ( 150,950) <=
rx5_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 182607101 ( 182,607,101) <= rx6_bytes /sec
Ethtool(enp175s0) stat: 69699 ( 69,699) <=
rx6_cache_reuse /sec
Ethtool(enp175s0) stat: 139335 ( 139,335) <=
rx6_csum_complete /sec
Ethtool(enp175s0) stat: 139335 ( 139,335) <= rx6_packets /sec
Ethtool(enp175s0) stat: 139335 ( 139,335) <=
rx6_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 174999243 ( 174,999,243) <= rx7_bytes /sec
Ethtool(enp175s0) stat: 66650 ( 66,650) <=
rx7_cache_reuse /sec
Ethtool(enp175s0) stat: 133323 ( 133,323) <=
rx7_csum_complete /sec
Ethtool(enp175s0) stat: 133323 ( 133,323) <= rx7_packets /sec
Ethtool(enp175s0) stat: 133323 ( 133,323) <=
rx7_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 204109286 ( 204,109,286) <= rx8_bytes /sec
Ethtool(enp175s0) stat: 76711 ( 76,711) <=
rx8_cache_reuse /sec
Ethtool(enp175s0) stat: 153481 ( 153,481) <=
rx8_csum_complete /sec
Ethtool(enp175s0) stat: 153481 ( 153,481) <= rx8_packets /sec
Ethtool(enp175s0) stat: 153481 ( 153,481) <=
rx8_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 183703752 ( 183,703,752) <= rx9_bytes /sec
Ethtool(enp175s0) stat: 71101 ( 71,101) <=
rx9_cache_reuse /sec
Ethtool(enp175s0) stat: 142172 ( 142,172) <=
rx9_csum_complete /sec
Ethtool(enp175s0) stat: 142172 ( 142,172) <= rx9_packets /sec
Ethtool(enp175s0) stat: 142172 ( 142,172) <=
rx9_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 2024072 ( 2,024,072) <=
rx_1024_to_1518_bytes_phy /sec
Ethtool(enp175s0) stat: 106582 ( 106,582) <=
rx_128_to_255_bytes_phy /sec
Ethtool(enp175s0) stat: 1296735 ( 1,296,735) <=
rx_1519_to_2047_bytes_phy /sec
Ethtool(enp175s0) stat: 59460 ( 59,460) <=
rx_256_to_511_bytes_phy /sec
Ethtool(enp175s0) stat: 57326 ( 57,326) <=
rx_512_to_1023_bytes_phy /sec
Ethtool(enp175s0) stat: 7159 ( 7,159) <=
rx_64_bytes_phy /sec
Ethtool(enp175s0) stat: 310730 ( 310,730) <=
rx_65_to_127_bytes_phy /sec
Ethtool(enp175s0) stat: 232 ( 232) <=
rx_broadcast_phy /sec
Ethtool(enp175s0) stat: 4850734036 ( 4,850,734,036) <= rx_bytes /sec
Ethtool(enp175s0) stat: 5069043007 ( 5,069,043,007) <= rx_bytes_phy /sec
Ethtool(enp175s0) stat: 1858636 ( 1,858,636) <= rx_cache_reuse
/sec
Ethtool(enp175s0) stat: 3717060 ( 3,717,060) <=
rx_csum_complete /sec
Ethtool(enp175s0) stat: 88 ( 88) <= rx_csum_none /sec
Ethtool(enp175s0) stat: 139602 ( 139,602) <=
rx_discards_phy /sec
Ethtool(enp175s0) stat: 354 ( 354) <=
rx_multicast_phy /sec
Ethtool(enp175s0) stat: 3717148 ( 3,717,148) <= rx_packets /sec
Ethtool(enp175s0) stat: 3862420 ( 3,862,420) <= rx_packets_phy
/sec
Ethtool(enp175s0) stat: 5063355121 ( 5,063,355,121) <= rx_prio0_bytes
/sec
Ethtool(enp175s0) stat: 3718759 ( 3,718,759) <=
rx_prio0_packets /sec
Ethtool(enp175s0) stat: 7193190 ( 7,193,190) <= rx_prio1_bytes
/sec
Ethtool(enp175s0) stat: 5031 ( 5,031) <=
rx_prio1_packets /sec
Ethtool(enp175s0) stat: 557 ( 557) <= rx_prio2_bytes
/sec
Ethtool(enp175s0) stat: 5 ( 5) <=
rx_prio2_packets /sec
Ethtool(enp175s0) stat: 61 ( 61) <= rx_prio3_bytes
/sec
Ethtool(enp175s0) stat: 1 ( 1) <=
rx_prio3_packets /sec
Ethtool(enp175s0) stat: 21010 ( 21,010) <= rx_prio4_bytes
/sec
Ethtool(enp175s0) stat: 39 ( 39) <=
rx_prio4_packets /sec
Ethtool(enp175s0) stat: 187 ( 187) <= rx_prio5_bytes
/sec
Ethtool(enp175s0) stat: 2 ( 2) <=
rx_prio5_packets /sec
Ethtool(enp175s0) stat: 1711 ( 1,711) <= rx_prio6_bytes
/sec
Ethtool(enp175s0) stat: 15 ( 15) <=
rx_prio6_packets /sec
Ethtool(enp175s0) stat: 19498 ( 19,498) <= rx_prio7_bytes
/sec
Ethtool(enp175s0) stat: 273 ( 273) <=
rx_prio7_packets /sec
Ethtool(enp175s0) stat: 3717148 ( 3,717,148) <=
rx_removed_vlan_packets /sec
Ethtool(enp175s0) stat: 5737 ( 5,737) <=
rx_steer_missed_packets /sec
Ethtool(enp175s0) stat: 14573 ( 14,573) <=
rx_vport_broadcast_bytes /sec
Ethtool(enp175s0) stat: 232 ( 232) <=
rx_vport_broadcast_packets /sec
Ethtool(enp175s0) stat: 25491 ( 25,491) <=
rx_vport_multicast_bytes /sec
Ethtool(enp175s0) stat: 354 ( 354) <=
rx_vport_multicast_packets /sec
Ethtool(enp175s0) stat: 4872354516 ( 4,872,354,516) <=
rx_vport_unicast_bytes /sec
Ethtool(enp175s0) stat: 3721920 ( 3,721,920) <=
rx_vport_unicast_packets /sec
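(Two derived numbers from the rx counters above may be worth noting; a back-of-the-envelope sketch, using the per-second values as printed:)

```python
# Derived from the per-second rx counters above.
rx_bytes = 4_850_734_036      # rx_bytes /sec
rx_packets = 3_717_148        # rx_packets /sec
rx_packets_phy = 3_862_420    # rx_packets_phy /sec
rx_discards_phy = 139_602     # rx_discards_phy /sec

avg_pkt = rx_bytes / rx_packets            # average wire-visible frame size
drop_pct = 100 * rx_discards_phy / rx_packets_phy
print(f"avg packet ~{avg_pkt:.0f} B, phy discard rate ~{drop_pct:.1f}%")
```

So the average received frame is around 1300 bytes, and the port is already discarding a few percent of incoming packets at the PHY level (rx_discards_phy typically means no RX buffers were available), which matches the reported drops on the upstream side once the CPUs saturate.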
Ethtool(enp175s0) stat: 158883 ( 158,883) <=
tx0_added_vlan_packets /sec
Ethtool(enp175s0) stat: 93790423 ( 93,790,423) <= tx0_bytes /sec
Ethtool(enp175s0) stat: 158854 ( 158,854) <= tx0_cqes /sec
Ethtool(enp175s0) stat: 146499 ( 146,499) <= tx0_csum_none /sec
Ethtool(enp175s0) stat: 12384 ( 12,384) <=
tx0_csum_partial /sec
Ethtool(enp175s0) stat: 2144 ( 2,144) <= tx0_nop /sec
Ethtool(enp175s0) stat: 188173 ( 188,173) <= tx0_packets /sec
Ethtool(enp175s0) stat: 53068613 ( 53,068,613) <= tx0_tso_bytes /sec
Ethtool(enp175s0) stat: 8839 ( 8,839) <=
tx0_tso_packets /sec
Ethtool(enp175s0) stat: 30 ( 30) <= tx0_xmit_more /sec
Ethtool(enp175s0) stat: 165538 ( 165,538) <=
tx10_added_vlan_packets /sec
Ethtool(enp175s0) stat: 102395057 ( 102,395,057) <= tx10_bytes /sec
Ethtool(enp175s0) stat: 165303 ( 165,303) <= tx10_cqes /sec
Ethtool(enp175s0) stat: 151089 ( 151,089) <= tx10_csum_none
/sec
Ethtool(enp175s0) stat: 14448 ( 14,448) <=
tx10_csum_partial /sec
Ethtool(enp175s0) stat: 2391 ( 2,391) <= tx10_nop /sec
Ethtool(enp175s0) stat: 198798 ( 198,798) <= tx10_packets /sec
Ethtool(enp175s0) stat: 58951449 ( 58,951,449) <= tx10_tso_bytes
/sec
Ethtool(enp175s0) stat: 8987 ( 8,987) <=
tx10_tso_packets /sec
Ethtool(enp175s0) stat: 234 ( 234) <= tx10_xmit_more
/sec
Ethtool(enp175s0) stat: 166402 ( 166,402) <=
tx11_added_vlan_packets /sec
Ethtool(enp175s0) stat: 98591304 ( 98,591,304) <= tx11_bytes /sec
Ethtool(enp175s0) stat: 166384 ( 166,384) <= tx11_cqes /sec
Ethtool(enp175s0) stat: 152456 ( 152,456) <= tx11_csum_none
/sec
Ethtool(enp175s0) stat: 13946 ( 13,946) <=
tx11_csum_partial /sec
Ethtool(enp175s0) stat: 2386 ( 2,386) <= tx11_nop /sec
Ethtool(enp175s0) stat: 201615 ( 201,615) <= tx11_packets /sec
Ethtool(enp175s0) stat: 63844660 ( 63,844,660) <= tx11_tso_bytes
/sec
Ethtool(enp175s0) stat: 10515 ( 10,515) <=
tx11_tso_packets /sec
Ethtool(enp175s0) stat: 18 ( 18) <= tx11_xmit_more
/sec
Ethtool(enp175s0) stat: 156312 ( 156,312) <=
tx12_added_vlan_packets /sec
Ethtool(enp175s0) stat: 97068537 ( 97,068,537) <= tx12_bytes /sec
Ethtool(enp175s0) stat: 156302 ( 156,302) <= tx12_cqes /sec
Ethtool(enp175s0) stat: 142468 ( 142,468) <= tx12_csum_none
/sec
Ethtool(enp175s0) stat: 13844 ( 13,844) <=
tx12_csum_partial /sec
Ethtool(enp175s0) stat: 2278 ( 2,278) <= tx12_nop /sec
Ethtool(enp175s0) stat: 187535 ( 187,535) <= tx12_packets /sec
Ethtool(enp175s0) stat: 58368798 ( 58,368,798) <= tx12_tso_bytes
/sec
Ethtool(enp175s0) stat: 10398 ( 10,398) <=
tx12_tso_packets /sec
Ethtool(enp175s0) stat: 10 ( 10) <= tx12_xmit_more
/sec
Ethtool(enp175s0) stat: 161768 ( 161,768) <=
tx13_added_vlan_packets /sec
Ethtool(enp175s0) stat: 120232518 ( 120,232,518) <= tx13_bytes /sec
Ethtool(enp175s0) stat: 161584 ( 161,584) <= tx13_cqes /sec
Ethtool(enp175s0) stat: 144388 ( 144,388) <= tx13_csum_none
/sec
Ethtool(enp175s0) stat: 17380 ( 17,380) <=
tx13_csum_partial /sec
Ethtool(enp175s0) stat: 2425 ( 2,425) <= tx13_nop /sec
Ethtool(enp175s0) stat: 202823 ( 202,823) <= tx13_packets /sec
Ethtool(enp175s0) stat: 72804507 ( 72,804,507) <= tx13_tso_bytes
/sec
Ethtool(enp175s0) stat: 10865 ( 10,865) <=
tx13_tso_packets /sec
Ethtool(enp175s0) stat: 185 ( 185) <= tx13_xmit_more
/sec
Ethtool(enp175s0) stat: 165762 ( 165,762) <=
tx14_added_vlan_packets /sec
Ethtool(enp175s0) stat: 99271622 ( 99,271,622) <= tx14_bytes /sec
Ethtool(enp175s0) stat: 165688 ( 165,688) <= tx14_cqes /sec
Ethtool(enp175s0) stat: 153195 ( 153,195) <= tx14_csum_none
/sec
Ethtool(enp175s0) stat: 12566 ( 12,566) <=
tx14_csum_partial /sec
Ethtool(enp175s0) stat: 2195 ( 2,195) <= tx14_nop /sec
Ethtool(enp175s0) stat: 195504 ( 195,504) <= tx14_packets /sec
Ethtool(enp175s0) stat: 53717277 ( 53,717,277) <= tx14_tso_bytes
/sec
Ethtool(enp175s0) stat: 8743 ( 8,743) <=
tx14_tso_packets /sec
Ethtool(enp175s0) stat: 32 ( 32) <= tx14_xmit_more
/sec
Ethtool(enp175s0) stat: 162803 ( 162,803) <=
tx15_added_vlan_packets /sec
Ethtool(enp175s0) stat: 105591893 ( 105,591,893) <= tx15_bytes /sec
Ethtool(enp175s0) stat: 162673 ( 162,673) <= tx15_cqes /sec
Ethtool(enp175s0) stat: 147080 ( 147,080) <= tx15_csum_none
/sec
Ethtool(enp175s0) stat: 15723 ( 15,723) <=
tx15_csum_partial /sec
Ethtool(enp175s0) stat: 2355 ( 2,355) <= tx15_nop /sec
Ethtool(enp175s0) stat: 198282 ( 198,282) <= tx15_packets /sec
Ethtool(enp175s0) stat: 64278573 ( 64,278,573) <= tx15_tso_bytes
/sec
Ethtool(enp175s0) stat: 10704 ( 10,704) <=
tx15_tso_packets /sec
Ethtool(enp175s0) stat: 183 ( 183) <= tx15_xmit_more
/sec
Ethtool(enp175s0) stat: 125282 ( 125,282) <=
tx16_added_vlan_packets /sec
Ethtool(enp175s0) stat: 81835815 ( 81,835,815) <= tx16_bytes /sec
Ethtool(enp175s0) stat: 125264 ( 125,264) <= tx16_cqes /sec
Ethtool(enp175s0) stat: 113284 ( 113,284) <= tx16_csum_none
/sec
Ethtool(enp175s0) stat: 11998 ( 11,998) <=
tx16_csum_partial /sec
Ethtool(enp175s0) stat: 1773 ( 1,773) <= tx16_nop /sec
Ethtool(enp175s0) stat: 150812 ( 150,812) <= tx16_packets /sec
Ethtool(enp175s0) stat: 46027767 ( 46,027,767) <= tx16_tso_bytes
/sec
Ethtool(enp175s0) stat: 7361 ( 7,361) <=
tx16_tso_packets /sec
Ethtool(enp175s0) stat: 18 ( 18) <= tx16_xmit_more
/sec
Ethtool(enp175s0) stat: 131865 ( 131,865) <=
tx17_added_vlan_packets /sec
Ethtool(enp175s0) stat: 89213555 ( 89,213,555) <= tx17_bytes /sec
Ethtool(enp175s0) stat: 131409 ( 131,409) <= tx17_cqes /sec
Ethtool(enp175s0) stat: 117199 ( 117,199) <= tx17_csum_none
/sec
Ethtool(enp175s0) stat: 14665 ( 14,665) <=
tx17_csum_partial /sec
Ethtool(enp175s0) stat: 1933 ( 1,933) <= tx17_nop /sec
Ethtool(enp175s0) stat: 161576 ( 161,576) <= tx17_packets /sec
Ethtool(enp175s0) stat: 53682297 ( 53,682,297) <= tx17_tso_bytes
/sec
Ethtool(enp175s0) stat: 8825 ( 8,825) <=
tx17_tso_packets /sec
Ethtool(enp175s0) stat: 459 ( 459) <= tx17_xmit_more
/sec
Ethtool(enp175s0) stat: 122492 ( 122,492) <=
tx18_added_vlan_packets /sec
Ethtool(enp175s0) stat: 87373476 ( 87,373,476) <= tx18_bytes /sec
Ethtool(enp175s0) stat: 122138 ( 122,138) <= tx18_cqes /sec
Ethtool(enp175s0) stat: 109037 ( 109,037) <= tx18_csum_none
/sec
Ethtool(enp175s0) stat: 13455 ( 13,455) <=
tx18_csum_partial /sec
Ethtool(enp175s0) stat: 1933 ( 1,933) <= tx18_nop /sec
Ethtool(enp175s0) stat: 152821 ( 152,821) <= tx18_packets /sec
Ethtool(enp175s0) stat: 54163102 ( 54,163,102) <= tx18_tso_bytes
/sec
Ethtool(enp175s0) stat: 8674 ( 8,674) <=
tx18_tso_packets /sec
Ethtool(enp175s0) stat: 354 ( 354) <= tx18_xmit_more
/sec
Ethtool(enp175s0) stat: 119294 ( 119,294) <=
tx19_added_vlan_packets /sec
Ethtool(enp175s0) stat: 67719609 ( 67,719,609) <= tx19_bytes /sec
Ethtool(enp175s0) stat: 119262 ( 119,262) <= tx19_cqes /sec
Ethtool(enp175s0) stat: 108591 ( 108,591) <= tx19_csum_none
/sec
Ethtool(enp175s0) stat: 10703 ( 10,703) <=
tx19_csum_partial /sec
Ethtool(enp175s0) stat: 1551 ( 1,551) <= tx19_nop /sec
Ethtool(enp175s0) stat: 141344 ( 141,344) <= tx19_packets /sec
Ethtool(enp175s0) stat: 39539778 ( 39,539,778) <= tx19_tso_bytes
/sec
Ethtool(enp175s0) stat: 6310 ( 6,310) <=
tx19_tso_packets /sec
Ethtool(enp175s0) stat: 31 ( 31) <= tx19_xmit_more
/sec
Ethtool(enp175s0) stat: 157094 ( 157,094) <=
tx1_added_vlan_packets /sec
Ethtool(enp175s0) stat: 93505806 ( 93,505,806) <= tx1_bytes /sec
Ethtool(enp175s0) stat: 156935 ( 156,935) <= tx1_cqes /sec
Ethtool(enp175s0) stat: 144805 ( 144,805) <= tx1_csum_none /sec
Ethtool(enp175s0) stat: 12289 ( 12,289) <=
tx1_csum_partial /sec
Ethtool(enp175s0) stat: 2201 ( 2,201) <= tx1_nop /sec
Ethtool(enp175s0) stat: 184561 ( 184,561) <= tx1_packets /sec
Ethtool(enp175s0) stat: 50513729 ( 50,513,729) <= tx1_tso_bytes /sec
Ethtool(enp175s0) stat: 8699 ( 8,699) <=
tx1_tso_packets /sec
Ethtool(enp175s0) stat: 159 ( 159) <= tx1_xmit_more /sec
Ethtool(enp175s0) stat: 134411 ( 134,411) <=
tx20_added_vlan_packets /sec
Ethtool(enp175s0) stat: 88898658 ( 88,898,658) <= tx20_bytes /sec
Ethtool(enp175s0) stat: 134221 ( 134,221) <= tx20_cqes /sec
Ethtool(enp175s0) stat: 120787 ( 120,787) <= tx20_csum_none
/sec
Ethtool(enp175s0) stat: 13624 ( 13,624) <=
tx20_csum_partial /sec
Ethtool(enp175s0) stat: 2064 ( 2,064) <= tx20_nop /sec
Ethtool(enp175s0) stat: 165064 ( 165,064) <= tx20_packets /sec
Ethtool(enp175s0) stat: 54707259 ( 54,707,259) <= tx20_tso_bytes
/sec
Ethtool(enp175s0) stat: 8769 ( 8,769) <=
tx20_tso_packets /sec
Ethtool(enp175s0) stat: 188 ( 188) <= tx20_xmit_more
/sec
Ethtool(enp175s0) stat: 127029 ( 127,029) <=
tx21_added_vlan_packets /sec
Ethtool(enp175s0) stat: 98012630 ( 98,012,630) <= tx21_bytes /sec
Ethtool(enp175s0) stat: 126990 ( 126,990) <= tx21_cqes /sec
Ethtool(enp175s0) stat: 111553 ( 111,553) <= tx21_csum_none
/sec
Ethtool(enp175s0) stat: 15476 ( 15,476) <=
tx21_csum_partial /sec
Ethtool(enp175s0) stat: 2002 ( 2,002) <= tx21_nop /sec
Ethtool(enp175s0) stat: 159688 ( 159,688) <= tx21_packets /sec
Ethtool(enp175s0) stat: 59637304 ( 59,637,304) <= tx21_tso_bytes
/sec
Ethtool(enp175s0) stat: 9988 ( 9,988) <=
tx21_tso_packets /sec
Ethtool(enp175s0) stat: 39 ( 39) <= tx21_xmit_more
/sec
Ethtool(enp175s0) stat: 122610 ( 122,610) <=
tx22_added_vlan_packets /sec
Ethtool(enp175s0) stat: 70972052 ( 70,972,052) <= tx22_bytes /sec
Ethtool(enp175s0) stat: 122600 ( 122,600) <= tx22_cqes /sec
Ethtool(enp175s0) stat: 111526 ( 111,526) <= tx22_csum_none
/sec
Ethtool(enp175s0) stat: 11085 ( 11,085) <=
tx22_csum_partial /sec
Ethtool(enp175s0) stat: 1544 ( 1,544) <= tx22_nop /sec
Ethtool(enp175s0) stat: 144874 ( 144,874) <= tx22_packets /sec
Ethtool(enp175s0) stat: 40175057 ( 40,175,057) <= tx22_tso_bytes
/sec
Ethtool(enp175s0) stat: 6517 ( 6,517) <=
tx22_tso_packets /sec
Ethtool(enp175s0) stat: 30 ( 30) <= tx22_xmit_more
/sec
Ethtool(enp175s0) stat: 126809 ( 126,809) <=
tx23_added_vlan_packets /sec
Ethtool(enp175s0) stat: 76906314 ( 76,906,314) <= tx23_bytes /sec
Ethtool(enp175s0) stat: 126791 ( 126,791) <= tx23_cqes /sec
Ethtool(enp175s0) stat: 116656 ( 116,656) <= tx23_csum_none
/sec
Ethtool(enp175s0) stat: 10153 ( 10,153) <=
tx23_csum_partial /sec
Ethtool(enp175s0) stat: 1686 ( 1,686) <= tx23_nop /sec
Ethtool(enp175s0) stat: 148551 ( 148,551) <= tx23_packets /sec
Ethtool(enp175s0) stat: 39913764 ( 39,913,764) <= tx23_tso_bytes
/sec
Ethtool(enp175s0) stat: 6978 ( 6,978) <=
tx23_tso_packets /sec
Ethtool(enp175s0) stat: 18 ( 18) <= tx23_xmit_more
/sec
Ethtool(enp175s0) stat: 133910 ( 133,910) <=
tx24_added_vlan_packets /sec
Ethtool(enp175s0) stat: 69609913 ( 69,609,913) <= tx24_bytes /sec
Ethtool(enp175s0) stat: 133890 ( 133,890) <= tx24_cqes /sec
Ethtool(enp175s0) stat: 124462 ( 124,462) <= tx24_csum_none
/sec
Ethtool(enp175s0) stat: 9448 ( 9,448) <=
tx24_csum_partial /sec
Ethtool(enp175s0) stat: 1639 ( 1,639) <= tx24_nop /sec
Ethtool(enp175s0) stat: 154475 ( 154,475) <= tx24_packets /sec
Ethtool(enp175s0) stat: 37456148 ( 37,456,148) <= tx24_tso_bytes
/sec
Ethtool(enp175s0) stat: 6247 ( 6,247) <=
tx24_tso_packets /sec
Ethtool(enp175s0) stat: 20 ( 20) <= tx24_xmit_more
/sec
Ethtool(enp175s0) stat: 118528 ( 118,528) <=
tx25_added_vlan_packets /sec
Ethtool(enp175s0) stat: 62435525 ( 62,435,525) <= tx25_bytes /sec
Ethtool(enp175s0) stat: 118471 ( 118,471) <= tx25_cqes /sec
Ethtool(enp175s0) stat: 108887 ( 108,887) <= tx25_csum_none
/sec
Ethtool(enp175s0) stat: 9640 ( 9,640) <=
tx25_csum_partial /sec
Ethtool(enp175s0) stat: 1528 ( 1,528) <= tx25_nop /sec
Ethtool(enp175s0) stat: 138748 ( 138,748) <= tx25_packets /sec
Ethtool(enp175s0) stat: 36592045 ( 36,592,045) <= tx25_tso_bytes
/sec
Ethtool(enp175s0) stat: 5993 ( 5,993) <=
tx25_tso_packets /sec
Ethtool(enp175s0) stat: 44 ( 44) <= tx25_xmit_more
/sec
Ethtool(enp175s0) stat: 119890 ( 119,890) <=
tx26_added_vlan_packets /sec
Ethtool(enp175s0) stat: 76743929 ( 76,743,929) <= tx26_bytes /sec
Ethtool(enp175s0) stat: 119873 ( 119,873) <= tx26_cqes /sec
Ethtool(enp175s0) stat: 108181 ( 108,181) <= tx26_csum_none
/sec
Ethtool(enp175s0) stat: 11709 ( 11,709) <=
tx26_csum_partial /sec
Ethtool(enp175s0) stat: 1690 ( 1,690) <= tx26_nop /sec
Ethtool(enp175s0) stat: 144495 ( 144,495) <= tx26_packets /sec
Ethtool(enp175s0) stat: 43922304 ( 43,922,304) <= tx26_tso_bytes
/sec
Ethtool(enp175s0) stat: 7043 ( 7,043) <=
tx26_tso_packets /sec
Ethtool(enp175s0) stat: 17 ( 17) <= tx26_xmit_more
/sec
Ethtool(enp175s0) stat: 130825 ( 130,825) <=
tx27_added_vlan_packets /sec
Ethtool(enp175s0) stat: 88723162 ( 88,723,162) <= tx27_bytes /sec
Ethtool(enp175s0) stat: 130769 ( 130,769) <= tx27_cqes /sec
Ethtool(enp175s0) stat: 116555 ( 116,555) <= tx27_csum_none
/sec
Ethtool(enp175s0) stat: 14270 ( 14,270) <=
tx27_csum_partial /sec
Ethtool(enp175s0) stat: 2042 ( 2,042) <= tx27_nop /sec
Ethtool(enp175s0) stat: 161228 ( 161,228) <= tx27_packets /sec
Ethtool(enp175s0) stat: 55637023 ( 55,637,023) <= tx27_tso_bytes
/sec
Ethtool(enp175s0) stat: 9707 ( 9,707) <=
tx27_tso_packets /sec
Ethtool(enp175s0) stat: 71 ( 71) <= tx27_xmit_more
/sec
Ethtool(enp175s0) stat: 166973 ( 166,973) <=
tx2_added_vlan_packets /sec
Ethtool(enp175s0) stat: 103659503 ( 103,659,503) <= tx2_bytes /sec
Ethtool(enp175s0) stat: 166941 ( 166,941) <= tx2_cqes /sec
Ethtool(enp175s0) stat: 151389 ( 151,389) <= tx2_csum_none /sec
Ethtool(enp175s0) stat: 15585 ( 15,585) <=
tx2_csum_partial /sec
Ethtool(enp175s0) stat: 2455 ( 2,455) <= tx2_nop /sec
Ethtool(enp175s0) stat: 201854 ( 201,854) <= tx2_packets /sec
Ethtool(enp175s0) stat: 65384298 ( 65,384,298) <= tx2_tso_bytes /sec
Ethtool(enp175s0) stat: 11809 ( 11,809) <=
tx2_tso_packets /sec
Ethtool(enp175s0) stat: 30 ( 30) <= tx2_xmit_more /sec
Ethtool(enp175s0) stat: 172353 ( 172,353) <=
tx3_added_vlan_packets /sec
Ethtool(enp175s0) stat: 88248541 ( 88,248,541) <= tx3_bytes /sec
Ethtool(enp175s0) stat: 172277 ( 172,277) <= tx3_cqes /sec
Ethtool(enp175s0) stat: 160714 ( 160,714) <= tx3_csum_none /sec
Ethtool(enp175s0) stat: 11639 ( 11,639) <=
tx3_csum_partial /sec
Ethtool(enp175s0) stat: 2258 ( 2,258) <= tx3_nop /sec
Ethtool(enp175s0) stat: 199853 ( 199,853) <= tx3_packets /sec
Ethtool(enp175s0) stat: 49776463 ( 49,776,463) <= tx3_tso_bytes /sec
Ethtool(enp175s0) stat: 8353 ( 8,353) <=
tx3_tso_packets /sec
Ethtool(enp175s0) stat: 76 ( 76) <= tx3_xmit_more /sec
Ethtool(enp175s0) stat: 157527 ( 157,527) <=
tx4_added_vlan_packets /sec
Ethtool(enp175s0) stat: 110770979 ( 110,770,979) <= tx4_bytes /sec
Ethtool(enp175s0) stat: 157492 ( 157,492) <= tx4_cqes /sec
Ethtool(enp175s0) stat: 141858 ( 141,858) <= tx4_csum_none /sec
Ethtool(enp175s0) stat: 15670 ( 15,670) <=
tx4_csum_partial /sec
Ethtool(enp175s0) stat: 2320 ( 2,320) <= tx4_nop /sec
Ethtool(enp175s0) stat: 192429 ( 192,429) <= tx4_packets /sec
Ethtool(enp175s0) stat: 64689503 ( 64,689,503) <= tx4_tso_bytes /sec
Ethtool(enp175s0) stat: 11367 ( 11,367) <=
tx4_tso_packets /sec
Ethtool(enp175s0) stat: 35 ( 35) <= tx4_xmit_more /sec
Ethtool(enp175s0) stat: 169077 ( 169,077) <=
tx5_added_vlan_packets /sec
Ethtool(enp175s0) stat: 121536690 ( 121,536,690) <= tx5_bytes /sec
Ethtool(enp175s0) stat: 168906 ( 168,906) <= tx5_cqes /sec
Ethtool(enp175s0) stat: 150099 ( 150,099) <= tx5_csum_none /sec
Ethtool(enp175s0) stat: 18978 ( 18,978) <=
tx5_csum_partial /sec
Ethtool(enp175s0) stat: 2678 ( 2,678) <= tx5_nop /sec
Ethtool(enp175s0) stat: 210733 ( 210,733) <= tx5_packets /sec
Ethtool(enp175s0) stat: 76448346 ( 76,448,346) <= tx5_tso_bytes /sec
Ethtool(enp175s0) stat: 13238 ( 13,238) <=
tx5_tso_packets /sec
Ethtool(enp175s0) stat: 171 ( 171) <= tx5_xmit_more /sec
Ethtool(enp175s0) stat: 156881 ( 156,881) <=
tx6_added_vlan_packets /sec
Ethtool(enp175s0) stat: 85752393 ( 85,752,393) <= tx6_bytes /sec
Ethtool(enp175s0) stat: 156859 ( 156,859) <= tx6_cqes /sec
Ethtool(enp175s0) stat: 144843 ( 144,843) <= tx6_csum_none /sec
Ethtool(enp175s0) stat: 12038 ( 12,038) <=
tx6_csum_partial /sec
Ethtool(enp175s0) stat: 2034 ( 2,034) <= tx6_nop /sec
Ethtool(enp175s0) stat: 181315 ( 181,315) <= tx6_packets /sec
Ethtool(enp175s0) stat: 44625307 ( 44,625,307) <= tx6_tso_bytes /sec
Ethtool(enp175s0) stat: 7752 ( 7,752) <=
tx6_tso_packets /sec
Ethtool(enp175s0) stat: 24 ( 24) <= tx6_xmit_more /sec
Ethtool(enp175s0) stat: 157744 ( 157,744) <=
tx7_added_vlan_packets /sec
Ethtool(enp175s0) stat: 88795890 ( 88,795,890) <= tx7_bytes /sec
Ethtool(enp175s0) stat: 157666 ( 157,666) <= tx7_cqes /sec
Ethtool(enp175s0) stat: 146023 ( 146,023) <= tx7_csum_none /sec
Ethtool(enp175s0) stat: 11720 ( 11,720) <=
tx7_csum_partial /sec
Ethtool(enp175s0) stat: 2190 ( 2,190) <= tx7_nop /sec
Ethtool(enp175s0) stat: 184845 ( 184,845) <= tx7_packets /sec
Ethtool(enp175s0) stat: 48986735 ( 48,986,735) <= tx7_tso_bytes /sec
Ethtool(enp175s0) stat: 8401 ( 8,401) <=
tx7_tso_packets /sec
Ethtool(enp175s0) stat: 78 ( 78) <= tx7_xmit_more /sec
Ethtool(enp175s0) stat: 165190 ( 165,190) <=
tx8_added_vlan_packets /sec
Ethtool(enp175s0) stat: 108303283 ( 108,303,283) <= tx8_bytes /sec
Ethtool(enp175s0) stat: 165145 ( 165,145) <= tx8_cqes /sec
Ethtool(enp175s0) stat: 150047 ( 150,047) <= tx8_csum_none /sec
Ethtool(enp175s0) stat: 15143 ( 15,143) <=
tx8_csum_partial /sec
Ethtool(enp175s0) stat: 2497 ( 2,497) <= tx8_nop /sec
Ethtool(enp175s0) stat: 201615 ( 201,615) <= tx8_packets /sec
Ethtool(enp175s0) stat: 67575199 ( 67,575,199) <= tx8_tso_bytes /sec
Ethtool(enp175s0) stat: 11828 ( 11,828) <=
tx8_tso_packets /sec
Ethtool(enp175s0) stat: 45 ( 45) <= tx8_xmit_more /sec
Ethtool(enp175s0) stat: 168256 ( 168,256) <=
tx9_added_vlan_packets /sec
Ethtool(enp175s0) stat: 115981227 ( 115,981,227) <= tx9_bytes /sec
Ethtool(enp175s0) stat: 168225 ( 168,225) <= tx9_cqes /sec
Ethtool(enp175s0) stat: 147980 ( 147,980) <= tx9_csum_none /sec
Ethtool(enp175s0) stat: 20275 ( 20,275) <=
tx9_csum_partial /sec
Ethtool(enp175s0) stat: 2522 ( 2,522) <= tx9_nop /sec
Ethtool(enp175s0) stat: 203398 ( 203,398) <= tx9_packets /sec
Ethtool(enp175s0) stat: 63891360 ( 63,891,360) <= tx9_tso_bytes /sec
Ethtool(enp175s0) stat: 10610 ( 10,610) <=
tx9_tso_packets /sec
Ethtool(enp175s0) stat: 30 ( 30) <= tx9_xmit_more /sec
Ethtool(enp175s0) stat: 4121491 ( 4,121,491) <=
tx_added_vlan_packets /sec
Ethtool(enp175s0) stat: 1 ( 1) <=
tx_broadcast_phy /sec
Ethtool(enp175s0) stat: 2591851137 ( 2,591,851,137) <= tx_bytes /sec
Ethtool(enp175s0) stat: 2626884154 ( 2,626,884,154) <= tx_bytes_phy /sec
Ethtool(enp175s0) stat: 4118829 ( 4,118,829) <= tx_cqes /sec
Ethtool(enp175s0) stat: 3741682 ( 3,741,682) <= tx_csum_none /sec
Ethtool(enp175s0) stat: 379810 ( 379,810) <=
tx_csum_partial /sec
Ethtool(enp175s0) stat: 58718 ( 58,718) <= tx_nop /sec
Ethtool(enp175s0) stat: 4956995 ( 4,956,995) <= tx_packets /sec
Ethtool(enp175s0) stat: 4956579 ( 4,956,579) <= tx_packets_phy
/sec
Ethtool(enp175s0) stat: 2627152852 ( 2,627,152,852) <= tx_prio0_bytes
/sec
Ethtool(enp175s0) stat: 4957649 ( 4,957,649) <=
tx_prio0_packets /sec
Ethtool(enp175s0) stat: 1518383905 ( 1,518,383,905) <= tx_tso_bytes /sec
Ethtool(enp175s0) stat: 253526 ( 253,526) <= tx_tso_packets
/sec
Ethtool(enp175s0) stat: 57 ( 57) <=
tx_vport_broadcast_bytes /sec
Ethtool(enp175s0) stat: 1 ( 1) <=
tx_vport_broadcast_packets /sec
Ethtool(enp175s0) stat: 2607061269 ( 2,607,061,269) <=
tx_vport_unicast_bytes /sec
Ethtool(enp175s0) stat: 4956492 ( 4,956,492) <=
tx_vport_unicast_packets /sec
Ethtool(enp175s0) stat: 2630 ( 2,630) <= tx_xmit_more /sec
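(The per-second figures above appear to come from a tool in the style of ethtool_stats.pl: sample the monotonically increasing `ethtool -S` counters twice and divide the delta by the interval. A minimal Python sketch of that idea — the parser regex and function names here are illustrative, not the actual tool:)

```python
import re

def parse_ethtool_stats(text):
    """Parse `ethtool -S <iface>` output lines of the form
    '     tx_packets: 12345' into a {counter_name: value} dict.
    Header lines like 'NIC statistics:' are skipped."""
    stats = {}
    for line in text.splitlines():
        m = re.match(r'\s*([\w.\-]+)\s*:\s*(\d+)\s*$', line)
        if m:
            stats[m.group(1)] = int(m.group(2))
    return stats

def per_second_rates(prev, curr, interval_s):
    """Per-second rates: difference of two counter snapshots
    divided by the sampling interval in seconds."""
    return {name: (curr[name] - prev[name]) / interval_s
            for name in curr if name in prev}
```

(In practice you would capture the two snapshots with `subprocess.run(["ethtool", "-S", iface], ...)` roughly one second apart and print the resulting rates.)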
Ethtool(enp216s0) stat: 6923 ( 6,923) <= ch0_arm /sec
Ethtool(enp216s0) stat: 8367 ( 8,367) <= ch0_events /sec
Ethtool(enp216s0) stat: 9797 ( 9,797) <= ch0_poll /sec
Ethtool(enp216s0) stat: 7703 ( 7,703) <= ch10_arm /sec
Ethtool(enp216s0) stat: 9253 ( 9,253) <= ch10_events /sec
Ethtool(enp216s0) stat: 10807 ( 10,807) <= ch10_poll /sec
Ethtool(enp216s0) stat: 7886 ( 7,886) <= ch11_arm /sec
Ethtool(enp216s0) stat: 9391 ( 9,391) <= ch11_events /sec
Ethtool(enp216s0) stat: 10896 ( 10,896) <= ch11_poll /sec
Ethtool(enp216s0) stat: 9793 ( 9,793) <= ch12_arm /sec
Ethtool(enp216s0) stat: 11299 ( 11,299) <= ch12_events /sec
Ethtool(enp216s0) stat: 12637 ( 12,637) <= ch12_poll /sec
Ethtool(enp216s0) stat: 9119 ( 9,119) <= ch13_arm /sec
Ethtool(enp216s0) stat: 10671 ( 10,671) <= ch13_events /sec
Ethtool(enp216s0) stat: 12205 ( 12,205) <= ch13_poll /sec
Ethtool(enp216s0) stat: 8784 ( 8,784) <= ch14_arm /sec
Ethtool(enp216s0) stat: 10189 ( 10,189) <= ch14_events /sec
Ethtool(enp216s0) stat: 11565 ( 11,565) <= ch14_poll /sec
Ethtool(enp216s0) stat: 9248 ( 9,248) <= ch15_arm /sec
Ethtool(enp216s0) stat: 10739 ( 10,739) <= ch15_events /sec
Ethtool(enp216s0) stat: 12238 ( 12,238) <= ch15_poll /sec
Ethtool(enp216s0) stat: 5201 ( 5,201) <= ch16_arm /sec
Ethtool(enp216s0) stat: 6554 ( 6,554) <= ch16_events /sec
Ethtool(enp216s0) stat: 7667 ( 7,667) <= ch16_poll /sec
Ethtool(enp216s0) stat: 7799 ( 7,799) <= ch17_arm /sec
Ethtool(enp216s0) stat: 9120 ( 9,120) <= ch17_events /sec
Ethtool(enp216s0) stat: 10194 ( 10,194) <= ch17_poll /sec
Ethtool(enp216s0) stat: 6889 ( 6,889) <= ch18_arm /sec
Ethtool(enp216s0) stat: 8266 ( 8,266) <= ch18_events /sec
Ethtool(enp216s0) stat: 9332 ( 9,332) <= ch18_poll /sec
Ethtool(enp216s0) stat: 5494 ( 5,494) <= ch19_arm /sec
Ethtool(enp216s0) stat: 6813 ( 6,813) <= ch19_events /sec
Ethtool(enp216s0) stat: 7837 ( 7,837) <= ch19_poll /sec
Ethtool(enp216s0) stat: 6672 ( 6,672) <= ch1_arm /sec
Ethtool(enp216s0) stat: 8124 ( 8,124) <= ch1_events /sec
Ethtool(enp216s0) stat: 9604 ( 9,604) <= ch1_poll /sec
Ethtool(enp216s0) stat: 7705 ( 7,705) <= ch20_arm /sec
Ethtool(enp216s0) stat: 9102 ( 9,102) <= ch20_events /sec
Ethtool(enp216s0) stat: 10287 ( 10,287) <= ch20_poll /sec
Ethtool(enp216s0) stat: 5929 ( 5,929) <= ch21_arm /sec
Ethtool(enp216s0) stat: 7333 ( 7,333) <= ch21_events /sec
Ethtool(enp216s0) stat: 8463 ( 8,463) <= ch21_poll /sec
Ethtool(enp216s0) stat: 5495 ( 5,495) <= ch22_arm /sec
Ethtool(enp216s0) stat: 6813 ( 6,813) <= ch22_events /sec
Ethtool(enp216s0) stat: 7843 ( 7,843) <= ch22_poll /sec
Ethtool(enp216s0) stat: 7091 ( 7,091) <= ch23_arm /sec
Ethtool(enp216s0) stat: 8367 ( 8,367) <= ch23_events /sec
Ethtool(enp216s0) stat: 9344 ( 9,344) <= ch23_poll /sec
Ethtool(enp216s0) stat: 5481 ( 5,481) <= ch24_arm /sec
Ethtool(enp216s0) stat: 6879 ( 6,879) <= ch24_events /sec
Ethtool(enp216s0) stat: 7995 ( 7,995) <= ch24_poll /sec
Ethtool(enp216s0) stat: 5642 ( 5,642) <= ch25_arm /sec
Ethtool(enp216s0) stat: 6959 ( 6,959) <= ch25_events /sec
Ethtool(enp216s0) stat: 7927 ( 7,927) <= ch25_poll /sec
Ethtool(enp216s0) stat: 5289 ( 5,289) <= ch26_arm /sec
Ethtool(enp216s0) stat: 6643 ( 6,643) <= ch26_events /sec
Ethtool(enp216s0) stat: 7691 ( 7,691) <= ch26_poll /sec
Ethtool(enp216s0) stat: 7313 ( 7,313) <= ch27_arm /sec
Ethtool(enp216s0) stat: 8719 ( 8,719) <= ch27_events /sec
Ethtool(enp216s0) stat: 9876 ( 9,876) <= ch27_poll /sec
Ethtool(enp216s0) stat: 7791 ( 7,791) <= ch2_arm /sec
Ethtool(enp216s0) stat: 9328 ( 9,328) <= ch2_events /sec
Ethtool(enp216s0) stat: 10838 ( 10,838) <= ch2_poll /sec
Ethtool(enp216s0) stat: 9595 ( 9,595) <= ch3_arm /sec
Ethtool(enp216s0) stat: 11015 ( 11,015) <= ch3_events /sec
Ethtool(enp216s0) stat: 12402 ( 12,402) <= ch3_poll /sec
Ethtool(enp216s0) stat: 9653 ( 9,653) <= ch4_arm /sec
Ethtool(enp216s0) stat: 11163 ( 11,163) <= ch4_events /sec
Ethtool(enp216s0) stat: 12553 ( 12,553) <= ch4_poll /sec
Ethtool(enp216s0) stat: 10269 ( 10,269) <= ch5_arm /sec
Ethtool(enp216s0) stat: 11727 ( 11,727) <= ch5_events /sec
Ethtool(enp216s0) stat: 13160 ( 13,160) <= ch5_poll /sec
Ethtool(enp216s0) stat: 8806 ( 8,806) <= ch6_arm /sec
Ethtool(enp216s0) stat: 10191 ( 10,191) <= ch6_events /sec
Ethtool(enp216s0) stat: 11431 ( 11,431) <= ch6_poll /sec
Ethtool(enp216s0) stat: 6866 ( 6,866) <= ch7_arm /sec
Ethtool(enp216s0) stat: 8412 ( 8,412) <= ch7_events /sec
Ethtool(enp216s0) stat: 9815 ( 9,815) <= ch7_poll /sec
Ethtool(enp216s0) stat: 10533 ( 10,533) <= ch8_arm /sec
Ethtool(enp216s0) stat: 11945 ( 11,945) <= ch8_events /sec
Ethtool(enp216s0) stat: 13286 ( 13,286) <= ch8_poll /sec
Ethtool(enp216s0) stat: 7126 ( 7,126) <= ch9_arm /sec
Ethtool(enp216s0) stat: 8643 ( 8,643) <= ch9_events /sec
Ethtool(enp216s0) stat: 10210 ( 10,210) <= ch9_poll /sec
Ethtool(enp216s0) stat: 212206 ( 212,206) <= ch_arm /sec
Ethtool(enp216s0) stat: 252118 ( 252,118) <= ch_events /sec
Ethtool(enp216s0) stat: 288009 ( 288,009) <= ch_poll /sec
Ethtool(enp216s0) stat: 93281134 ( 93,281,134) <= rx0_bytes /sec
Ethtool(enp216s0) stat: 94200 ( 94,200) <=
rx0_cache_reuse /sec
Ethtool(enp216s0) stat: 188327 ( 188,327) <=
rx0_csum_complete /sec
Ethtool(enp216s0) stat: 7 ( 7) <= rx0_csum_none /sec
Ethtool(enp216s0) stat: 188334 ( 188,334) <= rx0_packets /sec
Ethtool(enp216s0) stat: 188334 ( 188,334) <=
rx0_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 102443052 ( 102,443,052) <= rx10_bytes /sec
Ethtool(enp216s0) stat: 99816 ( 99,816) <=
rx10_cache_reuse /sec
Ethtool(enp216s0) stat: 199616 ( 199,616) <=
rx10_csum_complete /sec
Ethtool(enp216s0) stat: 199616 ( 199,616) <= rx10_packets /sec
Ethtool(enp216s0) stat: 199616 ( 199,616) <=
rx10_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 97655592 ( 97,655,592) <= rx11_bytes /sec
Ethtool(enp216s0) stat: 100854 ( 100,854) <=
rx11_cache_reuse /sec
Ethtool(enp216s0) stat: 201655 ( 201,655) <=
rx11_csum_complete /sec
Ethtool(enp216s0) stat: 201655 ( 201,655) <= rx11_packets /sec
Ethtool(enp216s0) stat: 201655 ( 201,655) <=
rx11_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 97076715 ( 97,076,715) <= rx12_bytes /sec
Ethtool(enp216s0) stat: 94078 ( 94,078) <=
rx12_cache_reuse /sec
Ethtool(enp216s0) stat: 188037 ( 188,037) <=
rx12_csum_complete /sec
Ethtool(enp216s0) stat: 188037 ( 188,037) <= rx12_packets /sec
Ethtool(enp216s0) stat: 188037 ( 188,037) <=
rx12_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 120547921 ( 120,547,921) <= rx13_bytes /sec
Ethtool(enp216s0) stat: 101709 ( 101,709) <=
rx13_cache_reuse /sec
Ethtool(enp216s0) stat: 203473 ( 203,473) <=
rx13_csum_complete /sec
Ethtool(enp216s0) stat: 203473 ( 203,473) <= rx13_packets /sec
Ethtool(enp216s0) stat: 203473 ( 203,473) <=
rx13_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 98367936 ( 98,367,936) <= rx14_bytes /sec
Ethtool(enp216s0) stat: 97741 ( 97,741) <=
rx14_cache_reuse /sec
Ethtool(enp216s0) stat: 195506 ( 195,506) <=
rx14_csum_complete /sec
Ethtool(enp216s0) stat: 195506 ( 195,506) <= rx14_packets /sec
Ethtool(enp216s0) stat: 195506 ( 195,506) <=
rx14_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 106726542 ( 106,726,542) <= rx15_bytes /sec
Ethtool(enp216s0) stat: 99694 ( 99,694) <=
rx15_cache_reuse /sec
Ethtool(enp216s0) stat: 199395 ( 199,395) <=
rx15_csum_complete /sec
Ethtool(enp216s0) stat: 199395 ( 199,395) <= rx15_packets /sec
Ethtool(enp216s0) stat: 199395 ( 199,395) <=
rx15_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 81928969 ( 81,928,969) <= rx16_bytes /sec
Ethtool(enp216s0) stat: 75580 ( 75,580) <=
rx16_cache_reuse /sec
Ethtool(enp216s0) stat: 151139 ( 151,139) <=
rx16_csum_complete /sec
Ethtool(enp216s0) stat: 151139 ( 151,139) <= rx16_packets /sec
Ethtool(enp216s0) stat: 151139 ( 151,139) <=
rx16_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 90509227 ( 90,509,227) <= rx17_bytes /sec
Ethtool(enp216s0) stat: 81196 ( 81,196) <=
rx17_cache_reuse /sec
Ethtool(enp216s0) stat: 162403 ( 162,403) <=
rx17_csum_complete /sec
Ethtool(enp216s0) stat: 162403 ( 162,403) <= rx17_packets /sec
Ethtool(enp216s0) stat: 162403 ( 162,403) <=
rx17_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 87854385 ( 87,854,385) <= rx18_bytes /sec
Ethtool(enp216s0) stat: 76923 ( 76,923) <=
rx18_cache_reuse /sec
Ethtool(enp216s0) stat: 153866 ( 153,866) <=
rx18_csum_complete /sec
Ethtool(enp216s0) stat: 153866 ( 153,866) <= rx18_packets /sec
Ethtool(enp216s0) stat: 153866 ( 153,866) <=
rx18_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 67849725 ( 67,849,725) <= rx19_bytes /sec
Ethtool(enp216s0) stat: 71001 ( 71,001) <=
rx19_cache_reuse /sec
Ethtool(enp216s0) stat: 142064 ( 142,064) <=
rx19_csum_complete /sec
Ethtool(enp216s0) stat: 142064 ( 142,064) <= rx19_packets /sec
Ethtool(enp216s0) stat: 142064 ( 142,064) <=
rx19_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 92611021 ( 92,611,021) <= rx1_bytes /sec
Ethtool(enp216s0) stat: 92307 ( 92,307) <=
rx1_cache_reuse /sec
Ethtool(enp216s0) stat: 184639 ( 184,639) <=
rx1_csum_complete /sec
Ethtool(enp216s0) stat: 184639 ( 184,639) <= rx1_packets /sec
Ethtool(enp216s0) stat: 184639 ( 184,639) <=
rx1_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 88902617 ( 88,902,617) <= rx20_bytes /sec
Ethtool(enp216s0) stat: 82844 ( 82,844) <=
rx20_cache_reuse /sec
Ethtool(enp216s0) stat: 165764 ( 165,764) <=
rx20_csum_complete /sec
Ethtool(enp216s0) stat: 165764 ( 165,764) <= rx20_packets /sec
Ethtool(enp216s0) stat: 165764 ( 165,764) <=
rx20_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 98096942 ( 98,096,942) <= rx21_bytes /sec
Ethtool(enp216s0) stat: 79975 ( 79,975) <=
rx21_cache_reuse /sec
Ethtool(enp216s0) stat: 159908 ( 159,908) <=
rx21_csum_complete /sec
Ethtool(enp216s0) stat: 159908 ( 159,908) <= rx21_packets /sec
Ethtool(enp216s0) stat: 159908 ( 159,908) <=
rx21_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 71885445 ( 71,885,445) <= rx22_bytes /sec
Ethtool(enp216s0) stat: 73565 ( 73,565) <=
rx22_cache_reuse /sec
Ethtool(enp216s0) stat: 147136 ( 147,136) <=
rx22_csum_complete /sec
Ethtool(enp216s0) stat: 147136 ( 147,136) <= rx22_packets /sec
Ethtool(enp216s0) stat: 147136 ( 147,136) <=
rx22_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 77030721 ( 77,030,721) <= rx23_bytes /sec
Ethtool(enp216s0) stat: 74481 ( 74,481) <=
rx23_cache_reuse /sec
Ethtool(enp216s0) stat: 148989 ( 148,989) <=
rx23_csum_complete /sec
Ethtool(enp216s0) stat: 148989 ( 148,989) <= rx23_packets /sec
Ethtool(enp216s0) stat: 148989 ( 148,989) <=
rx23_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 69603951 ( 69,603,951) <= rx24_bytes /sec
Ethtool(enp216s0) stat: 77472 ( 77,472) <=
rx24_cache_reuse /sec
Ethtool(enp216s0) stat: 154916 ( 154,916) <=
rx24_csum_complete /sec
Ethtool(enp216s0) stat: 154916 ( 154,916) <= rx24_packets /sec
Ethtool(enp216s0) stat: 154916 ( 154,916) <=
rx24_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 62277522 ( 62,277,522) <= rx25_bytes /sec
Ethtool(enp216s0) stat: 69414 ( 69,414) <=
rx25_cache_reuse /sec
Ethtool(enp216s0) stat: 138835 ( 138,835) <=
rx25_csum_complete /sec
Ethtool(enp216s0) stat: 138835 ( 138,835) <= rx25_packets /sec
Ethtool(enp216s0) stat: 138835 ( 138,835) <=
rx25_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 77498258 ( 77,498,258) <= rx26_bytes /sec
Ethtool(enp216s0) stat: 72466 ( 72,466) <=
rx26_cache_reuse /sec
Ethtool(enp216s0) stat: 144925 ( 144,925) <=
rx26_csum_complete /sec
Ethtool(enp216s0) stat: 144925 ( 144,925) <= rx26_packets /sec
Ethtool(enp216s0) stat: 144925 ( 144,925) <=
rx26_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 88962856 ( 88,962,856) <= rx27_bytes /sec
Ethtool(enp216s0) stat: 80952 ( 80,952) <=
rx27_cache_reuse /sec
Ethtool(enp216s0) stat: 161879 ( 161,879) <=
rx27_csum_complete /sec
Ethtool(enp216s0) stat: 161879 ( 161,879) <= rx27_packets /sec
Ethtool(enp216s0) stat: 161879 ( 161,879) <=
rx27_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 103624977 ( 103,624,977) <= rx2_bytes /sec
Ethtool(enp216s0) stat: 101098 ( 101,098) <=
rx2_cache_reuse /sec
Ethtool(enp216s0) stat: 202130 ( 202,130) <=
rx2_csum_complete /sec
Ethtool(enp216s0) stat: 202130 ( 202,130) <= rx2_packets /sec
Ethtool(enp216s0) stat: 202130 ( 202,130) <=
rx2_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 87213368 ( 87,213,368) <= rx3_bytes /sec
Ethtool(enp216s0) stat: 99877 ( 99,877) <=
rx3_cache_reuse /sec
Ethtool(enp216s0) stat: 199778 ( 199,778) <=
rx3_csum_complete /sec
Ethtool(enp216s0) stat: 199778 ( 199,778) <= rx3_packets /sec
Ethtool(enp216s0) stat: 199778 ( 199,778) <=
rx3_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 110458845 ( 110,458,845) <= rx4_bytes /sec
Ethtool(enp216s0) stat: 692 ( 692) <= rx4_cache_busy
/sec
Ethtool(enp216s0) stat: 692 ( 692) <= rx4_cache_full
/sec
Ethtool(enp216s0) stat: 95523 ( 95,523) <=
rx4_cache_reuse /sec
Ethtool(enp216s0) stat: 192402 ( 192,402) <=
rx4_csum_complete /sec
Ethtool(enp216s0) stat: 192402 ( 192,402) <= rx4_packets /sec
Ethtool(enp216s0) stat: 192402 ( 192,402) <=
rx4_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 121222062 ( 121,222,062) <= rx5_bytes /sec
Ethtool(enp216s0) stat: 105616 ( 105,616) <=
rx5_cache_reuse /sec
Ethtool(enp216s0) stat: 211273 ( 211,273) <=
rx5_csum_complete /sec
Ethtool(enp216s0) stat: 211273 ( 211,273) <= rx5_packets /sec
Ethtool(enp216s0) stat: 211273 ( 211,273) <=
rx5_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 85541470 ( 85,541,470) <= rx6_bytes /sec
Ethtool(enp216s0) stat: 91147 ( 91,147) <=
rx6_cache_reuse /sec
Ethtool(enp216s0) stat: 182257 ( 182,257) <=
rx6_csum_complete /sec
Ethtool(enp216s0) stat: 182257 ( 182,257) <= rx6_packets /sec
Ethtool(enp216s0) stat: 182257 ( 182,257) <=
rx6_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 88443649 ( 88,443,649) <= rx7_bytes /sec
Ethtool(enp216s0) stat: 92368 ( 92,368) <=
rx7_cache_reuse /sec
Ethtool(enp216s0) stat: 184828 ( 184,828) <=
rx7_csum_complete /sec
Ethtool(enp216s0) stat: 184828 ( 184,828) <= rx7_packets /sec
Ethtool(enp216s0) stat: 184828 ( 184,828) <=
rx7_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 108419072 ( 108,419,072) <= rx8_bytes /sec
Ethtool(enp216s0) stat: 101098 ( 101,098) <=
rx8_cache_reuse /sec
Ethtool(enp216s0) stat: 202241 ( 202,241) <=
rx8_csum_complete /sec
Ethtool(enp216s0) stat: 202241 ( 202,241) <= rx8_packets /sec
Ethtool(enp216s0) stat: 202241 ( 202,241) <=
rx8_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 116210958 ( 116,210,958) <= rx9_bytes /sec
Ethtool(enp216s0) stat: 102014 ( 102,014) <=
rx9_cache_reuse /sec
Ethtool(enp216s0) stat: 204092 ( 204,092) <=
rx9_csum_complete /sec
Ethtool(enp216s0) stat: 204092 ( 204,092) <= rx9_packets /sec
Ethtool(enp216s0) stat: 204092 ( 204,092) <=
rx9_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 1065697 ( 1,065,697) <=
rx_1024_to_1518_bytes_phy /sec
Ethtool(enp216s0) stat: 141705 ( 141,705) <=
rx_128_to_255_bytes_phy /sec
Ethtool(enp216s0) stat: 512465 ( 512,465) <=
rx_1519_to_2047_bytes_phy /sec
Ethtool(enp216s0) stat: 49523 ( 49,523) <=
rx_256_to_511_bytes_phy /sec
Ethtool(enp216s0) stat: 56321 ( 56,321) <=
rx_512_to_1023_bytes_phy /sec
Ethtool(enp216s0) stat: 957286 ( 957,286) <=
rx_64_bytes_phy /sec
Ethtool(enp216s0) stat: 2193644 ( 2,193,644) <=
rx_65_to_127_bytes_phy /sec
Ethtool(enp216s0) stat: 1 ( 1) <=
rx_broadcast_phy /sec
Ethtool(enp216s0) stat: 2592286809 ( 2,592,286,809) <= rx_bytes /sec
Ethtool(enp216s0) stat: 2633575771 ( 2,633,575,771) <= rx_bytes_phy /sec
Ethtool(enp216s0) stat: 692 ( 692) <= rx_cache_busy /sec
Ethtool(enp216s0) stat: 692 ( 692) <= rx_cache_full /sec
Ethtool(enp216s0) stat: 2484928 ( 2,484,928) <= rx_cache_reuse
/sec
Ethtool(enp216s0) stat: 4971670 ( 4,971,670) <=
rx_csum_complete /sec
Ethtool(enp216s0) stat: 7 ( 7) <= rx_csum_none /sec
Ethtool(enp216s0) stat: 464 ( 464) <=
rx_discards_phy /sec
Ethtool(enp216s0) stat: 1376 ( 1,376) <=
rx_multicast_phy /sec
Ethtool(enp216s0) stat: 4971677 ( 4,971,677) <= rx_packets /sec
Ethtool(enp216s0) stat: 4975563 ( 4,975,563) <= rx_packets_phy
/sec
Ethtool(enp216s0) stat: 2634727119 ( 2,634,727,119) <= rx_prio0_bytes
/sec
Ethtool(enp216s0) stat: 4975012 ( 4,975,012) <=
rx_prio0_packets /sec
Ethtool(enp216s0) stat: 61 ( 61) <= rx_prio1_bytes
/sec
Ethtool(enp216s0) stat: 1 ( 1) <=
rx_prio1_packets /sec
Ethtool(enp216s0) stat: 2003 ( 2,003) <= rx_prio3_bytes
/sec
Ethtool(enp216s0) stat: 7 ( 7) <=
rx_prio3_packets /sec
Ethtool(enp216s0) stat: 2497 ( 2,497) <= rx_prio4_bytes
/sec
Ethtool(enp216s0) stat: 3 ( 3) <=
rx_prio4_packets /sec
Ethtool(enp216s0) stat: 10697 ( 10,697) <= rx_prio5_bytes
/sec
Ethtool(enp216s0) stat: 163 ( 163) <=
rx_prio5_packets /sec
Ethtool(enp216s0) stat: 3519 ( 3,519) <= rx_prio6_bytes
/sec
Ethtool(enp216s0) stat: 26 ( 26) <=
rx_prio6_packets /sec
Ethtool(enp216s0) stat: 95583 ( 95,583) <= rx_prio7_bytes
/sec
Ethtool(enp216s0) stat: 1365 ( 1,365) <=
rx_prio7_packets /sec
Ethtool(enp216s0) stat: 4971677 ( 4,971,677) <=
rx_removed_vlan_packets /sec
Ethtool(enp216s0) stat: 1377 ( 1,377) <=
rx_steer_missed_packets /sec
Ethtool(enp216s0) stat: 150 ( 150) <=
rx_vport_broadcast_bytes /sec
Ethtool(enp216s0) stat: 1 ( 1) <=
rx_vport_broadcast_packets /sec
Ethtool(enp216s0) stat: 90890 ( 90,890) <=
rx_vport_multicast_bytes /sec
Ethtool(enp216s0) stat: 1376 ( 1,376) <=
rx_vport_multicast_packets /sec
Ethtool(enp216s0) stat: 2612989221 ( 2,612,989,221) <=
rx_vport_unicast_bytes /sec
Ethtool(enp216s0) stat: 4973191 ( 4,973,191) <=
rx_vport_unicast_packets /sec
Ethtool(enp216s0) stat: 82580 ( 82,580) <=
tx0_added_vlan_packets /sec
Ethtool(enp216s0) stat: 186927644 ( 186,927,644) <= tx0_bytes /sec
Ethtool(enp216s0) stat: 81347 ( 81,347) <= tx0_cqes /sec
Ethtool(enp216s0) stat: 49836 ( 49,836) <= tx0_csum_none /sec
Ethtool(enp216s0) stat: 32745 ( 32,745) <=
tx0_csum_partial /sec
Ethtool(enp216s0) stat: 2093 ( 2,093) <= tx0_nop /sec
Ethtool(enp216s0) stat: 146450 ( 146,450) <= tx0_packets /sec
Ethtool(enp216s0) stat: 131547357 ( 131,547,357) <= tx0_tso_bytes /sec
Ethtool(enp216s0) stat: 28323 ( 28,323) <=
tx0_tso_packets /sec
Ethtool(enp216s0) stat: 1226 ( 1,226) <= tx0_xmit_more /sec
Ethtool(enp216s0) stat: 85851 ( 85,851) <=
tx10_added_vlan_packets /sec
Ethtool(enp216s0) stat: 198440153 ( 198,440,153) <= tx10_bytes /sec
Ethtool(enp216s0) stat: 84107 ( 84,107) <= tx10_cqes /sec
Ethtool(enp216s0) stat: 51644 ( 51,644) <= tx10_csum_none
/sec
Ethtool(enp216s0) stat: 34207 ( 34,207) <=
tx10_csum_partial /sec
Ethtool(enp216s0) stat: 2256 ( 2,256) <= tx10_nop /sec
Ethtool(enp216s0) stat: 151642 ( 151,642) <= tx10_packets /sec
Ethtool(enp216s0) stat: 135145744 ( 135,145,744) <= tx10_tso_bytes
/sec
Ethtool(enp216s0) stat: 29140 ( 29,140) <=
tx10_tso_packets /sec
Ethtool(enp216s0) stat: 1739 ( 1,739) <= tx10_xmit_more
/sec
Ethtool(enp216s0) stat: 85526 ( 85,526) <=
tx11_added_vlan_packets /sec
Ethtool(enp216s0) stat: 199463985 ( 199,463,985) <= tx11_bytes /sec
Ethtool(enp216s0) stat: 83793 ( 83,793) <= tx11_cqes /sec
Ethtool(enp216s0) stat: 50188 ( 50,188) <= tx11_csum_none
/sec
Ethtool(enp216s0) stat: 35338 ( 35,338) <=
tx11_csum_partial /sec
Ethtool(enp216s0) stat: 2227 ( 2,227) <= tx11_nop /sec
Ethtool(enp216s0) stat: 152815 ( 152,815) <= tx11_packets /sec
Ethtool(enp216s0) stat: 138134263 ( 138,134,263) <= tx11_tso_bytes
/sec
Ethtool(enp216s0) stat: 29948 ( 29,948) <=
tx11_tso_packets /sec
Ethtool(enp216s0) stat: 1733 ( 1,733) <= tx11_xmit_more
/sec
Ethtool(enp216s0) stat: 77464 ( 77,464) <=
tx12_added_vlan_packets /sec
Ethtool(enp216s0) stat: 185723223 ( 185,723,223) <= tx12_bytes /sec
Ethtool(enp216s0) stat: 75907 ( 75,907) <= tx12_cqes /sec
Ethtool(enp216s0) stat: 43727 ( 43,727) <= tx12_csum_none
/sec
Ethtool(enp216s0) stat: 33738 ( 33,738) <=
tx12_csum_partial /sec
Ethtool(enp216s0) stat: 2060 ( 2,060) <= tx12_nop /sec
Ethtool(enp216s0) stat: 141903 ( 141,903) <= tx12_packets /sec
Ethtool(enp216s0) stat: 133227574 ( 133,227,574) <= tx12_tso_bytes
/sec
Ethtool(enp216s0) stat: 29139 ( 29,139) <=
tx12_tso_packets /sec
Ethtool(enp216s0) stat: 1559 ( 1,559) <= tx12_xmit_more
/sec
Ethtool(enp216s0) stat: 79682 ( 79,682) <=
tx13_added_vlan_packets /sec
Ethtool(enp216s0) stat: 189943899 ( 189,943,899) <= tx13_bytes /sec
Ethtool(enp216s0) stat: 78110 ( 78,110) <= tx13_cqes /sec
Ethtool(enp216s0) stat: 46294 ( 46,294) <= tx13_csum_none
/sec
Ethtool(enp216s0) stat: 33388 ( 33,388) <=
tx13_csum_partial /sec
Ethtool(enp216s0) stat: 2109 ( 2,109) <= tx13_nop /sec
Ethtool(enp216s0) stat: 145877 ( 145,877) <= tx13_packets /sec
Ethtool(enp216s0) stat: 136076991 ( 136,076,991) <= tx13_tso_bytes
/sec
Ethtool(enp216s0) stat: 29367 ( 29,367) <=
tx13_tso_packets /sec
Ethtool(enp216s0) stat: 1572 ( 1,572) <= tx13_xmit_more
/sec
Ethtool(enp216s0) stat: 86096 ( 86,096) <=
tx14_added_vlan_packets /sec
Ethtool(enp216s0) stat: 199318635 ( 199,318,635) <= tx14_bytes /sec
Ethtool(enp216s0) stat: 84335 ( 84,335) <= tx14_cqes /sec
Ethtool(enp216s0) stat: 50934 ( 50,934) <= tx14_csum_none
/sec
Ethtool(enp216s0) stat: 35163 ( 35,163) <=
tx14_csum_partial /sec
Ethtool(enp216s0) stat: 2263 ( 2,263) <= tx14_nop /sec
Ethtool(enp216s0) stat: 156445 ( 156,445) <= tx14_packets /sec
Ethtool(enp216s0) stat: 142293555 ( 142,293,555) <= tx14_tso_bytes
/sec
Ethtool(enp216s0) stat: 29703 ( 29,703) <=
tx14_tso_packets /sec
Ethtool(enp216s0) stat: 1761 ( 1,761) <= tx14_xmit_more
/sec
Ethtool(enp216s0) stat: 79698 ( 79,698) <=
tx15_added_vlan_packets /sec
Ethtool(enp216s0) stat: 189620424 ( 189,620,424) <= tx15_bytes /sec
Ethtool(enp216s0) stat: 78356 ( 78,356) <= tx15_cqes /sec
Ethtool(enp216s0) stat: 45528 ( 45,528) <= tx15_csum_none
/sec
Ethtool(enp216s0) stat: 34170 ( 34,170) <=
tx15_csum_partial /sec
Ethtool(enp216s0) stat: 2156 ( 2,156) <= tx15_nop /sec
Ethtool(enp216s0) stat: 144935 ( 144,935) <= tx15_packets /sec
Ethtool(enp216s0) stat: 135023821 ( 135,023,821) <= tx15_tso_bytes
/sec
Ethtool(enp216s0) stat: 29400 ( 29,400) <=
tx15_tso_packets /sec
Ethtool(enp216s0) stat: 1344 ( 1,344) <= tx15_xmit_more
/sec
Ethtool(enp216s0) stat: 59598 ( 59,598) <=
tx16_added_vlan_packets /sec
Ethtool(enp216s0) stat: 143187495 ( 143,187,495) <= tx16_bytes /sec
Ethtool(enp216s0) stat: 58408 ( 58,408) <= tx16_cqes /sec
Ethtool(enp216s0) stat: 35002 ( 35,002) <= tx16_csum_none
/sec
Ethtool(enp216s0) stat: 24595 ( 24,595) <=
tx16_csum_partial /sec
Ethtool(enp216s0) stat: 1585 ( 1,585) <= tx16_nop /sec
Ethtool(enp216s0) stat: 110250 ( 110,250) <= tx16_packets /sec
Ethtool(enp216s0) stat: 101339613 ( 101,339,613) <= tx16_tso_bytes
/sec
Ethtool(enp216s0) stat: 20698 ( 20,698) <=
tx16_tso_packets /sec
Ethtool(enp216s0) stat: 1179 ( 1,179) <= tx16_xmit_more
/sec
Ethtool(enp216s0) stat: 69504 ( 69,504) <=
tx17_added_vlan_packets /sec
Ethtool(enp216s0) stat: 171534675 ( 171,534,675) <= tx17_bytes /sec
Ethtool(enp216s0) stat: 68155 ( 68,155) <= tx17_cqes /sec
Ethtool(enp216s0) stat: 39445 ( 39,445) <= tx17_csum_none
/sec
Ethtool(enp216s0) stat: 30059 ( 30,059) <=
tx17_csum_partial /sec
Ethtool(enp216s0) stat: 1886 ( 1,886) <= tx17_nop /sec
Ethtool(enp216s0) stat: 130910 ( 130,910) <= tx17_packets /sec
Ethtool(enp216s0) stat: 123978012 ( 123,978,012) <= tx17_tso_bytes
/sec
Ethtool(enp216s0) stat: 26215 ( 26,215) <=
tx17_tso_packets /sec
Ethtool(enp216s0) stat: 1349 ( 1,349) <= tx17_xmit_more
/sec
Ethtool(enp216s0) stat: 58880 ( 58,880) <=
tx18_added_vlan_packets /sec
Ethtool(enp216s0) stat: 141863299 ( 141,863,299) <= tx18_bytes /sec
Ethtool(enp216s0) stat: 57755 ( 57,755) <= tx18_cqes /sec
Ethtool(enp216s0) stat: 33875 ( 33,875) <= tx18_csum_none
/sec
Ethtool(enp216s0) stat: 25005 ( 25,005) <=
tx18_csum_partial /sec
Ethtool(enp216s0) stat: 1592 ( 1,592) <= tx18_nop /sec
Ethtool(enp216s0) stat: 110248 ( 110,248) <= tx18_packets /sec
Ethtool(enp216s0) stat: 103105544 ( 103,105,544) <= tx18_tso_bytes
/sec
Ethtool(enp216s0) stat: 21243 ( 21,243) <=
tx18_tso_packets /sec
Ethtool(enp216s0) stat: 1108 ( 1,108) <= tx18_xmit_more
/sec
Ethtool(enp216s0) stat: 69804 ( 69,804) <=
tx19_added_vlan_packets /sec
Ethtool(enp216s0) stat: 164225730 ( 164,225,730) <= tx19_bytes /sec
Ethtool(enp216s0) stat: 68756 ( 68,756) <= tx19_cqes /sec
Ethtool(enp216s0) stat: 42003 ( 42,003) <= tx19_csum_none /sec
Ethtool(enp216s0) stat: 27801 ( 27,801) <= tx19_csum_partial /sec
Ethtool(enp216s0) stat: 1790 ( 1,790) <= tx19_nop /sec
Ethtool(enp216s0) stat: 124794 ( 124,794) <= tx19_packets /sec
Ethtool(enp216s0) stat: 110814620 ( 110,814,620) <= tx19_tso_bytes /sec
Ethtool(enp216s0) stat: 23278 ( 23,278) <= tx19_tso_packets /sec
Ethtool(enp216s0) stat: 1045 ( 1,045) <= tx19_xmit_more /sec
Ethtool(enp216s0) stat: 79346 ( 79,346) <= tx1_added_vlan_packets /sec
Ethtool(enp216s0) stat: 181179251 ( 181,179,251) <= tx1_bytes /sec
Ethtool(enp216s0) stat: 78062 ( 78,062) <= tx1_cqes /sec
Ethtool(enp216s0) stat: 46525 ( 46,525) <= tx1_csum_none /sec
Ethtool(enp216s0) stat: 32821 ( 32,821) <= tx1_csum_partial /sec
Ethtool(enp216s0) stat: 1996 ( 1,996) <= tx1_nop /sec
Ethtool(enp216s0) stat: 142716 ( 142,716) <= tx1_packets /sec
Ethtool(enp216s0) stat: 129507562 ( 129,507,562) <= tx1_tso_bytes /sec
Ethtool(enp216s0) stat: 27579 ( 27,579) <= tx1_tso_packets /sec
Ethtool(enp216s0) stat: 1281 ( 1,281) <= tx1_xmit_more /sec
Ethtool(enp216s0) stat: 66641 ( 66,641) <= tx20_added_vlan_packets /sec
Ethtool(enp216s0) stat: 161452661 ( 161,452,661) <= tx20_bytes /sec
Ethtool(enp216s0) stat: 65374 ( 65,374) <= tx20_cqes /sec
Ethtool(enp216s0) stat: 37657 ( 37,657) <= tx20_csum_none /sec
Ethtool(enp216s0) stat: 28983 ( 28,983) <= tx20_csum_partial /sec
Ethtool(enp216s0) stat: 1739 ( 1,739) <= tx20_nop /sec
Ethtool(enp216s0) stat: 122824 ( 122,824) <= tx20_packets /sec
Ethtool(enp216s0) stat: 115707977 ( 115,707,977) <= tx20_tso_bytes /sec
Ethtool(enp216s0) stat: 24823 ( 24,823) <= tx20_tso_packets /sec
Ethtool(enp216s0) stat: 1263 ( 1,263) <= tx20_xmit_more /sec
Ethtool(enp216s0) stat: 60564 ( 60,564) <= tx21_added_vlan_packets /sec
Ethtool(enp216s0) stat: 138260273 ( 138,260,273) <= tx21_bytes /sec
Ethtool(enp216s0) stat: 59611 ( 59,611) <= tx21_cqes /sec
Ethtool(enp216s0) stat: 36105 ( 36,105) <= tx21_csum_none /sec
Ethtool(enp216s0) stat: 24459 ( 24,459) <= tx21_csum_partial /sec
Ethtool(enp216s0) stat: 1490 ( 1,490) <= tx21_nop /sec
Ethtool(enp216s0) stat: 106913 ( 106,913) <= tx21_packets /sec
Ethtool(enp216s0) stat: 97216016 ( 97,216,016) <= tx21_tso_bytes /sec
Ethtool(enp216s0) stat: 21944 ( 21,944) <= tx21_tso_packets /sec
Ethtool(enp216s0) stat: 930 ( 930) <= tx21_xmit_more /sec
Ethtool(enp216s0) stat: 66527 ( 66,527) <= tx22_added_vlan_packets /sec
Ethtool(enp216s0) stat: 149686537 ( 149,686,537) <= tx22_bytes /sec
Ethtool(enp216s0) stat: 65262 ( 65,262) <= tx22_cqes /sec
Ethtool(enp216s0) stat: 40609 ( 40,609) <= tx22_csum_none /sec
Ethtool(enp216s0) stat: 25918 ( 25,918) <= tx22_csum_partial /sec
Ethtool(enp216s0) stat: 1676 ( 1,676) <= tx22_nop /sec
Ethtool(enp216s0) stat: 118071 ( 118,071) <= tx22_packets /sec
Ethtool(enp216s0) stat: 104822520 ( 104,822,520) <= tx22_tso_bytes /sec
Ethtool(enp216s0) stat: 22046 ( 22,046) <= tx22_tso_packets /sec
Ethtool(enp216s0) stat: 1265 ( 1,265) <= tx22_xmit_more /sec
Ethtool(enp216s0) stat: 59973 ( 59,973) <= tx23_added_vlan_packets /sec
Ethtool(enp216s0) stat: 147788578 ( 147,788,578) <= tx23_bytes /sec
Ethtool(enp216s0) stat: 58827 ( 58,827) <= tx23_cqes /sec
Ethtool(enp216s0) stat: 33197 ( 33,197) <= tx23_csum_none /sec
Ethtool(enp216s0) stat: 26776 ( 26,776) <= tx23_csum_partial /sec
Ethtool(enp216s0) stat: 1669 ( 1,669) <= tx23_nop /sec
Ethtool(enp216s0) stat: 112053 ( 112,053) <= tx23_packets /sec
Ethtool(enp216s0) stat: 106098597 ( 106,098,597) <= tx23_tso_bytes /sec
Ethtool(enp216s0) stat: 22433 ( 22,433) <= tx23_tso_packets /sec
Ethtool(enp216s0) stat: 1126 ( 1,126) <= tx23_xmit_more /sec
Ethtool(enp216s0) stat: 58819 ( 58,819) <= tx24_added_vlan_packets /sec
Ethtool(enp216s0) stat: 146231570 ( 146,231,570) <= tx24_bytes /sec
Ethtool(enp216s0) stat: 57661 ( 57,661) <= tx24_cqes /sec
Ethtool(enp216s0) stat: 32150 ( 32,150) <= tx24_csum_none /sec
Ethtool(enp216s0) stat: 26669 ( 26,669) <= tx24_csum_partial /sec
Ethtool(enp216s0) stat: 1578 ( 1,578) <= tx24_nop /sec
Ethtool(enp216s0) stat: 111230 ( 111,230) <= tx24_packets /sec
Ethtool(enp216s0) stat: 106359530 ( 106,359,530) <= tx24_tso_bytes /sec
Ethtool(enp216s0) stat: 22402 ( 22,402) <= tx24_tso_packets /sec
Ethtool(enp216s0) stat: 1158 ( 1,158) <= tx24_xmit_more /sec
Ethtool(enp216s0) stat: 64116 ( 64,116) <= tx25_added_vlan_packets /sec
Ethtool(enp216s0) stat: 156132090 ( 156,132,090) <= tx25_bytes /sec
Ethtool(enp216s0) stat: 62901 ( 62,901) <= tx25_cqes /sec
Ethtool(enp216s0) stat: 36357 ( 36,357) <= tx25_csum_none /sec
Ethtool(enp216s0) stat: 27759 ( 27,759) <= tx25_csum_partial /sec
Ethtool(enp216s0) stat: 1717 ( 1,717) <= tx25_nop /sec
Ethtool(enp216s0) stat: 118618 ( 118,618) <= tx25_packets /sec
Ethtool(enp216s0) stat: 110752893 ( 110,752,893) <= tx25_tso_bytes /sec
Ethtool(enp216s0) stat: 23282 ( 23,282) <= tx25_tso_packets /sec
Ethtool(enp216s0) stat: 1212 ( 1,212) <= tx25_xmit_more /sec
Ethtool(enp216s0) stat: 62028 ( 62,028) <= tx26_added_vlan_packets /sec
Ethtool(enp216s0) stat: 144966495 ( 144,966,495) <= tx26_bytes /sec
Ethtool(enp216s0) stat: 60906 ( 60,906) <= tx26_cqes /sec
Ethtool(enp216s0) stat: 36345 ( 36,345) <= tx26_csum_none /sec
Ethtool(enp216s0) stat: 25684 ( 25,684) <= tx26_csum_partial /sec
Ethtool(enp216s0) stat: 1615 ( 1,615) <= tx26_nop /sec
Ethtool(enp216s0) stat: 112487 ( 112,487) <= tx26_packets /sec
Ethtool(enp216s0) stat: 102083200 ( 102,083,200) <= tx26_tso_bytes /sec
Ethtool(enp216s0) stat: 21598 ( 21,598) <= tx26_tso_packets /sec
Ethtool(enp216s0) stat: 1123 ( 1,123) <= tx26_xmit_more /sec
Ethtool(enp216s0) stat: 64029 ( 64,029) <= tx27_added_vlan_packets /sec
Ethtool(enp216s0) stat: 154747343 ( 154,747,343) <= tx27_bytes /sec
Ethtool(enp216s0) stat: 62881 ( 62,881) <= tx27_cqes /sec
Ethtool(enp216s0) stat: 35343 ( 35,343) <= tx27_csum_none /sec
Ethtool(enp216s0) stat: 28686 ( 28,686) <= tx27_csum_partial /sec
Ethtool(enp216s0) stat: 1707 ( 1,707) <= tx27_nop /sec
Ethtool(enp216s0) stat: 118074 ( 118,074) <= tx27_packets /sec
Ethtool(enp216s0) stat: 110960676 ( 110,960,676) <= tx27_tso_bytes /sec
Ethtool(enp216s0) stat: 24139 ( 24,139) <= tx27_tso_packets /sec
Ethtool(enp216s0) stat: 1148 ( 1,148) <= tx27_xmit_more /sec
Ethtool(enp216s0) stat: 82542 ( 82,542) <= tx2_added_vlan_packets /sec
Ethtool(enp216s0) stat: 183007051 ( 183,007,051) <= tx2_bytes /sec
Ethtool(enp216s0) stat: 81320 ( 81,320) <= tx2_cqes /sec
Ethtool(enp216s0) stat: 49989 ( 49,989) <= tx2_csum_none /sec
Ethtool(enp216s0) stat: 32553 ( 32,553) <= tx2_csum_partial /sec
Ethtool(enp216s0) stat: 2142 ( 2,142) <= tx2_nop /sec
Ethtool(enp216s0) stat: 143613 ( 143,613) <= tx2_packets /sec
Ethtool(enp216s0) stat: 126895404 ( 126,895,404) <= tx2_tso_bytes /sec
Ethtool(enp216s0) stat: 28173 ( 28,173) <= tx2_tso_packets /sec
Ethtool(enp216s0) stat: 1220 ( 1,220) <= tx2_xmit_more /sec
Ethtool(enp216s0) stat: 78737 ( 78,737) <= tx3_added_vlan_packets /sec
Ethtool(enp216s0) stat: 188546556 ( 188,546,556) <= tx3_bytes /sec
Ethtool(enp216s0) stat: 77389 ( 77,389) <= tx3_cqes /sec
Ethtool(enp216s0) stat: 46147 ( 46,147) <= tx3_csum_none /sec
Ethtool(enp216s0) stat: 32590 ( 32,590) <= tx3_csum_partial /sec
Ethtool(enp216s0) stat: 2171 ( 2,171) <= tx3_nop /sec
Ethtool(enp216s0) stat: 144570 ( 144,570) <= tx3_packets /sec
Ethtool(enp216s0) stat: 134231500 ( 134,231,500) <= tx3_tso_bytes /sec
Ethtool(enp216s0) stat: 28857 ( 28,857) <= tx3_tso_packets /sec
Ethtool(enp216s0) stat: 1348 ( 1,348) <= tx3_xmit_more /sec
Ethtool(enp216s0) stat: 76284 ( 76,284) <= tx4_added_vlan_packets /sec
Ethtool(enp216s0) stat: 185633403 ( 185,633,403) <= tx4_bytes /sec
Ethtool(enp216s0) stat: 74710 ( 74,710) <= tx4_cqes /sec
Ethtool(enp216s0) stat: 42803 ( 42,803) <= tx4_csum_none /sec
Ethtool(enp216s0) stat: 33482 ( 33,482) <= tx4_csum_partial /sec
Ethtool(enp216s0) stat: 2005 ( 2,005) <= tx4_nop /sec
Ethtool(enp216s0) stat: 139978 ( 139,978) <= tx4_packets /sec
Ethtool(enp216s0) stat: 131762891 ( 131,762,891) <= tx4_tso_bytes /sec
Ethtool(enp216s0) stat: 28843 ( 28,843) <= tx4_tso_packets /sec
Ethtool(enp216s0) stat: 1563 ( 1,563) <= tx4_xmit_more /sec
Ethtool(enp216s0) stat: 83878 ( 83,878) <= tx5_added_vlan_packets /sec
Ethtool(enp216s0) stat: 197920426 ( 197,920,426) <= tx5_bytes /sec
Ethtool(enp216s0) stat: 82146 ( 82,146) <= tx5_cqes /sec
Ethtool(enp216s0) stat: 48122 ( 48,122) <= tx5_csum_none /sec
Ethtool(enp216s0) stat: 35755 ( 35,755) <= tx5_csum_partial /sec
Ethtool(enp216s0) stat: 2227 ( 2,227) <= tx5_nop /sec
Ethtool(enp216s0) stat: 151214 ( 151,214) <= tx5_packets /sec
Ethtool(enp216s0) stat: 139113072 ( 139,113,072) <= tx5_tso_bytes /sec
Ethtool(enp216s0) stat: 30390 ( 30,390) <= tx5_tso_packets /sec
Ethtool(enp216s0) stat: 1730 ( 1,730) <= tx5_xmit_more /sec
Ethtool(enp216s0) stat: 76307 ( 76,307) <= tx6_added_vlan_packets /sec
Ethtool(enp216s0) stat: 182153538 ( 182,153,538) <= tx6_bytes /sec
Ethtool(enp216s0) stat: 74744 ( 74,744) <= tx6_cqes /sec
Ethtool(enp216s0) stat: 44381 ( 44,381) <= tx6_csum_none /sec
Ethtool(enp216s0) stat: 31926 ( 31,926) <= tx6_csum_partial /sec
Ethtool(enp216s0) stat: 2068 ( 2,068) <= tx6_nop /sec
Ethtool(enp216s0) stat: 139745 ( 139,745) <= tx6_packets /sec
Ethtool(enp216s0) stat: 129175625 ( 129,175,625) <= tx6_tso_bytes /sec
Ethtool(enp216s0) stat: 27349 ( 27,349) <= tx6_tso_packets /sec
Ethtool(enp216s0) stat: 1562 ( 1,562) <= tx6_xmit_more /sec
Ethtool(enp216s0) stat: 72151 ( 72,151) <= tx7_added_vlan_packets /sec
Ethtool(enp216s0) stat: 175340247 ( 175,340,247) <= tx7_bytes /sec
Ethtool(enp216s0) stat: 70565 ( 70,565) <= tx7_cqes /sec
Ethtool(enp216s0) stat: 40824 ( 40,824) <= tx7_csum_none /sec
Ethtool(enp216s0) stat: 31327 ( 31,327) <= tx7_csum_partial /sec
Ethtool(enp216s0) stat: 1962 ( 1,962) <= tx7_nop /sec
Ethtool(enp216s0) stat: 133790 ( 133,790) <= tx7_packets /sec
Ethtool(enp216s0) stat: 124931972 ( 124,931,972) <= tx7_tso_bytes /sec
Ethtool(enp216s0) stat: 26314 ( 26,314) <= tx7_tso_packets /sec
Ethtool(enp216s0) stat: 1586 ( 1,586) <= tx7_xmit_more /sec
Ethtool(enp216s0) stat: 83983 ( 83,983) <= tx8_added_vlan_packets /sec
Ethtool(enp216s0) stat: 203970262 ( 203,970,262) <= tx8_bytes /sec
Ethtool(enp216s0) stat: 82475 ( 82,475) <= tx8_cqes /sec
Ethtool(enp216s0) stat: 47937 ( 47,937) <= tx8_csum_none /sec
Ethtool(enp216s0) stat: 36046 ( 36,046) <= tx8_csum_partial /sec
Ethtool(enp216s0) stat: 2203 ( 2,203) <= tx8_nop /sec
Ethtool(enp216s0) stat: 153743 ( 153,743) <= tx8_packets /sec
Ethtool(enp216s0) stat: 143296525 ( 143,296,525) <= tx8_tso_bytes /sec
Ethtool(enp216s0) stat: 30936 ( 30,936) <= tx8_tso_packets /sec
Ethtool(enp216s0) stat: 1500 ( 1,500) <= tx8_xmit_more /sec
Ethtool(enp216s0) stat: 79406 ( 79,406) <= tx9_added_vlan_packets /sec
Ethtool(enp216s0) stat: 183286836 ( 183,286,836) <= tx9_bytes /sec
Ethtool(enp216s0) stat: 77769 ( 77,769) <= tx9_cqes /sec
Ethtool(enp216s0) stat: 48014 ( 48,014) <= tx9_csum_none /sec
Ethtool(enp216s0) stat: 31392 ( 31,392) <= tx9_csum_partial /sec
Ethtool(enp216s0) stat: 2146 ( 2,146) <= tx9_nop /sec
Ethtool(enp216s0) stat: 142229 ( 142,229) <= tx9_packets /sec
Ethtool(enp216s0) stat: 127419807 ( 127,419,807) <= tx9_tso_bytes /sec
Ethtool(enp216s0) stat: 26644 ( 26,644) <= tx9_tso_packets /sec
Ethtool(enp216s0) stat: 1633 ( 1,633) <= tx9_xmit_more /sec
Ethtool(enp216s0) stat: 2050335 ( 2,050,335) <= tx_added_vlan_packets /sec
Ethtool(enp216s0) stat: 4851410368 ( 4,851,410,368) <= tx_bytes /sec
Ethtool(enp216s0) stat: 4881854912 ( 4,881,854,912) <= tx_bytes_phy /sec
Ethtool(enp216s0) stat: 2011989 ( 2,011,989) <= tx_cqes /sec
Ethtool(enp216s0) stat: 1191137 ( 1,191,137) <= tx_csum_none /sec
Ethtool(enp216s0) stat: 859197 ( 859,197) <= tx_csum_partial /sec
Ethtool(enp216s0) stat: 54140 ( 54,140) <= tx_nop /sec
Ethtool(enp216s0) stat: 3728759 ( 3,728,759) <= tx_packets /sec
Ethtool(enp216s0) stat: 3729416 ( 3,729,416) <= tx_packets_phy /sec
Ethtool(enp216s0) stat: 4882774572 ( 4,882,774,572) <= tx_prio0_bytes /sec
Ethtool(enp216s0) stat: 3730195 ( 3,730,195) <= tx_prio0_packets /sec
Ethtool(enp216s0) stat: 3431642829 ( 3,431,642,829) <= tx_tso_bytes /sec
Ethtool(enp216s0) stat: 734343 ( 734,343) <= tx_tso_packets /sec
Ethtool(enp216s0) stat: 4866327192 ( 4,866,327,192) <= tx_vport_unicast_bytes /sec
Ethtool(enp216s0) stat: 3728966 ( 3,728,966) <= tx_vport_unicast_packets /sec
Ethtool(enp216s0) stat: 38268 ( 38,268) <= tx_xmit_more /sec
>>
>> Can you give output put from:
>> $ ethtool --show-priv-flag DEVICE
>>
>> I want you to experiment with:
> ethtool --show-priv-flags enp175s0
> Private flags for enp175s0:
> rx_cqe_moder : on
> tx_cqe_moder : off
> rx_cqe_compress : off
> rx_striding_rq : on
> rx_no_csum_complete: off
>
>>
>> ethtool --set-priv-flags DEVICE rx_striding_rq off
> ok, I will first check on a test server whether this resets my interface
> and does not produce a kernel panic :)
>>
>> I think you already have played with 'rx_cqe_compress', right.
> yes - and compression increases the number of IRQs but does not do much for
> bandwidth - same limit of 60-64Gbit/s total RX+TX on one 100G port
>
> And what is weird - the limit is overall symmetric: if, for example, the
> 100G port is receiving 42G of traffic and transmitting 20G, and I flood
> the RX side with pktgen or other traffic (e.g. ICMP at 1/2/3/4/5G), then
> the receiving side increases by 1/2/3/4/5Gbit of traffic but the
> transmitting side goes down by the same amount
>
>
>
>
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-10 19:34 ` Jesper Dangaard Brouer
2018-11-10 19:49 ` Paweł Staszewski
@ 2018-11-10 20:02 ` Paweł Staszewski
2018-11-10 21:01 ` Jesper Dangaard Brouer
1 sibling, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-10 20:02 UTC (permalink / raw)
To: Jesper Dangaard Brouer; +Cc: Saeed Mahameed, netdev
W dniu 10.11.2018 o 20:34, Jesper Dangaard Brouer pisze:
> I want you to experiment with:
>
> ethtool --set-priv-flags DEVICE rx_striding_rq off
just checked that the ConnectX-4 previously had those disabled:
ethtool --show-priv-flags enp175s0f0
Private flags for enp175s0f0:
rx_cqe_moder : on
tx_cqe_moder : off
rx_cqe_compress : off
rx_striding_rq : off
rx_no_csum_complete: off
So now we are on a ConnectX-5 and have it enabled. The ConnectX-5 definitely
changed the CPU load - I now see at most 50-60% CPU, where with the ConnectX-4
it was sometimes near 100% with the same configuration.
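For scripting this kind of check, the flag state can be pulled straight out of the `ethtool --show-priv-flags` output. A minimal sketch - the `get_priv_flag` helper is an ad-hoc name, and the sample text is the output quoted in this thread:

```shell
#!/bin/sh
# Extract the state of one mlx5 private flag from
# "ethtool --show-priv-flags DEVICE" output read on stdin.
get_priv_flag() {
    awk -v flag="$1" '{ name = $1; sub(/:$/, "", name); if (name == flag) print $NF }'
}

# Sample output as quoted in this thread; on a live system this would be:
#   ethtool --show-priv-flags enp175s0f0 | get_priv_flag rx_striding_rq
sample='rx_cqe_moder       : on
tx_cqe_moder       : off
rx_cqe_compress    : off
rx_striding_rq     : off
rx_no_csum_complete: off'

echo "$sample" | get_priv_flag rx_striding_rq   # prints "off"
```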
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-10 20:02 ` Paweł Staszewski
@ 2018-11-10 21:01 ` Jesper Dangaard Brouer
2018-11-10 21:53 ` Paweł Staszewski
0 siblings, 1 reply; 77+ messages in thread
From: Jesper Dangaard Brouer @ 2018-11-10 21:01 UTC (permalink / raw)
To: Paweł Staszewski; +Cc: Saeed Mahameed, netdev, brouer
On Sat, 10 Nov 2018 21:02:10 +0100
Paweł Staszewski <pstaszewski@itcare.pl> wrote:
> W dniu 10.11.2018 o 20:34, Jesper Dangaard Brouer pisze:
> > I want you to experiment with:
> >
> > ethtool --set-priv-flags DEVICE rx_striding_rq off
>
> just checked that the ConnectX-4 previously had those disabled:
> ethtool --show-priv-flags enp175s0f0
>
> Private flags for enp175s0f0:
> rx_cqe_moder : on
> tx_cqe_moder : off
> rx_cqe_compress : off
> rx_striding_rq : off
> rx_no_csum_complete: off
>
The CX4 hardware does not have this feature (p.s. the CX4-Lx does).
> So now we are on a ConnectX-5 and have it enabled. The ConnectX-5 definitely
> changed the CPU load - I now see at most 50-60% CPU, where with the ConnectX-4
> it was sometimes near 100% with the same configuration.
I (strongly) believe the CPU load was related to the page-allocator
lock contention that Aaron fixed.
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-10 21:01 ` Jesper Dangaard Brouer
@ 2018-11-10 21:53 ` Paweł Staszewski
2018-11-10 22:04 ` Paweł Staszewski
2018-11-11 8:56 ` Jesper Dangaard Brouer
0 siblings, 2 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-10 21:53 UTC (permalink / raw)
To: Jesper Dangaard Brouer; +Cc: Saeed Mahameed, netdev
W dniu 10.11.2018 o 22:01, Jesper Dangaard Brouer pisze:
> On Sat, 10 Nov 2018 21:02:10 +0100
> Paweł Staszewski <pstaszewski@itcare.pl> wrote:
>
>> W dniu 10.11.2018 o 20:34, Jesper Dangaard Brouer pisze:
>>> I want you to experiment with:
>>>
>>> ethtool --set-priv-flags DEVICE rx_striding_rq off
>> just checked that the ConnectX-4 previously had those disabled:
>> ethtool --show-priv-flags enp175s0f0
>>
>> Private flags for enp175s0f0:
>> rx_cqe_moder : on
>> tx_cqe_moder : off
>> rx_cqe_compress : off
>> rx_striding_rq : off
>> rx_no_csum_complete: off
>>
> The CX4 hardware does not have this feature (p.s. the CX4-Lx does).
>
>
>> So now we are on a ConnectX-5 and have it enabled. The ConnectX-5 definitely
>> changed the CPU load - I now see at most 50-60% CPU, where with the ConnectX-4
>> it was sometimes near 100% with the same configuration.
> I (strongly) believe the CPU load was related to the page-allocator
> lock contention that Aaron fixed.
>
Yes, I think both - most of the CPU problems were due to the page
allocator.
But also after changing the ConnectX-4 to a ConnectX-5 there is a CPU load
difference - about 10% in total - but yes, most of it, like 40%, is thanks
to Aaron's patch :) - really good job :)
Now I'm experimenting with the ring configuration for the ConnectX-5 NICs.
After reading this paper:
https://netdevconf.org/2.1/slides/apr6/network-performance/04-amir-RX_and_TX_bulking_v2.pdf
I changed from RX:8192 / TX:4096 to RX:8192 / TX:256.
After this I gained about 5Gbit/s of RX and TX traffic and lower CPU load.
Before the change there was 59/59 Gbit/s.
After the change there is 64/64 Gbit/s.
bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
input: /proc/net/dev type: rate
| iface Rx Tx Total
==============================================================================
            enp175s0:          44.45 Gb/s           19.69 Gb/s           64.14 Gb/s
            enp216s0:          19.69 Gb/s           44.49 Gb/s           64.19 Gb/s
------------------------------------------------------------------------------
total: 64.14 Gb/s 64.18 Gb/s 128.33 Gb/s
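The ring resize tried above can be wrapped in a small script. This is only a sketch - the `set_rings` helper and the `DRY_RUN` guard are made up here - and since `ethtool -G` resets the interface, it should be tried on a test box first:

```shell
#!/bin/sh
# Apply the ring sizes found to work best above: RX 8192, TX 256.
# DRY_RUN=1 (the default here) only prints the commands instead of
# running the privileged ethtool calls.
DRY_RUN=${DRY_RUN:-1}

set_rings() {
    dev=$1; rx=$2; tx=$3
    if [ "$DRY_RUN" = 1 ]; then
        echo "ethtool -G $dev rx $rx tx $tx"
    else
        ethtool -G "$dev" rx "$rx" tx "$tx"
    fi
}

for dev in enp175s0 enp216s0; do
    set_rings "$dev" 8192 256
done
```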
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-10 21:53 ` Paweł Staszewski
@ 2018-11-10 22:04 ` Paweł Staszewski
2018-11-11 8:56 ` Jesper Dangaard Brouer
1 sibling, 0 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-10 22:04 UTC (permalink / raw)
To: Jesper Dangaard Brouer; +Cc: Saeed Mahameed, netdev
W dniu 10.11.2018 o 22:53, Paweł Staszewski pisze:
>
>
> W dniu 10.11.2018 o 22:01, Jesper Dangaard Brouer pisze:
>> On Sat, 10 Nov 2018 21:02:10 +0100
>> Paweł Staszewski <pstaszewski@itcare.pl> wrote:
>>
>>> W dniu 10.11.2018 o 20:34, Jesper Dangaard Brouer pisze:
>>>> I want you to experiment with:
>>>>
>>>> ethtool --set-priv-flags DEVICE rx_striding_rq off
>>> just checked that previously connectx4 was have thos disabled:
>>> ethtool --show-priv-flags enp175s0f0
>>>
>>> Private flags for enp175s0f0:
>>> rx_cqe_moder : on
>>> tx_cqe_moder : off
>>> rx_cqe_compress : off
>>> rx_striding_rq : off
>>> rx_no_csum_complete: off
>>>
>> The CX4 hardware does not have this feature (p.s. the CX4-Lx does).
>>
>>> So now we are on connectx5 and we have enabled - for sure connectx5
>>> changed cpu load - where i have now max 50/60% cpu where with connectx4
>>> there was sometimes near 100% with same configuration.
>> I (strongly) believe the CPU load was related to the page-alloactor
>> lock congestion, that Aaron fixed.
>>
> Yes i think both - most problems with cpu was due to page-allocator
> problems.
> But also after change connctx4 to connectx5 there is cpu load
> difference - about 10% in total - but yes most of this like 40% is
> cause of Aaron patch :) - rly good job :)
>
>
> Now im messing with ring configuration for connectx5 nics.
> And after reading that paper:
> https://netdevconf.org/2.1/slides/apr6/network-performance/
> 04-amir-RX_and_TX_bulking_v2.pdf
>
> changed from RX:8192 / TX: 4096 to RX:8192 / TX: 256
>
> after this i gain about 5Gbit/s RX and TX traffic and less cpu load....
> before change there was 59/59 Gbit/s
>
> After change there is 64/64 Gbit/s
>
> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
> input: /proc/net/dev type: rate
> | iface Rx Tx Total
> ==============================================================================
>
> enp175s0: 44.45 Gb/s 19.69 Gb/s
> 64.14 Gb/s
> enp216s0: 19.69 Gb/s 44.49 Gb/s
> 64.19 Gb/s
> ------------------------------------------------------------------------------
>
> total: 64.14 Gb/s 64.18 Gb/s 128.33 Gb/s
>
>
Also after this change the kernel freed some memory... about 500MB.
Still squeezed, but less, even with more traffic...
CPU        total/sec  dropped/sec  squeezed/sec  collision/sec  rx_rps/sec  flow_limit/sec
CPU:00     0          0            0             0              0           0
CPU:01     0          0            0             0              0           0
CPU:02     0          0            0             0              0           0
CPU:03     0          0            0             0              0           0
CPU:04     0          0            0             0              0           0
CPU:05     0          0            0             0              0           0
CPU:06     0          0            0             0              0           0
CPU:07     0          0            0             0              0           0
CPU:08     0          0            0             0              0           0
CPU:09     0          0            0             0              0           0
CPU:10     0          0            0             0              0           0
CPU:11     0          0            0             0              0           0
CPU:12     0          0            0             0              0           0
CPU:13     0          0            0             0              0           0
CPU:14     389270     0            41            0              0           0
CPU:15     375543     0            32            0              0           0
CPU:16     385847     0            22            0              0           0
CPU:17     412293     0            34            0              0           0
CPU:18     401287     0            30            0              0           0
CPU:19     368345     0            30            0              0           0
CPU:20     395452     0            28            0              0           0
CPU:21     374032     0            38            0              0           0
CPU:22     342036     0            32            0              0           0
CPU:23     374773     0            34            0              0           0
CPU:24     356139     0            31            0              0           0
CPU:25     392725     0            32            0              0           0
CPU:26     385937     0            37            0              0           0
CPU:27     385282     0            37            0              0           0
CPU:28     0          0            0             0              0           0
CPU:29     0          0            0             0              0           0
CPU:30     0          0            0             0              0           0
CPU:31     0          0            0             0              0           0
CPU:32     0          0            0             0              0           0
CPU:33     0          0            0             0              0           0
CPU:34     0          0            0             0              0           0
CPU:35     0          0            0             0              0           0
CPU:36     0          0            0             0              0           0
CPU:37     0          0            0             0              0           0
CPU:38     0          0            0             0              0           0
CPU:39     0          0            0             0              0           0
CPU:40     0          0            0             0              0           0
CPU:41     0          0            0             0              0           0
CPU:42     340817     0            33            0              0           0
CPU:43     364805     0            42            0              0           0
CPU:44     298484     0            29            0              0           0
CPU:45     292798     0            30            0              0           0
CPU:46     301739     0            24            0              0           0
CPU:47     275116     0            20            0              0           0
CPU:48     319237     0            34            0              0           0
CPU:49     290350     0            29            0              0           0
CPU:50     307084     0            30            0              0           0
CPU:51     332908     0            24            0              0           0
CPU:52     300151     0            24            0              0           0
CPU:53     310140     0            28            0              0           0
CPU:54     341788     0            28            0              0           0
CPU:55     320344     0            28            0              0           0
Summed:    9734722    0            860           0              0           0
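The per-second table above comes from Jesper's softnet_stat.pl; the raw counters behind it live in /proc/net/softnet_stat as hex columns, where the first three fields per CPU row are packets processed, dropped, and time_squeeze. A minimal decoding sketch (the two sample rows stand in for the real file):

```shell
#!/bin/sh
# Decode /proc/net/softnet_stat: one hex row per CPU; fields 1-3 are
# processed, dropped, time_squeeze. Print the squeeze counter per CPU.
squeezed_per_cpu() {
    i=0
    while read -r processed dropped squeezed rest; do
        printf 'CPU:%02d squeezed=%d\n' "$i" "0x$squeezed"
        i=$((i + 1))
    done
}

# Two sample rows (normally: squeezed_per_cpu < /proc/net/softnet_stat)
sample='0005f0e2 00000000 0000002a 00000000 00000000 00000000 00000000 00000000 00000000
0005b157 00000000 00000020 00000000 00000000 00000000 00000000 00000000 00000000'

echo "$sample" | squeezed_per_cpu
```

The tool computes per-second deltas of exactly this field between two snapshots.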
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-10 19:56 ` Paweł Staszewski
@ 2018-11-10 22:06 ` Jesper Dangaard Brouer
2018-11-10 22:19 ` Paweł Staszewski
0 siblings, 1 reply; 77+ messages in thread
From: Jesper Dangaard Brouer @ 2018-11-10 22:06 UTC (permalink / raw)
To: Paweł Staszewski; +Cc: Saeed Mahameed, netdev, brouer
On Sat, 10 Nov 2018 20:56:02 +0100
Paweł Staszewski <pstaszewski@itcare.pl> wrote:
> W dniu 10.11.2018 o 20:49, Paweł Staszewski pisze:
> >
> >
> > W dniu 10.11.2018 o 20:34, Jesper Dangaard Brouer pisze:
> >> On Fri, 9 Nov 2018 23:20:38 +0100 Paweł Staszewski
> >> <pstaszewski@itcare.pl> wrote:
> >>
> >>> W dniu 08.11.2018 o 20:12, Paweł Staszewski pisze:
> >>>> CPU load is lower than for connectx4 - but it looks like bandwidth
> >>>> limit is the same :)
> >>>> But also after reaching 60Gbit/60Gbit
> >>>>
> >>>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
> >>>> input: /proc/net/dev type: rate
> >>>> - iface Rx Tx Total
> >>>> ===================================================================
> >>>>
> >>>>
> >>>> enp175s0: 45.09 Gb/s 15.09 Gb/s 60.18 Gb/s
> >>>> enp216s0: 15.14 Gb/s 45.19 Gb/s 60.33 Gb/s
> >>>> -------------------------------------------------------------------
> >>>>
> >>>>
> >>>> total: 60.45 Gb/s 60.48 Gb/s 120.93 Gb/s
> >>> Today reached 65/65Gbit/s
> >>>
> >>> But starting from 60Gbit/s RX / 60Gbit TX nics start to drop packets
> >>> (with 50%CPU on all 28cores) - so still there is cpu power to use :).
> >> This is weird!
> >>
> >> How do you see / measure these drops?
> >
> > A simple ICMP test like ping -i 0.1
> > I'm testing with ICMP to a management IP address on a vlan that is attached
> > to one NIC (the side that is more stressed with RX)
> > And another ICMP test is forwarded through this router - to a host behind it
> >
> > Both measurements show the same loss ratio, from 0.1 to 0.5%, after
> > reaching ~45Gbit/s on the RX side - depending on how much the RX side is
> > pushed, drops vary between 0.1 and 0.5% - even 0.6% :)
> >
Okay good to know, you use an external measurement for this. I do
think packets are getting dropped by the NIC.
> >>> So checked other stats.
> >>> softnet_stats shows average 1k squeezed per sec:
> >> Is below output the raw counters? not per sec?
> >>
> >> It would be valuable to see the per sec stats instead...
> >> I use this tool:
> >> https://github.com/netoptimizer/network-testing/blob/master/bin/softnet_stat.pl
> CPU total/sec dropped/sec squeezed/sec collision/sec rx_rps/sec flow_limit/sec
> CPU:00 0 0 0 0 0 0
[...]
> CPU:13 0 0 0 0 0 0
> CPU:14 485538 0 43 0 0 0
> CPU:15 474794 0 51 0 0 0
> CPU:16 449322 0 41 0 0 0
> CPU:17 476420 0 46 0 0 0
> CPU:18 440436 0 38 0 0 0
> CPU:19 501499 0 49 0 0 0
> CPU:20 459468 0 49 0 0 0
> CPU:21 438928 0 47 0 0 0
> CPU:22 468983 0 40 0 0 0
> CPU:23 446253 0 47 0 0 0
> CPU:24 451909 0 46 0 0 0
> CPU:25 479373 0 55 0 0 0
> CPU:26 467848 0 49 0 0 0
> CPU:27 453153 0 51 0 0 0
> CPU:28 0 0 0 0 0 0
[...]
> CPU:40 0 0 0 0 0 0
> CPU:41 0 0 0 0 0 0
> CPU:42 466853 0 43 0 0 0
> CPU:43 453059 0 54 0 0 0
> CPU:44 363219 0 34 0 0 0
> CPU:45 353632 0 38 0 0 0
> CPU:46 371618 0 40 0 0 0
> CPU:47 350518 0 46 0 0 0
> CPU:48 397544 0 40 0 0 0
> CPU:49 364873 0 38 0 0 0
> CPU:50 383630 0 38 0 0 0
> CPU:51 358771 0 39 0 0 0
> CPU:52 372547 0 38 0 0 0
> CPU:53 372882 0 36 0 0 0
> CPU:54 366244 0 43 0 0 0
> CPU:55 365886 0 39 0 0 0
>
> Summed: 11835201 0 1217 0 0 0
Do notice, the per CPU squeeze is not too large.
The summed 11.8 Mpps is a little high compared to:
Ethtool(enp216s0) stat: 4971677 (4,971,677) <= rx_packets /sec
Ethtool(enp175s0) stat: 3717148 (3,717,148) <= rx_packets /sec
Sum: 3717148+4971677 = 8688825 (8,688,825)
[...]
> >>>
> >>> Remember those tests are now on two separate connectx5 connected to
> >>> two separate pcie x16 gen 3.0
> >> That is strange... I still suspect some HW NIC issue, can you provide
> >> ethtool stats info via tool:
> >>
> >> https://github.com/netoptimizer/network-testing/blob/master/bin/ethtool_stats.pl
> >>
> >> $ ethtool_stats.pl --dev enp175s0 --dev enp216s0
> >>
> >> The tool remove zero-stats counters and report per sec stats. It makes
> >> it easier to spot that is relevant for the given workload.
> > yes mlnx have just too many counters that are always 0 for my case :)
> > Will try this also
> >
> But still a lot of non-zero counters
> Show adapter(s) (enp175s0 enp216s0) statistics (ONLY that changed!)
> Ethtool(enp175s0) stat: 8891 ( 8,891) <= ch0_arm /sec
[...]
I have copied the stats over in another document so I can better looks
at it... and I've found some interesting stats.
E.g. we can see that the NIC hardware is dropping packets.
RX-drops on enp175s0:
(enp175s0) stat: 4850734036 ( 4,850,734,036) <= rx_bytes /sec
(enp175s0) stat: 5069043007 ( 5,069,043,007) <= rx_bytes_phy /sec
-218308971 ( -218,308,971) Dropped bytes /sec
(enp175s0) stat: 139602 ( 139,602) <= rx_discards_phy /sec
(enp175s0) stat: 3717148 ( 3,717,148) <= rx_packets /sec
(enp175s0) stat: 3862420 ( 3,862,420) <= rx_packets_phy /sec
-145272 ( -145,272) Dropped packets /sec
RX-drops on enp216s0 is less:
(enp216s0) stat: 2592286809 ( 2,592,286,809) <= rx_bytes /sec
(enp216s0) stat: 2633575771 ( 2,633,575,771) <= rx_bytes_phy /sec
-41288962 ( -41,288,962) Dropped bytes /sec
(enp216s0) stat: 464 (464) <= rx_discards_phy /sec
(enp216s0) stat: 4971677 ( 4,971,677) <= rx_packets /sec
(enp216s0) stat: 4975563 ( 4,975,563) <= rx_packets_phy /sec
-3886 ( -3,886) Dropped packets /sec
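The drop numbers above are simply the difference between the PHY-level and stack-level counters. A quick sanity check with the per-second values from this message (`drop_rate` is an ad-hoc helper, not an existing tool):

```shell
#!/bin/sh
# NIC-level drops = what the PHY saw minus what reached the stack.
drop_rate() {
    phy=$1 sw=$2
    echo $(( phy - sw ))
}

# enp175s0, per-second counters quoted above:
drop_rate 3862420 3717148        # packets/sec dropped by the NIC -> 145272
drop_rate 5069043007 4850734036  # bytes/sec dropped by the NIC   -> 218308971
```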
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-10 22:06 ` Jesper Dangaard Brouer
@ 2018-11-10 22:19 ` Paweł Staszewski
2018-11-11 8:03 ` Jesper Dangaard Brouer
0 siblings, 1 reply; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-10 22:19 UTC (permalink / raw)
To: Jesper Dangaard Brouer; +Cc: Saeed Mahameed, netdev
W dniu 10.11.2018 o 23:06, Jesper Dangaard Brouer pisze:
> On Sat, 10 Nov 2018 20:56:02 +0100
> Paweł Staszewski <pstaszewski@itcare.pl> wrote:
>
>> W dniu 10.11.2018 o 20:49, Paweł Staszewski pisze:
>>>
>>> W dniu 10.11.2018 o 20:34, Jesper Dangaard Brouer pisze:
>>>> On Fri, 9 Nov 2018 23:20:38 +0100 Paweł Staszewski
>>>> <pstaszewski@itcare.pl> wrote:
>>>>
>>>>> W dniu 08.11.2018 o 20:12, Paweł Staszewski pisze:
>>>>>> CPU load is lower than for connectx4 - but it looks like bandwidth
>>>>>> limit is the same :)
>>>>>> But also after reaching 60Gbit/60Gbit
>>>>>>
>>>>>> bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>>>>>> input: /proc/net/dev type: rate
>>>>>> - iface Rx Tx Total
>>>>>> ===================================================================
>>>>>>
>>>>>>
>>>>>> enp175s0: 45.09 Gb/s 15.09 Gb/s 60.18 Gb/s
>>>>>> enp216s0: 15.14 Gb/s 45.19 Gb/s 60.33 Gb/s
>>>>>> -------------------------------------------------------------------
>>>>>>
>>>>>>
>>>>>> total: 60.45 Gb/s 60.48 Gb/s 120.93 Gb/s
>>>>> Today reached 65/65Gbit/s
>>>>>
>>>>> But starting from 60Gbit/s RX / 60Gbit TX nics start to drop packets
>>>>> (with 50%CPU on all 28cores) - so still there is cpu power to use :).
>>>> This is weird!
>>>>
>>>> How do you see / measure these drops?
>>> A simple ICMP test like ping -i 0.1
>>> I'm testing with ICMP to a management IP address on a vlan that is attached
>>> to one NIC (the side that is more stressed with RX)
>>> And another ICMP test is forwarded through this router - to a host behind it
>>>
>>> Both measurements show the same loss ratio, from 0.1 to 0.5%, after
>>> reaching ~45Gbit/s on the RX side - depending on how much the RX side is
>>> pushed, drops vary between 0.1 and 0.5% - even 0.6% :)
>>>
> Okay good to know, you use an external measurement for this. I do
> think packets are getting dropped by the NIC.
>
>>>>> So checked other stats.
>>>>> softnet_stats shows average 1k squeezed per sec:
>>>> Is below output the raw counters? not per sec?
>>>>
>>>> It would be valuable to see the per sec stats instead...
>>>> I use this tool:
>>>> https://github.com/netoptimizer/network-testing/blob/master/bin/softnet_stat.pl
>> CPU total/sec dropped/sec squeezed/sec collision/sec rx_rps/sec flow_limit/sec
>> CPU:00 0 0 0 0 0 0
> [...]
>> CPU:13 0 0 0 0 0 0
>> CPU:14 485538 0 43 0 0 0
>> CPU:15 474794 0 51 0 0 0
>> CPU:16 449322 0 41 0 0 0
>> CPU:17 476420 0 46 0 0 0
>> CPU:18 440436 0 38 0 0 0
>> CPU:19 501499 0 49 0 0 0
>> CPU:20 459468 0 49 0 0 0
>> CPU:21 438928 0 47 0 0 0
>> CPU:22 468983 0 40 0 0 0
>> CPU:23 446253 0 47 0 0 0
>> CPU:24 451909 0 46 0 0 0
>> CPU:25 479373 0 55 0 0 0
>> CPU:26 467848 0 49 0 0 0
>> CPU:27 453153 0 51 0 0 0
>> CPU:28 0 0 0 0 0 0
> [...]
>> CPU:40 0 0 0 0 0 0
>> CPU:41 0 0 0 0 0 0
>> CPU:42 466853 0 43 0 0 0
>> CPU:43 453059 0 54 0 0 0
>> CPU:44 363219 0 34 0 0 0
>> CPU:45 353632 0 38 0 0 0
>> CPU:46 371618 0 40 0 0 0
>> CPU:47 350518 0 46 0 0 0
>> CPU:48 397544 0 40 0 0 0
>> CPU:49 364873 0 38 0 0 0
>> CPU:50 383630 0 38 0 0 0
>> CPU:51 358771 0 39 0 0 0
>> CPU:52 372547 0 38 0 0 0
>> CPU:53 372882 0 36 0 0 0
>> CPU:54 366244 0 43 0 0 0
>> CPU:55 365886 0 39 0 0 0
>>
>> Summed: 11835201 0 1217 0 0 0
>
> Do notice, the per CPU squeeze is not too large.
Yes - but I'm searching for an invisible thing now :) something invisible is
slowing down packet processing :)
So I'm trying to find any counter that has something to do with packet
processing.
> The summed 11.8 Mpps is a little high compared to:
>
> Ethtool(enp216s0) stat: 4971677 (4,971,677) <= rx_packets /sec
> Ethtool(enp175s0) stat: 3717148 (3,717,148) <= rx_packets /sec
> Sum: 3717148+4971677 = 8688825 (8,688,825)
Yes, I mentioned that the stats from /proc/net/dev for the NICs are weird if
you compare them to ethtool - there are big differences for the Mellanox
NICs, especially in packets/s.
For example, when I change:
- CQE compression: I get more interrupts and more packets/s - but the same
bandwidth
- ring settings: like half an hour ago I changed the TX ring from 4096 to
256, and I see fewer interrupts and fewer packets/s - but more
bandwidth... weird...
Because with normal traffic more packets/s should mean more bandwidth - if
the average frame is 500-600 bytes and I gain 1M+ pps, that should mean on
average +5/6Gbit/s more.
But it looks like bandwidth correlates more with the number of interrupts
than with the number of packets.
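That back-of-the-envelope estimate is just pps x average frame size x 8 bits. A quick check of the arithmetic (`extra_gbit` is an ad-hoc helper; the frame sizes are the averages quoted above):

```shell
#!/bin/sh
# Expected extra bandwidth from extra packets at a given average frame size.
extra_gbit() {
    awk -v pps="$1" -v bytes="$2" 'BEGIN { printf "%.1f\n", pps * bytes * 8 / 1e9 }'
}

extra_gbit 1000000 500   # 1 Mpps of 500-byte frames -> 4.0 Gbit/s
extra_gbit 1000000 600   # 1 Mpps of 600-byte frames -> 4.8 Gbit/s
```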
>
> [...]
>>>>> Remember those tests are now on two separate connectx5 connected to
>>>>> two separate pcie x16 gen 3.0
>>>> That is strange... I still suspect some HW NIC issue, can you provide
>>>> ethtool stats info via tool:
>>>>
>>>> https://github.com/netoptimizer/network-testing/blob/master/bin/ethtool_stats.pl
>>>>
>>>> $ ethtool_stats.pl --dev enp175s0 --dev enp216s0
>>>>
>>>> The tool remove zero-stats counters and report per sec stats. It makes
>>>> it easier to spot that is relevant for the given workload.
>>> yes mlnx have just too many counters that are always 0 for my case :)
>>> Will try this also
>>>
>> But still a lot of non-zero counters
>> Show adapter(s) (enp175s0 enp216s0) statistics (ONLY that changed!)
>> Ethtool(enp175s0) stat: 8891 ( 8,891) <= ch0_arm /sec
> [...]
>
> I have copied the stats over in another document so I can better looks
> at it... and I've found some interesting stats.
>
> E.g. we can see that the NIC hardware is dropping packets.
>
> RX-drops on enp175s0:
>
> (enp175s0) stat: 4850734036 ( 4,850,734,036) <= rx_bytes /sec
> (enp175s0) stat: 5069043007 ( 5,069,043,007) <= rx_bytes_phy /sec
> -218308971 ( -218,308,971) Dropped bytes /sec
>
> (enp175s0) stat: 139602 ( 139,602) <= rx_discards_phy /sec
>
> (enp175s0) stat: 3717148 ( 3,717,148) <= rx_packets /sec
> (enp175s0) stat: 3862420 ( 3,862,420) <= rx_packets_phy /sec
> -145272 ( -145,272) Dropped packets /sec
>
>
> RX-drops on enp216s0 are lower:
>
> (enp216s0) stat: 2592286809 ( 2,592,286,809) <= rx_bytes /sec
> (enp216s0) stat: 2633575771 ( 2,633,575,771) <= rx_bytes_phy /sec
> -41288962 ( -41,288,962) Dropped bytes /sec
>
> (enp216s0) stat: 464 (464) <= rx_discards_phy /sec
>
> (enp216s0) stat: 4971677 ( 4,971,677) <= rx_packets /sec
> (enp216s0) stat: 4975563 ( 4,975,563) <= rx_packets_phy /sec
> -3886 ( -3,886) Dropped packets /sec
>
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-10 22:19 ` Paweł Staszewski
@ 2018-11-11 8:03 ` Jesper Dangaard Brouer
2018-11-11 10:26 ` Paweł Staszewski
0 siblings, 1 reply; 77+ messages in thread
From: Jesper Dangaard Brouer @ 2018-11-11 8:03 UTC (permalink / raw)
To: Paweł Staszewski; +Cc: Saeed Mahameed, netdev, brouer
On Sat, 10 Nov 2018 23:19:50 +0100
Paweł Staszewski <pstaszewski@itcare.pl> wrote:
> On 10.11.2018 at 23:06, Jesper Dangaard Brouer wrote:
> > On Sat, 10 Nov 2018 20:56:02 +0100
> > Paweł Staszewski <pstaszewski@itcare.pl> wrote:
> >
> >> On 10.11.2018 at 20:49, Paweł Staszewski wrote:
> >>>
> >>> On 10.11.2018 at 20:34, Jesper Dangaard Brouer wrote:
> >>>> On Fri, 9 Nov 2018 23:20:38 +0100 Paweł Staszewski
> >>>> <pstaszewski@itcare.pl> wrote:
> >>>>
> >>>>> On 08.11.2018 at 20:12, Paweł Staszewski wrote:
[...]
> > Do notice, the per CPU squeeze is not too large.
>
> Yes - but now I'm searching for something invisible :) something invisible
> is slowing down packet processing :)
> So I'm trying to find any counter that has something to do with packet
> processing.
NOTICE, I have given you the counters you need (below)
> >
> > [...]
> >>>>> Remember those tests are now on two separate connectx5 connected to
> >>>>> two separate pcie x16 gen 3.0
> >>>> That is strange... I still suspect some HW NIC issue, can you provide
> >>>> ethtool stats info via this tool:
> >>>>
> >>>> https://github.com/netoptimizer/network-testing/blob/master/bin/ethtool_stats.pl
> >>>>
> >>>> $ ethtool_stats.pl --dev enp175s0 --dev enp216s0
> >>>>
> >>>> The tool removes zero-stat counters and reports per-sec stats. It makes
> >>>> it easier to spot what is relevant for the given workload.
> >>> Yes, mlnx just has too many counters that are always 0 in my case :)
> >>> I will try this as well
> >>>
> >> But there are still a lot of non-zero counters
> >> Show adapter(s) (enp175s0 enp216s0) statistics (ONLY that changed!)
> >> Ethtool(enp175s0) stat: 8891 ( 8,891) <= ch0_arm /sec
> > [...]
> >
> > I have copied the stats over into another document so I can look at them
> > more closely... and I've found some interesting stats.
> >
> > E.g. we can see that the NIC hardware is dropping packets.
> >
> > RX-drops on enp175s0:
> >
> > (enp175s0) stat: 4850734036 ( 4,850,734,036) <= rx_bytes /sec
> > (enp175s0) stat: 5069043007 ( 5,069,043,007) <= rx_bytes_phy /sec
> > -218308971 ( -218,308,971) Dropped bytes /sec
> >
> > (enp175s0) stat: 139602 ( 139,602) <= rx_discards_phy /sec
> >
> > (enp175s0) stat: 3717148 ( 3,717,148) <= rx_packets /sec
> > (enp175s0) stat: 3862420 ( 3,862,420) <= rx_packets_phy /sec
> > -145272 ( -145,272) Dropped packets /sec
> >
> >
> > RX-drops on enp216s0 are lower:
> >
> > (enp216s0) stat: 2592286809 ( 2,592,286,809) <= rx_bytes /sec
> > (enp216s0) stat: 2633575771 ( 2,633,575,771) <= rx_bytes_phy /sec
> > -41288962 ( -41,288,962) Dropped bytes /sec
> >
> > (enp216s0) stat: 464 (464) <= rx_discards_phy /sec
> >
> > (enp216s0) stat: 4971677 ( 4,971,677) <= rx_packets /sec
> > (enp216s0) stat: 4975563 ( 4,975,563) <= rx_packets_phy /sec
> > -3886 ( -3,886) Dropped packets /sec
> >
I would recommend that you use ethtool stats and monitor rx_discards_phy.
The PHY counters come from the hardware, and they show that packets
are getting dropped at the HW level. This can happen because software is not
fast enough to empty the RX queue, but in this case, where the CPUs are
mostly idle, I don't think that is the cause.
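A minimal sketch of such monitoring, assuming the device name from this thread (the counter name matches the mlx5 ethtool output quoted above):

```shell
#!/bin/sh
# Print the per-second delta of the rx_discards_phy hardware drop counter.
DEV=${1:-enp175s0}
prev=$(ethtool -S "$DEV" | awk '/rx_discards_phy:/ {print $2}')
while sleep 1; do
    cur=$(ethtool -S "$DEV" | awk '/rx_discards_phy:/ {print $2}')
    echo "rx_discards_phy/sec: $(( cur - prev ))"
    prev=$cur
done
```

The awk pattern simply pulls the value out of the `counter: value` lines that `ethtool -S` prints.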
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-10 21:53 ` Paweł Staszewski
2018-11-10 22:04 ` Paweł Staszewski
@ 2018-11-11 8:56 ` Jesper Dangaard Brouer
2018-11-12 19:19 ` Paweł Staszewski
1 sibling, 1 reply; 77+ messages in thread
From: Jesper Dangaard Brouer @ 2018-11-11 8:56 UTC (permalink / raw)
To: Paweł Staszewski; +Cc: Saeed Mahameed, netdev, brouer
On Sat, 10 Nov 2018 22:53:53 +0100 Paweł Staszewski <pstaszewski@itcare.pl> wrote:
> Now I'm experimenting with the ring configuration for the ConnectX-5 NICs.
> And after reading this slide deck:
> https://netdevconf.org/2.1/slides/apr6/network-performance/04-amir-RX_and_TX_bulking_v2.pdf
>
Do notice that some of the ideas in that slide deck were never
implemented. But they are still on my todo list ;-).
Notice how it shows that TX bulking is very important; based on
your ethtool_stats.pl output, I can see that not much TX bulking is
happening in your case. This is indicated by the xmit_more counters.
Ethtool(enp175s0) stat: 2630 ( 2,630) <= tx_xmit_more /sec
Ethtool(enp175s0) stat: 4956995 ( 4,956,995) <= tx_packets /sec
And the per queue levels are also avail:
Ethtool(enp175s0) stat: 184845 ( 184,845) <= tx7_packets /sec
Ethtool(enp175s0) stat: 78 ( 78) <= tx7_xmit_more /sec
This means that you are issuing too many doorbells to the NIC hardware
at TX time, which I worry could be what causes the NIC and PCIe hardware
not to operate at optimal speeds.
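Using the two counters quoted above, the fraction of TX packets that deferred the doorbell can be computed directly (a sketch; the numbers are the per-second rates from this message):

```shell
# Share of TX packets sent with xmit_more set (i.e. doorbell deferred).
xmit_more=2630       # tx_xmit_more /sec from ethtool_stats.pl
tx_packets=4956995   # tx_packets /sec
awk -v m="$xmit_more" -v p="$tx_packets" \
    'BEGIN { printf "xmit_more set on %.2f%% of TX packets\n", 100 * m / p }'
```

So roughly 99.95% of TX packets ring the doorbell individually, which is what the advice above is pointing at.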
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-11 8:03 ` Jesper Dangaard Brouer
@ 2018-11-11 10:26 ` Paweł Staszewski
0 siblings, 0 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-11 10:26 UTC (permalink / raw)
To: Jesper Dangaard Brouer; +Cc: Saeed Mahameed, netdev
On 11.11.2018 at 09:03, Jesper Dangaard Brouer wrote:
> On Sat, 10 Nov 2018 23:19:50 +0100
> Paweł Staszewski <pstaszewski@itcare.pl> wrote:
>
>> On 10.11.2018 at 23:06, Jesper Dangaard Brouer wrote:
>>> On Sat, 10 Nov 2018 20:56:02 +0100
>>> Paweł Staszewski <pstaszewski@itcare.pl> wrote:
>>>
>>>> On 10.11.2018 at 20:49, Paweł Staszewski wrote:
>>>>> On 10.11.2018 at 20:34, Jesper Dangaard Brouer wrote:
>>>>>> On Fri, 9 Nov 2018 23:20:38 +0100 Paweł Staszewski
>>>>>> <pstaszewski@itcare.pl> wrote:
>>>>>>
>>>>>>> On 08.11.2018 at 20:12, Paweł Staszewski wrote:
> [...]
>>> Do notice, the per CPU squeeze is not too large.
>> Yes - but now I'm searching for something invisible :) something invisible
>> is slowing down packet processing :)
>> So I'm trying to find any counter that has something to do with packet
>> processing.
> NOTICE, I have given you the counters you need (below)
Yes, I noticed that :)
>>> [...]
>>>>>>> Remember those tests are now on two separate connectx5 connected to
>>>>>>> two separate pcie x16 gen 3.0
>>>>>> That is strange... I still suspect some HW NIC issue, can you provide
>>>>>> ethtool stats info via this tool:
>>>>>>
>>>>>> https://github.com/netoptimizer/network-testing/blob/master/bin/ethtool_stats.pl
>>>>>>
>>>>>> $ ethtool_stats.pl --dev enp175s0 --dev enp216s0
>>>>>>
>>>>>> The tool removes zero-stat counters and reports per-sec stats. It makes
>>>>>> it easier to spot what is relevant for the given workload.
>>>>> Yes, mlnx just has too many counters that are always 0 in my case :)
>>>>> I will try this as well
>>>>>
>>>> But there are still a lot of non-zero counters
>>>> Show adapter(s) (enp175s0 enp216s0) statistics (ONLY that changed!)
>>>> Ethtool(enp175s0) stat: 8891 ( 8,891) <= ch0_arm /sec
>>> [...]
>>>
>>> I have copied the stats over into another document so I can look at them
>>> more closely... and I've found some interesting stats.
>>>
>>> E.g. we can see that the NIC hardware is dropping packets.
>>>
>>> RX-drops on enp175s0:
>>>
>>> (enp175s0) stat: 4850734036 ( 4,850,734,036) <= rx_bytes /sec
>>> (enp175s0) stat: 5069043007 ( 5,069,043,007) <= rx_bytes_phy /sec
>>> -218308971 ( -218,308,971) Dropped bytes /sec
>>>
>>> (enp175s0) stat: 139602 ( 139,602) <= rx_discards_phy /sec
>>>
>>> (enp175s0) stat: 3717148 ( 3,717,148) <= rx_packets /sec
>>> (enp175s0) stat: 3862420 ( 3,862,420) <= rx_packets_phy /sec
>>> -145272 ( -145,272) Dropped packets /sec
>>>
>>>
>>> RX-drops on enp216s0 are lower:
>>>
>>> (enp216s0) stat: 2592286809 ( 2,592,286,809) <= rx_bytes /sec
>>> (enp216s0) stat: 2633575771 ( 2,633,575,771) <= rx_bytes_phy /sec
>>> -41288962 ( -41,288,962) Dropped bytes /sec
>>>
>>> (enp216s0) stat: 464 (464) <= rx_discards_phy /sec
>>>
>>> (enp216s0) stat: 4971677 ( 4,971,677) <= rx_packets /sec
>>> (enp216s0) stat: 4975563 ( 4,975,563) <= rx_packets_phy /sec
>>> -3886 ( -3,886) Dropped packets /sec
>>>
>
> I would recommend that you use ethtool stats and monitor rx_discards_phy.
> The PHY counters come from the hardware, and they show that packets
> are getting dropped at the HW level. This can happen because software is not
> fast enough to empty the RX queue, but in this case, where the CPUs are
> mostly idle, I don't think that is the cause.
>
That is why I was searching for some software-side counter that shows where
something is wrong.
In the earlier ethtool reports there were also PHY drops reported - but back
then the CPUs were saturated, so it seemed normal to me that the PHY could
drop packets when no CPU cycles were left to pick them up from the hardware.
But in a case where the CPUs are 50% idle there should be no problem - that
is why I started modifying the ethtool parameters for the TX/RX rings and
coalescing.
Currently waiting for more traffic with the new ethtool settings:
ethtool -g enp175s0
Ring parameters for enp175s0:
Pre-set maximums:
RX: 8192
RX Mini: 0
RX Jumbo: 0
TX: 8192
Current hardware settings:
RX: 4096
RX Mini: 0
RX Jumbo: 0
TX: 128
ethtool -c enp175s0
Coalesce parameters for enp175s0:
Adaptive RX: off TX: on
stats-block-usecs: 0
sample-interval: 0
pkt-rate-low: 0
pkt-rate-high: 0
dmac: 32517
rx-usecs: 64
rx-frames: 128
rx-usecs-irq: 0
rx-frames-irq: 0
tx-usecs: 8
tx-frames: 128
tx-usecs-irq: 0
tx-frames-irq: 0
rx-usecs-low: 0
rx-frame-low: 0
tx-usecs-low: 0
tx-frame-low: 0
rx-usecs-high: 0
rx-frame-high: 0
tx-usecs-high: 0
tx-frame-high: 0
Both ports same settings.
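For reference, these settings could be applied with commands along the following lines (a sketch; option names as in ethtool(8), values copied from the `-g`/`-c` output above, device name from this thread):

```shell
# Ring sizes from the `ethtool -g` output above (RX 4096, TX 128).
ethtool -G enp175s0 rx 4096 tx 128
# Coalescing from the `ethtool -c` output above (adaptive TX still on here).
ethtool -C enp175s0 adaptive-rx off adaptive-tx on \
        rx-usecs 64 rx-frames 128 tx-usecs 8 tx-frames 128
```

These commands need root and the actual NIC present, so they are shown only to make the settings reproducible.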
Current traffic:
bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
input: /proc/net/dev type: rate
       iface                   Rx                   Tx                Total
==============================================================================
    enp175s0:          37.85 Gb/s           7.77 Gb/s          45.62 Gb/s
    enp216s0:           7.80 Gb/s          37.90 Gb/s          45.70 Gb/s
------------------------------------------------------------------------------
       total:          45.61 Gb/s          45.63 Gb/s          91.24 Gb/s
and mpstat for cpu's
Average: CPU   %usr %nice  %sys %iowait  %irq  %soft %steal %guest %gnice  %idle
Average: all   0.33  0.00  1.48    0.01  0.00  12.11   0.00   0.00   0.00  86.06
Average:   0   0.00  0.00  0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
Average:   1   0.00  0.00  0.90    0.00  0.00   0.00   0.00   0.00   0.00  99.10
Average:   2   0.10  0.00  0.20    0.80  0.00   0.00   0.00   0.00   0.00  98.90
Average:   3   0.10  0.00  0.30    0.00  0.00   0.00   0.00   0.00   0.00  99.60
Average:   4  14.10  0.00  1.00    0.00  0.00   0.00   0.00   0.00   0.00  84.90
Average:   5   0.00  0.00  0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
Average:   6   0.00  0.00  1.50    0.00  0.00   0.00   0.00   0.00   0.00  98.50
Average:   7   0.20  0.00  2.00    0.00  0.00   0.00   0.00   0.00   0.00  97.80
Average:   8   0.10  0.00  0.40    0.00  0.00   0.00   0.00   0.00   0.00  99.50
Average:   9   0.00  0.00  0.60    0.00  0.00   0.00   0.00   0.00   0.00  99.40
Average:  10   0.00  0.00  0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
Average:  11   0.00  0.00  5.60    0.00  0.00   0.00   0.00   0.00   0.00  94.40
Average:  12   0.00  0.00  4.10    0.00  0.00   0.00   0.00   0.00   0.00  95.90
Average:  13   0.00  0.00  0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
Average:  14   0.00  0.00  1.90    0.00  0.00  27.30   0.00   0.00   0.00  70.80
Average:  15   0.00  0.00  2.10    0.00  0.00  26.00   0.00   0.00   0.00  71.90
Average:  16   0.00  0.00  2.10    0.00  0.00  25.40   0.00   0.00   0.00  72.50
Average:  17   0.20  0.00  1.80    0.00  0.00  23.10   0.00   0.00   0.00  74.90
Average:  18   0.00  0.00  2.00    0.00  0.00  25.50   0.00   0.00   0.00  72.50
Average:  19   0.00  0.00  1.90    0.00  0.00  20.20   0.00   0.00   0.00  77.90
Average:  20   0.10  0.00  1.00    0.00  0.00  26.90   0.00   0.00   0.00  72.00
Average:  21   0.10  0.00  2.80    0.00  0.00  24.70   0.00   0.00   0.00  72.40
Average:  22   0.80  0.00  3.30    0.00  0.00  24.30   0.00   0.00   0.00  71.60
Average:  23   0.10  0.00  1.80    0.00  0.00  26.60   0.00   0.00   0.00  71.50
Average:  24   0.10  0.00  1.20    0.00  0.00  23.60   0.00   0.00   0.00  75.10
Average:  25   0.00  0.00  1.80    0.00  0.00  26.60   0.00   0.00   0.00  71.60
Average:  26   0.00  0.00  1.50    0.00  0.00  26.70   0.00   0.00   0.00  71.80
Average:  27   0.10  0.00  0.70    0.00  0.00  26.70   0.00   0.00   0.00  72.50
Average:  28   0.70  0.00  0.30    0.00  0.00   0.00   0.00   0.00   0.00  99.00
Average:  29   0.20  0.00  1.50    0.00  0.00   0.00   0.00   0.00   0.00  98.30
Average:  30   0.10  0.00  0.60    0.00  0.00   0.00   0.00   0.00   0.00  99.30
Average:  31   0.00  0.00  0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
Average:  32   0.00  0.00  0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
Average:  33   0.00  0.00  0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
Average:  34   0.00  0.00  0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
Average:  35   0.10  0.00  0.00    0.00  0.00   0.00   0.00   0.00   0.00  99.90
Average:  36   0.00  0.00  0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
Average:  37   0.00  0.00  0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
Average:  38   0.00  0.00  2.80    0.00  0.00   0.00   0.00   0.00   0.00  97.20
Average:  39   0.00  0.00  7.40    0.00  0.00   0.00   0.00   0.00   0.00  92.60
Average:  40   0.00  0.00  0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
Average:  41   0.00  0.00  0.00    0.00  0.00   0.00   0.00   0.00   0.00 100.00
Average:  42   0.00  0.00  2.10    0.00  0.00  28.40   0.00   0.00   0.00  69.50
Average:  43   0.00  0.00  1.60    0.00  0.00  25.00   0.00   0.00   0.00  73.40
Average:  44   0.10  0.00  1.60    0.00  0.00  23.90   0.00   0.00   0.00  74.40
Average:  45   0.00  0.00  1.60    0.00  0.00  21.00   0.00   0.00   0.00  77.40
Average:  46   0.00  0.00  2.20    0.00  0.00  28.00   0.00   0.00   0.00  69.80
Average:  47   0.00  0.00  2.80    0.00  0.00  20.30   0.00   0.00   0.00  76.90
Average:  48   0.00  0.00  2.50    0.00  0.00  21.60   0.00   0.00   0.00  75.90
Average:  49   0.00  0.00  0.80    0.00  0.00  22.50   0.00   0.00   0.00  76.70
Average:  50   0.40  0.00  3.00    0.00  0.00  23.50   0.00   0.00   0.00  73.10
Average:  51   0.60  0.00  2.50    0.00  0.00  25.00   0.00   0.00   0.00  71.90
Average:  52   0.10  0.00  1.30    0.00  0.00  20.70   0.00   0.00   0.00  77.90
Average:  53   0.00  0.00  2.20    0.00  0.00  22.80   0.00   0.00   0.00  75.00
Average:  54   0.00  0.00  1.40    0.00  0.00  20.80   0.00   0.00   0.00  77.80
Average:  55   0.00  0.00  2.10    0.00  0.00  21.30   0.00   0.00   0.00  76.60
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-11 8:56 ` Jesper Dangaard Brouer
@ 2018-11-12 19:19 ` Paweł Staszewski
0 siblings, 0 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-12 19:19 UTC (permalink / raw)
To: Jesper Dangaard Brouer; +Cc: Saeed Mahameed, netdev
On 11.11.2018 at 09:56, Jesper Dangaard Brouer wrote:
> On Sat, 10 Nov 2018 22:53:53 +0100 Paweł Staszewski <pstaszewski@itcare.pl> wrote:
>
>> Now I'm experimenting with the ring configuration for the ConnectX-5 NICs.
>> And after reading this slide deck:
>> https://netdevconf.org/2.1/slides/apr6/network-performance/04-amir-RX_and_TX_bulking_v2.pdf
>>
> Do notice that some of the ideas in that slide deck were never
> implemented. But they are still on my todo list ;-).
>
> Notice how it shows that TX bulking is very important; based on
> your ethtool_stats.pl output, I can see that not much TX bulking is
> happening in your case. This is indicated by the xmit_more counters.
>
> Ethtool(enp175s0) stat: 2630 ( 2,630) <= tx_xmit_more /sec
> Ethtool(enp175s0) stat: 4956995 ( 4,956,995) <= tx_packets /sec
>
> And the per queue levels are also avail:
>
> Ethtool(enp175s0) stat: 184845 ( 184,845) <= tx7_packets /sec
> Ethtool(enp175s0) stat: 78 ( 78) <= tx7_xmit_more /sec
>
> This means that you are issuing too many doorbells to the NIC hardware
> at TX time, which I worry could be what causes the NIC and PCIe hardware
> not to operate at optimal speeds.
After tuning the coalescing/ring parameters a little with ethtool,
today I reached:
bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
input: /proc/net/dev type: rate
       iface                   Rx                   Tx                Total
==============================================================================
    enp175s0:          50.68 Gb/s          21.53 Gb/s          72.20 Gb/s
    enp216s0:          21.62 Gb/s          50.81 Gb/s          72.42 Gb/s
------------------------------------------------------------------------------
       total:          72.30 Gb/s          72.33 Gb/s         144.63 Gb/s
And still no packet loss (ICMP side-to-side test every 100 ms).
Below is perf top:
PerfTop:  104692 irqs/sec  kernel:99.5%  exact: 0.0% [4000Hz cycles], (all, 56 CPUs)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
9.06% [kernel] [k] mlx5e_skb_from_cqe_mpwrq_linear
6.43% [kernel] [k] tasklet_action_common.isra.21
5.68% [kernel] [k] fib_table_lookup
4.89% [kernel] [k] irq_entries_start
4.53% [kernel] [k] mlx5_eq_int
4.10% [kernel] [k] build_skb
3.39% [kernel] [k] mlx5e_poll_tx_cq
3.38% [kernel] [k] mlx5e_sq_xmit
2.73% [kernel] [k] mlx5e_poll_rx_cq
2.18% [kernel] [k] __dev_queue_xmit
2.13% [kernel] [k] vlan_do_receive
2.12% [kernel] [k] mlx5e_handle_rx_cqe_mpwrq
2.00% [kernel] [k] ip_finish_output2
1.87% [kernel] [k] mlx5e_post_rx_mpwqes
1.86% [kernel] [k] memcpy_erms
1.85% [kernel] [k] ipt_do_table
1.70% [kernel] [k] dev_gro_receive
1.39% [kernel] [k] __netif_receive_skb_core
1.31% [kernel] [k] inet_gro_receive
1.21% [kernel] [k] ip_route_input_rcu
1.21% [kernel] [k] tcp_gro_receive
1.13% [kernel] [k] _raw_spin_lock
1.08% [kernel] [k] __build_skb
1.06% [kernel] [k] kmem_cache_free_bulk
1.05% [kernel] [k] __softirqentry_text_start
1.03% [kernel] [k] vlan_dev_hard_start_xmit
0.98% [kernel] [k] pfifo_fast_dequeue
0.95% [kernel] [k] mlx5e_xmit
0.95% [kernel] [k] page_frag_free
0.88% [kernel] [k] ip_forward
0.81% [kernel] [k] dev_hard_start_xmit
0.78% [kernel] [k] rcu_irq_exit
0.77% [kernel] [k] netif_skb_features
0.72% [kernel] [k] napi_complete_done
0.72% [kernel] [k] kmem_cache_alloc
0.68% [kernel] [k] validate_xmit_skb.isra.142
0.66% [kernel] [k] ip_rcv_core.isra.20.constprop.25
0.58% [kernel] [k] swiotlb_map_page
0.57% [kernel] [k] __qdisc_run
0.56% [kernel] [k] tasklet_action
0.54% [kernel] [k] __get_xps_queue_idx
0.54% [kernel] [k] inet_lookup_ifaddr_rcu
0.50% [kernel] [k] tcp4_gro_receive
0.49% [kernel] [k] skb_release_data
0.47% [kernel] [k] eth_type_trans
0.40% [kernel] [k] sch_direct_xmit
0.40% [kernel] [k] net_rx_action
0.39% [kernel] [k] __local_bh_enable_ip
And perf record/report
https://ufile.io/zguq0
So now I know what was causing the CPU load for processes like:
 2913 root      20   0       0      0      0 I  10.3  0.0   6:58.29 kworker/u112:1-
    7 root      20   0       0      0      0 I   8.6  0.0   6:17.18 kworker/u112:0-
10289 root      20   0       0      0      0 I   6.6  0.0   6:33.90 kworker/u112:4-
 2939 root      20   0       0      0      0 R   3.6  0.0   7:37.68 kworker/u112:2-
After disabling adaptive TX coalescing, all these processes are gone.
The load average dropped from 40 to 1.
Current coalescing settings:
ethtool -c enp175s0
Coalesce parameters for enp175s0:
Adaptive RX: off TX: off
stats-block-usecs: 0
sample-interval: 0
pkt-rate-low: 0
pkt-rate-high: 0
dmac: 32548
rx-usecs: 24
rx-frames: 256
rx-usecs-irq: 0
rx-frames-irq: 0
tx-usecs: 0
tx-frames: 64
tx-usecs-irq: 0
tx-frames-irq: 0
rx-usecs-low: 0
rx-frame-low: 0
tx-usecs-low: 0
tx-frame-low: 0
rx-usecs-high: 0
rx-frame-high: 0
tx-usecs-high: 0
tx-frame-high: 0
And currently, at these traffic levels, there is no packet loss (CPU usage
averages 60% across all 28 cores).
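For reference, the coalescing state above could be reproduced with a command along these lines (a sketch; option names as in ethtool(8), values copied from the output above, device name from this thread):

```shell
# Final coalescing setup from this message: adaptive moderation fully off,
# RX moderated by time and frame count, TX by frame count only (tx-usecs 0).
ethtool -C enp175s0 adaptive-rx off adaptive-tx off \
        rx-usecs 24 rx-frames 256 tx-usecs 0 tx-frames 64
```

Needs root and the actual mlx5 NIC present; shown only to make the working configuration reproducible.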
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-10 0:06 ` David Ahern
2018-11-10 13:18 ` Paweł Staszewski
@ 2018-11-19 21:59 ` David Ahern
2018-11-20 23:00 ` Paweł Staszewski
1 sibling, 1 reply; 77+ messages in thread
From: David Ahern @ 2018-11-19 21:59 UTC (permalink / raw)
To: Paweł Staszewski, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
On 11/9/18 5:06 PM, David Ahern wrote:
> On 11/9/18 9:21 AM, David Ahern wrote:
>>> Is it possible to add only counters from XDP for VLANs?
>>> This would help me with testing.
>> I will take a look today at adding counters that you can dump using
>> bpftool. It will be a temporary solution for this xdp program only.
>>
>
> Same tree, kernel-tables-wip-02 branch. Compile kernel and install.
> Compile samples as before.
new version:
https://github.com/dsahern/linux.git bpf/kernel-tables-wip-03
This one prototypes incrementing counters for VLAN devices (rx/tx,
packets and bytes). Counters for netdevices representing physical ports
should be managed by the NIC driver.
I will look at what can be done for packet captures (e.g., xdpdump and
https://github.com/facebookincubator/katran/tree/master/tools). Most
likely a project for next week.
* Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
2018-11-19 21:59 ` David Ahern
@ 2018-11-20 23:00 ` Paweł Staszewski
0 siblings, 0 replies; 77+ messages in thread
From: Paweł Staszewski @ 2018-11-20 23:00 UTC (permalink / raw)
To: David Ahern, Jesper Dangaard Brouer; +Cc: netdev, Yoel Caspersen
On 19.11.2018 at 22:59, David Ahern wrote:
> On 11/9/18 5:06 PM, David Ahern wrote:
>> On 11/9/18 9:21 AM, David Ahern wrote:
>>>> Is it possible to add only counters from XDP for VLANs?
>>>> This would help me with testing.
>>> I will take a look today at adding counters that you can dump using
>>> bpftool. It will be a temporary solution for this xdp program only.
>>>
>> Same tree, kernel-tables-wip-02 branch. Compile kernel and install.
>> Compile samples as before.
> new version:
> https://github.com/dsahern/linux.git bpf/kernel-tables-wip-03
>
> This one prototypes incrementing counters for VLAN devices (rx/tx,
> packets and bytes). Counters for netdevices representing physical ports
> should be managed by the NIC driver.
I will test it today.
Thanks
Paweł
>
> I will look at what can be done for packet captures (e.g., xdpdump and
> https://github.com/facebookincubator/katran/tree/master/tools). Most
> likely a project for next week.
>
end of thread, other threads:[~2018-11-21 9:32 UTC | newest]
Thread overview: 77+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-10-31 21:57 Kernel 4.19 network performance - forwarding/routing normal users traffic Paweł Staszewski
2018-10-31 22:09 ` Eric Dumazet
2018-10-31 22:20 ` Paweł Staszewski
2018-10-31 22:45 ` Paweł Staszewski
2018-11-01 9:22 ` Jesper Dangaard Brouer
2018-11-01 10:34 ` Paweł Staszewski
2018-11-01 15:27 ` Aaron Lu
2018-11-01 20:23 ` Saeed Mahameed
2018-11-02 5:23 ` Aaron Lu
2018-11-02 11:40 ` Jesper Dangaard Brouer
2018-11-02 14:20 ` Aaron Lu
2018-11-02 19:02 ` Paweł Staszewski
2018-11-03 0:16 ` Paweł Staszewski
2018-11-03 12:01 ` Paweł Staszewski
2018-11-03 12:58 ` Jesper Dangaard Brouer
2018-11-03 15:23 ` Paweł Staszewski
2018-11-03 15:43 ` Paweł Staszewski
2018-11-03 12:53 ` Jesper Dangaard Brouer
2018-11-05 6:28 ` Aaron Lu
2018-11-05 9:10 ` Jesper Dangaard Brouer
2018-11-05 8:42 ` Tariq Toukan
2018-11-05 8:48 ` Aaron Lu
2018-11-01 3:37 ` David Ahern
2018-11-01 10:55 ` Jesper Dangaard Brouer
2018-11-01 13:52 ` Paweł Staszewski
2018-11-01 17:23 ` David Ahern
2018-11-01 17:30 ` Paweł Staszewski
2018-11-03 17:32 ` David Ahern
2018-11-04 0:24 ` Paweł Staszewski
2018-11-05 20:17 ` Jesper Dangaard Brouer
2018-11-08 0:59 ` Paweł Staszewski
2018-11-08 1:13 ` Paweł Staszewski
2018-11-08 14:43 ` Paweł Staszewski
2018-11-07 21:06 ` David Ahern
2018-11-08 13:33 ` Paweł Staszewski
2018-11-08 16:06 ` David Ahern
2018-11-08 16:25 ` Paweł Staszewski
2018-11-08 16:27 ` Paweł Staszewski
2018-11-08 16:32 ` David Ahern
2018-11-08 17:30 ` Paweł Staszewski
2018-11-08 18:05 ` David Ahern
2018-11-09 0:40 ` Paweł Staszewski
2018-11-09 0:42 ` David Ahern
2018-11-09 4:52 ` Saeed Mahameed
2018-11-09 7:52 ` Jesper Dangaard Brouer
2018-11-09 9:56 ` Paweł Staszewski
2018-11-09 10:20 ` Paweł Staszewski
2018-11-09 16:21 ` David Ahern
2018-11-09 19:59 ` Paweł Staszewski
2018-11-10 0:06 ` David Ahern
2018-11-10 13:18 ` Paweł Staszewski
2018-11-10 14:56 ` David Ahern
2018-11-19 21:59 ` David Ahern
2018-11-20 23:00 ` Paweł Staszewski
2018-11-01 9:50 ` Saeed Mahameed
2018-11-01 11:09 ` Paweł Staszewski
2018-11-01 16:49 ` Paweł Staszewski
2018-11-01 20:37 ` Saeed Mahameed
2018-11-01 21:18 ` Paweł Staszewski
2018-11-01 21:24 ` Paweł Staszewski
2018-11-01 21:34 ` Paweł Staszewski
2018-11-03 0:18 ` Paweł Staszewski
2018-11-08 19:12 ` Paweł Staszewski
2018-11-09 22:20 ` Paweł Staszewski
2018-11-10 19:34 ` Jesper Dangaard Brouer
2018-11-10 19:49 ` Paweł Staszewski
2018-11-10 19:56 ` Paweł Staszewski
2018-11-10 22:06 ` Jesper Dangaard Brouer
2018-11-10 22:19 ` Paweł Staszewski
2018-11-11 8:03 ` Jesper Dangaard Brouer
2018-11-11 10:26 ` Paweł Staszewski
2018-11-10 20:02 ` Paweł Staszewski
2018-11-10 21:01 ` Jesper Dangaard Brouer
2018-11-10 21:53 ` Paweł Staszewski
2018-11-10 22:04 ` Paweł Staszewski
2018-11-11 8:56 ` Jesper Dangaard Brouer
2018-11-12 19:19 ` Paweł Staszewski