* ethtool isn't showing xdp statistics
@ 2019-06-10 9:55 İbrahim Ercan
2019-06-10 10:15 ` Jesper Dangaard Brouer
0 siblings, 1 reply; 10+ messages in thread
From: İbrahim Ercan @ 2019-06-10 9:55 UTC (permalink / raw)
To: xdp-newbies
Hi.
I'm trying to do an XDP performance test in a Red Hat-based environment.
To do so, I compiled kernel 5.0.13 and iproute 4.6.0.
Then I loaded the compiled code onto the interface with the command below.
#ip -force link set dev enp7s0f0 xdp object xdptest.o

After that, packets were dropped as expected, but I cannot see any
statistics with the ethtool command below.
#ethtool -S enp7s0f0 | grep xdp

The ethtool version is 4.8.
I did my test with this NIC:
Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

I wonder why I can't see the statistics. Did I miss something while
compiling the kernel or iproute? Should I compile ethtool too?
--
Ibrahim Ercan
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: ethtool isn't showing xdp statistics
2019-06-10 9:55 ethtool isn't showing xdp statistics İbrahim Ercan
@ 2019-06-10 10:15 ` Jesper Dangaard Brouer
2019-06-11 9:18 ` İbrahim Ercan
0 siblings, 1 reply; 10+ messages in thread
From: Jesper Dangaard Brouer @ 2019-06-10 10:15 UTC (permalink / raw)
To: İbrahim Ercan; +Cc: xdp-newbies, brouer, David Ahern
On Mon, 10 Jun 2019 12:55:07 +0300
İbrahim Ercan <ibrahim.metu@gmail.com> wrote:
> Hi.
> I'm trying to do an XDP performance test in a Red Hat-based environment.
> To do so, I compiled kernel 5.0.13 and iproute 4.6.0.
> Then I loaded the compiled code onto the interface with the command below.
> #ip -force link set dev enp7s0f0 xdp object xdptest.o
>
> After that, packets were dropped as expected, but I cannot see any
> statistics with the ethtool command below.
> #ethtool -S enp7s0f0 | grep xdp
>
> The ethtool version is 4.8.
> I did my test with this NIC:
> Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
>
> I wonder why I can't see the statistics. Did I miss something while
> compiling the kernel or iproute? Should I compile ethtool too?
You did nothing wrong. Consistency for statistics with XDP is a known
issue, see [1]. The behavior varies per driver, which obviously is bad
from a user perspective. Your NIC uses the ixgbe driver, which doesn't
have ethtool stats counters for XDP; instead it actually updates the
ifconfig counters correctly. For mlx5 it is the opposite. (P.s. I use
this[2] ethtool stats tool.)

We want to bring consistency to this area, but there are performance
concerns, as any stat counter adds overhead, and XDP is all about
maximum performance. Thus, we want this counter overhead to be
opt-in (that is, not on by default).

Currently you have to add the stats you want to the XDP/BPF program
itself. That is the current opt-in mechanism. To help you code this,
we have an example here[3].
[1] https://github.com/xdp-project/xdp-project/blob/master/xdp-project.org#consistency-for-statistics-with-xdp
[2] https://github.com/netoptimizer/network-testing/blob/master/bin/ethtool_stats.pl
[3] https://github.com/xdp-project/xdp-tutorial/blob/master/common/xdp_stats_kern.h
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: ethtool isn't showing xdp statistics
2019-06-10 10:15 ` Jesper Dangaard Brouer
@ 2019-06-11 9:18 ` İbrahim Ercan
2019-06-11 10:42 ` Jesper Dangaard Brouer
0 siblings, 1 reply; 10+ messages in thread
From: İbrahim Ercan @ 2019-06-11 9:18 UTC (permalink / raw)
To: Jesper Dangaard Brouer; +Cc: xdp-newbies, David Ahern
Thanks for the clarification.
I used the ethtool_stats.pl script and realized that the total of
dropped packets is the sum of fdir_miss and rx_missed_errors.
I observed that sometimes fdir_miss increases by 1-2M and
rx_missed_errors drops by about the same amount, but their total does
not change.
Show adapter(s) (enp7s0f0) statistics (ONLY that changed!)
Ethtool(enp7s0f0) stat: 153818 ( 153,818) <= fdir_miss /sec
Ethtool(enp7s0f0) stat: 9060176 ( 9,060,176) <= rx_bytes /sec
Ethtool(enp7s0f0) stat: 946625059 ( 946,625,059) <= rx_bytes_nic /sec
Ethtool(enp7s0f0) stat: 14694930 ( 14,694,930) <= rx_missed_errors /sec
As you can see, in my tests I dropped about 15M packets successfully.
After that I did some latency tests and got some bad results.
I loaded an XDP program that drops only UDP packets. I connected two
packet senders through a switch. From one of them I sent a UDP flood;
from the other I just sent pings and observed the latency.
Here are the results.

Latency with no attack:
# ping -c 10 10.0.0.213
PING 10.0.0.213 (10.0.0.213) 56(84) bytes of data.
64 bytes from 10.0.0.213: icmp_seq=1 ttl=64 time=0.794 ms
64 bytes from 10.0.0.213: icmp_seq=2 ttl=64 time=0.435 ms
64 bytes from 10.0.0.213: icmp_seq=3 ttl=64 time=0.394 ms
64 bytes from 10.0.0.213: icmp_seq=4 ttl=64 time=0.387 ms
64 bytes from 10.0.0.213: icmp_seq=5 ttl=64 time=0.479 ms
64 bytes from 10.0.0.213: icmp_seq=6 ttl=64 time=0.487 ms
64 bytes from 10.0.0.213: icmp_seq=7 ttl=64 time=0.458 ms
64 bytes from 10.0.0.213: icmp_seq=8 ttl=64 time=0.536 ms
64 bytes from 10.0.0.213: icmp_seq=9 ttl=64 time=0.499 ms
64 bytes from 10.0.0.213: icmp_seq=10 ttl=64 time=0.391 ms
--- 10.0.0.213 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9202ms
rtt min/avg/max/mdev = 0.387/0.486/0.794/0.113 ms
Latency under a 150k pps attack:
# ping -c 10 10.0.0.213
PING 10.0.0.213 (10.0.0.213) 56(84) bytes of data.
64 bytes from 10.0.0.213: icmp_seq=1 ttl=64 time=43.4 ms
64 bytes from 10.0.0.213: icmp_seq=2 ttl=64 time=8.26 ms
64 bytes from 10.0.0.213: icmp_seq=4 ttl=64 time=47.1 ms
64 bytes from 10.0.0.213: icmp_seq=5 ttl=64 time=2.51 ms
64 bytes from 10.0.0.213: icmp_seq=6 ttl=64 time=1.43 ms
64 bytes from 10.0.0.213: icmp_seq=7 ttl=64 time=40.6 ms
64 bytes from 10.0.0.213: icmp_seq=8 ttl=64 time=44.2 ms
64 bytes from 10.0.0.213: icmp_seq=9 ttl=64 time=38.0 ms
64 bytes from 10.0.0.213: icmp_seq=10 ttl=64 time=50.5 ms
--- 10.0.0.213 ping statistics ---
10 packets transmitted, 9 received, 10% packet loss, time 9060ms
Latency under an 800k pps attack:
# ping -c 10 10.0.0.213
PING 10.0.0.213 (10.0.0.213) 56(84) bytes of data.
64 bytes from 10.0.0.213: icmp_seq=4 ttl=64 time=0.395 ms
64 bytes from 10.0.0.213: icmp_seq=5 ttl=64 time=0.359 ms
64 bytes from 10.0.0.213: icmp_seq=8 ttl=64 time=30.3 ms
--- 10.0.0.213 ping statistics ---
10 packets transmitted, 3 received, 70% packet loss, time 9246ms
rtt min/avg/max/mdev = 0.359/10.376/30.376/14.142 ms
Latency under a 1.6M pps attack:
# ping -c 10 10.0.0.213
PING 10.0.0.213 (10.0.0.213) 56(84) bytes of data.
64 bytes from 10.0.0.213: icmp_seq=2 ttl=64 time=34.7 ms
--- 10.0.0.213 ping statistics ---
10 packets transmitted, 1 received, 90% packet loss, time 9205ms
rtt min/avg/max/mdev = 34.756/34.756/34.756/0.000 ms
Latency under a 2.4M pps attack:
# ping -c 10 10.0.0.213
PING 10.0.0.213 (10.0.0.213) 56(84) bytes of data.
From 10.0.0.214 icmp_seq=10 Destination Host Unreachable
--- 10.0.0.213 ping statistics ---
10 packets transmitted, 0 received, +1 errors, 100% packet loss, time 9229ms
After that, all pings stop, as you can see. I don't know how to debug
this latency. I believe I need to do some tuning, but I don't know
what it is. I tried enabling the JIT, but nothing changed.
If XDP causes this latency, then it is useless for me. Can you help me
understand its cause?
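(For reference: on these kernels the BPF JIT is normally toggled via the net.core.bpf_jit_enable sysctl, since the config above has CONFIG_BPF_JIT=y but not CONFIG_BPF_JIT_ALWAYS_ON; the commands below are a sketch of the standard knob.)

```
# Enable the BPF JIT compiler (0 = off, 1 = on, 2 = on + debug output)
sysctl -w net.core.bpf_jit_enable=1
# Verify the current setting
sysctl net.core.bpf_jit_enable
```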
On Mon, Jun 10, 2019 at 1:15 PM Jesper Dangaard Brouer
<brouer@redhat.com> wrote:
>
> On Mon, 10 Jun 2019 12:55:07 +0300
> İbrahim Ercan <ibrahim.metu@gmail.com> wrote:
>
> > [...]
>
> You did nothing wrong. Consistency for statistics with XDP is a known
> issue, see [1]. The behavior varies per driver, which obviously is bad
> from a user perspective. Your NIC uses the ixgbe driver, which doesn't
> have ethtool stats counters for XDP; instead it actually updates the
> ifconfig counters correctly. For mlx5 it is the opposite. (P.s. I use
> this[2] ethtool stats tool.)
>
> We want to bring consistency to this area, but there are performance
> concerns, as any stat counter adds overhead, and XDP is all about
> maximum performance. Thus, we want this counter overhead to be
> opt-in (that is, not on by default).
>
> Currently you have to add the stats you want to the XDP/BPF program
> itself. That is the current opt-in mechanism. To help you code this,
> we have an example here[3].
>
>
> [1] https://github.com/xdp-project/xdp-project/blob/master/xdp-project.org#consistency-for-statistics-with-xdp
> [2] https://github.com/netoptimizer/network-testing/blob/master/bin/ethtool_stats.pl
> [3] https://github.com/xdp-project/xdp-tutorial/blob/master/common/xdp_stats_kern.h
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Principal Kernel Engineer at Red Hat
> LinkedIn: http://www.linkedin.com/in/brouer
>
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: ethtool isn't showing xdp statistics
2019-06-11 9:18 ` İbrahim Ercan
@ 2019-06-11 10:42 ` Jesper Dangaard Brouer
2019-06-11 13:18 ` İbrahim Ercan
0 siblings, 1 reply; 10+ messages in thread
From: Jesper Dangaard Brouer @ 2019-06-11 10:42 UTC (permalink / raw)
To: İbrahim Ercan; +Cc: xdp-newbies, David Ahern, brouer
On Tue, 11 Jun 2019 12:18:44 +0300
İbrahim Ercan <ibrahim.metu@gmail.com> wrote:
> Thanks for the clarification.
> I used the ethtool_stats.pl script and realized that the total of
> dropped packets is the sum of fdir_miss and rx_missed_errors.
> I observed that sometimes fdir_miss increases by 1-2M and
> rx_missed_errors drops by about the same amount, but their total does
> not change.
>
> Show adapter(s) (enp7s0f0) statistics (ONLY that changed!)
> Ethtool(enp7s0f0) stat: 153818 ( 153,818) <= fdir_miss /sec
> Ethtool(enp7s0f0) stat: 9060176 ( 9,060,176) <= rx_bytes /sec
> Ethtool(enp7s0f0) stat: 946625059 ( 946,625,059) <= rx_bytes_nic /sec
> Ethtool(enp7s0f0) stat: 14694930 ( 14,694,930) <= rx_missed_errors /sec
>
> As you can see, in my tests I dropped about 15M packets successfully.
Sorry, but your output with 14,694,930 rx_missed_errors /sec shows
that something is *very* wrong with your setup. The rx_missed_errors
counter (for ixgbe) is not your XDP_DROP number; it is a hardware
Missed Packet Counter (the MPC register). So the packets are being
dropped by the HW.
> After that I did some latency tests and got some bad results.
> I loaded an XDP program that drops only UDP packets. I connected two
> packet senders through a switch. From one of them I sent a UDP flood;
> from the other I just sent pings and observed the latency.
> Here are the results.
[cut]
We first need to figure out what is wrong with your setup, since the
NIC hardware is dropping packets.
Here is output from my testlab, so you have a baseline of what numbers
to expect.
XDP dropping packets via:
sudo ./xdp_rxq_info --dev ixgbe2 --action XDP_DROP
Running XDP on dev:ixgbe2 (ifindex:9) action:XDP_DROP options:no_touch
XDP stats CPU pps issue-pps
XDP-RX CPU 4 14,705,913 0
XDP-RX CPU total 14,705,913
RXQ stats RXQ:CPU pps issue-pps
rx_queue_index 1:4 14,705,882 0
rx_queue_index 1:sum 14,705,882
My ethtool_stats.pl output:
Show adapter(s) (ixgbe2) statistics (ONLY that changed!)
Ethtool(ixgbe2 ) stat: 15364178 ( 15,364,178) <= fdir_miss /sec
Ethtool(ixgbe2 ) stat: 881716018 ( 881,716,018) <= rx_bytes /sec
Ethtool(ixgbe2 ) stat: 952151488 ( 952,151,488) <= rx_bytes_nic /sec
Ethtool(ixgbe2 ) stat: 182070 ( 182,070) <= rx_missed_errors /sec
Ethtool(ixgbe2 ) stat: 14695267 ( 14,695,267) <= rx_packets /sec
Ethtool(ixgbe2 ) stat: 14695291 ( 14,695,291) <= rx_pkts_nic /sec
Ethtool(ixgbe2 ) stat: 881714129 ( 881,714,129) <= rx_queue_1_bytes /sec
Ethtool(ixgbe2 ) stat: 14695235 ( 14,695,235) <= rx_queue_1_packets /sec
Ethtool(ixgbe2 ) stat: 596 ( 596) <= tx_flow_control_xoff /sec
(It even shows that I forgot to disable Ethernet flow control, via
tx_flow_control_xoff).
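If you want to check or disable flow control yourself, ethtool's pause options do that (interface name as in my lab; a sketch, assuming a standard ethtool):

```
# Show current pause (flow control) parameters
ethtool -a ixgbe2
# Disable RX/TX Ethernet flow control on the interface
ethtool -A ixgbe2 autoneg off rx off tx off
```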
--Jesper
> [...]
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: ethtool isn't showing xdp statistics
2019-06-11 10:42 ` Jesper Dangaard Brouer
@ 2019-06-11 13:18 ` İbrahim Ercan
2019-06-11 14:45 ` Jesper Dangaard Brouer
0 siblings, 1 reply; 10+ messages in thread
From: İbrahim Ercan @ 2019-06-11 13:18 UTC (permalink / raw)
To: Jesper Dangaard Brouer; +Cc: xdp-newbies, David Ahern
Is there any other package or library that I should upgrade besides
the kernel and iproute?
I used this code example:
https://gist.github.com/fntlnz/f6638d59e0e39f0993219684d9bf57d3
Compiled it as
clang -O2 -Wall -target bpf -c dropudp.c -o dropudp.o
and loaded it with
./ip -force link set dev enp7s0f0 xdp object dropudp.o sec prog
I also realized that after loading the XDP code, the network goes down
for about 5 seconds. Is that normal?
I'm using a bridged topology. I don't know whether it is important or not.
# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.00900b3b696c yes enp7s0f0
enp7s0f1
And these are my related kernel config parameters:
~# egrep -ie "xdp|bpf" /boot/config-5.0.13-1.lbr.x86_64
# CONFIG_CGROUP_BPF is not set
CONFIG_BPF=y
CONFIG_BPF_SYSCALL=y
# CONFIG_BPF_JIT_ALWAYS_ON is not set
CONFIG_XDP_SOCKETS=y
CONFIG_NETFILTER_XT_MATCH_BPF=m
# CONFIG_BPFILTER is not set
# CONFIG_NET_CLS_BPF is not set
# CONFIG_NET_ACT_BPF is not set
CONFIG_BPF_JIT=y
CONFIG_HAVE_EBPF_JIT=y
CONFIG_BPF_EVENTS=y
# CONFIG_BPF_KPROBE_OVERRIDE is not set
# CONFIG_TEST_BPF is not set
My clang version:
# clang --version
clang version 5.0.1 (tags/RELEASE_501/final)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /opt/rh/llvm-toolset-7/root/usr/bin
I really wonder what is wrong with my configuration :/
On Tue, Jun 11, 2019 at 1:46 PM Jesper Dangaard Brouer
<brouer@redhat.com> wrote:
>
> On Tue, 11 Jun 2019 12:18:44 +0300
> İbrahim Ercan <ibrahim.metu@gmail.com> wrote:
>
> > Thanks for the clarification.
> > I used the ethtool_stats.pl script and realized that the total of
> > dropped packets is the sum of fdir_miss and rx_missed_errors.
> > I observed that sometimes fdir_miss increases by 1-2M and
> > rx_missed_errors drops by about the same amount, but their total does
> > not change.
> >
> > Show adapter(s) (enp7s0f0) statistics (ONLY that changed!)
> > Ethtool(enp7s0f0) stat: 153818 ( 153,818) <= fdir_miss /sec
> > Ethtool(enp7s0f0) stat: 9060176 ( 9,060,176) <= rx_bytes /sec
> > Ethtool(enp7s0f0) stat: 946625059 ( 946,625,059) <= rx_bytes_nic /sec
> > Ethtool(enp7s0f0) stat: 14694930 ( 14,694,930) <= rx_missed_errors /sec
> >
> > As you can see, in my tests I dropped about 15M packets successfully.
>
> Sorry, but your output with 14,694,930 rx_missed_errors /sec shows
> that something is *very* wrong with your setup. The rx_missed_errors
> counter (for ixgbe) is not your XDP_DROP number; it is a hardware
> Missed Packet Counter (the MPC register). So the packets are being
> dropped by the HW.
>
>
> > After that I did some latency tests and got some bad results.
> > I loaded an XDP program that drops only UDP packets. I connected two
> > packet senders through a switch. From one of them I sent a UDP flood;
> > from the other I just sent pings and observed the latency.
> > Here are the results.
>
> [cut]
> We first need to figure out what is wrong with your setup, since the
> NIC hardware is dropping packets.
>
> Here is output from my testlab, so you have a baseline of what numbers
> to expect.
>
> XDP dropping packets via:
>
> sudo ./xdp_rxq_info --dev ixgbe2 --action XDP_DROP
>
> Running XDP on dev:ixgbe2 (ifindex:9) action:XDP_DROP options:no_touch
> XDP stats CPU pps issue-pps
> XDP-RX CPU 4 14,705,913 0
> XDP-RX CPU total 14,705,913
>
> RXQ stats RXQ:CPU pps issue-pps
> rx_queue_index 1:4 14,705,882 0
> rx_queue_index 1:sum 14,705,882
>
>
> My ethtool_stats.pl output:
>
> Show adapter(s) (ixgbe2) statistics (ONLY that changed!)
> Ethtool(ixgbe2 ) stat: 15364178 ( 15,364,178) <= fdir_miss /sec
> Ethtool(ixgbe2 ) stat: 881716018 ( 881,716,018) <= rx_bytes /sec
> Ethtool(ixgbe2 ) stat: 952151488 ( 952,151,488) <= rx_bytes_nic /sec
> Ethtool(ixgbe2 ) stat: 182070 ( 182,070) <= rx_missed_errors /sec
> Ethtool(ixgbe2 ) stat: 14695267 ( 14,695,267) <= rx_packets /sec
> Ethtool(ixgbe2 ) stat: 14695291 ( 14,695,291) <= rx_pkts_nic /sec
> Ethtool(ixgbe2 ) stat: 881714129 ( 881,714,129) <= rx_queue_1_bytes /sec
> Ethtool(ixgbe2 ) stat: 14695235 ( 14,695,235) <= rx_queue_1_packets /sec
> Ethtool(ixgbe2 ) stat: 596 ( 596) <= tx_flow_control_xoff /sec
>
> (It even shows that I forgot to disable Ethernet flow control, via
> tx_flow_control_xoff).
>
> --Jesper
>
>
> > [...]
>
>
>
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Principal Kernel Engineer at Red Hat
> LinkedIn: http://www.linkedin.com/in/brouer
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: ethtool isn't showing xdp statistics
2019-06-11 13:18 ` İbrahim Ercan
@ 2019-06-11 14:45 ` Jesper Dangaard Brouer
2019-06-12 6:57 ` İbrahim Ercan
0 siblings, 1 reply; 10+ messages in thread
From: Jesper Dangaard Brouer @ 2019-06-11 14:45 UTC (permalink / raw)
To: İbrahim Ercan; +Cc: xdp-newbies, David Ahern
On Tue, 11 Jun 2019 16:18:17 +0300
İbrahim Ercan <ibrahim.metu@gmail.com> wrote:
> and loaded it with
> ./ip -force link set dev enp7s0f0 xdp object dropudp.o sec prog
>
> I also realized that after loading the XDP code, the network goes down
> for about 5 seconds. Is that normal?
>
> I'm using a bridged topology. I don't know whether it is important or not.
>
> # brctl show
> bridge name bridge id STP enabled interfaces
> br0 8000.00900b3b696c yes enp7s0f0 enp7s0f1
I would recommend removing the bridge setup to isolate the issue, as
this could be the cause. XDP doesn't cooperate with the bridge code;
it works at a layer before the bridge.

For the ixgbe driver, loading an XDP program does a full link down/up
(to reconfigure all the NIC queues), which is likely why you see this
5-second issue, as you have enabled STP on your bridge. (Note,
replacing an XDP prog with another XDP prog does not require this link
down/up.)
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: ethtool isn't showing xdp statistics
2019-06-11 14:45 ` Jesper Dangaard Brouer
@ 2019-06-12 6:57 ` İbrahim Ercan
2019-06-12 7:53 ` Jesper Dangaard Brouer
0 siblings, 1 reply; 10+ messages in thread
From: İbrahim Ercan @ 2019-06-12 6:57 UTC (permalink / raw)
To: Jesper Dangaard Brouer; +Cc: xdp-newbies, David Ahern
I removed the bridge and did the same tests again. Unfortunately the result is the same :/
On Tue, Jun 11, 2019 at 5:45 PM Jesper Dangaard Brouer
<brouer@redhat.com> wrote:
>
> I would recommend removing the bridge setup to isolate the issue, as
> this could be the cause. XDP doesn't cooperate with the bridge code;
> it works at a layer before the bridge.
>
> For the ixgbe driver, loading an XDP program does a full link down/up
> (to reconfigure all the NIC queues), which is likely why you see this
> 5-second issue, as you have enabled STP on your bridge. (Note,
> replacing an XDP prog with another XDP prog does not require this link
> down/up.)
>
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Principal Kernel Engineer at Red Hat
> LinkedIn: http://www.linkedin.com/in/brouer
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: ethtool isn't showing xdp statistics
2019-06-12 6:57 ` İbrahim Ercan
@ 2019-06-12 7:53 ` Jesper Dangaard Brouer
2019-06-12 8:59 ` İbrahim Ercan
0 siblings, 1 reply; 10+ messages in thread
From: Jesper Dangaard Brouer @ 2019-06-12 7:53 UTC (permalink / raw)
To: İbrahim Ercan; +Cc: xdp-newbies, David Ahern, brouer
On Wed, 12 Jun 2019 09:57:02 +0300 İbrahim Ercan <ibrahim.metu@gmail.com> wrote:
> I removed the bridge and did the same tests again. Unfortunately the
> result is the same :/
I sort of expected that, as the ethtool "rx_missed_errors" counter
says that packets are dropped inside the NIC, before reaching Linux.
Something more fundamental is wrong with your setup.
You mentioned there was a switch between the machines in your lab. One
possibility is that the switch is somehow corrupting the frames before
they reach the NIC, e.g. in these overload DDoS scenarios. Try to
remove the switch from the equation (by directly connecting the
machines back-to-back), to identify where the pitfall is...
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: ethtool isn't showing xdp statistics
2019-06-12 7:53 ` Jesper Dangaard Brouer
@ 2019-06-12 8:59 ` İbrahim Ercan
2019-06-13 13:02 ` İbrahim Ercan
0 siblings, 1 reply; 10+ messages in thread
From: İbrahim Ercan @ 2019-06-12 8:59 UTC (permalink / raw)
To: Jesper Dangaard Brouer; +Cc: xdp-newbies, David Ahern
I removed the switch but still see rx_missed_errors.
Ethtool(enp7s0f0) stat: 374068 ( 374,068) <= fdir_miss /sec
Ethtool(enp7s0f0) stat: 22269145 ( 22,269,145) <= rx_bytes /sec
Ethtool(enp7s0f0) stat: 566912795 ( 566,912,795) <= rx_bytes_nic /sec
Ethtool(enp7s0f0) stat: 8568315 ( 8,568,315) <= rx_missed_errors /sec
Ethtool(enp7s0f0) stat: 371152 ( 371,152) <= rx_packets /sec
Ethtool(enp7s0f0) stat: 297774 ( 297,774) <= rx_pkts_nic /sec
I also tried it with another copper NIC (an Intel I350 Gigabit Network
Connection) on the same firewall. I loaded the same XDP code and sent
about 1.5M pps. Here I didn't see any errors:
Ethtool(enp9s0f0) stat: 95234803 ( 95,234,803) <= rx_bytes /sec
Ethtool(enp9s0f0) stat: 95234803 ( 95,234,803) <= rx_long_byte_count /sec
Ethtool(enp9s0f0) stat: 1488043 ( 1,488,043) <= rx_packets /sec
Then I tried changing the SFP module (with the same model) but got the
same results. I compared the 'ethtool -m' output of the attacker and
the firewall, and noticed that the signal powers differ.

On the attacker side:
# ethtool -m enp9s0f1
Identifier : 0x03 (SFP)
Extended identifier : 0x04 (GBIC/SFP defined by 2-wire interface ID)
Connector : 0x07 (LC)
Transceiver codes : 0x10 0x00 0x00 0x00 0x00 0x00 0x00 0x00
Transceiver type : 10G Ethernet: 10G Base-SR
Encoding : 0x06 (64B/66B)
BR, Nominal : 10300MBd
Rate identifier : 0x00 (unspecified)
Length (SMF,km) : 0km
Length (SMF) : 0m
Length (50um) : 80m
Length (62.5um) : 30m
Length (Copper) : 0m
Length (OM3) : 300m
Laser wavelength : 850nm
Vendor name : FINISAR CORP.
Vendor OUI : 00:90:65
Vendor PN : FTLX8571D3BCL
Vendor rev : A
Optical diagnostics support : Yes
Laser bias current : 18.160 mA
Laser output power : 0.5945 mW / -2.26 dBm
Receiver signal average optical power : 0.6328 mW / -1.99 dBm
Module temperature : 35.47 degrees C / 95.84 degrees F
Module voltage : 3.3568 V
Alarm/warning flags implemented : Yes
Laser bias current high alarm : Off
Laser bias current low alarm : Off
Laser bias current high warning : Off
Laser bias current low warning : Off
Laser output power high alarm : Off
Laser output power low alarm : Off
Laser output power high warning : Off
Laser output power low warning : Off
Module temperature high alarm : Off
Module temperature low alarm : Off
Module temperature high warning : Off
Module temperature low warning : Off
Module voltage high alarm : Off
Module voltage low alarm : Off
Module voltage high warning : Off
Module voltage low warning : Off
Laser rx power high alarm : Off
Laser rx power low alarm : Off
Laser rx power high warning : Off
Laser rx power low warning : Off
Laser bias current high alarm threshold : 11.800 mA
Laser bias current low alarm threshold : 4.000 mA
Laser bias current high warning threshold : 10.800 mA
Laser bias current low warning threshold : 5.000 mA
Laser output power high alarm threshold : 0.8318 mW / -0.80 dBm
Laser output power low alarm threshold : 0.2512 mW / -6.00 dBm
Laser output power high warning threshold : 0.6607 mW / -1.80 dBm
Laser output power low warning threshold : 0.3162 mW / -5.00 dBm
Module temperature high alarm threshold : 78.00 degrees C / 172.40 degrees F
Module temperature low alarm threshold : -13.00 degrees C / 8.60 degrees F
Module temperature high warning threshold : 73.00 degrees C / 163.40 degrees F
Module temperature low warning threshold : -8.00 degrees C / 17.60 degrees F
Module voltage high alarm threshold : 3.7000 V
Module voltage low alarm threshold : 2.9000 V
Module voltage high warning threshold : 3.6000 V
Module voltage low warning threshold : 3.0000 V
Laser rx power high alarm threshold : 1.0000 mW / 0.00 dBm
Laser rx power low alarm threshold : 0.0100 mW / -20.00 dBm
Laser rx power high warning threshold : 0.7943 mW / -1.00 dBm
Laser rx power low warning threshold : 0.0158 mW / -18.01 dBm
On the firewall side (it didn't change after I replaced the SFP with another):
# ethtool -m enp7s0f0
Identifier : 0x03 (SFP)
Extended identifier : 0x04 (GBIC/SFP defined by 2-wire interface ID)
Connector : 0x07 (LC)
Transceiver codes : 0x10 0x00 0x00 0x00 0x00 0x00 0x00 0x00
Transceiver type : 10G Ethernet: 10G Base-SR
Encoding : 0x06 (64B/66B)
BR, Nominal : 10300MBd
Rate identifier : 0x00 (unspecified)
Length (SMF,km) : 0km
Length (SMF) : 0m
Length (50um) : 80m
Length (62.5um) : 30m
Length (Copper) : 0m
Length (OM3) : 300m
Laser wavelength : 850nm
Vendor name : FINISAR CORP.
Vendor OUI : 00:90:65
Vendor PN : FTLX8571D3BCL
Vendor rev : A
Option values : 0x00 0x1a
Option : RX_LOS implemented
Option : TX_FAULT implemented
Option : TX_DISABLE implemented
BR margin, max : 0%
BR margin, min : 0%
Vendor SN : AP40XS0
Date code : 130202
Optical diagnostics support : Yes
Laser bias current : 7.762 mA
Laser output power : 0.6590 mW / -1.81 dBm
Receiver signal average optical power : 0.4653 mW / -3.32 dBm
Module temperature : 30.99 degrees C / 87.78 degrees F
Module voltage : 3.3468 V
Alarm/warning flags implemented : Yes
Laser bias current high alarm : Off
Laser bias current low alarm : Off
Laser bias current high warning : Off
Laser bias current low warning : Off
Laser output power high alarm : Off
Laser output power low alarm : Off
Laser output power high warning : Off
Laser output power low warning : Off
Module temperature high alarm : Off
Module temperature low alarm : Off
Module temperature high warning : Off
Module temperature low warning : Off
Module voltage high alarm : Off
Module voltage low alarm : Off
Module voltage high warning : Off
Module voltage low warning : Off
Laser rx power high alarm : Off
Laser rx power low alarm : Off
Laser rx power high warning : Off
Laser rx power low warning : Off
Laser bias current high alarm threshold : 13.200 mA
Laser bias current low alarm threshold : 4.000 mA
Laser bias current high warning threshold : 12.600 mA
Laser bias current low warning threshold : 5.000 mA
Laser output power high alarm threshold : 1.0000 mW / 0.00 dBm
Laser output power low alarm threshold : 0.2512 mW / -6.00 dBm
Laser output power high warning threshold : 0.7943 mW / -1.00 dBm
Laser output power low warning threshold : 0.3162 mW / -5.00 dBm
Module temperature high alarm threshold : 78.00 degrees C / 172.40 degrees F
Module temperature low alarm threshold : -13.00 degrees C / 8.60 degrees F
Module temperature high warning threshold : 73.00 degrees C / 163.40 degrees F
Module temperature low warning threshold : -8.00 degrees C / 17.60 degrees F
Module voltage high alarm threshold : 3.7000 V
Module voltage low alarm threshold : 2.9000 V
Module voltage high warning threshold : 3.6000 V
Module voltage low warning threshold : 3.0000 V
Laser rx power high alarm threshold : 1.0000 mW / 0.00 dBm
Laser rx power low alarm threshold : 0.0100 mW / -20.00 dBm
Laser rx power high warning threshold : 0.7943 mW / -1.00 dBm
Laser rx power low warning threshold : 0.0158 mW / -18.01 dBm
Could that signal power difference be the problem?
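For what it's worth, a quick back-of-the-envelope check (values taken from the two 'ethtool -m' dumps above) suggests both receive powers sit well above the module's own low-power thresholds, so the difference alone looks harmless:

```shell
# mW -> dBm: 10 * log10(mw); awk does the math since sh has no floats
mw_to_dbm() { awk -v mw="$1" 'BEGIN { printf "%.2f", 10 * log(mw) / log(10) }'; }

# Receiver signal average optical power from the two dumps
echo "attacker rx: $(mw_to_dbm 0.6328) dBm"   # reported as -1.99 dBm above
echo "firewall rx: $(mw_to_dbm 0.4653) dBm"   # reported as -3.32 dBm above

# Both are ~15 dB above the module's rx-power low-warning threshold (-18.01 dBm),
# so the optical link budget should be fine on either side.
```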
On Wed, Jun 12, 2019 at 10:53 AM Jesper Dangaard Brouer
<brouer@redhat.com> wrote:
>
>
> On Wed, 12 Jun 2019 09:57:02 +0300 İbrahim Ercan <ibrahim.metu@gmail.com> wrote:
>
> > I removed the bridge and ran the same tests again. Unfortunately the
> > result is the same :/
>
> I sort of expected that, as the ethtool "rx_missed_errors" counter
> says that packets are dropped inside the NIC, before reaching Linux.
> Something more fundamental is wrong with your setup.
>
> You mentioned there was a switch between the machines in your lab. One
> possibility is that the switch is somehow corrupting the frames before
> they reach the NIC, e.g. in these overload DDoS scenarios. Try to
> remove the switch from the equation (by directly connecting the
> machines back-to-back), to identify where the pitfall is...
>
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Principal Kernel Engineer at Red Hat
> LinkedIn: http://www.linkedin.com/in/brouer
* Re: ethtool isn't showing xdp statistics
2019-06-12 8:59 ` İbrahim Ercan
@ 2019-06-13 13:02 ` İbrahim Ercan
0 siblings, 0 replies; 10+ messages in thread
From: İbrahim Ercan @ 2019-06-13 13:02 UTC (permalink / raw)
To: Jesper Dangaard Brouer; +Cc: xdp-newbies, David Ahern
Hi.
I ran the same tests on 2 other machines. One of them has the same
board, CPU and NIC; the second one has only the same NIC, with a
different board and CPU.
On the first machine (same as the initial one) I got the same results
(too many rx_missed_errors). On the second machine I got the output as
it should be (almost no rx_missed_errors), with zero latency even at
12M pps.
That tells me there is an issue with the board or CPU used in my first
test, but I'm not sure how to debug it. Here is the device
information.
The NICs are the same on all devices, as given before:
Intel Corporation 82599ES 10-Gigabit SFI/SFP+
Failed (initial) device:
Base Board Information
Manufacturer: Intel
Product Name: Greencity
CPU name: Intel(R) Xeon(R) CPU E5645 @ 2.40GHz
Second (successful) device:
Base Board Information
Manufacturer: INTEL Corporation
Product Name: DENLOW_WS
CPU name: Intel(R) Xeon(R) CPU E3-1275 v3 @ 3.50GHz
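Not sure it helps, but since the NICs are identical, one thing worth comparing between the two boards is the negotiated PCIe link of the 82599 (it wants a full x8 Gen2 slot; a link trained at x4 or at 2.5GT/s can show up exactly as rx_missed_errors under load). A rough sketch of the check, using a sample 'ethtool -i' output so the extraction step is visible even off-box:

```shell
# Extract the NIC's PCI address from the bus-info line of 'ethtool -i'
# (sample text stands in for the real command output here)
sample='driver: ixgbe
bus-info: 0000:07:00.0'
ADDR=$(printf '%s\n' "$sample" | awk '/bus-info/ {print $2}')
echo "PCI address: $ADDR"

# On the real machines (as root), compare capable vs. negotiated link:
#   ADDR=$(ethtool -i enp7s0f0 | awk '/bus-info/ {print $2}')
#   lspci -s "$ADDR" -vv | grep -E 'LnkCap:|LnkSta:'
# A x8-capable card that trained at x4, or Gen2 negotiated at Gen1,
# would explain drops on one board but not the other.
```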
On Wed, Jun 12, 2019 at 11:59 AM İbrahim Ercan <ibrahim.metu@gmail.com> wrote:
>
> I removed the switch but still see rx_missed_errors.
>
> Ethtool(enp7s0f0) stat: 374068 ( 374,068) <= fdir_miss /sec
> Ethtool(enp7s0f0) stat: 22269145 ( 22,269,145) <= rx_bytes /sec
> Ethtool(enp7s0f0) stat: 566912795 ( 566,912,795) <= rx_bytes_nic /sec
> Ethtool(enp7s0f0) stat: 8568315 ( 8,568,315) <= rx_missed_errors /sec
> Ethtool(enp7s0f0) stat: 371152 ( 371,152) <= rx_packets /sec
> Ethtool(enp7s0f0) stat: 297774 ( 297,774) <= rx_pkts_nic /sec
>
> I also tried it with another copper NIC (Intel I350 Gigabit Network
> Connection) on the same firewall. I loaded the same XDP code and sent
> about 1.5M pps. Here I didn't see any errors:
> Ethtool(enp9s0f0) stat: 95234803 ( 95,234,803) <= rx_bytes /sec
> Ethtool(enp9s0f0) stat: 95234803 ( 95,234,803) <=
> rx_long_byte_count /sec
> Ethtool(enp9s0f0) stat: 1488043 ( 1,488,043) <= rx_packets /sec
>
> Then I tried changing the SFP module (for another of the same model)
> but got the same results. I compared the 'ethtool -m' output of the
> attacker and the firewall, and noticed the signal powers differ:
>
> on the attacker side:
> # ethtool -m enp9s0f1
> Identifier : 0x03 (SFP)
> Extended identifier : 0x04 (GBIC/SFP
> defined by 2-wire interface ID)
> Connector : 0x07 (LC)
> Transceiver codes : 0x10 0x00 0x00
> 0x00 0x00 0x00 0x00 0x00
> Transceiver type : 10G Ethernet: 10G Base-SR
> Encoding : 0x06 (64B/66B)
> BR, Nominal : 10300MBd
> Rate identifier : 0x00 (unspecified)
> Length (SMF,km) : 0km
> Length (SMF) : 0m
> Length (50um) : 80m
> Length (62.5um) : 30m
> Length (Copper) : 0m
> Length (OM3) : 300m
> Laser wavelength : 850nm
> Vendor name : FINISAR CORP.
> Vendor OUI : 00:90:65
> Vendor PN : FTLX8571D3BCL
> Vendor rev : A
> Optical diagnostics support : Yes
> Laser bias current : 18.160 mA
> Laser output power : 0.5945 mW / -2.26 dBm
> Receiver signal average optical power : 0.6328 mW / -1.99 dBm
> Module temperature : 35.47 degrees C /
> 95.84 degrees F
> Module voltage : 3.3568 V
> Alarm/warning flags implemented : Yes
> Laser bias current high alarm : Off
> Laser bias current low alarm : Off
> Laser bias current high warning : Off
> Laser bias current low warning : Off
> Laser output power high alarm : Off
> Laser output power low alarm : Off
> Laser output power high warning : Off
> Laser output power low warning : Off
> Module temperature high alarm : Off
> Module temperature low alarm : Off
> Module temperature high warning : Off
> Module temperature low warning : Off
> Module voltage high alarm : Off
> Module voltage low alarm : Off
> Module voltage high warning : Off
> Module voltage low warning : Off
> Laser rx power high alarm : Off
> Laser rx power low alarm : Off
> Laser rx power high warning : Off
> Laser rx power low warning : Off
> Laser bias current high alarm threshold : 11.800 mA
> Laser bias current low alarm threshold : 4.000 mA
> Laser bias current high warning threshold : 10.800 mA
> Laser bias current low warning threshold : 5.000 mA
> Laser output power high alarm threshold : 0.8318 mW / -0.80 dBm
> Laser output power low alarm threshold : 0.2512 mW / -6.00 dBm
> Laser output power high warning threshold : 0.6607 mW / -1.80 dBm
> Laser output power low warning threshold : 0.3162 mW / -5.00 dBm
> Module temperature high alarm threshold : 78.00 degrees C /
> 172.40 degrees F
> Module temperature low alarm threshold : -13.00 degrees C /
> 8.60 degrees F
> Module temperature high warning threshold : 73.00 degrees C /
> 163.40 degrees F
> Module temperature low warning threshold : -8.00 degrees C /
> 17.60 degrees F
> Module voltage high alarm threshold : 3.7000 V
> Module voltage low alarm threshold : 2.9000 V
> Module voltage high warning threshold : 3.6000 V
> Module voltage low warning threshold : 3.0000 V
> Laser rx power high alarm threshold : 1.0000 mW / 0.00 dBm
> Laser rx power low alarm threshold : 0.0100 mW / -20.00 dBm
> Laser rx power high warning threshold : 0.7943 mW / -1.00 dBm
> Laser rx power low warning threshold : 0.0158 mW / -18.01 dBm
>
> on the firewall side (it didn't change after I replaced the SFP with another)
> # ethtool -m enp7s0f0
> Identifier : 0x03 (SFP)
> Extended identifier : 0x04 (GBIC/SFP
> defined by 2-wire interface ID)
> Connector : 0x07 (LC)
> Transceiver codes : 0x10 0x00 0x00
> 0x00 0x00 0x00 0x00 0x00
> Transceiver type : 10G Ethernet: 10G Base-SR
> Encoding : 0x06 (64B/66B)
> BR, Nominal : 10300MBd
> Rate identifier : 0x00 (unspecified)
> Length (SMF,km) : 0km
> Length (SMF) : 0m
> Length (50um) : 80m
> Length (62.5um) : 30m
> Length (Copper) : 0m
> Length (OM3) : 300m
> Laser wavelength : 850nm
> Vendor name : FINISAR CORP.
> Vendor OUI : 00:90:65
> Vendor PN : FTLX8571D3BCL
> Vendor rev : A
> Option values : 0x00 0x1a
> Option : RX_LOS implemented
> Option : TX_FAULT implemented
> Option : TX_DISABLE implemented
> BR margin, max : 0%
> BR margin, min : 0%
> Vendor SN : AP40XS0
> Date code : 130202
> Optical diagnostics support : Yes
> Laser bias current : 7.762 mA
> Laser output power : 0.6590 mW / -1.81 dBm
> Receiver signal average optical power : 0.4653 mW / -3.32 dBm
> Module temperature : 30.99 degrees C /
> 87.78 degrees F
> Module voltage : 3.3468 V
> Alarm/warning flags implemented : Yes
> Laser bias current high alarm : Off
> Laser bias current low alarm : Off
> Laser bias current high warning : Off
> Laser bias current low warning : Off
> Laser output power high alarm : Off
> Laser output power low alarm : Off
> Laser output power high warning : Off
> Laser output power low warning : Off
> Module temperature high alarm : Off
> Module temperature low alarm : Off
> Module temperature high warning : Off
> Module temperature low warning : Off
> Module voltage high alarm : Off
> Module voltage low alarm : Off
> Module voltage high warning : Off
> Module voltage low warning : Off
> Laser rx power high alarm : Off
> Laser rx power low alarm : Off
> Laser rx power high warning : Off
> Laser rx power low warning : Off
> Laser bias current high alarm threshold : 13.200 mA
> Laser bias current low alarm threshold : 4.000 mA
> Laser bias current high warning threshold : 12.600 mA
> Laser bias current low warning threshold : 5.000 mA
> Laser output power high alarm threshold : 1.0000 mW / 0.00 dBm
> Laser output power low alarm threshold : 0.2512 mW / -6.00 dBm
> Laser output power high warning threshold : 0.7943 mW / -1.00 dBm
> Laser output power low warning threshold : 0.3162 mW / -5.00 dBm
> Module temperature high alarm threshold : 78.00 degrees C /
> 172.40 degrees F
> Module temperature low alarm threshold : -13.00 degrees C /
> 8.60 degrees F
> Module temperature high warning threshold : 73.00 degrees C /
> 163.40 degrees F
> Module temperature low warning threshold : -8.00 degrees C /
> 17.60 degrees F
> Module voltage high alarm threshold : 3.7000 V
> Module voltage low alarm threshold : 2.9000 V
> Module voltage high warning threshold : 3.6000 V
> Module voltage low warning threshold : 3.0000 V
> Laser rx power high alarm threshold : 1.0000 mW / 0.00 dBm
> Laser rx power low alarm threshold : 0.0100 mW / -20.00 dBm
> Laser rx power high warning threshold : 0.7943 mW / -1.00 dBm
> Laser rx power low warning threshold : 0.0158 mW / -18.01 dBm
>
> Could that signal power difference be the problem?
>
>
>
>
>
> On Wed, Jun 12, 2019 at 10:53 AM Jesper Dangaard Brouer
> <brouer@redhat.com> wrote:
> >
> >
> > On Wed, 12 Jun 2019 09:57:02 +0300 İbrahim Ercan <ibrahim.metu@gmail.com> wrote:
> >
> > > I removed the bridge and ran the same tests again. Unfortunately the
> > > result is the same :/
> >
> > I sort of expected that, as the ethtool "rx_missed_errors" counter
> > says that packets are dropped inside the NIC, before reaching Linux.
> > Something more fundamental is wrong with your setup.
> >
> > You mentioned there was a switch between the machines in your lab. One
> > possibility is that the switch is somehow corrupting the frames before
> > they reach the NIC, e.g. in these overload DDoS scenarios. Try to
> > remove the switch from the equation (by directly connecting the
> > machines back-to-back), to identify where the pitfall is...
> >
> > --
> > Best regards,
> > Jesper Dangaard Brouer
> > MSc.CS, Principal Kernel Engineer at Red Hat
> > LinkedIn: http://www.linkedin.com/in/brouer
Thread overview: 10+ messages
2019-06-10 9:55 ethtool isn't showing xdp statistics İbrahim Ercan
2019-06-10 10:15 ` Jesper Dangaard Brouer
2019-06-11 9:18 ` İbrahim Ercan
2019-06-11 10:42 ` Jesper Dangaard Brouer
2019-06-11 13:18 ` İbrahim Ercan
2019-06-11 14:45 ` Jesper Dangaard Brouer
2019-06-12 6:57 ` İbrahim Ercan
2019-06-12 7:53 ` Jesper Dangaard Brouer
2019-06-12 8:59 ` İbrahim Ercan
2019-06-13 13:02 ` İbrahim Ercan