* many rx packets dropped since 2.6.37
@ 2012-04-25 11:03 Chris
  2012-04-25 11:30 ` Eric Dumazet
  0 siblings, 1 reply; 11+ messages in thread
From: Chris @ 2012-04-25 11:03 UTC (permalink / raw)
  To: eric.dumazet; +Cc: linux-kernel

Hello,

There has been a bug since 2.6.37 with dropped rx packets in virtual
environments (like KVM) and with bonded network interfaces.

References:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/890475
https://bugzilla.redhat.com/show_bug.cgi?id=709551

This has been traced to:
commit caf586e5f23cebb2a68cbaf288d59dbbf2d74052
Author: Eric Dumazet <eric.dumazet@gmail.com>
Date:   Thu Sep 30 21:06:55 2010 +0000

http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=caf586e5f23cebb2a68cbaf288d59dbbf2d74052
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=8990f468a

Would it be possible to remove this patch?

--
Chris

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: many rx packets dropped since 2.6.37
  2012-04-25 11:03 many rx packets dropped since 2.6.37 Chris
@ 2012-04-25 11:30 ` Eric Dumazet
  2012-04-25 12:34   ` Chris
  0 siblings, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2012-04-25 11:30 UTC (permalink / raw)
  To: Chris; +Cc: linux-kernel

On Wed, 2012-04-25 at 13:03 +0200, Chris wrote:
> Hello,
> 
> There has been a bug since 2.6.37 with dropped rx packets in virtual
> environments (like KVM) and with bonded network interfaces.
> 
> References:
> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/890475
> https://bugzilla.redhat.com/show_bug.cgi?id=709551
> 
> This has been traced to:
> commit caf586e5f23cebb2a68cbaf288d59dbbf2d74052
> Author: Eric Dumazet <eric.dumazet@gmail.com>
> Date:   Thu Sep 30 21:06:55 2010 +0000
> 
> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=caf586e5f23cebb2a68cbaf288d59dbbf2d74052
> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=8990f468a
> 
> Would it be possible to remove this patch?
> 
> --
> Chris

It's not possible to remove this patch.

If you want to ignore the fact that packets _are_ dropped, just ignore the
counter.

We want to track dropped packets, not hide them.
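For what it's worth, the counter itself is cheap to read; a minimal sketch,
assuming a Linux guest with the usual /proc layout:

```shell
# Print the rx "drop" column for every interface from /proc/net/dev;
# this is the same counter ifconfig reports as "dropped".
# After the interface name the rx fields are: bytes, packets, errs, drop, ...
awk -F'[: ]+' 'NR > 2 { print $2 ": rx_dropped=" $6 }' /proc/net/dev
```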




^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: many rx packets dropped since 2.6.37
  2012-04-25 11:30 ` Eric Dumazet
@ 2012-04-25 12:34   ` Chris
  2012-04-25 12:46     ` Eric Dumazet
  0 siblings, 1 reply; 11+ messages in thread
From: Chris @ 2012-04-25 12:34 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: linux-kernel

2012/4/25 Eric Dumazet <eric.dumazet@gmail.com>:
> On Wed, 2012-04-25 at 13:03 +0200, Chris wrote:
>> Hello,
>>
>> There has been a bug since 2.6.37 with dropped rx packets in virtual
>> environments (like KVM) and with bonded network interfaces.
>>
>> References:
>> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/890475
>> https://bugzilla.redhat.com/show_bug.cgi?id=709551
>>
>> This has been traced to:
>> commit caf586e5f23cebb2a68cbaf288d59dbbf2d74052
>> Author: Eric Dumazet <eric.dumazet@gmail.com>
>> Date:   Thu Sep 30 21:06:55 2010 +0000
>>
>> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=caf586e5f23cebb2a68cbaf288d59dbbf2d74052
>> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=8990f468a
>>
>> Would it be possible to remove this patch?
>>
>> --
>> Chris
>
> It's not possible to remove this patch.
>
> If you want to ignore the fact that packets _are_ dropped, just ignore the
> counter.
>
> We want to track dropped packets, not hide them.

Are you sure that these packets are really dropped? I see a lot more
packets dropped with virtio_net than with e1000 (KVM environments).

Why do we see dropped packets only in virtual environments and bonded
network interfaces and not on real network interfaces?

--
Chris

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: many rx packets dropped since 2.6.37
  2012-04-25 12:34   ` Chris
@ 2012-04-25 12:46     ` Eric Dumazet
  2012-04-25 14:38       ` Chris
  0 siblings, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2012-04-25 12:46 UTC (permalink / raw)
  To: Chris; +Cc: linux-kernel

On Wed, 2012-04-25 at 14:34 +0200, Chris wrote:

> Are you sure that these packets are really dropped? I see a lot more
> packets dropped with virtio_net than with e1000 (KVM environments).
> 

Yep, we are sure packets are dropped; just read the patch and you'll
see it.

> Why do we see dropped packets only in virtual environments and bonded
> network interfaces and not on real network interfaces?

This has to be investigated, eventually.

About the bond: if you are in active-backup mode, a frame arriving on the
inactive device is dropped; otherwise you could get duplication of logical
frames.

You can use tcpdump to get a copy of these frames.
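As a sketch of checking the bond state described above (assuming an
active-backup bond named bond0; the name is an assumption, adjust to your
setup):

```shell
# Show the bonding mode and the currently active slave.  In active-backup
# mode, frames received on the inactive slave are counted as rx drops.
BOND=/proc/net/bonding/bond0   # bond0 is an assumed name
if [ -r "$BOND" ]; then
    grep -E 'Bonding Mode|Currently Active Slave' "$BOND"
else
    echo "no bonding info at $BOND"
fi
```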




^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: many rx packets dropped since 2.6.37
  2012-04-25 12:46     ` Eric Dumazet
@ 2012-04-25 14:38       ` Chris
  2012-04-25 17:43         ` Eric Dumazet
  2012-04-25 18:00         ` David Miller
  0 siblings, 2 replies; 11+ messages in thread
From: Chris @ 2012-04-25 14:38 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: linux-kernel

2012/4/25 Eric Dumazet <eric.dumazet@gmail.com>:
>> Why do we see dropped packets only in virtual environments and bonded
>> network interfaces and not on real network interfaces?
>
> This has to be investigated, eventually.

Do you have the opportunity to test it? Just install Fedora 16 (Linux 3.3)
or Ubuntu 12.04 (Linux 3.2) as a guest under KVM, and use virtio_net or
e1000 as the virtual network card.

You will see many lost rx packets, and I have no idea why.

--
Chris

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: many rx packets dropped since 2.6.37
  2012-04-25 14:38       ` Chris
@ 2012-04-25 17:43         ` Eric Dumazet
  2012-04-26 13:53           ` Chris
  2012-04-25 18:00         ` David Miller
  1 sibling, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2012-04-25 17:43 UTC (permalink / raw)
  To: Chris; +Cc: linux-kernel

On Wed, 2012-04-25 at 16:38 +0200, Chris wrote:
> 2012/4/25 Eric Dumazet <eric.dumazet@gmail.com>:
> >> Why do we see dropped packets only in virtual environments and bonded
> >> network interfaces and not on real network interfaces?
> >
> > This has to be investigated, eventually.
> 
> Do you have the opportunity to test it? Just install Fedora 16 (Linux 3.3)
> or Ubuntu 12.04 (Linux 3.2) as a guest under KVM, and use virtio_net or
> e1000 as the virtual network card.
> 
> You will see many lost rx packets, and I have no idea why.


You can use drop_monitor / dropwatch to figure it out:

https://fedorahosted.org/dropwatch/

http://prefetch.net/blog/index.php/2011/07/11/using-netstat-and-dropwatch-to-observe-packet-loss-on-linux-servers/

Or just ignore it, if no user-visible side effect is noticed.
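Besides dropwatch, the raw counter can also be sampled over an interval to
see how quickly drops accumulate; a minimal sketch (the interface name is
an assumption):

```shell
# Report how many rx packets an interface dropped over a short interval.
rx_dropped_delta() {
    f=/sys/class/net/$1/statistics/rx_dropped
    [ -r "$f" ] || { echo "no such interface: $1" >&2; return 1; }
    a=$(cat "$f")
    sleep "$2"
    b=$(cat "$f")
    echo $((b - a))
}

# eth0 is an assumed guest interface name; adjust to your setup.
rx_dropped_delta eth0 10 || true
```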




^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: many rx packets dropped since 2.6.37
  2012-04-25 14:38       ` Chris
  2012-04-25 17:43         ` Eric Dumazet
@ 2012-04-25 18:00         ` David Miller
  1 sibling, 0 replies; 11+ messages in thread
From: David Miller @ 2012-04-25 18:00 UTC (permalink / raw)
  To: xchris89x; +Cc: eric.dumazet, linux-kernel

From: Chris <xchris89x@googlemail.com>
Date: Wed, 25 Apr 2012 16:38:59 +0200

> You will see many lost rx packets, and I have no idea why.

You need to understand, the packet drops were always happening.  Both
before Eric's patch and afterwards.

The only change is that now they are counted in the statistics, so
you can actually _see_ it.

So asking to have Eric's patch reverted will only hide the situation
from you again, and is therefore quite a silly request.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: many rx packets dropped since 2.6.37
  2012-04-25 17:43         ` Eric Dumazet
@ 2012-04-26 13:53           ` Chris
  2012-04-26 14:07             ` Eric Dumazet
  0 siblings, 1 reply; 11+ messages in thread
From: Chris @ 2012-04-26 13:53 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: linux-kernel, davem

2012/4/25 Eric Dumazet <eric.dumazet@gmail.com>:
> On Wed, 2012-04-25 at 16:38 +0200, Chris wrote:
>> 2012/4/25 Eric Dumazet <eric.dumazet@gmail.com>:
>> >> Why do we see dropped packets only in virtual environments and bonded
>> >> network interfaces and not on real network interfaces?
>> >
>> > This has to be investigated, eventually.
>>
>> Do you have the opportunity to test it? Just install Fedora 16 (Linux 3.3)
>> or Ubuntu 12.04 (Linux 3.2) as a guest under KVM, and use virtio_net or
>> e1000 as the virtual network card.
>>
>> You will see many lost rx packets, and I have no idea why.
>
>
> You can use drop_monitor / dropwatch to figure it out:
>
> https://fedorahosted.org/dropwatch/
>
> http://prefetch.net/blog/index.php/2011/07/11/using-netstat-and-dropwatch-to-observe-packet-loss-on-linux-servers/

Hello Eric,

Okay, this is a Fedora 16 VM under KVM with only 4 minutes of uptime, and
dropwatch!

349 dropped packets in 4 minutes of uptime??

[root@test ~]# uname -r
3.3.2-6.fc16.x86_64
[root@test ~]# dropwatch -l kas
Initalizing kallsyms db
dropwatch> start
Enabling monitoring...
Kernel monitoring activated.
Issue Ctrl-C to stop monitoring
1 drops at netlink_unicast+1bc
3 drops at ip_rcv_finish+f0
1 drops at __netif_receive_skb+583
1 drops at ip_rcv+d3
9 drops at ip_rcv_finish+f0
9 drops at ip_rcv_finish+f0
1 drops at __netif_receive_skb+583
6 drops at ip_rcv_finish+f0
1 drops at nf_hook_slow+12b
4 drops at ip_rcv_finish+f0
1 drops at __netif_receive_skb+583
4 drops at ip_rcv_finish+f0
1 drops at nf_hook_slow+12b
3 drops at ip_rcv_finish+f0
3 drops at nf_hook_slow+12b
2 drops at __netif_receive_skb+583
1 drops at ip6_mc_input+1f6
7 drops at ip_rcv_finish+f0
1 drops at ip_rcv_finish+f0
1 drops at netlink_unicast+1bc
1 drops at skb_queue_purge+20
2 drops at __netif_receive_skb+583
1 drops at ip_rcv_finish+f0
1 drops at nf_hook_slow+12b
1 drops at ip_rcv_finish+f0
1 drops at __netif_receive_skb+583
19 drops at ip_rcv+d3
2 drops at ip_rcv_finish+f0
37 drops at ip_rcv+d3
2 drops at __netif_receive_skb+583
43 drops at ip_rcv+d3
5 drops at ip_rcv_finish+f0
1 drops at __netif_receive_skb+583
1 drops at nf_hook_slow+12b
1 drops at ip6_mc_input+1f6
7 drops at ip_rcv_finish+f0
39 drops at ip_rcv+d3
2 drops at ip6_mc_input+1f6
1 drops at netlink_unicast+1bc
1 drops at skb_queue_purge+20
29 drops at ip_rcv+d3
7 drops at ip_rcv_finish+f0
2 drops at __netif_receive_skb+583
2 drops at ip6_mc_input+1f6
54 drops at ip_rcv+d3
3 drops at __netif_receive_skb+583
2 drops at nf_hook_slow+12b
3 drops at ip_rcv_finish+f0
1 drops at ip6_mc_input+1f6
36 drops at ip_rcv+d3
6 drops at ip_rcv_finish+f0
2 drops at __netif_receive_skb+583
1 drops at nf_hook_slow+12b
1 drops at icmpv6_rcv+305
37 drops at ip_rcv+d3
1 drops at __netif_receive_skb+583
2 drops at ip_rcv_finish+f0
4 drops at ip_rcv_finish+f0
39 drops at ip_rcv+d3
2 drops at __netif_receive_skb+583
1 drops at netlink_unicast+1bc
1 drops at skb_queue_purge+20
1 drops at nf_hook_slow+12b
38 drops at ip_rcv+d3
3 drops at __netif_receive_skb+583
2 drops at ip6_mc_input+1f6
1 drops at ip_rcv_finish+f0
46 drops at ip_rcv+d3
1 drops at __netif_receive_skb+583
1 drops at ip_rcv_finish+f0
^CGot a stop message
dropwatch> exit
Shutting down ...
[root@test ~]# ifconfig | grep dropped
          RX packets:161562 errors:0 dropped:349 overruns:0 frame:0
          TX packets:2140 errors:0 dropped:0 overruns:0 carrier:0
          RX packets:4 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
[root@test ~]# uptime
 15:47:00 up 4 min,  3 users,  load average: 0.01, 0.04, 0.03
[root@test ~]# netstat -s
Ip:
    1553 total packets received
    610 with invalid addresses
    0 forwarded
    0 incoming packets discarded
    345 incoming packets delivered
    478 requests sent out
    71 dropped because of missing route
Icmp:
    4 ICMP messages received
    0 input ICMP message failed.
    ICMP input histogram:
        echo requests: 2
        echo replies: 2
    4 ICMP messages sent
    0 ICMP messages failed
    ICMP output histogram:
        echo request: 2
        echo replies: 2
IcmpMsg:
        InType0: 2
        InType8: 2
        OutType0: 2
        OutType8: 2
Tcp:
    1 active connections openings
    2 passive connection openings
    0 failed connection attempts
    0 connection resets received
    2 connections established
    2353 segments received
    2096 segments send out
    0 segments retransmited
    0 bad segments received.
    0 resets sent
Udp:
    51 packets received
    0 packets to unknown port received.
    0 packet receive errors
    51 packets sent
    0 receive buffer errors
    0 send buffer errors
UdpLite:
TcpExt:
    1 TCP sockets finished time wait in fast timer
    3 delayed acks sent
    2 packets directly queued to recvmsg prequeue.
    2 packets directly received from prequeue
    2009 packets header predicted
    165 acknowledgments not containing data received
    IPReversePathFilter: 82
IpExt:
    InMcastPkts: 13
    InBcastPkts: 60
    InOctets: 146431
    OutOctets: 63310
    InMcastOctets: 416
    InBcastOctets: 12060

--
Chris

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: many rx packets dropped since 2.6.37
  2012-04-26 13:53           ` Chris
@ 2012-04-26 14:07             ` Eric Dumazet
  2012-04-26 14:29               ` Chris
  0 siblings, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2012-04-26 14:07 UTC (permalink / raw)
  To: Chris; +Cc: linux-kernel, davem

On Thu, 2012-04-26 at 15:53 +0200, Chris wrote:
> 2012/4/25 Eric Dumazet <eric.dumazet@gmail.com>:
> > On Wed, 2012-04-25 at 16:38 +0200, Chris wrote:
> >> 2012/4/25 Eric Dumazet <eric.dumazet@gmail.com>:
> >> >> Why do we see dropped packets only in virtual environments and bonded
> >> >> network interfaces and not on real network interfaces?
> >> >
> >> > This has to be investigated, eventually.
> >>
> >> Do you have the opportunity to test it? Just install Fedora 16 (Linux 3.3)
> >> or Ubuntu 12.04 (Linux 3.2) as a guest under KVM, and use virtio_net or
> >> e1000 as the virtual network card.
> >>
> >> You will see many lost rx packets, and I have no idea why.
> >
> >
> > You can use drop_monitor / dropwatch to figure it out:
> >
> > https://fedorahosted.org/dropwatch/
> >
> > http://prefetch.net/blog/index.php/2011/07/11/using-netstat-and-dropwatch-to-observe-packet-loss-on-linux-servers/
> 
> Hello Eric,
> 
> Okay, this is a Fedora 16 VM under KVM with only 4 minutes of uptime, and
> dropwatch!
> 
> 349 dropped packets in 4 minutes of uptime??
> 

Yes, apparently.

Mostly because of:

610 with invalid addresses
71 dropped because of missing route
and IPReversePathFilter: 82

It seems you have some work to do on your network config.
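If the reverse-path-filter and routing drops are indeed the culprits, a
possible next step (a sketch, not a recommendation; these are standard
sysctls and need root):

```shell
# Log "martian" packets so the offending source addresses show up in dmesg.
sysctl -w net.ipv4.conf.all.log_martians=1
# Loose reverse-path filtering (mode 2) can reduce IPReversePathFilter drops
# on asymmetric bridge/bond setups; mode 0 disables the check entirely.
sysctl -w net.ipv4.conf.all.rp_filter=2
```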

> [root@test ~]# uname -r
> 3.3.2-6.fc16.x86_64
> [root@test ~]# dropwatch -l kas
> Initalizing kallsyms db
> dropwatch> start
> Enabling monitoring...
> Kernel monitoring activated.
> Issue Ctrl-C to stop monitoring
> 1 drops at netlink_unicast+1bc

This is a false positive; bugfix here:
http://git.kernel.org/?p=linux/kernel/git/davem/net-next.git;a=commitdiff;h=bfb253c9b277acd9f85b1886ff82b1dd5fbff2ae

> 3 drops at ip_rcv_finish+f0
> 1 drops at __netif_receive_skb+583
> 1 drops at ip_rcv+d3
> 9 drops at ip_rcv_finish+f0
> 9 drops at ip_rcv_finish+f0
> 1 drops at __netif_receive_skb+583
> 6 drops at ip_rcv_finish+f0
> 1 drops at nf_hook_slow+12b

Do you have drops at the iptables level?
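One way to check (a sketch; needs root, and chain names vary by ruleset):

```shell
# List rules with per-rule packet/byte counters; nonzero counters on
# DROP or REJECT targets show whether netfilter accounts for some drops.
iptables -L -v -n
# The per-chain policy counters at the top of each chain matter too, e.g.:
#   Chain INPUT (policy DROP 12 packets, 840 bytes)
```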

> 4 drops at ip_rcv_finish+f0
> 1 drops at __netif_receive_skb+583
> 4 drops at ip_rcv_finish+f0
> 1 drops at nf_hook_slow+12b
> 3 drops at ip_rcv_finish+f0
> 3 drops at nf_hook_slow+12b
> 2 drops at __netif_receive_skb+583
> 1 drops at ip6_mc_input+1f6
> 7 drops at ip_rcv_finish+f0
> 1 drops at ip_rcv_finish+f0
> 1 drops at netlink_unicast+1bc
> 1 drops at skb_queue_purge+20
> 2 drops at __netif_receive_skb+583
> 1 drops at ip_rcv_finish+f0
> 1 drops at nf_hook_slow+12b
> 1 drops at ip_rcv_finish+f0
> 1 drops at __netif_receive_skb+583
> 19 drops at ip_rcv+d3
> 2 drops at ip_rcv_finish+f0
> 37 drops at ip_rcv+d3
> 2 drops at __netif_receive_skb+583
> 43 drops at ip_rcv+d3
> 5 drops at ip_rcv_finish+f0
> 1 drops at __netif_receive_skb+583
> 1 drops at nf_hook_slow+12b
> 1 drops at ip6_mc_input+1f6
> 7 drops at ip_rcv_finish+f0
> 39 drops at ip_rcv+d3
> 2 drops at ip6_mc_input+1f6
> 1 drops at netlink_unicast+1bc
> 1 drops at skb_queue_purge+20
> 29 drops at ip_rcv+d3
> 7 drops at ip_rcv_finish+f0
> 2 drops at __netif_receive_skb+583
> 2 drops at ip6_mc_input+1f6
> 54 drops at ip_rcv+d3
> 3 drops at __netif_receive_skb+583
> 2 drops at nf_hook_slow+12b
> 3 drops at ip_rcv_finish+f0
> 1 drops at ip6_mc_input+1f6
> 36 drops at ip_rcv+d3
> 6 drops at ip_rcv_finish+f0
> 2 drops at __netif_receive_skb+583
> 1 drops at nf_hook_slow+12b
> 1 drops at icmpv6_rcv+305
> 37 drops at ip_rcv+d3
> 1 drops at __netif_receive_skb+583
> 2 drops at ip_rcv_finish+f0
> 4 drops at ip_rcv_finish+f0
> 39 drops at ip_rcv+d3
> 2 drops at __netif_receive_skb+583
> 1 drops at netlink_unicast+1bc
> 1 drops at skb_queue_purge+20
> 1 drops at nf_hook_slow+12b
> 38 drops at ip_rcv+d3
> 3 drops at __netif_receive_skb+583
> 2 drops at ip6_mc_input+1f6
> 1 drops at ip_rcv_finish+f0
> 46 drops at ip_rcv+d3
> 1 drops at __netif_receive_skb+583
> 1 drops at ip_rcv_finish+f0
> ^CGot a stop message
> dropwatch> exit
> Shutting down ...
> [root@test ~]# ifconfig | grep dropped
>           RX packets:161562 errors:0 dropped:349 overruns:0 frame:0
>           TX packets:2140 errors:0 dropped:0 overruns:0 carrier:0
>           RX packets:4 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
> [root@test ~]# uptime
>  15:47:00 up 4 min,  3 users,  load average: 0.01, 0.04, 0.03
> [root@test ~]# netstat -s
> Ip:
>     1553 total packets received
>     610 with invalid addresses
>     0 forwarded
>     0 incoming packets discarded
>     345 incoming packets delivered
>     478 requests sent out
>     71 dropped because of missing route
> Icmp:
>     4 ICMP messages received
>     0 input ICMP message failed.
>     ICMP input histogram:
>         echo requests: 2
>         echo replies: 2
>     4 ICMP messages sent
>     0 ICMP messages failed
>     ICMP output histogram:
>         echo request: 2
>         echo replies: 2
> IcmpMsg:
>         InType0: 2
>         InType8: 2
>         OutType0: 2
>         OutType8: 2
> Tcp:
>     1 active connections openings
>     2 passive connection openings
>     0 failed connection attempts
>     0 connection resets received
>     2 connections established
>     2353 segments received
>     2096 segments send out
>     0 segments retransmited
>     0 bad segments received.
>     0 resets sent
> Udp:
>     51 packets received
>     0 packets to unknown port received.
>     0 packet receive errors
>     51 packets sent
>     0 receive buffer errors
>     0 send buffer errors
> UdpLite:
> TcpExt:
>     1 TCP sockets finished time wait in fast timer
>     3 delayed acks sent
>     2 packets directly queued to recvmsg prequeue.
>     2 packets directly received from prequeue
>     2009 packets header predicted
>     165 acknowledgments not containing data received
>     IPReversePathFilter: 82
> IpExt:
>     InMcastPkts: 13
>     InBcastPkts: 60
>     InOctets: 146431
>     OutOctets: 63310
>     InMcastOctets: 416
>     InBcastOctets: 12060
> 
> --
> Chris



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: many rx packets dropped since 2.6.37
  2012-04-26 14:07             ` Eric Dumazet
@ 2012-04-26 14:29               ` Chris
  2012-04-26 14:44                 ` Eric Dumazet
  0 siblings, 1 reply; 11+ messages in thread
From: Chris @ 2012-04-26 14:29 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: linux-kernel, davem

2012/4/26 Eric Dumazet <eric.dumazet@gmail.com>:
>> Okay, this is a Fedora 16 VM under KVM with only 4 minutes of uptime, and
>> dropwatch!
>>
>> 349 dropped packets in 4 minutes of uptime??
>>
>
> Yes, apparently.
>
> Mostly because of:
>
> 610 with invalid addresses
> 71 dropped because of missing route
> and IPReversePathFilter: 82
>
> It seems you have some work to do on your network config.

What do you mean exactly?

This is netstat -s on the host system with 24 days of uptime. KVM guests
are connected via bridged networking.

# netstat -s
Ip:
    14750606 total packets received
    4394457 with invalid addresses
    0 forwarded
    0 incoming packets discarded
    5461822 incoming packets delivered
    5149263 requests sent out
Icmp:
    136871 ICMP messages received
    22 input ICMP message failed.
    ICMP input histogram:
        destination unreachable: 30
        timeout in transit: 5
        echo requests: 136809
        echo replies: 27
    149082 ICMP messages sent
    0 ICMP messages failed
    ICMP output histogram:
        destination unreachable: 12269
        echo request: 4
        echo replies: 136809
IcmpMsg:
        InType0: 27
        InType3: 30
        InType8: 136809
        InType11: 5
        OutType0: 136809
        OutType3: 12269
        OutType8: 4
Tcp:
    430 active connections openings
    32123 passive connection openings
    117 failed connection attempts
    39 connection resets received
    2 connections established
    5250012 segments received
    4916354 segments send out
    8781 segments retransmited
    0 bad segments received.
    70 resets sent
Udp:
    74938 packets received
    0 packets to unknown port received.
    0 packet receive errors
    75044 packets sent
UdpLite:
TcpExt:
    2 invalid SYN cookies received
    117 resets received for embryonic SYN_RECV sockets
    809 TCP sockets finished time wait in fast timer
    1 time wait sockets recycled by time stamp
    2400 packets rejects in established connections because of timestamp
    101608 delayed acks sent
    978 delayed acks further delayed because of locked socket
    Quick ack mode was activated 2993 times
    36910 packets directly queued to recvmsg prequeue.
    13392680 packets directly received from backlog
    169 packets directly received from prequeue
    4493014 packets header predicted
    2027 packets header predicted and directly queued to user
    109135 acknowledgments not containing data received
    2686390 predicted acknowledgments
    46 times recovered from packet loss due to SACK data
    TCPDSACKUndo: 16
    613 congestion windows recovered after partial ack
    5 TCP data loss events
    696 timeouts after SACK recovery
    18 timeouts in loss state
    46 fast retransmits
    9 forward retransmits
    193 retransmits in slow start
    6614 other TCP timeouts
    1 sack retransmits failed
    3000 DSACKs sent for old packets
    2091 DSACKs received
    11 connections reset due to unexpected data
    24 connections reset due to early user close
    26 connections aborted due to timeout
    TCPDSACKIgnoredOld: 1333
    TCPDSACKIgnoredNoUndo: 407
    TCPSackShifted: 28
    TCPSackMerged: 93
    TCPSackShiftFallback: 443
IpExt:
    InMcastPkts: 952759
    OutMcastPkts: 20
    InBcastPkts: 887543
    InOctets: 19724020920
    OutOctets: 918687039
    InMcastOctets: 73525673
    OutMcastOctets: 4004
    InBcastOctets: 139495386

>
>> [root@test ~]# uname -r
>> 3.3.2-6.fc16.x86_64
>> [root@test ~]# dropwatch -l kas
>> Initalizing kallsyms db
>> dropwatch> start
>> Enabling monitoring...
>> Kernel monitoring activated.
>> Issue Ctrl-C to stop monitoring
>> 1 drops at netlink_unicast+1bc
>
> This is a false positive; bugfix here:
> http://git.kernel.org/?p=linux/kernel/git/davem/net-next.git;a=commitdiff;h=bfb253c9b277acd9f85b1886ff82b1dd5fbff2ae

Okay :)

>> 3 drops at ip_rcv_finish+f0
>> 1 drops at __netif_receive_skb+583
>> 1 drops at ip_rcv+d3
>> 9 drops at ip_rcv_finish+f0
>> 9 drops at ip_rcv_finish+f0
>> 1 drops at __netif_receive_skb+583
>> 6 drops at ip_rcv_finish+f0
>> 1 drops at nf_hook_slow+12b
>
> Do you have drops at the iptables level?

I do not know. Should I turn off iptables for testing?

--
Chris

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: many rx packets dropped since 2.6.37
  2012-04-26 14:29               ` Chris
@ 2012-04-26 14:44                 ` Eric Dumazet
  0 siblings, 0 replies; 11+ messages in thread
From: Eric Dumazet @ 2012-04-26 14:44 UTC (permalink / raw)
  To: Chris; +Cc: linux-kernel, davem

On Thu, 2012-04-26 at 16:29 +0200, Chris wrote:

> 
> I do not know. Should I turn off iptables for testing?

Maybe my question is dumb, but what particular problem do you want to
track?




^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2012-04-26 14:44 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
2012-04-25 11:03 many rx packets dropped since 2.6.37 Chris
2012-04-25 11:30 ` Eric Dumazet
2012-04-25 12:34   ` Chris
2012-04-25 12:46     ` Eric Dumazet
2012-04-25 14:38       ` Chris
2012-04-25 17:43         ` Eric Dumazet
2012-04-26 13:53           ` Chris
2012-04-26 14:07             ` Eric Dumazet
2012-04-26 14:29               ` Chris
2012-04-26 14:44                 ` Eric Dumazet
2012-04-25 18:00         ` David Miller
