* Problems with /proc/net/tcp6  - possible bug - ipv6
@ 2011-01-22  6:30 PK
  2011-01-22  8:59 ` Eric Dumazet
  0 siblings, 1 reply; 11+ messages in thread
From: PK @ 2011-01-22  6:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: runningdoglackey

Creating many ipv6 connections hits a ceiling on connections/fds; okay, fine.

But in my case I'm seeing millions of entries in /proc/net/tcp6 spring up
within a few seconds and then vanish within a few minutes (vanish due to
garbage collection?).

Furthermore, I can trigger this easily on vanilla kernels from 2.6.36 to
2.6.38-rc1-next-20110121 inside an Ubuntu 10.10 amd64 VM, causing the kernel
to spew warnings.  There is also some corruption in the logs (see
kernel-sample.log line 296), but that may be unrelated.

More explanation, the kernel config of the primary machine I saw this on, and
a sample Ruby script to reproduce (inside the Ubuntu VMs I apt-get and use
ruby-1.9.1) are located at
https://github.com/runningdogx/net6-bug

Seems to only affect 64-bit.  So far I have not been able to reproduce on
32-bit Ubuntu VMs of any kernel version.
Seems to only affect IPv6.  So far I have not been able to reproduce using
IPv4 connections (and watching /proc/net/tcp, of course).
Does not trigger the bug if the connections are made to ::1.  Only externally
routable local and global IPv6 addresses seem to cause problems.
Seems to have been introduced between 2.6.35 and 2.6.36 (see the README on
github for more kernels I've tried).

All the tested Ubuntu VMs are stock 10.10 userland with vanilla kernels (the
latest Ubuntu kernel is 2.6.35-something, and my initial test didn't show it
suffering from this problem).

Originally noticed on a separate Gentoo 64-bit non-VM system when doing web
benchmarking.
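
For reference, the core of the reproduction is just a tight open/close loop
against a routable IPv6 address; a minimal C sketch of the idea (the real
repro is the Ruby script in the repo above; the address and port here are
placeholders, not from the actual script):

/* Open and close TCP connections to a routable IPv6 address as fast as
 * possible, then watch /proc/net/tcp6.  Illustrative only -- 2001:db8::1
 * (the documentation prefix) and port 3333 are placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in6 sa;

    memset(&sa, 0, sizeof(sa));
    sa.sin6_family = AF_INET6;
    sa.sin6_port = htons(3333);
    inet_pton(AF_INET6, "2001:db8::1", &sa.sin6_addr);

    for (;;) {
        int fd = socket(AF_INET6, SOCK_STREAM, 0);

        if (fd < 0) {
            perror("socket");   /* likely EMFILE: the fd ceiling */
            break;
        }
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            perror("connect");
        close(fd);
    }
    return 0;
}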

I'm not subscribed, so please keep me in CC, although I'll try to follow the
thread.



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Problems with /proc/net/tcp6  - possible bug - ipv6
  2011-01-22  6:30 Problems with /proc/net/tcp6 - possible bug - ipv6 PK
@ 2011-01-22  8:59 ` Eric Dumazet
  2011-01-22 15:15   ` Eric Dumazet
  0 siblings, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2011-01-22  8:59 UTC (permalink / raw)
  To: PK; +Cc: linux-kernel, netdev

On Friday 21 January 2011 at 22:30 -0800, PK wrote:
> Creating many ipv6 connections hits a ceiling on connections/fds; okay, fine.
> 
> But in my case I'm seeing millions of entries in /proc/net/tcp6 spring up
> within a few seconds and then vanish within a few minutes (vanish due to
> garbage collection?).
> 
> Furthermore, I can trigger this easily on vanilla kernels from 2.6.36 to
> 2.6.38-rc1-next-20110121 inside an Ubuntu 10.10 amd64 VM, causing the kernel
> to spew warnings.  There is also some corruption in the logs (see
> kernel-sample.log line 296), but that may be unrelated.
> 
> More explanation, the kernel config of the primary machine I saw this on, and
> a sample Ruby script to reproduce (inside the Ubuntu VMs I apt-get and use
> ruby-1.9.1) are located at
> https://github.com/runningdogx/net6-bug
> 
> Seems to only affect 64-bit.  So far I have not been able to reproduce on
> 32-bit Ubuntu VMs of any kernel version.
> Seems to only affect IPv6.  So far I have not been able to reproduce using
> IPv4 connections (and watching /proc/net/tcp, of course).
> Does not trigger the bug if the connections are made to ::1.  Only externally
> routable local and global IPv6 addresses seem to cause problems.
> Seems to have been introduced between 2.6.35 and 2.6.36 (see the README on
> github for more kernels I've tried).
> 
> All the tested Ubuntu VMs are stock 10.10 userland with vanilla kernels (the
> latest Ubuntu kernel is 2.6.35-something, and my initial test didn't show it
> suffering from this problem).
> 
> Originally noticed on a separate Gentoo 64-bit non-VM system when doing web
> benchmarking.
> 
> I'm not subscribed, so please keep me in CC, although I'll try to follow the
> thread.
> 
> 

Hi PK (Sorry, your real name is hidden)

I could not reproduce this on the current linux-2.6 kernel.

How many vCPUs are running in your VM, and how much memory?

Note: a recent commit fixed /proc/net/tcp[6] behavior:

commit 1bde5ac49398a064c753bb490535cfad89e99a5f
Author: Eric Dumazet <eric.dumazet@gmail.com>
Date:   Thu Dec 23 09:32:46 2010 -0800

    tcp: fix listening_get_next()
    
    Alexey Vlasov found /proc/net/tcp could sometimes loop and display
    millions of sockets in LISTEN state.
    
    In 2.6.29, when we converted TCP hash tables to RCU, we left two
    sk_next() calls in listening_get_next().
    
    We must instead use sk_nulls_next() to properly detect the end of a chain.
    
    Reported-by: Alexey Vlasov <renton@renton.name>
    Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
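
(For context on that fix: an RCU hlist_nulls chain does not end in NULL but
in a "nulls" marker encoding the bucket id, so a lockless reader can detect
that it wandered onto another chain after a socket was freed and reused.
The two accessors look roughly like this -- simplified from
include/net/sock.h of that era, not verbatim:)

/* Plain hlist: the chain ends with a real NULL pointer. */
static inline struct sock *sk_next(const struct sock *sk)
{
    return sk->sk_node.next ?
        hlist_entry(sk->sk_node.next, struct sock, sk_node) : NULL;
}

/* hlist_nulls: the chain ends with a marker value, not NULL.  is_a_nulls()
 * tests for that marker; plain sk_next() treats it as a valid pointer and
 * walks into garbage -- hence the millions of bogus entries. */
static inline struct sock *sk_nulls_next(const struct sock *sk)
{
    return !is_a_nulls(sk->sk_nulls_node.next) ?
        hlist_nulls_entry(sk->sk_nulls_node.next,
                          struct sock, sk_nulls_node) : NULL;
}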



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Problems with /proc/net/tcp6  - possible bug - ipv6
  2011-01-22  8:59 ` Eric Dumazet
@ 2011-01-22 15:15   ` Eric Dumazet
  2011-01-22 19:42     ` PK
  2011-01-24 22:42     ` David Miller
  0 siblings, 2 replies; 11+ messages in thread
From: Eric Dumazet @ 2011-01-22 15:15 UTC (permalink / raw)
  To: PK, David Miller; +Cc: linux-kernel, netdev, Tom Herbert

On Saturday 22 January 2011 at 09:59 +0100, Eric Dumazet wrote:
> On Friday 21 January 2011 at 22:30 -0800, PK wrote:
> > Creating many ipv6 connections hits a ceiling on connections/fds; okay, fine.
> > 
> > But in my case I'm seeing millions of entries in /proc/net/tcp6 spring up
> > within a few seconds and then vanish within a few minutes (vanish due to
> > garbage collection?).
> > 
> > Furthermore, I can trigger this easily on vanilla kernels from 2.6.36 to
> > 2.6.38-rc1-next-20110121 inside an Ubuntu 10.10 amd64 VM, causing the kernel
> > to spew warnings.  There is also some corruption in the logs (see
> > kernel-sample.log line 296), but that may be unrelated.
> > 
> > More explanation, the kernel config of the primary machine I saw this on, and
> > a sample Ruby script to reproduce (inside the Ubuntu VMs I apt-get and use
> > ruby-1.9.1) are located at
> > https://github.com/runningdogx/net6-bug
> > 
> > Seems to only affect 64-bit.  So far I have not been able to reproduce on
> > 32-bit Ubuntu VMs of any kernel version.
> > Seems to only affect IPv6.  So far I have not been able to reproduce using
> > IPv4 connections (and watching /proc/net/tcp, of course).
> > Does not trigger the bug if the connections are made to ::1.  Only externally
> > routable local and global IPv6 addresses seem to cause problems.
> > Seems to have been introduced between 2.6.35 and 2.6.36 (see the README on
> > github for more kernels I've tried).
> > 
> > All the tested Ubuntu VMs are stock 10.10 userland with vanilla kernels (the
> > latest Ubuntu kernel is 2.6.35-something, and my initial test didn't show it
> > suffering from this problem).
> > 
> > Originally noticed on a separate Gentoo 64-bit non-VM system when doing web
> > benchmarking.
> > 
> > I'm not subscribed, so please keep me in CC, although I'll try to follow the
> > thread.
> > 
> > 


I had some incidents, after hours of testing...

After the following patch, I could not reproduce it.

I can't believe this bug was not noticed before today.

Thanks !

[PATCH] tcp: fix bug in listening_get_next()

commit a8b690f98baf9fb19 (tcp: Fix slowness in read /proc/net/tcp)
introduced a bug in the handling of SYN_RECV sockets.

st->offset represents the number of sockets found since the beginning of
listening_hash[st->bucket].

We should not reset st->offset when iterating through
syn_table[st->sbucket], or else if more than ~25 sockets (if
PAGE_SIZE=4096) are in SYN_RECV state, we exit from listening_get_next()
with too small an st->offset.

Next time we enter tcp_seek_last_pos(), we are not able to seek past
already-found sockets.

Reported-by: PK <runningdoglackey@yahoo.com>
CC: Tom Herbert <therbert@google.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
---
 net/ipv4/tcp_ipv4.c |    1 -
 1 file changed, 1 deletion(-)

diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 856f684..02f583b 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1994,7 +1994,6 @@ static void *listening_get_next(struct seq_file *seq, void *cur)
 				}
 				req = req->dl_next;
 			}
-			st->offset = 0;
 			if (++st->sbucket >= icsk->icsk_accept_queue.listen_opt->nr_table_entries)
 				break;
 get_req:
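
For readers following along: the restart path that consumes st->offset is
roughly the following (a simplified paraphrase of tcp_seek_last_pos() in
net/ipv4/tcp_ipv4.c, reduced to the listening-hash case; not the verbatim
source):

/* Each read(2) of /proc/net/tcp re-walks bucket st->bucket and skips
 * st->offset already-reported sockets to resume where the previous read
 * stopped.  If st->offset is wrongly reset mid-bucket (the bug above),
 * this loop stops skipping too early and re-reports the same sockets,
 * so the file appears to contain millions of entries. */
static void *tcp_seek_last_pos(struct seq_file *seq)
{
    struct tcp_iter_state *st = seq->private;
    int offset = st->offset;
    void *rc;

    rc = listening_get_next(seq, NULL);    /* first entry of the bucket */
    while (offset-- && rc)
        rc = listening_get_next(seq, rc);  /* skip already-seen entries */
    return rc;
}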



^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: Problems with /proc/net/tcp6  - possible bug - ipv6
  2011-01-22 15:15   ` Eric Dumazet
@ 2011-01-22 19:42     ` PK
  2011-01-22 21:20       ` Eric Dumazet
  2011-01-25  0:02       ` David Miller
  2011-01-24 22:42     ` David Miller
  1 sibling, 2 replies; 11+ messages in thread
From: PK @ 2011-01-22 19:42 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: linux-kernel, netdev

Eric Dumazet wrote:
> 
> I had some incidents, after hours of testing...
>
> After the following patch, I could not reproduce it.


Looks like that patch solved the /proc/net/tcp6 problem.  The causal commit was 
the one you identified... confirmed with bisect.

These warnings show up when I run the script (or, I presume, any tcp6
connection flooder) with /proc/sys/net/ipv4/tcp_tw_recycle enabled.  There's
textual corruption of the traces a lot of the time.  Here's a sample trace
that doesn't appear to be corrupt.  All the warnings I've seen are from
route.c:209, and I don't see how that would cause memory corruption.

Jan 22 11:09:08 vbox-alpha kernel: [  907.629431] ------------[ cut here ]------------
Jan 22 11:09:08 vbox-alpha kernel: [  907.629435] WARNING: at net/ipv6/route.c:209 rt6_bind_peer+0x74/0x80()
Jan 22 11:09:08 vbox-alpha kernel: [  907.629436] Hardware name: VirtualBox
Jan 22 11:09:08 vbox-alpha kernel: [  907.629437] Modules linked in: nls_utf8 isofs fuse snd_intel8x0 snd_ac97_codec ac97_bus snd_pcm snd_seq_midi snd_rawmidi snd_seq_midi_event snd_seq snd_timer snd_seq_device snd psmouse e1000 i2c_piix4 soundcore snd_page_alloc
Jan 22 11:09:08 vbox-alpha kernel: [  907.629446] Pid: 1741, comm: ruby Tainted: G        W   2.6.38-rc2+ #15
Jan 22 11:09:08 vbox-alpha kernel: [  907.629447] Call Trace:
Jan 22 11:09:08 vbox-alpha kernel: [  907.629448]  <IRQ>  [<ffffffff81048b7f>] ? warn_slowpath_common+0x7f/0xc0
Jan 22 11:09:08 vbox-alpha kernel: [  907.629452]  [<ffffffff81048bda>] ? warn_slowpath_null+0x1a/0x20
Jan 22 11:09:08 vbox-alpha kernel: [  907.629454]  [<ffffffff814534a4>] ? rt6_bind_peer+0x74/0x80
Jan 22 11:09:08 vbox-alpha kernel: [  907.629456]  [<ffffffff8146aa2d>] ? tcp_v6_get_peer+0xbd/0xd0
Jan 22 11:09:08 vbox-alpha kernel: [  907.629458]  [<ffffffff8140c6f7>] ? tcp_time_wait+0x287/0x300
Jan 22 11:09:08 vbox-alpha kernel: [  907.629460]  [<ffffffff813fd089>] ? tcp_fin+0x119/0x1f0
Jan 22 11:09:08 vbox-alpha kernel: [  907.629462]  [<ffffffff813fd8de>] ? tcp_data_queue+0x77e/0xc60
Jan 22 11:09:08 vbox-alpha kernel: [  907.629464]  [<ffffffff814017f9>] ? tcp_rcv_state_process+0x6f9/0xa40
Jan 22 11:09:08 vbox-alpha kernel: [  907.629466]  [<ffffffff81400e25>] ? tcp_rcv_established+0x345/0x620
Jan 22 11:09:08 vbox-alpha kernel: [  907.629468]  [<ffffffff8146b0ee>] ? tcp_v6_do_rcv+0x18e/0x540
Jan 22 11:09:08 vbox-alpha kernel: [  907.629470]  [<ffffffff8146c468>] ? tcp_v6_rcv+0x728/0x7e0
Jan 22 11:09:08 vbox-alpha kernel: [  907.629473]  [<ffffffff81447bdd>] ? ip6_input_finish+0x17d/0x3a0
Jan 22 11:09:08 vbox-alpha kernel: [  907.629475]  [<ffffffff81447e58>] ? ip6_input+0x58/0x60
Jan 22 11:09:08 vbox-alpha kernel: [  907.629477]  [<ffffffff81447651>] ? ip6_rcv_finish+0x21/0x50
Jan 22 11:09:08 vbox-alpha kernel: [  907.629488]  [<ffffffff81447998>] ? ipv6_rcv+0x318/0x3e0
Jan 22 11:09:08 vbox-alpha kernel: [  907.629491]  [<ffffffff813b3e7a>] ? __netif_receive_skb+0x40a/0x690
Jan 22 11:09:08 vbox-alpha kernel: [  907.629493]  [<ffffffff813b420a>] ? process_backlog+0x10a/0x210
Jan 22 11:09:08 vbox-alpha kernel: [  907.629496]  [<ffffffff814996b5>] ? _raw_spin_lock_irq+0x15/0x20
Jan 22 11:09:08 vbox-alpha kernel: [  907.629498]  [<ffffffff813b9612>] ? net_rx_action+0x112/0x2f0
Jan 22 11:09:08 vbox-alpha kernel: [  907.629500]  [<ffffffff8104f85b>] ? __do_softirq+0xab/0x200
Jan 22 11:09:08 vbox-alpha kernel: [  907.629502]  [<ffffffff81003e7c>] ? call_softirq+0x1c/0x30
Jan 22 11:09:08 vbox-alpha kernel: [  907.629503]  <EOI>  [<ffffffff81005505>] ? do_softirq+0x65/0xa0
Jan 22 11:09:08 vbox-alpha kernel: [  907.629506]  [<ffffffff8104f354>] ? local_bh_enable+0x94/0xa0
Jan 22 11:09:08 vbox-alpha kernel: [  907.629508]  [<ffffffff813b8372>] ? dev_queue_xmit+0x1c2/0x620
Jan 22 11:09:08 vbox-alpha kernel: [  907.629510]  [<ffffffff814452c6>] ? ip6_finish_output2+0xb6/0x370
Jan 22 11:09:08 vbox-alpha kernel: [  907.629512]  [<ffffffff8113d1c0>] ? __pollwait+0x0/0xf0
Jan 22 11:09:08 vbox-alpha kernel: [  907.629514]  [<ffffffff81446770>] ? ip6_finish_output+0x90/0xc0
Jan 22 11:09:08 vbox-alpha kernel: [  907.629516]  [<ffffffff81446818>] ? ip6_output+0x78/0xf0
Jan 22 11:09:08 vbox-alpha kernel: [  907.629518]  [<ffffffff81443804>] ? dst_output+0x14/0x20
Jan 22 11:09:08 vbox-alpha kernel: [  907.629520]  [<ffffffff81446c88>] ? ip6_xmit+0x3f8/0x4c0
Jan 22 11:09:08 vbox-alpha kernel: [  907.629523]  [<ffffffff814729f8>] ? inet6_csk_xmit+0x268/0x2e0
Jan 22 11:09:08 vbox-alpha kernel: [  907.629525]  [<ffffffff8149968f>] ? _raw_spin_lock_irqsave+0x2f/0x40
Jan 22 11:09:08 vbox-alpha kernel: [  907.629527]  [<ffffffff814029a7>] ? tcp_transmit_skb+0x407/0x8f0
Jan 22 11:09:08 vbox-alpha kernel: [  907.629529]  [<ffffffff81405197>] ? tcp_write_xmit+0x1e7/0x9d0
Jan 22 11:09:08 vbox-alpha kernel: [  907.629531]  [<ffffffff81405c27>] ? tcp_send_fin+0xa7/0x1d0
Jan 22 11:09:08 vbox-alpha kernel: [  907.629533]  [<ffffffff81405b06>] ? __tcp_push_pending_frames+0x26/0xa0
Jan 22 11:09:08 vbox-alpha kernel: [  907.629535]  [<ffffffff81405bed>] ? tcp_send_fin+0x6d/0x1d0
Jan 22 11:09:08 vbox-alpha kernel: [  907.629536]  [<ffffffff813f64e9>] ? tcp_close+0x109/0x440
Jan 22 11:09:08 vbox-alpha kernel: [  907.629539]  [<ffffffff8141a9de>] ? inet_release+0x5e/0x80
Jan 22 11:09:08 vbox-alpha kernel: [  907.629541]  [<ffffffff8144226f>] ? inet6_release+0x3f/0x50
Jan 22 11:09:08 vbox-alpha kernel: [  907.629548]  [<ffffffff813a1559>] ? sock_release+0x29/0x90
Jan 22 11:09:08 vbox-alpha kernel: [  907.629549]  [<ffffffff813a15d7>] ? sock_close+0x17/0x30
Jan 22 11:09:08 vbox-alpha kernel: [  907.629551]  [<ffffffff8112cc0a>] ? fput+0xea/0x260
Jan 22 11:09:08 vbox-alpha kernel: [  907.629553]  [<ffffffff8112928d>] ? filp_close+0x5d/0x90
Jan 22 11:09:08 vbox-alpha kernel: [  907.629555]  [<ffffffff81129377>] ? sys_close+0xb7/0x120
Jan 22 11:09:08 vbox-alpha kernel: [  907.629558]  [<ffffffff81002f82>] ? system_call_fastpath+0x16/0x1b
Jan 22 11:09:08 vbox-alpha kernel: [  907.629559] ---[ end trace e63dd54cc0b51607 ]---




^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Problems with /proc/net/tcp6  - possible bug - ipv6
  2011-01-22 19:42     ` PK
@ 2011-01-22 21:20       ` Eric Dumazet
  2011-01-22 21:40         ` Eric Dumazet
  2011-01-25  0:02       ` David Miller
  1 sibling, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2011-01-22 21:20 UTC (permalink / raw)
  To: PK; +Cc: linux-kernel, netdev, David Miller

On Saturday 22 January 2011 at 11:42 -0800, PK wrote:
> Eric Dumazet wrote:
> > 
> > I had some incidents, after hours of testing...
> >
> > After the following patch, I could not reproduce it.
> 
> 
> Looks like that patch solved the /proc/net/tcp6 problem.  The causal commit
> was the one you identified... confirmed with bisect.
> 
> These warnings show up when I run the script (or, I presume, any tcp6
> connection flooder) with /proc/sys/net/ipv4/tcp_tw_recycle enabled.  There's
> textual corruption of the traces a lot of the time.  Here's a sample trace
> that doesn't appear to be corrupt.  All the warnings I've seen are from
> route.c:209, and I don't see how that would cause memory corruption.

That's a different issue, already reported and under investigation.

David made some changes recently:

http://comments.gmane.org/gmane.linux.network/179874





^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Problems with /proc/net/tcp6  - possible bug - ipv6
  2011-01-22 21:20       ` Eric Dumazet
@ 2011-01-22 21:40         ` Eric Dumazet
  2011-01-24 22:31           ` David Miller
  2011-01-24 22:40           ` David Miller
  0 siblings, 2 replies; 11+ messages in thread
From: Eric Dumazet @ 2011-01-22 21:40 UTC (permalink / raw)
  To: PK; +Cc: linux-kernel, netdev, David Miller

On Saturday 22 January 2011 at 22:20 +0100, Eric Dumazet wrote:
> On Saturday 22 January 2011 at 11:42 -0800, PK wrote:
> > Eric Dumazet wrote:
> > > 
> > > I had some incidents, after hours of testing...
> > >
> > > After the following patch, I could not reproduce it.
> > 
> > 
> > Looks like that patch solved the /proc/net/tcp6 problem.  The causal commit
> > was the one you identified... confirmed with bisect.
> > 
> > These warnings show up when I run the script (or, I presume, any tcp6
> > connection flooder) with /proc/sys/net/ipv4/tcp_tw_recycle enabled.  There's
> > textual corruption of the traces a lot of the time.  Here's a sample trace
> > that doesn't appear to be corrupt.  All the warnings I've seen are from
> > route.c:209, and I don't see how that would cause memory corruption.
> 
> That's a different issue, already reported and under investigation.
> 
> David made some changes recently:
> 
> http://comments.gmane.org/gmane.linux.network/179874
> 
> 
> 

In my tests, I even get crashes in cleanup_once() if I
enable /proc/sys/net/ipv4/tcp_tw_recycle.




^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Problems with /proc/net/tcp6 - possible bug - ipv6
  2011-01-22 21:40         ` Eric Dumazet
@ 2011-01-24 22:31           ` David Miller
  2011-01-24 22:40           ` David Miller
  1 sibling, 0 replies; 11+ messages in thread
From: David Miller @ 2011-01-24 22:31 UTC (permalink / raw)
  To: eric.dumazet; +Cc: runningdoglackey, linux-kernel, netdev

From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Sat, 22 Jan 2011 22:40:44 +0100

> On Saturday 22 January 2011 at 22:20 +0100, Eric Dumazet wrote:
>> On Saturday 22 January 2011 at 11:42 -0800, PK wrote:
>> > Eric Dumazet wrote:
>> > > 
>> > > I had some incidents, after hours of testing...
>> > >
>> > > After the following patch, I could not reproduce it.
>> > 
>> > 
>> > Looks like that patch solved the /proc/net/tcp6 problem.  The causal commit
>> > was the one you identified... confirmed with bisect.
>> > 
>> > These warnings show up when I run the script (or, I presume, any tcp6
>> > connection flooder) with /proc/sys/net/ipv4/tcp_tw_recycle enabled.  There's
>> > textual corruption of the traces a lot of the time.  Here's a sample trace
>> > that doesn't appear to be corrupt.  All the warnings I've seen are from
>> > route.c:209, and I don't see how that would cause memory corruption.
>> 
>> That's a different issue, already reported and under investigation.
>> 
>> David made some changes recently:
>> 
>> http://comments.gmane.org/gmane.linux.network/179874
>> 
>> 
>> 
> 
> In my tests, I even get crashes in cleanup_once() if I
> enable /proc/sys/net/ipv4/tcp_tw_recycle.

I'm looking into this, thanks guys.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Problems with /proc/net/tcp6 - possible bug - ipv6
  2011-01-22 21:40         ` Eric Dumazet
  2011-01-24 22:31           ` David Miller
@ 2011-01-24 22:40           ` David Miller
  1 sibling, 0 replies; 11+ messages in thread
From: David Miller @ 2011-01-24 22:40 UTC (permalink / raw)
  To: eric.dumazet; +Cc: runningdoglackey, linux-kernel, netdev

From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Sat, 22 Jan 2011 22:40:44 +0100

> In my tests, I even get crashes in cleanup_once() if I
> enable /proc/sys/net/ipv4/tcp_tw_recycle.

Luckily, this bug was easy to fix; I've just committed the
following to net-2.6.

The other crash (the !RTF_CACHE WARN assertion) I'm looking
into now.

--------------------
From 1c5642cf754939c318a0230b0f546a9e20888292 Mon Sep 17 00:00:00 2001
From: David S. Miller <davem@davemloft.net>
Date: Mon, 24 Jan 2011 14:37:46 -0800
Subject: [PATCH] inetpeer: Use correct AVL tree base pointer in inet_getpeer().

Family was hard-coded to AF_INET but should be daddr->family.

This fixes crashes when unlinking ipv6 peer entries: the unlink code
was already looking up the base pointer properly, so insert and unlink
could disagree about which tree held the entry.

Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
 net/ipv4/inetpeer.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
index d9bc857..a96e656 100644
--- a/net/ipv4/inetpeer.c
+++ b/net/ipv4/inetpeer.c
@@ -475,7 +475,7 @@ static int cleanup_once(unsigned long ttl)
 struct inet_peer *inet_getpeer(struct inetpeer_addr *daddr, int create)
 {
 	struct inet_peer __rcu **stack[PEER_MAXDEPTH], ***stackptr;
-	struct inet_peer_base *base = family_to_base(AF_INET);
+	struct inet_peer_base *base = family_to_base(daddr->family);
 	struct inet_peer *p;
 
 	/* Look up for the address quickly, lockless.
-- 
1.7.3.4
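
To make the failure mode concrete: v4 and v6 peers live in two separate AVL
trees, selected by family_to_base().  A rough sketch of the relevant
internals (simplified from net/ipv4/inetpeer.c of that era; not verbatim):

/* With the family hard-coded, inet_getpeer() inserted every peer --
 * including v6 ones -- into the v4 tree, while the unlink path derived
 * the base from the peer's real family and searched the v6 tree, where
 * the entry never was: corrupted trees, and the crashes in
 * cleanup_once() reported above. */
static struct inet_peer_base v4_peers;  /* AVL tree of IPv4 peers */
static struct inet_peer_base v6_peers;  /* AVL tree of IPv6 peers */

static struct inet_peer_base *family_to_base(int family)
{
    return family == AF_INET ? &v4_peers : &v6_peers;
}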


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: Problems with /proc/net/tcp6 - possible bug - ipv6
  2011-01-22 15:15   ` Eric Dumazet
  2011-01-22 19:42     ` PK
@ 2011-01-24 22:42     ` David Miller
  1 sibling, 0 replies; 11+ messages in thread
From: David Miller @ 2011-01-24 22:42 UTC (permalink / raw)
  To: eric.dumazet; +Cc: runningdoglackey, linux-kernel, netdev, therbert

From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Sat, 22 Jan 2011 16:15:44 +0100

> [PATCH] tcp: fix bug in listening_get_next()
> 
> commit a8b690f98baf9fb19 (tcp: Fix slowness in read /proc/net/tcp)
> introduced a bug in the handling of SYN_RECV sockets.
> 
> st->offset represents the number of sockets found since the beginning of
> listening_hash[st->bucket].
> 
> We should not reset st->offset when iterating through
> syn_table[st->sbucket], or else if more than ~25 sockets (if
> PAGE_SIZE=4096) are in SYN_RECV state, we exit from listening_get_next()
> with too small an st->offset.
> 
> Next time we enter tcp_seek_last_pos(), we are not able to seek past
> already-found sockets.
> 
> Reported-by: PK <runningdoglackey@yahoo.com>
> CC: Tom Herbert <therbert@google.com>
> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>

Applied and queued up for -stable, thanks Eric.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Problems with /proc/net/tcp6 - possible bug - ipv6
  2011-01-22 19:42     ` PK
  2011-01-22 21:20       ` Eric Dumazet
@ 2011-01-25  0:02       ` David Miller
  1 sibling, 0 replies; 11+ messages in thread
From: David Miller @ 2011-01-25  0:02 UTC (permalink / raw)
  To: runningdoglackey; +Cc: eric.dumazet, linux-kernel, netdev

From: PK <runningdoglackey@yahoo.com>
Date: Sat, 22 Jan 2011 11:42:54 -0800 (PST)

> These warnings show up when I run the script (or, I presume, any tcp6
> connection flooder) with /proc/sys/net/ipv4/tcp_tw_recycle enabled.  There's
> textual corruption of the traces a lot of the time.  Here's a sample trace
> that doesn't appear to be corrupt.  All the warnings I've seen are from
> route.c:209, and I don't see how that would cause memory corruption.

Please give this patch a try:

--------------------
From d80bc0fd262ef840ed4e82593ad6416fa1ba3fc4 Mon Sep 17 00:00:00 2001
From: David S. Miller <davem@davemloft.net>
Date: Mon, 24 Jan 2011 16:01:58 -0800
Subject: [PATCH] ipv6: Always clone offlink routes.

Do not handle PMTU vs. route lookup creation any differently
wrt. offlink routes, always clone them.

Reported-by: PK <runningdoglackey@yahoo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
 net/ipv6/route.c |    9 +--------
 1 files changed, 1 insertions(+), 8 deletions(-)

diff --git a/net/ipv6/route.c b/net/ipv6/route.c
index 373bd04..1534508 100644
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -72,8 +72,6 @@
 #define RT6_TRACE(x...) do { ; } while (0)
 #endif
 
-#define CLONE_OFFLINK_ROUTE 0
-
 static struct rt6_info * ip6_rt_copy(struct rt6_info *ort);
 static struct dst_entry	*ip6_dst_check(struct dst_entry *dst, u32 cookie);
 static unsigned int	 ip6_default_advmss(const struct dst_entry *dst);
@@ -738,13 +736,8 @@ restart:
 
 	if (!rt->rt6i_nexthop && !(rt->rt6i_flags & RTF_NONEXTHOP))
 		nrt = rt6_alloc_cow(rt, &fl->fl6_dst, &fl->fl6_src);
-	else {
-#if CLONE_OFFLINK_ROUTE
+	else
 		nrt = rt6_alloc_clone(rt, &fl->fl6_dst);
-#else
-		goto out2;
-#endif
-	}
 
 	dst_release(&rt->dst);
 	rt = nrt ? : net->ipv6.ip6_null_entry;
-- 
1.7.3.4
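
This also ties back to the route.c:209 warnings: the per-destination clone
is what carries the RTF_CACHE flag that rt6_bind_peer() sanity-checks.  A
rough sketch of the clone helper (abbreviated from that era's
net/ipv6/route.c; not verbatim):

/* The clone is a host (/128) copy of the offlink route, flagged
 * RTF_CACHE.  With CLONE_OFFLINK_ROUTE disabled, lookups could return
 * the non-cloned offlink route, and binding an inet_peer to it tripped
 * the !RTF_CACHE WARN at net/ipv6/route.c:209. */
static struct rt6_info *rt6_alloc_clone(struct rt6_info *ort,
                                        struct in6_addr *daddr)
{
    struct rt6_info *rt = ip6_rt_copy(ort);

    if (rt) {
        ipv6_addr_copy(&rt->rt6i_dst.addr, daddr);
        rt->rt6i_dst.plen = 128;
        rt->rt6i_flags |= RTF_CACHE;
    }
    return rt;
}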


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: Problems with /proc/net/tcp6 - possible bug - ipv6
@ 2011-01-31 22:51 PK
  0 siblings, 0 replies; 11+ messages in thread
From: PK @ 2011-01-31 22:51 UTC (permalink / raw)
  To: David Miller; +Cc: eric.dumazet, linux-kernel, netdev

David Miller wrote:
> 
> Please give this patch a try:
> 
> --------------------
> From d80bc0fd262ef840ed4e82593ad6416fa1ba3fc4 Mon Sep 17 00:00:00 2001
> From: David S. Miller <davem@davemloft.net>
> Date: Mon, 24 Jan 2011 16:01:58 -0800
> Subject: [PATCH] ipv6: Always clone offlink routes.


That patch and all the others seem to be in the official tree, so I pulled
earlier today to test against.

I no longer see kernel warnings or any problems with /proc/net/tcp6, but the
tcp6 layer still has issues with tcp_tw_recycle and a listening socket plus
looped connect/disconnects.

First there are intermittent net-unreachable connection failures when trying
to connect to a local closed tcp6 port, and eventually connection attempts
start failing with timeouts.  At that point the tcp6 layer seems quite hosed.
It usually gets to that point within a few minutes of starting the loop.
Stopping the script after that point seems to have no positive effect.

https://github.com/runningdogx/net6-bug
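
The closed-port probe in the script amounts to the following: a healthy
stack answers with a RST, so connect() fails fast with ECONNREFUSED, while a
wedged stack lets the SYN time out.  An illustrative C equivalent (port
55555 as in the output below; not the actual Ruby code):

/* Probe a local port that should be closed.  Healthy stack: immediate
 * RST -> connect() fails fast with ECONNREFUSED.  Wedged stack: the SYN
 * goes unanswered and connect() eventually fails with ETIMEDOUT. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in6 sa;
    int fd = socket(AF_INET6, SOCK_STREAM, 0);

    memset(&sa, 0, sizeof(sa));
    sa.sin6_family = AF_INET6;
    sa.sin6_port = htons(55555);              /* believed closed */
    inet_pton(AF_INET6, "::1", &sa.sin6_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0)
        puts("unexpectedly connected");
    else if (errno == ECONNREFUSED)
        puts("healthy: immediate RST");
    else
        printf("stack trouble: %s\n", strerror(errno));
    close(fd);
    return 0;
}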

Using that script, I get something like the following output, although
sometimes it takes a few more minutes before the timeouts begin.  Using
127.0.0.1 to test against tcp4 shows no net unreachables and no timeouts.
All the errors displayed once the timestamped loops start are from attempts
to connect to a port that's supposed to be closed.

The kernel log is empty since boot.
All this is still in a standard Ubuntu 10.10 amd64 SMP VM.

----output----
# ruby net6-bug/tcp6br.rb ::1 3333

If you're not root, you'll need to enable tcp_tw_recycle yourself
Server listening on ::1:3333

Chose port 55555 (should be closed) to test if stack is functioning
14:28:06  SYN_S:1  SYN_R:0  TWAIT:7  FW1:0  FW2:0  CLOSING:0  LACK:0
14:28:11  SYN_S:1  SYN_R:0  TWAIT:8  FW1:0  FW2:0  CLOSING:0  LACK:0
14:28:16  SYN_S:1  SYN_R:0  TWAIT:12  FW1:0  FW2:0  CLOSING:0  LACK:0
14:28:21  SYN_S:1  SYN_R:0  TWAIT:12  FW1:0  FW2:0  CLOSING:0  LACK:0
14:28:26  SYN_S:1  SYN_R:0  TWAIT:12  FW1:0  FW2:0  CLOSING:0  LACK:0
14:28:31  SYN_S:1  SYN_R:0  TWAIT:17  FW1:0  FW2:0  CLOSING:0  LACK:0
14:28:36  SYN_S:0  SYN_R:0  TWAIT:15  FW1:1  FW2:0  CLOSING:0  LACK:0
tcp socket error: Net Unreachable
14:28:41  SYN_S:1  SYN_R:0  TWAIT:17  FW1:0  FW2:0  CLOSING:0  LACK:0
14:28:46  SYN_S:1  SYN_R:0  TWAIT:16  FW1:0  FW2:0  CLOSING:0  LACK:0
14:28:51  SYN_S:1  SYN_R:0  TWAIT:19  FW1:0  FW2:0  CLOSING:0  LACK:1
14:28:56  SYN_S:1  SYN_R:0  TWAIT:18  FW1:0  FW2:0  CLOSING:0  LACK:0
14:29:01  SYN_S:1  SYN_R:0  TWAIT:19  FW1:0  FW2:0  CLOSING:0  LACK:0
14:29:06  SYN_S:1  SYN_R:0  TWAIT:10  FW1:0  FW2:0  CLOSING:0  LACK:0
14:29:11  SYN_S:1  SYN_R:0  TWAIT:8  FW1:0  FW2:0  CLOSING:0  LACK:0
14:29:16  SYN_S:1  SYN_R:0  TWAIT:8  FW1:0  FW2:0  CLOSING:0  LACK:0
14:29:21  SYN_S:1  SYN_R:0  TWAIT:7  FW1:0  FW2:0  CLOSING:0  LACK:0
14:29:26  SYN_S:1  SYN_R:0  TWAIT:4  FW1:0  FW2:0  CLOSING:0  LACK:0
14:29:31  SYN_S:1  SYN_R:0  TWAIT:5  FW1:0  FW2:0  CLOSING:0  LACK:0
14:29:36  SYN_S:1  SYN_R:0  TWAIT:5  FW1:0  FW2:0  CLOSING:0  LACK:0
14:29:41  SYN_S:1  SYN_R:0  TWAIT:4  FW1:0  FW2:0  CLOSING:0  LACK:0
14:29:46  SYN_S:1  SYN_R:0  TWAIT:5  FW1:0  FW2:0  CLOSING:0  LACK:0
14:29:51  SYN_S:1  SYN_R:0  TWAIT:3  FW1:0  FW2:0  CLOSING:0  LACK:0
14:29:56  SYN_S:1  SYN_R:0  TWAIT:4  FW1:0  FW2:0  CLOSING:0  LACK:0
14:30:01  SYN_S:1  SYN_R:0  TWAIT:5  FW1:4  FW2:0  CLOSING:0  LACK:1
tcp socket error: Net Unreachable
14:30:06  SYN_S:1  SYN_R:0  TWAIT:6  FW1:2  FW2:0  CLOSING:0  LACK:1
14:30:32  SYN_S:1  SYN_R:0  TWAIT:5  FW1:0  FW2:0  CLOSING:0  LACK:0
14:30:37  SYN_S:1  SYN_R:0  TWAIT:5  FW1:0  FW2:0  CLOSING:0  LACK:0
14:30:42  SYN_S:1  SYN_R:0  TWAIT:3  FW1:0  FW2:0  CLOSING:0  LACK:0
14:30:47  SYN_S:1  SYN_R:0  TWAIT:3  FW1:0  FW2:0  CLOSING:0  LACK:0
!! TCP SOCKET TIMED OUT CONNECTING TO A LOCAL CLOSED PORT
14:34:02  SYN_S:1  SYN_R:0  TWAIT:0  FW1:0  FW2:0  CLOSING:0  LACK:0
!! TCP SOCKET TIMED OUT CONNECTING TO A LOCAL CLOSED PORT
14:37:16  SYN_S:1  SYN_R:0  TWAIT:0  FW1:0  FW2:0  CLOSING:0  LACK:0
!! TCP SOCKET TIMED OUT CONNECTING TO A LOCAL CLOSED PORT
14:40:30  SYN_S:1  SYN_R:0  TWAIT:0  FW1:0  FW2:0  CLOSING:0  LACK:0
^C



^ permalink raw reply	[flat|nested] 11+ messages in thread

Thread overview: 11+ messages
2011-01-22  6:30 Problems with /proc/net/tcp6 - possible bug - ipv6 PK
2011-01-22  8:59 ` Eric Dumazet
2011-01-22 15:15   ` Eric Dumazet
2011-01-22 19:42     ` PK
2011-01-22 21:20       ` Eric Dumazet
2011-01-22 21:40         ` Eric Dumazet
2011-01-24 22:31           ` David Miller
2011-01-24 22:40           ` David Miller
2011-01-25  0:02       ` David Miller
2011-01-24 22:42     ` David Miller
2011-01-31 22:51 PK
