From: Eric Dumazet <eric.dumazet@gmail.com>
To: Denys Fedoryshchenko <denys@visp.net.lb>
Cc: netdev@vger.kernel.org
Subject: Re: Bug, kernel panic, NULL dereference , cleanup_once / icmp_route_lookup.clone.19.clone / nat , 2.6.39-rc7-git11
Date: Wed, 18 May 2011 11:37:51 +0200
Message-ID: <1305711471.2983.27.camel@edumazet-laptop>
In-Reply-To: <41a1892fed59b411bb08d3ecb0d8cda5@visp.net.lb>

On Wednesday, 18 May 2011 at 12:27 +0300, Denys Fedoryshchenko wrote:
> On Wed, 18 May 2011 01:16:29 +0300, Denys Fedoryshchenko wrote:
> > Just got this recently. 32-bit, PPPoE NAS, shapers, firewall, NAT.
> > Kernel as mentioned in the subject, 2.6.39-rc7-git11.
> > If required, I can give more information.
> >
> > sharanal (sorry for the ugly name) is a libpcap-based traffic
> > analyser, definitely userspace.
> >
>  Here is some info, I hope it will be a little useful.
> 
>  (gdb)  l *(cleanup_once + 0x49)
>  0xc02e85cc is in cleanup_once (include/linux/list.h:88).
>  83       * This is only for internal list manipulation where we know
>  84       * the prev/next entries already!
>  85       */
>  86      static inline void __list_del(struct list_head * prev, struct list_head * next)
>  87      {
>  88              next->prev = prev;
>  89              prev->next = next;
>  90      }
>  91
>  92      /**
> 
>  (gdb)  l *(inet_getpeer + 0x2ab)
>  0xc02e8ae8 is in inet_getpeer (net/ipv4/inetpeer.c:530).
>  525             if (base->total >= inet_peer_threshold)
>  526                     /* Remove one less-recently-used entry. */
>  527                     cleanup_once(0, stack);
>  528
>  529             return p;
>  530     }
>  531
>  532     static int compute_total(void)
>  533     {
>  534             return v4_peers.total + v6_peers.total;
> 

I am really beginning to think we have a bug here...

In previous reports, I suggested using slub_nomerge because I thought a
corruption coming from another kernel layer was going on.

(inetpeer was using 64-byte objects.) But now that inetpeer objects are
bigger and sit in their own kmem cache, it's bad news.
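
For illustration only, a minimal userspace sketch (not the kernel code path
itself; the struct and helper are just copied from include/linux/list.h) of
why __list_del() oopses at "next->prev = prev" once an entry's neighbour
pointer has been clobbered to NULL, e.g. by a use-after-free in the same
kmem cache:

	/* userspace illustration of the __list_del() NULL dereference */
	#include <stdio.h>

	struct list_head {
		struct list_head *next, *prev;
	};

	/* same body as include/linux/list.h */
	static void __list_del(struct list_head *prev, struct list_head *next)
	{
		next->prev = prev;	/* faults here if "next" is NULL */
		prev->next = next;
	}

	int main(void)
	{
		struct list_head head  = { &head, &head };
		struct list_head entry = { &head, &head };

		/* link entry into the list */
		head.next = &entry;
		head.prev = &entry;

		/* simulate the suspected corruption of the unused list */
		entry.next = NULL;

		if (entry.next == NULL)
			printf("entry->next is NULL: list_del() would oops here\n");
		else
			__list_del(entry.prev, entry.next);

		return 0;
	}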

Could you try this, and possibly add some SLUB debugging as well?
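
For the SLUB debugging part, one possible way (just a suggestion, adjust as
needed) would be to boot with:

	slub_debug=FZPU slub_nomerge

(F = sanity checks, Z = red zoning, P = object poisoning, U = alloc/free
tracking.) CONFIG_DEBUG_LIST=y also helps catch corrupted list_head
pointers at list_del() time.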





Thread overview: 13+ messages
2011-05-17 22:16 Bug, kernel panic, NULL dereference , cleanup_once / icmp_route_lookup.clone.19.clone / nat , 2.6.39-rc7-git11 Denys Fedoryshchenko
2011-05-18  9:27 ` Denys Fedoryshchenko
2011-05-18  9:37   ` Eric Dumazet [this message]
2011-05-18  9:53     ` Denys Fedoryshchenko
2011-05-18 10:05       ` Eric Dumazet
2011-05-18 11:44         ` Eric Dumazet
2011-05-18 12:46           ` Denys Fedoryshchenko
2011-05-18 15:52             ` Eric Dumazet
2011-05-18 19:29               ` Eric Dumazet
2011-05-19  5:19                 ` Eric Dumazet
2011-05-19  6:11                   ` Denys Fedoryshchenko
2011-05-19  6:30                     ` Eric Dumazet
2011-05-19  6:39                       ` Denys Fedoryshchenko
