* audit_tree: sleep inside atomic
@ 2010-06-21 15:15 Jiri Slaby
  2010-09-03 13:52 ` Jiri Slaby
  0 siblings, 1 reply; 5+ messages in thread
From: Jiri Slaby @ 2010-06-21 15:15 UTC (permalink / raw)
  To: Eric Paris; +Cc: LKML, Al Viro

Hi,

stanse found a sleep inside atomic added by the following commit:
commit fb36de479642bc9bdd3af251ae48b882d8a1ad5d
Author: Eric Paris <eparis@redhat.com>
Date:   Thu Dec 17 20:12:05 2009 -0500

    audit: reimplement audit_trees using fsnotify rather than inotify

    Simply switch audit_trees from using inotify to using fsnotify for it's
    inode pinning and disappearing act information.

    Signed-off-by: Eric Paris <eparis@redhat.com>


In untag_chunk, there is
  spin_lock(&entry->lock);
  ...
  new = alloc_chunk(size);
  ...
  spin_unlock(&entry->lock);

with
static struct audit_chunk *alloc_chunk(int count)
{
  struct audit_chunk *chunk;
  ...
  chunk = kzalloc(size, GFP_KERNEL);

But this can sleep. How big are the allocations? Could it be GFP_ATOMIC or
moved outside the spinlock?

thanks,
-- 
js

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: audit_tree: sleep inside atomic
  2010-06-21 15:15 audit_tree: sleep inside atomic Jiri Slaby
@ 2010-09-03 13:52 ` Jiri Slaby
  2010-09-14  0:05   ` Andrew Morton
  0 siblings, 1 reply; 5+ messages in thread
From: Jiri Slaby @ 2010-09-03 13:52 UTC (permalink / raw)
  To: Eric Paris; +Cc: LKML, Al Viro

Ideas, comments?

On 06/21/2010 05:15 PM, Jiri Slaby wrote:
> Hi,
> 
> stanse found a sleep inside atomic added by the following commit:
> commit fb36de479642bc9bdd3af251ae48b882d8a1ad5d
> Author: Eric Paris <eparis@redhat.com>
> Date:   Thu Dec 17 20:12:05 2009 -0500
> 
>     audit: reimplement audit_trees using fsnotify rather than inotify
> 
>     Simply switch audit_trees from using inotify to using fsnotify for it's
>     inode pinning and disappearing act information.
> 
>     Signed-off-by: Eric Paris <eparis@redhat.com>
> 
> 
> In untag_chunk, there is
>   spin_lock(&entry->lock);
>   ...
>   new = alloc_chunk(size);
>   ...
>   spin_unlock(&entry->lock);
> 
> with
> static struct audit_chunk *alloc_chunk(int count)
> {
>   struct audit_chunk *chunk;
>   ...
>   chunk = kzalloc(size, GFP_KERNEL);
> 
> But this can sleep. How big are the allocations? Could it be GFP_ATOMIC or
> moved outside the spinlock?
> 
> thanks,
-- 
js


* Re: audit_tree: sleep inside atomic
  2010-09-03 13:52 ` Jiri Slaby
@ 2010-09-14  0:05   ` Andrew Morton
  2010-09-15 20:08     ` Eric Paris
  0 siblings, 1 reply; 5+ messages in thread
From: Andrew Morton @ 2010-09-14  0:05 UTC (permalink / raw)
  To: Jiri Slaby; +Cc: Eric Paris, LKML, Al Viro

On Fri, 03 Sep 2010 15:52:30 +0200
Jiri Slaby <jirislaby@gmail.com> wrote:

> Ideas, comments?

Apparently not.

The question is: why is nobody reporting this bug?  Obviously nobody's
running that code path.  Why not?

> On 06/21/2010 05:15 PM, Jiri Slaby wrote:
> > Hi,
> > 
> > stanse found a sleep inside atomic added by the following commit:
> > commit fb36de479642bc9bdd3af251ae48b882d8a1ad5d
> > Author: Eric Paris <eparis@redhat.com>
> > Date:   Thu Dec 17 20:12:05 2009 -0500
> > 
> >     audit: reimplement audit_trees using fsnotify rather than inotify
> > 
> >     Simply switch audit_trees from using inotify to using fsnotify for it's
> >     inode pinning and disappearing act information.
> > 
> >     Signed-off-by: Eric Paris <eparis@redhat.com>
> > 
> > 
> > In untag_chunk, there is
> >   spin_lock(&entry->lock);
> >   ...
> >   new = alloc_chunk(size);
> >   ...
> >   spin_unlock(&entry->lock);
> > 
> > with
> > static struct audit_chunk *alloc_chunk(int count)
> > {
> >   struct audit_chunk *chunk;
> >   ...
> >   chunk = kzalloc(size, GFP_KERNEL);
> > 
> > But this can sleep. How big are the allocations? Could it be GFP_ATOMIC or
> > moved outside the spinlock?

Yes, we could make it GFP_ATOMIC - the code tries to handle allocation
failures.

But if we did that we'd be adding a rarely-executed code path to an
apparently-never-executed code path.  We'd end up shipping stuff which
nobody had tested, ever.

Plus GFP_ATOMIC is unreliable and using it because we screwed up the
locking is lame.

untag_chunk() could be converted to use GFP_KERNEL outside hash_lock
and ->entry_lock.  The usual way is to take a peek, see if it looks
like we'll probably need to do an allocation and if so, do it outside
the locks then free it again if it turned out that we didn't need it
after all.  Or to maintain a one-deep static-local cache and preload
that cache if it's empty.  Neither are particularly pretty.


* Re: audit_tree: sleep inside atomic
  2010-09-14  0:05   ` Andrew Morton
@ 2010-09-15 20:08     ` Eric Paris
  2010-10-30  6:21       ` Al Viro
  0 siblings, 1 reply; 5+ messages in thread
From: Eric Paris @ 2010-09-15 20:08 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Jiri Slaby, LKML, Al Viro

On Mon, 2010-09-13 at 17:05 -0700, Andrew Morton wrote:
> On Fri, 03 Sep 2010 15:52:30 +0200
> Jiri Slaby <jirislaby@gmail.com> wrote:
> 
> > Ideas, comments?
> 
> Apparently not.

Sorry, I've been slacking off on vacation the last couple weeks.

> The question is: why is nobody reporting this bug?  Obviously nobody's
> running that code path.  Why not?

The only people who run this code path, that I know of, are govt orgs
who run in certified environments.  I don't know of any upstream kernel
users who really would hit it.

In any case I don't think it would be particularly painful to just
always allocate a chunk between the two locks.  This is not a hot path
by any stretch of the imagination.  I'll see if I can't code something
up today/tomorrow.

-Eric

> > On 06/21/2010 05:15 PM, Jiri Slaby wrote:
> > > Hi,
> > > 
> > > stanse found a sleep inside atomic added by the following commit:
> > > commit fb36de479642bc9bdd3af251ae48b882d8a1ad5d
> > > Author: Eric Paris <eparis@redhat.com>
> > > Date:   Thu Dec 17 20:12:05 2009 -0500
> > > 
> > >     audit: reimplement audit_trees using fsnotify rather than inotify
> > > 
> > >     Simply switch audit_trees from using inotify to using fsnotify for it's
> > >     inode pinning and disappearing act information.
> > > 
> > >     Signed-off-by: Eric Paris <eparis@redhat.com>
> > > 
> > > 
> > > In untag_chunk, there is
> > >   spin_lock(&entry->lock);
> > >   ...
> > >   new = alloc_chunk(size);
> > >   ...
> > >   spin_unlock(&entry->lock);
> > > 
> > > with
> > > static struct audit_chunk *alloc_chunk(int count)
> > > {
> > >   struct audit_chunk *chunk;
> > >   ...
> > >   chunk = kzalloc(size, GFP_KERNEL);
> > > 
> > > But this can sleep. How big are the allocations? Could it be GFP_ATOMIC or
> > > moved outside the spinlock?
> 
> Yes, we could make it GFP_ATOMIC - the code tries to handle allocation
> failures.
> 
> But if we did that we'd be adding a rarely-executed code path to an
> apparently-never-executed code path.  We'd end up shipping stuff which
> nobody had tested, ever.
> 
> Plus GFP_ATOMIC is unreliable and using it because we screwed up the
> locking is lame.
> 
> untag_chunk() could be converted to use GFP_KERNEL outside hash_lock
> and ->entry_lock.  The usual way is to take a peek, see if it looks
> like we'll probably need to do an allocation and if so, do it outside
> the locks then free it again if it turned out that we didn't need it
> after all.  Or to maintain a one-deep static-local cache and preload
> that cache if it's empty.  Neither are particularly pretty.




* Re: audit_tree: sleep inside atomic
  2010-09-15 20:08     ` Eric Paris
@ 2010-10-30  6:21       ` Al Viro
  0 siblings, 0 replies; 5+ messages in thread
From: Al Viro @ 2010-10-30  6:21 UTC (permalink / raw)
  To: Eric Paris; +Cc: Andrew Morton, Jiri Slaby, LKML

On Wed, Sep 15, 2010 at 04:08:25PM -0400, Eric Paris wrote:
> On Mon, 2010-09-13 at 17:05 -0700, Andrew Morton wrote:
> > On Fri, 03 Sep 2010 15:52:30 +0200
> > Jiri Slaby <jirislaby@gmail.com> wrote:
> > 
> > > Ideas, comments?
> > 
> > Apparently not.
> 
> Sorry, I've been slacking off on vacation the last couple weeks.
> 
> > The question is: why is nobody reporting this bug?  Obviously nobody's
> > running that code path.  Why not?
> 
> The only people who run this code path, that I know of, are govt orgs
> who run in certified environments.  I don't know of any upstream kernel
> users who really would hit it.
> 
> In any case I don't think it would be particularly painful to just
> always allocate a chunk between the two locks.  This is not a hot path
> by any stretch of the imagination.  I'll see if I can't code something
> up today/tomorrow.

It's not even a matter of path being hot; we should do allocation before
grabbing entry->lock if size is non-zero.  End of the story.
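
[Editor's note: the shape of that fix, sketched from the description above --
not the verbatim patch.]

```c
static void untag_chunk(struct node *p)
{
	struct audit_chunk *new = NULL;
	...
	if (size)
		new = alloc_chunk(size);	/* may sleep; no locks held */

	spin_lock(&entry->lock);
	...					/* use (or discard) new */
	spin_unlock(&entry->lock);
}
```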

Fixed in audit branch I'll push today.



Thread overview: 5+ messages
2010-06-21 15:15 audit_tree: sleep inside atomic Jiri Slaby
2010-09-03 13:52 ` Jiri Slaby
2010-09-14  0:05   ` Andrew Morton
2010-09-15 20:08     ` Eric Paris
2010-10-30  6:21       ` Al Viro
