* [PATCH] kernel: audit_tree: Fix a sleep-in-atomic-context bug
@ 2018-06-21  3:32 Jia-Ju Bai
  2018-06-21  4:29 ` Matthew Wilcox
  0 siblings, 1 reply; 7+ messages in thread
From: Jia-Ju Bai @ 2018-06-21  3:32 UTC (permalink / raw)
  To: paul, eparis, jack, amir73il
  Cc: linux-audit, linux-kernel, linux-fsdevel, Jia-Ju Bai

The kernel may sleep while holding a spinlock.
The function call paths (from bottom to top) in Linux 4.16.7 are:

[FUNC] kmem_cache_alloc(GFP_KERNEL)
fs/notify/mark.c, 439: 
		kmem_cache_alloc in fsnotify_attach_connector_to_object
fs/notify/mark.c, 520: 
		fsnotify_attach_connector_to_object in fsnotify_add_mark_list
fs/notify/mark.c, 590: 
		fsnotify_add_mark_list in fsnotify_add_mark_locked
kernel/audit_tree.c, 437: 
		fsnotify_add_mark_locked in tag_chunk
kernel/audit_tree.c, 423: 
		spin_lock in tag_chunk

[FUNC] kmem_cache_alloc(GFP_KERNEL)
fs/notify/mark.c, 439: 
		kmem_cache_alloc in fsnotify_attach_connector_to_object
fs/notify/mark.c, 520: 
		fsnotify_attach_connector_to_object in fsnotify_add_mark_list
fs/notify/mark.c, 590: 
		fsnotify_add_mark_list in fsnotify_add_mark_locked
kernel/audit_tree.c, 291: 
		fsnotify_add_mark_locked in untag_chunk
kernel/audit_tree.c, 258: 
		spin_lock in untag_chunk

To fix this bug, GFP_KERNEL is replaced with GFP_ATOMIC.

This bug was found by my static analysis tool (DSAC-2) and confirmed by
manual code review.

Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com>
---
 fs/notify/mark.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/notify/mark.c b/fs/notify/mark.c
index e9191b416434..c664853b8585 100644
--- a/fs/notify/mark.c
+++ b/fs/notify/mark.c
@@ -436,7 +436,7 @@ static int fsnotify_attach_connector_to_object(
 {
 	struct fsnotify_mark_connector *conn;
 
-	conn = kmem_cache_alloc(fsnotify_mark_connector_cachep, GFP_KERNEL);
+	conn = kmem_cache_alloc(fsnotify_mark_connector_cachep, GFP_ATOMIC);
 	if (!conn)
 		return -ENOMEM;
 	spin_lock_init(&conn->lock);
-- 
2.17.0



* Re: [PATCH] kernel: audit_tree: Fix a sleep-in-atomic-context bug
  2018-06-21  3:32 [PATCH] kernel: audit_tree: Fix a sleep-in-atomic-context bug Jia-Ju Bai
@ 2018-06-21  4:29 ` Matthew Wilcox
  2018-06-22  9:23   ` Jan Kara
  0 siblings, 1 reply; 7+ messages in thread
From: Matthew Wilcox @ 2018-06-21  4:29 UTC (permalink / raw)
  To: Jia-Ju Bai
  Cc: paul, eparis, jack, amir73il, linux-audit, linux-kernel, linux-fsdevel

On Thu, Jun 21, 2018 at 11:32:45AM +0800, Jia-Ju Bai wrote:
> The kernel may sleep with holding a spinlock.
> The function call paths (from bottom to top) in Linux-4.16.7 are:
> 
> [FUNC] kmem_cache_alloc(GFP_KERNEL)
> fs/notify/mark.c, 439: 
> 		kmem_cache_alloc in fsnotify_attach_connector_to_object
> fs/notify/mark.c, 520: 
> 		fsnotify_attach_connector_to_object in fsnotify_add_mark_list
> fs/notify/mark.c, 590: 
> 		fsnotify_add_mark_list in fsnotify_add_mark_locked
> kernel/audit_tree.c, 437: 
> 		fsnotify_add_mark_locked in tag_chunk
> kernel/audit_tree.c, 423: 
> 		spin_lock in tag_chunk

There are several locks here; your report would be improved by saying
which one is the problem.  I'm assuming it's old_entry->lock.

        spin_lock(&old_entry->lock);
...
        if (fsnotify_add_inode_mark_locked(chunk_entry,
                             old_entry->connector->inode, 1)) {
...
        return fsnotify_add_mark_locked(mark, inode, NULL, allow_dups);
...
        ret = fsnotify_add_mark_list(mark, inode, mnt, allow_dups);
...
        if (inode)
                connp = &inode->i_fsnotify_marks;
        conn = fsnotify_grab_connector(connp);
        if (!conn) {
                err = fsnotify_attach_connector_to_object(connp, inode, mnt);

It seems to me that this is safe because old_entry is looked up from
fsnotify_find_mark, and it can't be removed while its lock is held.
Therefore there's always a 'conn' returned from fsnotify_grab_connector(),
and so this path will never be taken.

But this code path is confusing to me, and I could be wrong.  Jan, please
confirm my analysis is correct?
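
For reference, here is a condensed sketch of the path described above,
paraphrased from the 4.16-era sources quoted in this thread (not the
literal code; the chunk_entry setup and most of the error handling are
elided):

        /* tag_chunk(), condensed paraphrase -- not the literal kernel code */
        old_entry = fsnotify_find_mark(&inode->i_fsnotify_marks,
                                       audit_tree_group);
        if (!old_entry)
                return create_chunk(inode, tree);   /* allocates unlocked */

        /* ... chunk_entry allocated, group->mark_mutex taken ... */

        spin_lock(&old_entry->lock);
        /*
         * old_entry is still attached to inode->i_fsnotify_marks and we
         * hold a reference to it, so the connector it hangs off is pinned.
         * Inside the call below, fsnotify_grab_connector() therefore
         * returns that connector, and the !conn branch that would call
         * fsnotify_attach_connector_to_object() -- the GFP_KERNEL
         * kmem_cache_alloc() from the report -- is never reached.
         */
        if (fsnotify_add_inode_mark_locked(chunk_entry,
                                           old_entry->connector->inode, 1)) {
                spin_unlock(&old_entry->lock);
                /* cleanup elided */
        }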

> [FUNC] kmem_cache_alloc(GFP_KERNEL)
> fs/notify/mark.c, 439: 
> 		kmem_cache_alloc in fsnotify_attach_connector_to_object
> fs/notify/mark.c, 520: 
> 		fsnotify_attach_connector_to_object in fsnotify_add_mark_list
> fs/notify/mark.c, 590: 
> 		fsnotify_add_mark_list in fsnotify_add_mark_locked
> kernel/audit_tree.c, 291: 
> 		fsnotify_add_mark_locked in untag_chunk
> kernel/audit_tree.c, 258: 
> 		spin_lock in untag_chunk

I'm just going to assume this one is the same.


* Re: [PATCH] kernel: audit_tree: Fix a sleep-in-atomic-context bug
  2018-06-21  4:29 ` Matthew Wilcox
@ 2018-06-22  9:23   ` Jan Kara
  2018-06-22 18:56     ` Paul Moore
  0 siblings, 1 reply; 7+ messages in thread
From: Jan Kara @ 2018-06-22  9:23 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Jia-Ju Bai, paul, eparis, jack, amir73il, linux-audit,
	linux-kernel, linux-fsdevel

On Wed 20-06-18 21:29:12, Matthew Wilcox wrote:
> On Thu, Jun 21, 2018 at 11:32:45AM +0800, Jia-Ju Bai wrote:
> > The kernel may sleep with holding a spinlock.
> > The function call paths (from bottom to top) in Linux-4.16.7 are:
> > 
> > [FUNC] kmem_cache_alloc(GFP_KERNEL)
> > fs/notify/mark.c, 439: 
> > 		kmem_cache_alloc in fsnotify_attach_connector_to_object
> > fs/notify/mark.c, 520: 
> > 		fsnotify_attach_connector_to_object in fsnotify_add_mark_list
> > fs/notify/mark.c, 590: 
> > 		fsnotify_add_mark_list in fsnotify_add_mark_locked
> > kernel/audit_tree.c, 437: 
> > 		fsnotify_add_mark_locked in tag_chunk
> > kernel/audit_tree.c, 423: 
> > 		spin_lock in tag_chunk
> 
> There are several locks here; your report would be improved by saying
> which one is the problem.  I'm assuming it's old_entry->lock.
> 
>         spin_lock(&old_entry->lock);
> ...
>         if (fsnotify_add_inode_mark_locked(chunk_entry,
>                              old_entry->connector->inode, 1)) {
> ...
>         return fsnotify_add_mark_locked(mark, inode, NULL, allow_dups);
> ...
>         ret = fsnotify_add_mark_list(mark, inode, mnt, allow_dups);
> ...
>         if (inode)
>                 connp = &inode->i_fsnotify_marks;
>         conn = fsnotify_grab_connector(connp);
>         if (!conn) {
>                 err = fsnotify_attach_connector_to_object(connp, inode, mnt);
> 
> It seems to me that this is safe because old_entry is looked up from
> fsnotify_find_mark, and it can't be removed while its lock is held.
> Therefore there's always a 'conn' returned from fsnotify_grab_connector(),
> and so this path will never be taken.
> 
> But this code path is confusing to me, and I could be wrong.  Jan, please
> confirm my analysis is correct?

Yes, you are correct. The presence of another mark in the list (and the
fact we pin it there using refcount & mark_mutex) guarantees we won't need
to allocate the connector. I agree the audit code's use of fsnotify would
deserve some cleanup.

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: [PATCH] kernel: audit_tree: Fix a sleep-in-atomic-context bug
  2018-06-22  9:23   ` Jan Kara
@ 2018-06-22 18:56     ` Paul Moore
  2018-06-25  9:22       ` Jan Kara
  0 siblings, 1 reply; 7+ messages in thread
From: Paul Moore @ 2018-06-22 18:56 UTC (permalink / raw)
  To: jack
  Cc: willy, baijiaju1990, Eric Paris, amir73il, linux-audit,
	linux-kernel, linux-fsdevel

On Fri, Jun 22, 2018 at 5:23 AM Jan Kara <jack@suse.cz> wrote:
> On Wed 20-06-18 21:29:12, Matthew Wilcox wrote:
> > On Thu, Jun 21, 2018 at 11:32:45AM +0800, Jia-Ju Bai wrote:
> > > The kernel may sleep with holding a spinlock.
> > > The function call paths (from bottom to top) in Linux-4.16.7 are:
> > >
> > > [FUNC] kmem_cache_alloc(GFP_KERNEL)
> > > fs/notify/mark.c, 439:
> > >             kmem_cache_alloc in fsnotify_attach_connector_to_object
> > > fs/notify/mark.c, 520:
> > >             fsnotify_attach_connector_to_object in fsnotify_add_mark_list
> > > fs/notify/mark.c, 590:
> > >             fsnotify_add_mark_list in fsnotify_add_mark_locked
> > > kernel/audit_tree.c, 437:
> > >             fsnotify_add_mark_locked in tag_chunk
> > > kernel/audit_tree.c, 423:
> > >             spin_lock in tag_chunk
> >
> > There are several locks here; your report would be improved by saying
> > which one is the problem.  I'm assuming it's old_entry->lock.
> >
> >         spin_lock(&old_entry->lock);
> > ...
> >         if (fsnotify_add_inode_mark_locked(chunk_entry,
> >                              old_entry->connector->inode, 1)) {
> > ...
> >         return fsnotify_add_mark_locked(mark, inode, NULL, allow_dups);
> > ...
> >         ret = fsnotify_add_mark_list(mark, inode, mnt, allow_dups);
> > ...
> >         if (inode)
> >                 connp = &inode->i_fsnotify_marks;
> >         conn = fsnotify_grab_connector(connp);
> >         if (!conn) {
> >                 err = fsnotify_attach_connector_to_object(connp, inode, mnt);
> >
> > It seems to me that this is safe because old_entry is looked up from
> > fsnotify_find_mark, and it can't be removed while its lock is held.
> > Therefore there's always a 'conn' returned from fsnotify_grab_connector(),
> > and so this path will never be taken.
> >
> > But this code path is confusing to me, and I could be wrong.  Jan, please
> > confirm my analysis is correct?
>
> Yes, you are correct. The presence of another mark in the list (and the
> fact we pin it there using refcount & mark_mutex) guarantees we won't need
> to allocate the connector. I agree the audit code's use of fsnotify would
> deserve some cleanup.

I'm always open to suggestions and patches (hint, hint) from the
fsnotify experts ;)

-- 
paul moore
www.paul-moore.com


* Re: [PATCH] kernel: audit_tree: Fix a sleep-in-atomic-context bug
  2018-06-22 18:56     ` Paul Moore
@ 2018-06-25  9:22       ` Jan Kara
  2018-06-25 12:55         ` Jan Kara
  2018-06-25 22:25         ` Paul Moore
  0 siblings, 2 replies; 7+ messages in thread
From: Jan Kara @ 2018-06-25  9:22 UTC (permalink / raw)
  To: Paul Moore
  Cc: jack, willy, baijiaju1990, Eric Paris, amir73il, linux-audit,
	linux-kernel, linux-fsdevel

On Fri 22-06-18 14:56:09, Paul Moore wrote:
> On Fri, Jun 22, 2018 at 5:23 AM Jan Kara <jack@suse.cz> wrote:
> > On Wed 20-06-18 21:29:12, Matthew Wilcox wrote:
> > > On Thu, Jun 21, 2018 at 11:32:45AM +0800, Jia-Ju Bai wrote:
> > > > The kernel may sleep with holding a spinlock.
> > > > The function call paths (from bottom to top) in Linux-4.16.7 are:
> > > >
> > > > [FUNC] kmem_cache_alloc(GFP_KERNEL)
> > > > fs/notify/mark.c, 439:
> > > >             kmem_cache_alloc in fsnotify_attach_connector_to_object
> > > > fs/notify/mark.c, 520:
> > > >             fsnotify_attach_connector_to_object in fsnotify_add_mark_list
> > > > fs/notify/mark.c, 590:
> > > >             fsnotify_add_mark_list in fsnotify_add_mark_locked
> > > > kernel/audit_tree.c, 437:
> > > >             fsnotify_add_mark_locked in tag_chunk
> > > > kernel/audit_tree.c, 423:
> > > >             spin_lock in tag_chunk
> > >
> > > There are several locks here; your report would be improved by saying
> > > which one is the problem.  I'm assuming it's old_entry->lock.
> > >
> > >         spin_lock(&old_entry->lock);
> > > ...
> > >         if (fsnotify_add_inode_mark_locked(chunk_entry,
> > >                              old_entry->connector->inode, 1)) {
> > > ...
> > >         return fsnotify_add_mark_locked(mark, inode, NULL, allow_dups);
> > > ...
> > >         ret = fsnotify_add_mark_list(mark, inode, mnt, allow_dups);
> > > ...
> > >         if (inode)
> > >                 connp = &inode->i_fsnotify_marks;
> > >         conn = fsnotify_grab_connector(connp);
> > >         if (!conn) {
> > >                 err = fsnotify_attach_connector_to_object(connp, inode, mnt);
> > >
> > > It seems to me that this is safe because old_entry is looked up from
> > > fsnotify_find_mark, and it can't be removed while its lock is held.
> > > Therefore there's always a 'conn' returned from fsnotify_grab_connector(),
> > > and so this path will never be taken.
> > >
> > > But this code path is confusing to me, and I could be wrong.  Jan, please
> > > confirm my analysis is correct?
> >
> > Yes, you are correct. The presence of another mark in the list (and the
> > fact we pin it there using refcount & mark_mutex) guarantees we won't need
> > to allocate the connector. I agree the audit code's use of fsnotify would
> > deserve some cleanup.
> 
> I'm always open to suggestions and patches (hint, hint) from the
> fsnotify experts ;)

Yeah, I was looking into it on Friday and today :). Currently I'm a bit
stuck because I think I've found some races in the audit_tree code and
haven't yet decided how to fix them. E.g., am I right that the following
can happen?

CPU1                                    CPU2
tag_chunk(inode, tree1)                 tag_chunk(inode, tree2)
  old_entry = fsnotify_find_mark();       old_entry = fsnotify_find_mark();
  old = container_of(old_entry);          old = container_of(old_entry);
  chunk = alloc_chunk(old->count + 1);    chunk = alloc_chunk(old->count + 1);
  mutex_lock(&group->mark_mutex);
  adds new mark
  replaces chunk
  old->dead = 1;
  mutex_unlock(&group->mark_mutex);
                                          mutex_lock(&group->mark_mutex);
                                          if (!(old_entry->flags &
                                                FSNOTIFY_MARK_FLAG_ATTACHED)) {
                                            Check fails as old_entry is
                                            not yet destroyed
                                          adds new mark
                                          replaces old chunk again ->
                                            list corruption, lost refs, ...
                                          mutex_unlock(&group->mark_mutex);

More generally, there's a bigger problem: the audit_tree code can have
multiple marks attached to one inode, but only one of them is the "valid"
one (i.e., the one embedded in the latest chunk). This is only a temporary
state until fsnotify_destroy_mark() detaches the mark and we then really
remove it from the inode's list on the last reference drop, but during
that window it is undefined which mark fsnotify_find_mark() returns...

So am I right that the above can really happen, or is there some
higher-level synchronization I'm missing? If it can really happen, I think
I'll need to rework the code so that audit_tree has just one mark attached,
probably pointing to the current chunk.
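
A rough illustration of what that single-mark scheme might look like
(purely a sketch of the idea in the previous paragraph, not an agreed
design; the names audit_tree_mark and ->chunk are made up here):

        /*
         * Sketch only: one fsnotify mark per inode for audit_tree,
         * pointing at whatever chunk is currently valid.  Replacing a
         * chunk would swap this pointer under the appropriate lock
         * instead of detaching one mark and attaching another, so
         * fsnotify_find_mark() could no longer return a stale,
         * about-to-die mark.
         */
        struct audit_tree_mark {
                struct fsnotify_mark mark;      /* attached to the inode once */
                struct audit_chunk *chunk;      /* current chunk, swapped on replace */
        };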

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: [PATCH] kernel: audit_tree: Fix a sleep-in-atomic-context bug
  2018-06-25  9:22       ` Jan Kara
@ 2018-06-25 12:55         ` Jan Kara
  2018-06-25 22:25         ` Paul Moore
  1 sibling, 0 replies; 7+ messages in thread
From: Jan Kara @ 2018-06-25 12:55 UTC (permalink / raw)
  To: Paul Moore
  Cc: jack, willy, baijiaju1990, Eric Paris, amir73il, linux-audit,
	linux-kernel, linux-fsdevel

On Mon 25-06-18 11:22:57, Jan Kara wrote:
> On Fri 22-06-18 14:56:09, Paul Moore wrote:
> > On Fri, Jun 22, 2018 at 5:23 AM Jan Kara <jack@suse.cz> wrote:
> > > On Wed 20-06-18 21:29:12, Matthew Wilcox wrote:
> > > > On Thu, Jun 21, 2018 at 11:32:45AM +0800, Jia-Ju Bai wrote:
> > > > > The kernel may sleep with holding a spinlock.
> > > > > The function call paths (from bottom to top) in Linux-4.16.7 are:
> > > > >
> > > > > [FUNC] kmem_cache_alloc(GFP_KERNEL)
> > > > > fs/notify/mark.c, 439:
> > > > >             kmem_cache_alloc in fsnotify_attach_connector_to_object
> > > > > fs/notify/mark.c, 520:
> > > > >             fsnotify_attach_connector_to_object in fsnotify_add_mark_list
> > > > > fs/notify/mark.c, 590:
> > > > >             fsnotify_add_mark_list in fsnotify_add_mark_locked
> > > > > kernel/audit_tree.c, 437:
> > > > >             fsnotify_add_mark_locked in tag_chunk
> > > > > kernel/audit_tree.c, 423:
> > > > >             spin_lock in tag_chunk
> > > >
> > > > There are several locks here; your report would be improved by saying
> > > > which one is the problem.  I'm assuming it's old_entry->lock.
> > > >
> > > >         spin_lock(&old_entry->lock);
> > > > ...
> > > >         if (fsnotify_add_inode_mark_locked(chunk_entry,
> > > >                              old_entry->connector->inode, 1)) {
> > > > ...
> > > >         return fsnotify_add_mark_locked(mark, inode, NULL, allow_dups);
> > > > ...
> > > >         ret = fsnotify_add_mark_list(mark, inode, mnt, allow_dups);
> > > > ...
> > > >         if (inode)
> > > >                 connp = &inode->i_fsnotify_marks;
> > > >         conn = fsnotify_grab_connector(connp);
> > > >         if (!conn) {
> > > >                 err = fsnotify_attach_connector_to_object(connp, inode, mnt);
> > > >
> > > > It seems to me that this is safe because old_entry is looked up from
> > > > fsnotify_find_mark, and it can't be removed while its lock is held.
> > > > Therefore there's always a 'conn' returned from fsnotify_grab_connector(),
> > > > and so this path will never be taken.
> > > >
> > > > But this code path is confusing to me, and I could be wrong.  Jan, please
> > > > confirm my analysis is correct?
> > >
> > > Yes, you are correct. The presence of another mark in the list (and the
> > > fact we pin it there using refcount & mark_mutex) guarantees we won't need
> > > to allocate the connector. I agree the audit code's use of fsnotify would
> > > deserve some cleanup.
> > 
> > I'm always open to suggestions and patches (hint, hint) from the
> > fsnotify experts ;)
> 
> Yeah, I was looking into it on Friday and today :). Currently I've got a
> bit stuck because I think I've found some races in audit_tree code and I
> haven't yet decided how to fix them. E.g. am I right the following can
> happen?
> 
> CPU1                                    CPU2
> tag_chunk(inode, tree1)                 tag_chunk(inode, tree2)
>   old_entry = fsnotify_find_mark();       old_entry = fsnotify_find_mark();
>   old = container_of(old_entry);          old = container_of(old_entry);
>   chunk = alloc_chunk(old->count + 1);    chunk = alloc_chunk(old->count + 1);
>   mutex_lock(&group->mark_mutex);
>   adds new mark
>   replaces chunk
>   old->dead = 1;
>   mutex_unlock(&group->mark_mutex);
>                                           mutex_lock(&group->mark_mutex);
>                                           if (!(old_entry->flags &
>                                                 FSNOTIFY_MARK_FLAG_ATTACHED)) {
>                                             Check fails as old_entry is
>                                             not yet destroyed
>                                           adds new mark
>                                           replaces old chunk again ->
>                                             list corruption, lost refs, ...
>                                           mutex_unlock(&group->mark_mutex);
> 
> Generally there's a bigger problem that audit_tree code can have multiple
> marks attached to one inode but only one of them is the "valid" one (i.e.,
> the one embedded in the latest chunk). This is only a temporary state until
> fsnotify_destroy_mark() detaches the mark and then on last reference drop
> we really remove the mark from inode's list but during that window it is
> undefined which mark is returned from fsnotify_find_mark()...
> 
> So am I right the above can really happen or is there some higher level
> synchronization I'm missing? If this can really happen, I think I'll need
> to rework the code so that audit_tree has just one mark attached and
> let it probably point to the current chunk.

Also, am I right to assume that if two tag_chunk() calls race, both try
to add a new fsnotify mark in create_chunk(), and one of them fails, then
the resulting ENOSPC error from create_chunk() is actually a bug? From
looking at the code, it seems the desired functionality is that
tag_chunk() should add 'tree' to the chunk, expanding the chunk as needed.
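
For context, the failure path being discussed looks roughly like this
(paraphrased from the 4.16-era create_chunk(), heavily condensed and not
the literal code):

        static int create_chunk(struct inode *inode, struct audit_tree *tree)
        {
                struct audit_chunk *chunk = alloc_chunk(1);

                if (!chunk)
                        return -ENOMEM;

                /*
                 * If a racing tag_chunk() has already attached its own
                 * mark to the inode, this add fails and the caller gets
                 * -ENOSPC back, instead of 'tree' being added to the
                 * chunk that now exists -- the behaviour questioned above.
                 */
                if (fsnotify_add_mark(&chunk->mark, inode, NULL, 0)) {
                        fsnotify_put_mark(&chunk->mark);
                        return -ENOSPC;
                }

                /* ... take hash_lock, install the chunk, tag the tree ... */
                return 0;
        }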

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: [PATCH] kernel: audit_tree: Fix a sleep-in-atomic-context bug
  2018-06-25  9:22       ` Jan Kara
  2018-06-25 12:55         ` Jan Kara
@ 2018-06-25 22:25         ` Paul Moore
  1 sibling, 0 replies; 7+ messages in thread
From: Paul Moore @ 2018-06-25 22:25 UTC (permalink / raw)
  To: jack
  Cc: willy, baijiaju1990, Eric Paris, amir73il, linux-audit,
	linux-kernel, linux-fsdevel, rgb

On Mon, Jun 25, 2018 at 5:23 AM Jan Kara <jack@suse.cz> wrote:
> On Fri 22-06-18 14:56:09, Paul Moore wrote:
> > On Fri, Jun 22, 2018 at 5:23 AM Jan Kara <jack@suse.cz> wrote:
> > > On Wed 20-06-18 21:29:12, Matthew Wilcox wrote:
> > > > On Thu, Jun 21, 2018 at 11:32:45AM +0800, Jia-Ju Bai wrote:
> > > > > The kernel may sleep with holding a spinlock.
> > > > > The function call paths (from bottom to top) in Linux-4.16.7 are:
> > > > >
> > > > > [FUNC] kmem_cache_alloc(GFP_KERNEL)
> > > > > fs/notify/mark.c, 439:
> > > > >             kmem_cache_alloc in fsnotify_attach_connector_to_object
> > > > > fs/notify/mark.c, 520:
> > > > >             fsnotify_attach_connector_to_object in fsnotify_add_mark_list
> > > > > fs/notify/mark.c, 590:
> > > > >             fsnotify_add_mark_list in fsnotify_add_mark_locked
> > > > > kernel/audit_tree.c, 437:
> > > > >             fsnotify_add_mark_locked in tag_chunk
> > > > > kernel/audit_tree.c, 423:
> > > > >             spin_lock in tag_chunk
> > > >
> > > > There are several locks here; your report would be improved by saying
> > > > which one is the problem.  I'm assuming it's old_entry->lock.
> > > >
> > > >         spin_lock(&old_entry->lock);
> > > > ...
> > > >         if (fsnotify_add_inode_mark_locked(chunk_entry,
> > > >                              old_entry->connector->inode, 1)) {
> > > > ...
> > > >         return fsnotify_add_mark_locked(mark, inode, NULL, allow_dups);
> > > > ...
> > > >         ret = fsnotify_add_mark_list(mark, inode, mnt, allow_dups);
> > > > ...
> > > >         if (inode)
> > > >                 connp = &inode->i_fsnotify_marks;
> > > >         conn = fsnotify_grab_connector(connp);
> > > >         if (!conn) {
> > > >                 err = fsnotify_attach_connector_to_object(connp, inode, mnt);
> > > >
> > > > It seems to me that this is safe because old_entry is looked up from
> > > > fsnotify_find_mark, and it can't be removed while its lock is held.
> > > > Therefore there's always a 'conn' returned from fsnotify_grab_connector(),
> > > > and so this path will never be taken.
> > > >
> > > > But this code path is confusing to me, and I could be wrong.  Jan, please
> > > > confirm my analysis is correct?
> > >
> > > Yes, you are correct. The presence of another mark in the list (and the
> > > fact we pin it there using refcount & mark_mutex) guarantees we won't need
> > > to allocate the connector. I agree the audit code's use of fsnotify would
> > > deserve some cleanup.
> >
> > I'm always open to suggestions and patches (hint, hint) from the
> > fsnotify experts ;)
>
> Yeah, I was looking into it on Friday and today :). Currently I've got a
> bit stuck because I think I've found some races in audit_tree code and I
> haven't yet decided how to fix them. E.g. am I right the following can
> happen?
>
> CPU1                                    CPU2
> tag_chunk(inode, tree1)                 tag_chunk(inode, tree2)
>   old_entry = fsnotify_find_mark();       old_entry = fsnotify_find_mark();
>   old = container_of(old_entry);          old = container_of(old_entry);
>   chunk = alloc_chunk(old->count + 1);    chunk = alloc_chunk(old->count + 1);
>   mutex_lock(&group->mark_mutex);
>   adds new mark
>   replaces chunk
>   old->dead = 1;
>   mutex_unlock(&group->mark_mutex);
>                                           mutex_lock(&group->mark_mutex);
>                                           if (!(old_entry->flags &
>                                                 FSNOTIFY_MARK_FLAG_ATTACHED)) {
>                                             Check fails as old_entry is
>                                             not yet destroyed
>                                           adds new mark
>                                           replaces old chunk again ->
>                                             list corruption, lost refs, ...
>                                           mutex_unlock(&group->mark_mutex);
>
> Generally there's a bigger problem that audit_tree code can have multiple
> marks attached to one inode but only one of them is the "valid" one (i.e.,
> the one embedded in the latest chunk). This is only a temporary state until
> fsnotify_destroy_mark() detaches the mark and then on last reference drop
> we really remove the mark from inode's list but during that window it is
> undefined which mark is returned from fsnotify_find_mark()...
>
> So am I right the above can really happen or is there some higher level
> synchronization I'm missing? If this can really happen, I think I'll need
> to rework the code so that audit_tree has just one mark attached and
> let it probably point to the current chunk.

Full disclosure: I inherited almost all of this code, and aside from
work by Richard to add some new functionality early in my tenure, and
the fixes you've sent, I've not ventured too far into this
audit/fsnotify code.  This means I don't have a quick answer for you;
in fact, since you've been digging through the code recently, you're
probably more knowledgeable than I am about this particular corner of
the audit code.

I've added Richard to the CC line; he was the last audit person to
spend any quality time with this code, so he may have some insight,
although it has been many years if the git log is to be believed.

As far as the "bigger problem" is concerned, we are actually aware of
this (partially, anyway); we just haven't had the cycles to deal with
it:

* https://github.com/linux-audit/audit-kernel/issues/12

-- 
paul moore
www.paul-moore.com
