From: Mike Galbraith <efault@gmx.de>
To: Paul Menage <menage@google.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>,
LKML <linux-kernel@vger.kernel.org>,
Colin Cross <ccross@android.com>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Ingo Molnar <mingo@elte.hu>
Subject: Re: query: [PATCH 2/2] cgroup: Remove call to synchronize_rcu in cgroup_attach_task
Date: Wed, 13 Apr 2011 18:56:59 +0200 [thread overview]
Message-ID: <1302713819.7448.22.camel@marge.simson.net> (raw)
In-Reply-To: <BANLkTikqvP90Etu=L24DPWbckrawM-6n=Q@mail.gmail.com>
On Wed, 2011-04-13 at 15:16 +0200, Paul Menage wrote:
> On Wed, Apr 13, 2011 at 5:11 AM, Mike Galbraith <efault@gmx.de> wrote:
> > If the user _does_ that rmdir(), it's more or less back to square one.
> > RCU grace periods should not impact userland, but if you try to do
> > create/attach/detach/destroy, you run into the same bottleneck, as does
> > any asynchronous GC, though that's not such a poke in the eye. I tried
> > a straightforward move to schedule_work(), and it seems to work just
> > fine. rmdir() no longer takes ~30ms on my box, but closer to 20us.
>
> > + /*
> > + * Release the subsystem state objects.
> > + */
> > + for_each_subsys(cgrp->root, ss)
> > + ss->destroy(ss, cgrp);
> > +
> > + cgrp->root->number_of_cgroups--;
> > + mutex_unlock(&cgroup_mutex);
> > +
> > + /*
> > + * Drop the active superblock reference that we took when we
> > + * created the cgroup
> > + */
> > + deactivate_super(cgrp->root->sb);
> > +
> > + /*
> > + * if we're getting rid of the cgroup, refcount should ensure
> > + * that there are no pidlists left.
> > + */
> > + BUG_ON(!list_empty(&cgrp->pidlists));
> > +
> > + kfree(cgrp);
>
> We might want to punt this through RCU again, in case the subsystem
> destroy() callbacks left anything around that was previously depending
> on the RCU barrier.
>
> Also, I'd be concerned that subsystems might get confused by the fact
> that a new group called 'foo' could be created before the old 'foo'
> has been cleaned up? (And do any subsystems rely on being able to
> access the cgroup dentry up until the point when destroy() is called?)
Yeah, I already have head-scratching sessions planned for both of those,
which is why I said it 'seems' to work fine, and why it carries a
Not-signed-off-by: :)
-Mike
Thread overview: 17+ messages
2011-04-07 9:55 query: [PATCH 2/2] cgroup: Remove call to synchronize_rcu in cgroup_attach_task Mike Galbraith
2011-04-13 2:02 ` Li Zefan
2011-04-13 3:11 ` Mike Galbraith
2011-04-13 13:16 ` Paul Menage
2011-04-13 16:56 ` Mike Galbraith [this message]
2011-04-14 7:26 ` Mike Galbraith
2011-04-14 8:34 ` Mike Galbraith
2011-04-14 8:44 ` Mike Galbraith
2011-04-18 14:21 ` Mike Galbraith
2011-04-28 9:38 ` Mike Galbraith
2011-04-29 12:34 ` Mike Galbraith
2011-05-02 13:46 ` Paul E. McKenney
2011-05-02 14:29 ` Mike Galbraith
2011-05-02 15:04 ` Mike Galbraith
2011-05-02 23:03 ` Paul E. McKenney
2011-04-13 13:10 ` Paul Menage
2011-04-13 16:52 ` Mike Galbraith