linux-kernel.vger.kernel.org archive mirror
From: Tejun Heo <tj@kernel.org>
To: Michal Hocko <mhocko@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Petr Mladek <pmladek@suse.com>,
	cgroups@vger.kernel.org, Cyril Hrubis <chrubis@suse.cz>,
	linux-kernel@vger.kernel.org
Subject: Re: [BUG] cgroup/workques/fork: deadlock when moving cgroups
Date: Fri, 15 Apr 2016 11:25:26 -0400	[thread overview]
Message-ID: <20160415152526.GJ12583@htj.duckdns.org> (raw)
In-Reply-To: <20160415150815.GM32377@dhcp22.suse.cz>

Hello,

On Fri, Apr 15, 2016 at 05:08:15PM +0200, Michal Hocko wrote:
> On Fri 15-04-16 10:38:15, Tejun Heo wrote:
> > Not necessarily.  The only thing necessary is flushing the work item
> > after releasing locks but before returning to user.
> > cpuset_post_attach_flush() does exactly the same thing.
> 
> Ahh, ok, I didn't know that __cgroup_procs_write is doing something
> controller specific. Yes, then we wouldn't need a generic callback if
> code like the above would be acceptable.

Yeah, I thought it'd be a one-off thing, so I didn't make it a generic
callback.  Making it a generic callback isn't a problem, though.

> > > really relies on the previous behavior I guess we can solve it with a
> > > post_move cgroup callback which would be called from a lockless context.
> > > 
> > > Anyway, before we go that way, can we at least consider the possibility
> > > of removing the kworker creation dependency on the global rwsem? AFAIU
> > > this locking was added because of the pid controller. Do we even care
> > > about something as volatile as kworkers in the pid controller?
> > 
> > It's not just pid controller and the global percpu locking has lower
> 
> where else would the locking matter? I have only checked the git history
> to build my picture so I might be missing something of course.

IIRC, there were followup patches which fixed and/or simplified
locking paths.  It's just generally a lot simpler to deal with.  The
downside, obviously, is that cgroup core operations can't depend on task
creation.  I didn't expect memcg to trigger it too, though.  I don't
think we want to be doing heavy-lifting operations like node migration
or page relabeling while holding the cgroup lock anyway, so I would
much prefer making them async.

> > hotpath overhead.  We can try to exclude kworkers out of the locking
> > but that can get really nasty and there are already attempts to add
> > cgroup support to workqueue.  Will think more about it.  For now tho,
> > do you think making charge moving async would be difficult?
> 
> Well it certainly is not that trivial because it relies on being
> exclusive with global context. I will have to look closer of course but
> I cannot guarantee I will get to it before I get back from LSF. We can
> certainly discuss that at the conference. Johannes will be there as
> well.

I see.  For cpuset it didn't really matter, but what we can do is
create a mechanism on the cgroup core side which is called after a
migration operation completes, once the usual locks are dropped, and
which guarantees that no new migration will be started before the
callbacks finish.  If we have that, relocating charge moving outside
the attach path should be pretty trivial, right?

Thanks.

-- 
tejun


Thread overview: 32+ messages
2016-04-13  9:42 [BUG] cgroup/workques/fork: deadlock when moving cgroups Petr Mladek
2016-04-13 18:33 ` Tejun Heo
2016-04-13 18:57   ` Tejun Heo
2016-04-13 19:23   ` Michal Hocko
2016-04-13 19:28     ` Michal Hocko
2016-04-13 19:37     ` Tejun Heo
2016-04-13 19:48       ` Michal Hocko
2016-04-14  7:06         ` Michal Hocko
2016-04-14 15:32           ` Tejun Heo
2016-04-14 17:50     ` Johannes Weiner
2016-04-15  7:06       ` Michal Hocko
2016-04-15 14:38         ` Tejun Heo
2016-04-15 15:08           ` Michal Hocko
2016-04-15 15:25             ` Tejun Heo [this message]
2016-04-17 12:00               ` Michal Hocko
2016-04-18 14:40           ` Petr Mladek
2016-04-19 14:01             ` Michal Hocko
2016-04-19 15:39               ` Petr Mladek
2016-04-15 19:17       ` [PATCH for-4.6-fixes] memcg: remove lru_add_drain_all() invocation from mem_cgroup_move_charge() Tejun Heo
2016-04-17 12:07         ` Michal Hocko
2016-04-20 21:29           ` Tejun Heo
2016-04-21  3:27             ` Michal Hocko
2016-04-21 15:00               ` Petr Mladek
2016-04-21 15:51                 ` Tejun Heo
2016-04-21 23:06           ` [PATCH 1/2] cgroup, cpuset: replace cpuset_post_attach_flush() with cgroup_subsys->post_attach callback Tejun Heo
2016-04-21 23:09             ` [PATCH 2/2] memcg: relocate charge moving from ->attach to ->post_attach Tejun Heo
2016-04-22 13:57               ` Petr Mladek
2016-04-25  8:25               ` Michal Hocko
2016-04-25 19:42                 ` Tejun Heo
2016-04-25 19:44               ` Tejun Heo
2016-04-21 23:11             ` [PATCH 1/2] cgroup, cpuset: replace cpuset_post_attach_flush() with cgroup_subsys->post_attach callback Tejun Heo
2016-04-21 15:56         ` [PATCH for-4.6-fixes] memcg: remove lru_add_drain_all() invocation from mem_cgroup_move_charge() Tejun Heo
