From: Michal Hocko <mhocko@kernel.org>
To: Roman Gushchin <guro@fb.com>
Cc: linux-mm@kvack.org, Vladimir Davydov <vdavydov.dev@gmail.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>,
	David Rientjes <rientjes@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tejun Heo <tj@kernel.org>,
	kernel-team@fb.com, cgroups@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [v7 2/5] mm, oom: cgroup-aware OOM killer
Date: Wed, 6 Sep 2017 15:22:49 +0200
Message-ID: <20170906132249.c2llo5zyrzgviqzc@dhcp22.suse.cz>
In-Reply-To: <20170906125750.GB12904@castle>

On Wed 06-09-17 13:57:50, Roman Gushchin wrote:
> On Wed, Sep 06, 2017 at 10:31:58AM +0200, Michal Hocko wrote:
> > On Tue 05-09-17 21:23:57, Roman Gushchin wrote:
> > > On Tue, Sep 05, 2017 at 04:57:00PM +0200, Michal Hocko wrote:
> > [...]
> > > > Hmm. The changelog says "By default, it will look for the biggest leaf
> > > > cgroup, and kill the largest task inside." But you are accumulating
> > > > oom_score up the hierarchy, so parents will have a higher score than
> > > > any of their children, and the larger the sub-hierarchy the more
> > > > biased it becomes. Say you have
> > > > 	root
> > > >          /\
> > > >         /  \
> > > >        A    D
> > > >       / \
> > > >      B   C
> > > > 
> > > > B (5) and C (15), thus A (20) and D (20). Unless I am missing something,
> > > > we are going to go down the A path and then choose C even though D is
> > > > the largest leaf group, right?
> > > 
> > > You're right, the changelog is not accurate; I'll fix it.
> > > The behavior itself is correct, IMO.
> > 
> > Please explain why. This is a really non-intuitive semantic. Why should
> > larger hierarchies be punished more than shallow ones? I would
> > completely agree if the whole hierarchy were a killable entity (i.e. if
> > A were kill-all).
> 
> I think it's a reasonable and clear policy: we're looking for the memcg
> with the smallest oom_priority and the largest memory footprint, recursively.

But this can get really complex for non-trivial setups. Anything with a
deeper and larger hierarchy becomes hard to reason about IMHO.
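
To make the bias concrete, here is a minimal userspace sketch (toy code,
not the actual patchset) of the selection as I understand it: scores are
accumulated up the tree and the walk descends into the child with the
largest accumulated score at each level. On the A/B/C/D example above it
ends up at C(15) rather than at the largest leaf D(20):

#include <stdio.h>

struct memcg {
        const char *name;
        long score;                     /* own leaf score, 0 for inner nodes */
        struct memcg *children[4];      /* NULL-terminated */
};

/* accumulated score: own score plus everything below */
static long acc_score(const struct memcg *cg)
{
        long sum = cg->score;

        for (int i = 0; cg->children[i]; i++)
                sum += acc_score(cg->children[i]);
        return sum;
}

/* descend level by level into the child with the largest accumulated score */
static const struct memcg *select_victim(const struct memcg *cg)
{
        const struct memcg *best = NULL;
        long best_score = -1;

        if (!cg->children[0])
                return cg;      /* leaf: this is the victim memcg */

        for (int i = 0; cg->children[i]; i++) {
                long s = acc_score(cg->children[i]);

                if (s > best_score) {
                        best_score = s;
                        best = cg->children[i];
                }
        }
        return select_victim(best);
}

int main(void)
{
        struct memcg B = { "B", 5,  { NULL } };
        struct memcg C = { "C", 15, { NULL } };
        struct memcg A = { "A", 0,  { &B, &C, NULL } };
        struct memcg D = { "D", 20, { NULL } };
        struct memcg root = { "root", 0, { &A, &D, NULL } };

        /*
         * A and D tie at 20; with the tie broken toward A the walk
         * ends at C(15), not at the largest leaf D(20).
         */
        printf("victim: %s\n", select_victim(&root)->name);
        return 0;
}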

Btw. do you have any specific usecase for the priority-based oom
killer? I remember David was asking for this because it _would_ be
useful, but you didn't have it initially. I agree it would be useful; I
am just not sure the semantics are thought through very well. I am
wondering whether it would be easier to push this forward without the
priority part for now and add it later, with a clear example of the
configuration, how it should work, and more carefully thought-through
semantics. Would that sound acceptable? I believe the rest is quite
useful to get merged on its own.

> Then we reclaim some memory from it (by killing the biggest process
> or all processes, depending on memcg preferences).
> 
> In general, if there are two memcgs of equal importance (which is defined
> by setting the oom_priority), we're choosing the larger one, because there
> is a better chance that it contains a leaking process. The same is true
> right now for processes.

Yes, except this is not the case, as shown above. We can easily select a
smaller leaf memcg just because it sits in a larger hierarchy, and that
sounds very dubious to me, even when all the priorities are the same.

> I agree that for the size-based comparison we could use a different policy:
> comparing leaf cgroups regardless of their level. But I don't see a clever
> way to apply oom_priorities in that case. Comparing oom_priority
> on each level is a simple and powerful policy, and it works well
> for delegation.

You are already shaping the semantics around the implementation, and
that is a clear sign of a problem.
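
For concreteness, the leaf-only comparison you mention could look like
this (reusing the toy struct from the sketch above; again just an
illustration, not the patchset's code). On the same tree it picks D(20):

/* compare all leaves regardless of their depth in the hierarchy */
static const struct memcg *largest_leaf(const struct memcg *cg)
{
        const struct memcg *best = NULL;

        if (!cg->children[0])
                return cg;      /* a leaf stands for itself */

        for (int i = 0; cg->children[i]; i++) {
                const struct memcg *leaf = largest_leaf(cg->children[i]);

                if (!best || leaf->score > best->score)
                        best = leaf;
        }
        return best;
}

How oom_priority would compose with such a flat comparison is exactly
the open question; the sketch deliberately ignores it.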
 
> > [...]
> > > > I do not understand why we have to handle the root cgroup specially here.
> > > > select_victim_memcg already iterates all memcgs in the oom hierarchy
> > > > (including root), so if the root memcg is the largest one then we
> > > > should simply consider it, no?
> > > 
> > > We don't have the necessary stats for the root cgroup, so we can't
> > > calculate its oom_score.
> > 
> > We used to charge pages to the root memcg as well, so we might resurrect
> > that idea. In any case this is something that could be hidden in
> > memcg_oom_badness rather than special-cased somewhere else.
> 
> In theory I agree, but I do not see a good way to calculate the root
> memcg's oom_score.

Why can't you emulate that with the largest task in the root, the same
way you actually do in select_victim_root_cgroup_task now?
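
Something along these lines is what I have in mind, as a rough sketch
only: oom_badness() is the upstream helper, but root_memcg_oom_badness()
itself and the task_in_root_memcg() filter are assumptions for
illustration, not the patchset's actual API:

/*
 * Rough sketch: emulate the root memcg's score by the badness of its
 * largest task. The memcg-filtering helper below is an assumption;
 * the real patch would use whatever select_victim_root_cgroup_task
 * already relies on.
 */
static unsigned long root_memcg_oom_badness(const nodemask_t *nodemask,
                                            unsigned long totalpages)
{
        struct task_struct *p;
        unsigned long max_points = 0;

        rcu_read_lock();
        for_each_process(p) {
                unsigned long points;

                /* assumed helper: skip tasks charged to a non-root memcg */
                if (!task_in_root_memcg(p))
                        continue;

                points = oom_badness(p, NULL, nodemask, totalpages);
                if (points > max_points)
                        max_points = points;
        }
        rcu_read_unlock();

        return max_points;
}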
-- 
Michal Hocko
SUSE Labs
