From: Michal Hocko <mhocko@kernel.org>
To: Roman Gushchin <guro@fb.com>
Cc: linux-mm@kvack.org, Vladimir Davydov <vdavydov.dev@gmail.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>,
	David Rientjes <rientjes@google.com>, Tejun Heo <tj@kernel.org>,
	kernel-team@fb.com, cgroups@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [v6 2/4] mm, oom: cgroup-aware OOM killer
Date: Fri, 25 Aug 2017 12:58:23 +0200
Message-ID: <20170825105823.GA7769@dhcp22.suse.cz>
In-Reply-To: <20170825103951.GA3185@castle.dhcp.TheFacebook.com>

On Fri 25-08-17 11:39:51, Roman Gushchin wrote:
> On Fri, Aug 25, 2017 at 10:14:03AM +0200, Michal Hocko wrote:
> > On Thu 24-08-17 15:58:01, Roman Gushchin wrote:
> > > On Thu, Aug 24, 2017 at 04:13:37PM +0200, Michal Hocko wrote:
> > > > On Thu 24-08-17 14:58:42, Roman Gushchin wrote:
> > [...]
> > > > > Both ways are not ideal, and the sum over processes is not ideal either.
> > > > > Especially if you take oom_score_adj into account. Will you respect it?
> > > > 
> > > > Yes, and I do not see any reason why we shouldn't.
> > > 
> > > It makes things even more complicated.
> > > Right now a task's oom_score can be in the (~ -total_memory, ~ +2*total_memory)
> > > range, and if you start summing it, the bias can be multiplied by the number
> > > of tasks... Weird.
> > 
> > oom_score_adj is just a normalized bias, so if tasks inside the memcg
> > use it, the whole memcg will get the accumulated bias from all such
> > tasks, so it is not completely off. I agree that the more tasks use the
> > bias, the more biased the whole memcg will be. This might or might not
> > be a problem. As you are trying to reimplement the existing oom killer,
> > I do not think we can simply ignore an API which people are used to.
> > 
> > If this were a configurable oom policy, then I could see how ignoring
> > oom_score_adj would be acceptable, because it would be an explicit opt-in.
> >
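To make the accumulation concrete, here is a simplified sketch (units
are pages; an illustration only, not the actual kernel code):

struct task_sketch {
	unsigned long rss;	/* pages charged to the task */
	long oom_score_adj;	/* -1000 .. +1000 */
};

static long task_badness(const struct task_sketch *t,
			 unsigned long totalpages)
{
	long points = (long)t->rss +
		      t->oom_score_adj * (long)totalpages / 1000;

	return points > 0 ? points : 0;
}

/*
 * Summing over n tasks that share the same oom_score_adj inflates the
 * bias to n * adj * totalpages / 1000 -- the "multiplied by the number
 * of tasks" effect mentioned above.
 */
static unsigned long memcg_badness_sum(const struct task_sketch *tasks,
				       int n, unsigned long totalpages)
{
	unsigned long sum = 0;
	int i;

	for (i = 0; i < n; i++)
		sum += task_badness(&tasks[i], totalpages);
	return sum;
}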
> > > It will also be different in the case of system-wide and memcg-wide OOM.
> > 
> > Why? We do honor oom_score_adj for memcg OOM now, and in fact the
> > kernel memcg OOM killer shouldn't be very different from the global
> > one except for the scope of tasks considered.
> 
> Assume you have two tasks (2GB and 1GB) in a cgroup with a 3GB limit.
> The second task has oom_score_adj +100. Total memory is 64GB, for example.
> 
> In case of a memcg-wide OOM the first task will be selected;
> in case of a system-wide OOM, the second.
> 
> Personally I don't like this, but it looks like we have to respect
> oom_score_adj set to -1000, so I'll alter my patch.
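For concreteness, the flip in this example follows from the usual
badness arithmetic, roughly points = rss + oom_score_adj * totalpages
/ 1000 (a simplification; the real oom_badness() also counts swap and
page tables):

#include <stdio.h>

/* Sketch of the score flip in the example above; sizes in MB. */
static long badness(long rss_mb, long adj, long total_mb)
{
	return rss_mb + adj * total_mb / 1000;
}

int main(void)
{
	/* memcg OOM: scores are scaled against the 3GB limit */
	printf("memcg:  task1=%ld task2=%ld\n",
	       badness(2048, 0, 3072),     /* 2048 -> selected */
	       badness(1024, 100, 3072));  /* 1331 */

	/* system-wide OOM: scaled against 64GB of total memory */
	printf("system: task1=%ld task2=%ld\n",
	       badness(2048, 0, 65536),    /* 2048 */
	       badness(1024, 100, 65536)); /* 7577 -> selected */
	return 0;
}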

I cannot say I love how oom_score_adj works, but it's been like that
for a long time and people do rely on it. So we cannot simply change
it under people's feet.
 
> > > > > I actually started with such an approach, but then found it weird.
> > > > > 
> > > > > > Besides that, you have to check each task for over-killing
> > > > > > anyway. So I do not see any performance merit here.
> > > > > 
> > > > > It's an implementation detail, and we can hopefully get rid of it at some point.
> > > > 
> > > > Well, we might do some estimations and ignore oom scopes, but that
> > > > sounds really complicated and error prone. Unless we have anything like
> > > > that, I would start from tasks and build up what is necessary to make a
> > > > decision at the higher level.
> > > 
> > > Seriously speaking, do you have an example where summing per-process
> > > oom_score works better?
> > 
> > The primary reason I am pushing for this is to have a common iterator
> > code path (which we have had since Vladimir unified the memcg and global
> > oom paths) and only parametrize the value calculation and victim selection.
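As an illustration of the kind of parametrization meant here, a
hypothetical sketch (not the actual unified code; locking and task
refcounting are elided for brevity):

struct oom_policy {
	/* which tasks are eligible (global vs. memcg scope) */
	bool (*eligible)(struct task_struct *tsk, void *scope);
	/* how much killing this task would help */
	unsigned long (*evaluate)(struct task_struct *tsk);
};

/* One iterator for both OOM paths; only the callbacks differ. */
static struct task_struct *select_victim(const struct oom_policy *p,
					 void *scope)
{
	struct task_struct *tsk, *victim = NULL;
	unsigned long score, best = 0;

	rcu_read_lock();
	for_each_process(tsk) {
		if (!p->eligible(tsk, scope))
			continue;
		score = p->evaluate(tsk);
		if (score > best) {
			best = score;
			victim = tsk;
		}
	}
	rcu_read_unlock();
	return victim;
}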
> 
> I agree, but I'm not sure that we can (or have to) totally unify the way
> oom_score is calculated for processes and cgroups.
> 
> But I'd like to see a unified oom_priority approach. This would allow
> defining an OOM killing order in a clear way, and using size-based
> tiebreaking for items of the same priority. Root-cgroup processes would
> be compared with other memory consumers by oom_priority first and by
> oom_score afterwards.
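A minimal sketch of that comparison (hypothetical fields, not the
actual patch):

struct oom_candidate {
	int priority;		/* user-set oom_priority */
	unsigned long score;	/* size-based badness */
};

/*
 * True if @a should be killed in preference to @b: higher priority
 * wins outright; the size-based score only breaks ties.
 */
static bool oom_prefer(const struct oom_candidate *a,
		       const struct oom_candidate *b)
{
	if (a->priority != b->priority)
		return a->priority > b->priority;
	return a->score > b->score;
}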

This again changes the existing semantics, so I really think we should
be careful, and this all should be opt-in.
-- 
Michal Hocko
SUSE Labs

Thread overview: 26+ messages
2017-08-23 16:51 [v6 1/4] mm, oom: refactor the oom_kill_process() function Roman Gushchin
2017-08-23 16:51 ` [v6 0/4] cgroup-aware OOM killer Roman Gushchin
2017-08-23 16:51 ` [v6 2/4] mm, oom: " Roman Gushchin
2017-08-23 23:19   ` David Rientjes
2017-08-25 10:57     ` Roman Gushchin
2017-08-24 11:47   ` Michal Hocko
2017-08-24 12:28     ` Roman Gushchin
2017-08-24 12:58       ` Michal Hocko
2017-08-24 13:58         ` Roman Gushchin
2017-08-24 14:13           ` Michal Hocko
2017-08-24 14:58             ` Roman Gushchin
2017-08-25  8:14               ` Michal Hocko
2017-08-25 10:39                 ` Roman Gushchin
2017-08-25 10:58                   ` Michal Hocko [this message]
2017-08-30 11:22                 ` Roman Gushchin
2017-08-30 20:56                   ` David Rientjes
2017-08-31 13:34                     ` Roman Gushchin
2017-08-31 20:01                       ` David Rientjes
2017-08-23 16:52 ` [v6 3/4] mm, oom: introduce oom_priority for memory cgroups Roman Gushchin
2017-08-24 12:10   ` Michal Hocko
2017-08-24 12:51     ` Roman Gushchin
2017-08-24 13:48       ` Michal Hocko
2017-08-24 14:11         ` Roman Gushchin
2017-08-28 20:54           ` David Rientjes
2017-08-23 16:52 ` [v6 4/4] mm, oom, docs: describe the cgroup-aware OOM killer Roman Gushchin
2017-08-24 11:15 ` [v6 1/4] mm, oom: refactor the oom_kill_process() function Michal Hocko
