linux-mm.kvack.org archive mirror
From: Michal Hocko <mhocko@kernel.org>
To: David Rientjes <rientjes@google.com>
Cc: Yafang Shao <laoar.shao@gmail.com>,
	akpm@linux-foundation.org, linux-mm@kvack.org
Subject: Re: [PATCH] mm, oom: make the calculation of oom badness more accurate
Date: Wed, 8 Jul 2020 21:02:25 +0200	[thread overview]
Message-ID: <20200708190225.GM7271@dhcp22.suse.cz> (raw)
In-Reply-To: <alpine.DEB.2.23.453.2007081052050.700996@chino.kir.corp.google.com>

On Wed 08-07-20 10:57:27, David Rientjes wrote:
> On Wed, 8 Jul 2020, Michal Hocko wrote:
> 
> > I have only now realized that David is not on Cc. Add him here. The
> > patch is http://lkml.kernel.org/r/1594214649-9837-1-git-send-email-laoar.shao@gmail.com.
> > 
> > I believe the main problem is that we are normalizing to oom_score_adj
> > units rather than usage/total. I have a very vague recollection this has
> > been done in the past but I didn't get to dig into details yet.
> > 
> 
> The memcg max is 4194304 pages, and an oom_score_adj of -998 would yield a 
> page adjustment of:
> 
> adj = -998 * 4194304 / 1000 = -4185915 pages
> 
> The largest pid 58406 (data_sim) has rss 3967322 pages,
> pgtables 37101568 / 4096 = 9058 pages, and swapents 0.  So its unadjusted 
> badness is
> 
> 3967322 + 9058 pages = 3976380 pages
> 
> Factoring in oom_score_adj, all of these processes will have a badness of 
> 1 because oom_badness() doesn't underflow, which I think is the point of 
> Yafang's proposal.
> 
> I think the patch can work but, as you mention, also needs an update to 
> proc_oom_score().  proc_oom_score() is using the global amount of memory 
> so Yafang is likely not seeing it go negative for that reason but it could 
> happen.

Yes, memcg just makes it more obvious, but the same can happen in the
global case. I am not sure how we can both allow underflow and present
a value that fits the existing model. The exported value should
really reflect what the oom killer uses for its calculation, or we
are going to see discrepancies between the real oom decision and the
presented values. So I believe we really have to change the calculation
itself rather than just make it tolerant to underflows.

But I have to think about that much more.
-- 
Michal Hocko
SUSE Labs



Thread overview: 14+ messages
2020-07-08 13:24 [PATCH] mm, oom: make the calculation of oom badness more accurate Yafang Shao
2020-07-08 14:28 ` Michal Hocko
2020-07-08 14:32   ` Michal Hocko
2020-07-08 17:57     ` David Rientjes
2020-07-08 19:02       ` Michal Hocko [this message]
2020-07-09  2:14         ` Yafang Shao
2020-07-09  6:26           ` Michal Hocko
2020-07-09  6:41             ` Michal Hocko
2020-07-09  7:31             ` Yafang Shao
2020-07-09  8:17               ` Michal Hocko
2020-07-09  1:57       ` Yafang Shao
2020-07-08 15:11   ` Yafang Shao
2020-07-08 16:09     ` Michal Hocko
2020-07-09  1:57       ` Yafang Shao
