From: Shakeel Butt <shakeelb@google.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Roman Gushchin <guro@fb.com>, Greg Thelen <gthelen@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linux MM <linux-mm@kvack.org>, Cgroups <cgroups@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] memcg: oom: ignore oom warnings from memory.max
Date: Mon, 4 May 2020 12:23:51 -0700
Message-ID: <CALvZod79hWns9366B+8ZK2Roz8c+vkdA80HqFNMep56_pumdRQ@mail.gmail.com>
In-Reply-To: <20200504160613.GU22838@dhcp22.suse.cz>

On Mon, May 4, 2020 at 9:06 AM Michal Hocko <mhocko@kernel.org> wrote:
>
> On Mon 04-05-20 08:35:57, Shakeel Butt wrote:
> > On Mon, May 4, 2020 at 8:00 AM Michal Hocko <mhocko@kernel.org> wrote:
> > >
> > > On Mon 04-05-20 07:53:01, Shakeel Butt wrote:
> [...]
> > > > I am trying to see if "no eligible task" is really an issue that should
> > > > be warned about for the "other use cases". The only real use-case I can
> > > > think of is resource managers adjusting the limit dynamically. I
> > > > don't see "no eligible task" as a concerning reason for such a use-case.
> > >
> > > It is very much a concerning reason to notify about, like any other OOM
> > > situation due to a hard limit breach. In this case it is worse in some
> > > sense because the limit cannot be trimmed down, as there is no
> > > directly reclaimable memory at all. Such an oom situation is
> > > effectively conserved.
> > > --
> >
> > Let me make a more precise statement; tell me if you agree. The "no
> > eligible task" case is concerning for the charging path but not for the
> > writer of memory.max. The writer can read the usage and
> > cgroup.[procs|events] to figure out the situation if needed.
>
> I really hate to repeat myself but this is no different from a regular
> oom situation.

Conceptually yes, there is no difference, but there is no *divine
restriction* against making a difference if there is a real-world
use-case that would benefit from it.

> Admin sets the hard limit and the kernel tries to act
> upon that.
>
> You cannot make any assumption about what the admin wanted or didn't want
> to see.

Actually, we always make assumptions about how the features we implement
will be used, and the assumptions evolve as new use-cases come along.

> We simply trigger the oom killer on memory.max and this is a
> documented behavior. No eligible task, or no task at all, is simply a
> corner case

For "sweep before tear down" use-case this is not a corner case.

> when the kernel cannot act and mentions that along with the
> oom report so that whoever consumes that information can debug or act on
> that fact.
>
> Silencing the oom report simply removes a potentially useful
> aid for further debugging a potential problem.

*Potentially* useful for debugging versus actually beneficial for the
"sweep before tear down" use-case. Also, I am not saying we should make
"no dumps for memory.max when there are no eligible tasks" a set-in-stone
rule. We can always reevaluate when such information would actually be
useful.
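
And to be concrete about the "read the usage and cgroup.[procs|events]"
point above: the writer of memory.max already has everything it needs in
the cgroup's own files, without parsing the kernel log. A rough sketch
(again assuming cgroup v2 under /sys/fs/cgroup, with a made-up path):

#include <stdio.h>

/* Dump the cgroup state a resource manager would look at after lowering
 * memory.max: remaining usage, whether any tasks are left, and whether
 * the oom killer was invoked. */
static void dump_file(const char *cg, const char *name)
{
	char path[256], line[256];
	FILE *f;

	snprintf(path, sizeof(path), "%s/%s", cg, name);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return;
	}
	while (fgets(line, sizeof(line), f))
		printf("%s: %s", name, line);
	fclose(f);
}

int main(void)
{
	const char *cg = "/sys/fs/cgroup/job-1234";	/* hypothetical path */

	dump_file(cg, "memory.current");	/* bytes still charged */
	dump_file(cg, "cgroup.events");		/* "populated 0" => no tasks left */
	dump_file(cg, "memory.events");		/* "oom" / "oom_kill" counters */
	return 0;
}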

Johannes/Andrew, what's your opinion?

Thread overview: 78+ messages
2020-04-30 18:27 [PATCH] memcg: oom: ignore oom warnings from memory.max Shakeel Butt
2020-04-30 19:06 ` Roman Gushchin
2020-04-30 19:30   ` Johannes Weiner
2020-04-30 20:23     ` Roman Gushchin
2020-04-30 19:31   ` Shakeel Butt
2020-04-30 19:29 ` Johannes Weiner
2020-04-30 20:20   ` Shakeel Butt
2020-05-04  6:57     ` Michal Hocko
2020-05-04 13:54       ` Shakeel Butt
2020-05-01  1:39 ` Yafang Shao
2020-05-01  2:04   ` Shakeel Butt
2020-05-01  2:12     ` Yafang Shao
2020-05-04  7:03   ` Michal Hocko
2020-05-04  7:26     ` Yafang Shao
2020-05-04  7:35       ` Michal Hocko
2020-05-04  7:40         ` Yafang Shao
2020-05-04  8:03           ` Michal Hocko
2020-05-04  6:56 ` Michal Hocko
2020-05-04 13:54   ` Shakeel Butt
2020-05-04 14:11     ` Michal Hocko
2020-05-04 14:53       ` Shakeel Butt
2020-05-04 15:00         ` Michal Hocko
2020-05-04 15:35           ` Shakeel Butt
2020-05-04 15:39             ` Yafang Shao
2020-05-04 16:06             ` Michal Hocko
2020-05-04 19:23               ` Shakeel Butt [this message]
2020-05-05  7:13                 ` Michal Hocko
2020-05-05 15:03                   ` Shakeel Butt
2020-05-05 16:57                     ` Johannes Weiner
2020-05-05 15:27                 ` Johannes Weiner
2020-05-05 15:35                   ` Shakeel Butt
2020-05-05 15:49                     ` Michal Hocko
2020-05-05 16:40                     ` Johannes Weiner
2020-05-04 14:20     ` Tetsuo Handa
2020-05-04 14:57       ` Shakeel Butt
2020-05-04 15:44         ` Tetsuo Handa
