linux-mm.kvack.org archive mirror
From: Michal Hocko <mhocko@suse.com>
To: Waiman Long <llong@redhat.com>
Cc: Shakeel Butt <shakeelb@google.com>,
	Aaron Tomlin <atomlin@redhat.com>, Linux MM <linux-mm@kvack.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH] mm/oom_kill: allow oom kill allocating task for non-global case
Date: Mon, 7 Jun 2021 21:36:46 +0200	[thread overview]
Message-ID: <YL51Tp/3jVHUrpuj@dhcp22.suse.cz> (raw)
In-Reply-To: <6d23ce58-4c4b-116a-6d74-c2cf4947492b@redhat.com>

On Mon 07-06-21 15:18:38, Waiman Long wrote:
> On 6/7/21 3:04 PM, Michal Hocko wrote:
> > On Mon 07-06-21 14:51:05, Waiman Long wrote:
> > > On 6/7/21 2:43 PM, Shakeel Butt wrote:
> > > > On Mon, Jun 7, 2021 at 9:45 AM Waiman Long <llong@redhat.com> wrote:
> > > > > On 6/7/21 12:31 PM, Aaron Tomlin wrote:
> > > > > > At present, in the context of memcg OOM, even when
> > > > > > sysctl_oom_kill_allocating_task is enabled, the allocating
> > > > > > task cannot be selected as a target for the OOM killer.
> > > > > > 
> > > > > > This patch removes the restriction entirely.
> > > > > > 
> > > > > > Signed-off-by: Aaron Tomlin <atomlin@redhat.com>
> > > > > > ---
> > > > > >     mm/oom_kill.c | 6 +++---
> > > > > >     1 file changed, 3 insertions(+), 3 deletions(-)
> > > > > > 
> > > > > > diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> > > > > > index eefd3f5fde46..3bae33e2d9c2 100644
> > > > > > --- a/mm/oom_kill.c
> > > > > > +++ b/mm/oom_kill.c
> > > > > > @@ -1089,9 +1089,9 @@ bool out_of_memory(struct oom_control *oc)
> > > > > >                 oc->nodemask = NULL;
> > > > > >         check_panic_on_oom(oc);
> > > > > > 
> > > > > > -     if (!is_memcg_oom(oc) && sysctl_oom_kill_allocating_task &&
> > > > > > -         current->mm && !oom_unkillable_task(current) &&
> > > > > > -         oom_cpuset_eligible(current, oc) &&
> > > > > > +     if (sysctl_oom_kill_allocating_task && current->mm &&
> > > > > > +            !oom_unkillable_task(current) &&
> > > > > > +            oom_cpuset_eligible(current, oc) &&
> > > > > >             current->signal->oom_score_adj != OOM_SCORE_ADJ_MIN) {
> > > > > >                 get_task_struct(current);
> > > > > >                 oc->chosen = current;
> > > > > To provide more context for this patch, we are actually seeing this in
> > > > > a customer report about an OOM that happened in a container: the
> > > > > dominating task used up most of the memory and happened to be the task
> > > > > that triggered the OOM, with the result that no killable process could
> > > > > be found.
> > > > Why was there no killable process? What about the process allocating
> > > > the memory or is this remote memcg charging?
> > > It is because the other processes have an oom_score_adj of -1000, so they
> > > are non-killable. Anyway, they don't consume that much memory, and killing
> > > them wouldn't free up much.
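
(For reference: a task opts out of OOM killing by writing OOM_SCORE_ADJ_MIN,
i.e. -1000, to /proc/<pid>/oom_score_adj, which is how tasks like the conmon
processes in the report below end up with -1000. A minimal userspace sketch,
for illustration only, not taken from the thread:)

  #include <stdio.h>
  #include <stdlib.h>

  /* Mark the calling process as unkillable by the OOM killer by writing
   * OOM_SCORE_ADJ_MIN (-1000) to its oom_score_adj file.  Lowering the
   * value below the current one requires CAP_SYS_RESOURCE. */
  int main(void)
  {
          FILE *f = fopen("/proc/self/oom_score_adj", "w");

          if (!f) {
                  perror("fopen");
                  return EXIT_FAILURE;
          }
          fprintf(f, "%d\n", -1000);  /* OOM_SCORE_ADJ_MIN */
          return fclose(f) ? EXIT_FAILURE : EXIT_SUCCESS;
  }
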
> > > 
> > > The other process, which uses most of the memory, is the one that triggered
> > > the OOM kill in the first place, because the memcg's memory limit was
> > > reached on a new memory allocation. Based on the current logic, this process
> > > cannot be killed at all, even with oom_kill_allocating_task set to 1, when
> > > the OOM happens within the memcg context rather than in a global OOM
> > > situation. This patch allows this process to be killed under that
> > > circumstance.
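
(To make the short-circuit explicit: is_memcg_oom() in mm/oom_kill.c is true
whenever oc->memcg is set, so the pre-patch guard can never fire for a memcg
OOM. Below is a simplified userspace model of the old and new conditions,
with stub types, purely to illustrate the change, not the kernel code itself:)

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>

  /* Stripped-down stand-in for the kernel's struct oom_control. */
  struct oom_control { void *memcg; };

  static bool is_memcg_oom(struct oom_control *oc)
  {
          return oc->memcg != NULL;   /* true for any memcg/container OOM */
  }

  /* Pre-patch guard: the allocating-task shortcut is skipped for memcg OOMs. */
  static bool kill_allocating_old(struct oom_control *oc, int sysctl)
  {
          return !is_memcg_oom(oc) && sysctl;
  }

  /* Post-patch guard: the memcg check is dropped. */
  static bool kill_allocating_new(struct oom_control *oc, int sysctl)
  {
          (void)oc;
          return sysctl != 0;
  }

  int main(void)
  {
          int dummy;
          struct oom_control memcg_oom = { .memcg = &dummy };

          printf("memcg OOM, sysctl=1 -> old: %d, new: %d\n",
                 kill_allocating_old(&memcg_oom, 1),
                 kill_allocating_new(&memcg_oom, 1));  /* old: 0, new: 1 */
          return 0;
  }
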
> > Do you have the oom report? I do not see why the allocating task hasn't
> > been chosen.
> 
> A partial OOM report below:

Do you happen to have the full report?

> [ 8221.433608] memory: usage 21280kB, limit 204800kB, failcnt 49116
>   :
> [ 8227.239769] [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents  oom_score_adj name
> [ 8227.242495] [1611298]     0 1611298    35869      635 167936        0         -1000 conmon
> [ 8227.242518] [1702509]     0 1702509    35869      701 176128        0         -1000 conmon
> [ 8227.242522] [1703345] 1001050000 1703294   183440        0 2125824        0           999 node
> [ 8227.242706] Out of memory and no killable processes...
> [ 8227.242731] node invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=999
> [ 8227.242732] node cpuset=crio-b8ac7e23f7b520c0365461defb66738231918243586e287bfb9e206bb3a0227a.scope mems_allowed=0-1
> 
> So in this case, node cannot kill itself and no other processes are
> available to be killed.

The process is clearly listed as eligible, so the oom killer should find
it; if it hasn't, then this should be investigated. Which kernel is
this?
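
(For reference, oom_badness() scores a victim roughly as rss + swap entries +
page-table pages, plus oom_score_adj scaled by totalpages/1000. Plugging the
numbers from the report above into that approximation suggests the node task
should score well above zero. A rough back-of-the-envelope model, assuming
4 KiB pages and the 204800 kB limit, not the kernel's actual code:)

  #include <stdio.h>

  /* Very rough approximation of oom_badness() for the "node" task in the
   * report above: rss = 0 pages, swapents = 0, pgtables_bytes = 2125824,
   * oom_score_adj = 999, memcg limit 204800 kB = 51200 4-KiB pages. */
  int main(void)
  {
          long points = 0 /* rss */ + 0 /* swapents */ + 2125824 / 4096;
          long adj = 999L * (51200 / 1000);

          printf("approx. badness: %ld\n", points + adj);  /* well above 0 */
          return 0;
  }
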
-- 
Michal Hocko
SUSE Labs


Thread overview: 29+ messages
2021-06-07 16:31 [RFC PATCH] mm/oom_kill: allow oom kill allocating task for non-global case Aaron Tomlin
2021-06-07 16:42 ` Waiman Long
2021-06-07 18:43   ` Shakeel Butt
2021-06-07 18:51     ` Waiman Long
2021-06-07 19:04       ` Michal Hocko
2021-06-07 19:18         ` Waiman Long
2021-06-07 19:36           ` Michal Hocko [this message]
2021-06-07 20:03             ` Michal Hocko
2021-06-07 20:44               ` Waiman Long
2021-06-08  6:22                 ` Michal Hocko
2021-06-08  9:39                   ` Aaron Tomlin
2021-06-08 10:00                   ` Aaron Tomlin
2021-06-08 13:58                     ` Michal Hocko
2021-06-08 15:22                       ` Tetsuo Handa
2021-06-08 16:17                         ` Michal Hocko
2021-06-09 14:35                   ` Aaron Tomlin
2021-06-10 10:00                     ` Michal Hocko
2021-06-10 12:23                       ` Aaron Tomlin
2021-06-10 12:43                         ` Michal Hocko
2021-06-10 13:36                           ` Aaron Tomlin
2021-06-10 14:06                             ` Tetsuo Handa
2021-06-07 20:42             ` Waiman Long
2021-06-07 21:16               ` Aaron Tomlin
2021-06-07 19:04       ` Shakeel Butt
2021-06-07 20:07         ` Waiman Long
2021-06-07 19:01 ` Michal Hocko
2021-06-07 19:26   ` Waiman Long
2021-06-07 19:47     ` Michal Hocko
2021-06-07 21:17   ` Aaron Tomlin
