linux-kernel.vger.kernel.org archive mirror
From: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
To: Michal Hocko <mhocko@suse.com>, Aaron Tomlin <atomlin@redhat.com>
Cc: Waiman Long <llong@redhat.com>,
	Shakeel Butt <shakeelb@google.com>, Linux MM <linux-mm@kvack.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH] mm/oom_kill: allow oom kill allocating task for non-global case
Date: Wed, 9 Jun 2021 00:22:13 +0900	[thread overview]
Message-ID: <931bbf2e-19e3-c598-c244-ae5e7d00dfb0@i-love.sakura.ne.jp> (raw)
In-Reply-To: <YL93eXFZodiCM509@dhcp22.suse.cz>

On 2021/06/08 22:58, Michal Hocko wrote:
> I do not see this message to be ever printed on 4.18 for memcg oom:
>         /* Found nothing?!?! Either we hang forever, or we panic. */
>         if (!oc->chosen && !is_sysrq_oom(oc) && !is_memcg_oom(oc)) {
>                 dump_header(oc, NULL);
>                 panic("Out of memory and no killable processes...\n");
>         }
> 
> So how come it got triggered here? Is it possible that there is a global
> oom killer somehow going on along with the memcg OOM? Because the below
> stack clearly points to a memcg OOM and a new one AFAICS.

The RHEL 4.18 kernel does print this message for a memcg OOM; panic() would
be called only if a global OOM killer invocation were in progress.

4.18.0-193.51.1.el8.x86_64 is doing

----------
        select_bad_process(oc);
        /* Found nothing?!?! */
        if (!oc->chosen) {
                dump_header(oc, NULL);
                pr_warn("Out of memory and no killable processes...\n");
                /*
                 * If we got here due to an actual allocation at the
                 * system level, we cannot survive this and will enter
                 * an endless loop in the allocator. Bail out now.
                 */
                if (!is_sysrq_oom(oc) && !is_memcg_oom(oc))
                        panic("System is deadlocked on memory\n");
        }
----------

and this message is printed when oom_evaluate_task() finds that MMF_OOM_SKIP
was already set on all (possibly only one, the last) OOM victims.

----------
static int oom_evaluate_task(struct task_struct *task, void *arg)
{
(...snipped...)
        /*
         * This task already has access to memory reserves and is being killed.
         * Don't allow any other task to have access to the reserves unless
         * the task has MMF_OOM_SKIP because chances that it would release
         * any memory is quite low.
         */
        if (!is_sysrq_oom(oc) && tsk_is_oom_victim(task)) {
                if (test_bit(MMF_OOM_SKIP, &task->signal->oom_mm->flags))
                        goto next;
                goto abort;
        }
(...snipped...)
next:
        return 0;
(...snipped...)
}
----------

Since dump_tasks() called from dump_header(oc, NULL) does not exclude tasks
that already have MMF_OOM_SKIP set, it is possible that the last OOM-killable
victim was already OOM killed but the OOM reaper failed to reclaim
memory and set MMF_OOM_SKIP. (Well, maybe we want to exclude (or annotate)
MMF_OOM_SKIP tasks when showing OOM victim candidates...)

Therefore,

> 
> That being said, a full chain of oom events would be definitely useful
> to get a better idea.

I think checking whether

        pr_info("oom_reaper: unable to reap pid:%d (%s)\n",
                task_pid_nr(tsk), tsk->comm);

and/or

        pr_info("oom_reaper: reaped process %d (%s), now anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB\n",
                        task_pid_nr(tsk), tsk->comm,
                        K(get_mm_counter(mm, MM_ANONPAGES)),
                        K(get_mm_counter(mm, MM_FILEPAGES)),
                        K(get_mm_counter(mm, MM_SHMEMPAGES)));

message was already printed before the infinite
"Out of memory and no killable processes..." looping started
(this message is repeated forever, isn't it?) would be useful.

Note that neither of these messages will be printed if hitting

----------
        /*
         * If the mm has invalidate_{start,end}() notifiers that could block,
         * sleep to give the oom victim some more time.
         * TODO: we really want to get rid of this ugly hack and make sure that
         * notifiers cannot block for unbounded amount of time
         */
        if (mm_has_blockable_invalidate_notifiers(mm)) {
                up_read(&mm->mmap_sem);
                schedule_timeout_idle(HZ);
                return true;
        }
----------

case. Also, the dmesg buffer saved in the vmcore might be too late to
examine; it may be better to check /var/log/messages than the vmcore file.


Thread overview: 31+ messages
2021-06-07 16:31 [RFC PATCH] mm/oom_kill: allow oom kill allocating task for non-global case Aaron Tomlin
2021-06-07 16:42 ` Waiman Long
2021-06-07 18:43   ` Shakeel Butt
2021-06-07 18:51     ` Waiman Long
2021-06-07 19:04       ` Michal Hocko
2021-06-07 19:18         ` Waiman Long
2021-06-07 19:36           ` Michal Hocko
2021-06-07 20:03             ` Michal Hocko
2021-06-07 20:44               ` Waiman Long
2021-06-08  6:22                 ` Michal Hocko
2021-06-08  9:39                   ` Aaron Tomlin
2021-06-08 10:00                   ` Aaron Tomlin
2021-06-08 13:58                     ` Michal Hocko
2021-06-08 15:22                       ` Tetsuo Handa [this message]
2021-06-08 16:17                         ` Michal Hocko
2021-06-09 14:35                   ` Aaron Tomlin
2021-06-10 10:00                     ` Michal Hocko
2021-06-10 12:23                       ` Aaron Tomlin
2021-06-10 12:43                         ` Michal Hocko
2021-06-10 13:36                           ` Aaron Tomlin
2021-06-10 14:06                             ` Tetsuo Handa
2021-06-11  6:55                               ` Michal Hocko
2021-06-11  9:27                               ` Aaron Tomlin
2021-06-07 20:42             ` Waiman Long
2021-06-07 21:16               ` Aaron Tomlin
2021-06-07 19:04       ` Shakeel Butt
2021-06-07 20:07         ` Waiman Long
2021-06-07 19:01 ` Michal Hocko
2021-06-07 19:26   ` Waiman Long
2021-06-07 19:47     ` Michal Hocko
2021-06-07 21:17   ` Aaron Tomlin
