From: Michal Hocko <mhocko@kernel.org>
To: 禹舟键 <ufo19890607@gmail.com>
Cc: akpm@linux-foundation.org, rientjes@google.com,
	kirill.shutemov@linux.intel.com, aarcange@redhat.com,
	penguin-kernel@i-love.sakura.ne.jp, guro@fb.com,
	yang.s@alibaba-inc.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,
	Wind Yu <yuzhoujian@didichuxing.com>
Subject: Re: [PATCH] Add the memcg print oom info for system oom
Date: Thu, 17 May 2018 12:23:30 +0200
Message-ID: <20180517102330.GS12670@dhcp22.suse.cz>
In-Reply-To: <CAHCio2gOLnj4NpkFrxpYVygg6ZeSeuwgp2Lwr6oTHRxHpbmcWw@mail.gmail.com>

On Thu 17-05-18 17:44:43, 禹舟键 wrote:
> Hi Michal
> I think the current OOM report is incomplete. I can get the task which
> invoked the oom-killer and the task which has been killed by the
> oom-killer, plus the memory info from when the oom happened. But I
> cannot infer which memcg the task killed by the oom-killer belonged
> to, because that task has already been killed, and dump_tasks will
> print all of the tasks in the system.
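
(For context, dump_tasks() is the helper behind that task listing. A
minimal sketch of its shape, loosely based on the v4.17-era
mm/oom_kill.c and heavily trimmed here, shows why a global oom prints
every process: memcg is NULL, so the filter skips very little.)

	static void dump_tasks(struct mem_cgroup *memcg, const nodemask_t *nodemask)
	{
		struct task_struct *p;
		struct task_struct *task;

		pr_info("[ pid ]   uid  tgid total_vm      rss ... oom_score_adj name\n");
		rcu_read_lock();
		for_each_process(p) {
			/* memcg == NULL on a global oom, so almost nothing is filtered */
			if (oom_unkillable_task(p, memcg, nodemask))
				continue;
			task = find_lock_task_mm(p);
			if (!task)
				continue;	/* kthread, or all threads lost their mm */
			pr_info("[%5d] ... %s\n", task->pid, task->comm);
			task_unlock(task);
		}
		rcu_read_unlock();
	}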

I can see how the origin memcg might be useful, but ...
> 
> mem_cgroup_print_oom_info will print five lines of content, including
> the memcg's name, usage, and limit. I don't think five lines of
> content will cause a big problem. Or it could at least print the
> memcg's name.

this is not 5 lines at all. We dump memcg stats for the whole oom memcg
subtree. For your patch it would be the whole subtree of the memcg of
the oom victim. With cgroup v1 this can be quite deep, because tasks
can belong to internal (non-leaf) nodes as well. Would the

		pr_info("Task in ");
		pr_cont_cgroup_path(task_cgroup(p, memory_cgrp_id));
		pr_cont(" killed as a result of limit of ");

part of that output be sufficient for your usecase? You will not get
the memory consumption of the group, but is that really so relevant
when we are killing individual tasks? Please note that there are
proposals to make the global oom killer memcg aware and have it select
victims by memcg size rather than pick random tasks
(http://lkml.kernel.org/r/20171130152824.1591-1-guro@fb.com). Maybe
that will be more interesting for your container usecase.
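
(For context, the three pr_* lines quoted above come from
mem_cgroup_print_oom_info(). A simplified sketch of that function,
loosely based on the v4.17-era mm/memcontrol.c and trimmed here, shows
why the output scales with the size of the memcg subtree rather than
staying at five lines.)

	void mem_cgroup_print_oom_info(struct mem_cgroup *memcg,
				       struct task_struct *p)
	{
		struct mem_cgroup *iter;

		rcu_read_lock();
		pr_info("Task in ");
		pr_cont_cgroup_path(task_cgroup(p, memory_cgrp_id));
		pr_cont(" killed as a result of limit of ");
		pr_cont_cgroup_path(memcg->css.cgroup);
		pr_cont("\n");
		rcu_read_unlock();

		/* a few fixed lines: usage/limit/failcnt of the oom memcg's
		 * memory, memory+swap and kmem page counters (K() converts
		 * pages to KB) */
		pr_info("memory: usage %llukB, limit %llukB, failcnt %lu\n",
			K((u64)page_counter_read(&memcg->memory)),
			K((u64)memcg->memory.limit), memcg->memory.failcnt);
		/* ... likewise for memcg->memsw and memcg->kmem ... */

		/* and then one stats block per memcg in the whole subtree */
		for_each_mem_cgroup_tree(iter, memcg) {
			pr_info("Memory cgroup stats for ");
			pr_cont_cgroup_path(iter->css.cgroup);
			/* followed by a pr_cont() of every per-memcg stat */
		}
	}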
-- 
Michal Hocko
SUSE Labs

Thread overview: 11+ messages
2018-05-17  7:00 [PATCH] Add the memcg print oom info for system oom ufo19890607
2018-05-17  7:11 ` Michal Hocko
2018-05-17  9:44   ` 禹舟键
2018-05-17 10:23     ` Michal Hocko [this message]
2018-05-17 10:23       ` Michal Hocko
2018-05-17 10:42       ` Roman Gushchin
2018-05-17 10:42         ` Roman Gushchin
2018-05-21 21:11       ` David Rientjes
2018-05-22  6:37         ` Michal Hocko
2018-05-22 22:54           ` David Rientjes
2018-05-19 21:18 ` kbuild test robot
