From: Michal Hocko <mhocko@kernel.org>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	David Rientjes <rientjes@google.com>,
	Sellami Abdelkader <abdelkader.sellami@sap.com>,
	linux-mm@kvack.org, LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] oom: print nodemask in the oom report
Date: Tue, 4 Oct 2016 16:16:07 +0200	[thread overview]
Message-ID: <20161004141607.GC32214@dhcp22.suse.cz> (raw)
In-Reply-To: <65c637df-a9a3-777d-f6d3-322033980f86@suse.cz>

On Tue 04-10-16 15:24:53, Vlastimil Babka wrote:
> On 09/30/2016 11:41 PM, Michal Hocko wrote:
[...]
> > Fix this by always printing the nodemask. It is either the mempolicy mask
> > (and non-null) or the one defined by the cpusets.
> 
> I wonder if it's helpful to print the cpuset one when that's printed
> separately, and seeing both pieces of information (nodemask and cpuset)
> unmodified might tell us more. Is it to make it easier to deal with NULL
> nodemask? Or to make sure the info gets through pr_warn() and not pr_info()?

I am not sure I understand the question. I wanted to print the nodemask
separately, on the same line as all the other allocation request
parameters like the order and gfp mask, because that is what the page
allocator got (via policy_nodemask). cpusets build on top - i.e. they
apply __cpuset_zone_allowed on top of the nodemask. So IMHO it makes
sense to look at the cpuset as the allocation domain and at the
mempolicy as a restriction within that domain.

Does that answer your question?

> > The new output for
> > the above oom report would be
> > 
> > PoolThread invoked oom-killer: gfp_mask=0x280da(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=0, order=0, oom_adj=0, oom_score_adj=0
> > 
> > This patch doesn't touch show_mem and the node filtering based on the
> > cpuset node mask because mempolicy is always a subset of cpusets and
> > seeing the full cpuset oom context might be helpful for tuning more
> > specific mempolicies inside cpusets (e.g. when they turn out to be too
> > restrictive). To avoid ugly ifdefs the mask is printed even
> > for !NUMA configurations but this should be OK (a single node will be
> > printed).
> > 
> > Reported-by: Sellami Abdelkader <abdelkader.sellami@sap.com>
> > Signed-off-by: Michal Hocko <mhocko@suse.com>
> 
> Other than that,
> 
> Acked-by: Vlastimil Babka <vbabka@suse.cz>

Thanks!

-- 
Michal Hocko
SUSE Labs

Thread overview: 12+ messages

2016-09-30 21:41 [PATCH] oom: print nodemask in the oom report Michal Hocko
2016-10-04 13:24 ` Vlastimil Babka
2016-10-04 14:16   ` Michal Hocko [this message]
2016-10-04 15:02     ` Vlastimil Babka
2016-10-04 15:12       ` Michal Hocko
2016-10-05  9:51         ` Vlastimil Babka
