From: "Huang, Ying" <ying.huang@intel.com>
To: Michal Hocko <mhocko@suse.com>
Cc: Feng Tang <feng.tang@intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Matthew Wilcox <willy@infradead.org>,
	Mel Gorman <mgorman@suse.de>,
	dave.hansen@intel.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 0/2] mm: fix OOMs for binding workloads to movable zone only node
Date: Fri, 06 Nov 2020 12:32:44 +0800
Message-ID: <87zh3vp0k3.fsf@yhuang-dev.intel.com>
In-Reply-To: <20201105120818.GC21348@dhcp22.suse.cz> (Michal Hocko's message of "Thu, 5 Nov 2020 13:08:18 +0100")

Michal Hocko <mhocko@suse.com> writes:

> On Thu 05-11-20 09:40:28, Feng Tang wrote:
>> On Wed, Nov 04, 2020 at 09:53:43AM +0100, Michal Hocko wrote:
>>  
>> > > > As I've said in reply to your second patch, I think we can make the oom
>> > > > killer behavior more sensible in these misconfigured cases, but I do not
>> > > > think we want to break the cpuset isolation for such a configuration.
>> > > 
>> > > Do you mean we skip the killing and just let the allocation fail? We've
>> > > checked the oom killer code first: when the oom happens, both the DRAM
>> > > node and the unmovable node have lots of free memory, and killing a
>> > > process won't improve the situation.
>> > 
>> > We already skip the oom killer and just fail lowmem allocation requests.
>> > This is similar in some sense. Another option would be to kill the
>> > allocating context, which will potentially have fewer corner cases
>> > because some allocation failures might be unexpected.
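
For reference, the existing lowmem bail-out mentioned here lives in
__alloc_pages_may_oom() in mm/page_alloc.c; a rough, paraphrased excerpt
(the exact code and field names differ between kernel versions):

	/* Abridged from __alloc_pages_may_oom(): for requests the OOM
	 * killer cannot help, skip it and let the allocation fail.
	 */
	/* The OOM killer will not help higher-order allocations */
	if (order > PAGE_ALLOC_COSTLY_ORDER)
		goto out;
	/* The OOM killer does not needlessly kill tasks for lowmem */
	if (ac->highest_zoneidx < ZONE_NORMAL)
		goto out;
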
>> 
>> Yes, this can avoid the helpless oom killing of an innocent process (one
>> under no memory pressure at all).
>> 
>> And I think the important thing is to judge whether this usage (binding
>> a docker-like workload to an unmovable node) is a valid case :)
>
> I am confused. Why would an unmovable node be a problem? Movable
> allocations can be satisfied from ZONE_NORMAL just fine. It is the other
> way around that is a problem.
>
>> Initially, I thought it invalid too, but later came to think it still
>> makes some sense for these 2 cases:
>>     * a user wants to bind his workload (most of its user space
>>       memory) to one node to avoid cross-node traffic, and that node
>>       happens to be configured as unmovable
>
> See above
>
>>     * one small DRAM node + a big PMEM node, where a memory-latency-insensitive
>>       workload could be bound to the cheaper unmovable PMEM node
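
For illustration, "binding" a workload's memory to one node can be done
with a cpuset restricted to that node, with numactl --membind=<node>, or
directly via set_mempolicy(2). A minimal userspace sketch follows; using
node 1 to stand in for the unmovable/PMEM node is only an assumption:

#include <numaif.h>        /* set_mempolicy(), MPOL_BIND; link with -lnuma */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	/* Bit 1 set: restrict all future allocations of this task to
	 * node 1 (hypothetically the movable-only PMEM node above).
	 */
	unsigned long nodemask = 1UL << 1;

	if (set_mempolicy(MPOL_BIND, &nodemask, 8 * sizeof(nodemask)) != 0) {
		perror("set_mempolicy");
		return EXIT_FAILURE;
	}

	/* Anonymous memory touched from now on comes from node 1 only;
	 * whether exhausting that node should OOM-kill, fail, or fall
	 * back is exactly the question discussed in this thread.
	 */
	size_t len = 128UL << 20;
	char *buf = malloc(len);
	if (buf)
		memset(buf, 0, len);
	free(buf);
	return EXIT_SUCCESS;
}
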
>
> Please elaborate some more. As long as you have movable and normal nodes,
> this should be possible with a fair deal of care - most notably, the
> movable:kernel memory ratio shouldn't be too big.
>
> Besides that, why does the PMEM node have to be MOVABLE only in the first
> place?

The performance of PMEM is much worse than that of DRAM.  If we find
that some pages on PMEM are accessed frequently (hot), we may want to
move them to DRAM to optimize system performance.  If unmovable pages
are allocated on PMEM and become hot, we may not be able to move them
to DRAM without rebooting the system.  So we think we should make the
PMEM nodes MOVABLE only.
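
To illustrate the kind of movement meant above: user (movable) pages that
turn out to be hot could, in principle, be migrated off the PMEM node with
move_pages(2). This is only a userspace sketch of the idea, not the actual
kernel tiering mechanism, and node 0 as the DRAM target is an assumption:

#include <numaif.h>        /* move_pages(); link with -lnuma */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
	long page_size = sysconf(_SC_PAGESIZE);

	/* Stand-in for a page that was detected as hot on the PMEM node. */
	void *hot = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (hot == MAP_FAILED)
		return 1;
	memset(hot, 0x5a, page_size);	/* fault the page in */

	void *pages[1] = { hot };
	int nodes[1]   = { 0 };		/* assumed DRAM node */
	int status[1];

	/* Movable (user) pages can be migrated this way; unmovable kernel
	 * allocations sitting on the PMEM node could not be, which is the
	 * reason given above for keeping such nodes MOVABLE only.
	 */
	long rc = move_pages(0 /* self */, 1, pages, nodes, status, MPOL_MF_MOVE);
	if (rc < 0)
		perror("move_pages");
	else
		printf("page now on node %d\n", status[0]);

	munmap(hot, page_size);
	return 0;
}
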

Best Regards,
Huang, Ying



Thread overview: 27+ messages
2020-11-04  6:10 [RFC PATCH 0/2] mm: fix OOMs for binding workloads to movable zone only node Feng Tang
2020-11-04  6:10 ` [RFC PATCH 1/2] mm, oom: dump meminfo for all memory nodes Feng Tang
2020-11-04  7:18   ` Michal Hocko
2020-11-04  6:10 ` [RFC PATCH 2/2] mm, page_alloc: loose the node binding check to avoid helpless oom killing Feng Tang
2020-11-04  7:23   ` Michal Hocko
2020-11-04  7:13 ` [RFC PATCH 0/2] mm: fix OOMs for binding workloads to movable zone only node Michal Hocko
2020-11-04  7:38   ` Feng Tang
2020-11-04  7:58     ` Michal Hocko
2020-11-04  8:40       ` Feng Tang
2020-11-04  8:53         ` Michal Hocko
2020-11-05  1:40           ` Feng Tang
2020-11-05 12:08             ` Michal Hocko
2020-11-05 12:53               ` Vlastimil Babka
2020-11-05 12:58                 ` Michal Hocko
2020-11-05 13:07                   ` Feng Tang
2020-11-05 13:12                     ` Michal Hocko
2020-11-05 13:43                       ` Feng Tang
2020-11-05 16:16                         ` Michal Hocko
2020-11-06  7:06                           ` Feng Tang
2020-11-06  8:10                             ` Michal Hocko
2020-11-06  9:08                               ` Feng Tang
2020-11-06 10:35                                 ` Michal Hocko
2020-11-05 13:14                   ` Vlastimil Babka
2020-11-05 13:19                     ` Michal Hocko
2020-11-05 13:34                       ` Vlastimil Babka
2020-11-06  4:32               ` Huang, Ying [this message]
2020-11-06  7:43                 ` Michal Hocko
