From: Michal Hocko <mhocko@suse.com>
To: Feng Tang <feng.tang@intel.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Andrea Arcangeli <aarcange@redhat.com>,
	David Rientjes <rientjes@google.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	Dave Hansen <dave.hansen@intel.com>,
	Ben Widawsky <ben.widawsky@intel.com>,
	Andi Kleen <ak@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [PATCH v4 02/13] mm/mempolicy: convert single preferred_node to full nodemask
Date: Wed, 14 Apr 2021 14:17:02 +0200	[thread overview]
Message-ID: <YHbdPkhPp5x2o2ob@dhcp22.suse.cz> (raw)
In-Reply-To: <1615952410-36895-3-git-send-email-feng.tang@intel.com>

On Wed 17-03-21 11:39:59, Feng Tang wrote:
> From: Dave Hansen <dave.hansen@linux.intel.com>
> 
> The NUMA APIs currently allow passing in a "preferred node" as a
> single bit set in a nodemask.  If more than one bit is set, bits
> after the first are ignored.  Internally, this is implemented as
> a single integer: mempolicy->preferred_node.
> 
> This single node is generally OK for location-based NUMA where
> memory being allocated will eventually be operated on by a single
> CPU.  However, in systems with multiple memory types, folks want
> to target a *type* of memory instead of a location.  For instance,
> someone might want some high-bandwidth memory but not care about
> which CPU it is allocated next to.  Or, they might want a cheap,
> high-capacity allocation and target all NUMA nodes which
> have persistent memory in volatile mode.  In both of these cases,
> the application wants to target a *set* of nodes, but does not
> want strict MPOL_BIND behavior, as that could lead to the OOM killer
> being invoked or to SIGSEGV.
> 
> To get that behavior, an MPOL_PREFERRED-like mode is desirable, but one
> that honors multiple nodes being set in the nodemask.
> 
> The first step in that direction is to be able to internally store
> multiple preferred nodes, which is implemented in this patch.
> 
> This patch should not introduce any functional changes; it just switches
> the internal representation of mempolicy->preferred_node from an
> integer to a nodemask called 'mempolicy->preferred_nodes'.
> 
> This is not a pie-in-the-sky dream for an API.  This was a response to a
> specific ask from more than one group at Intel.  Specifically:
> 
> 1. There are existing libraries that target memory types such as
>    https://github.com/memkind/memkind.  These are known to suffer
>    from SIGSEGVs when memory is low on targeted memory "kinds" that
>    span more than one node.  The MCDRAM on a Xeon Phi in "Cluster on
>    Die" mode is an example of this.
> 2. Volatile-use persistent memory users want to have a memory policy
>    which is targeted at either "cheap and slow" (PMEM) or "expensive and
>    fast" (DRAM).  However, they do not want to experience allocation
>    failures when the targeted type is unavailable.
> 3. Allocate-then-run.  Generally, we let the process scheduler decide
>    on which physical CPU to run a task.  That location provides a
>    default allocation policy, and memory availability is not generally
>    considered when placing tasks.  For situations where memory is
>    valuable and constrained, some users want to allocate memory first,
>    *then* allocate close compute resources to the allocation.  This is
>    the reverse of the normal (CPU) model.  Accelerators such as GPUs
>    that operate on core-mm-managed memory are interested in this model.

This is very useful background for the feature. The changelog for this
specific patch is rather modest, though, and it would help to add more
details about the change itself. The mempolicy code is a maze and it is
quite easy to get lost in there. I hope we are not going to miss
something by just hunting down preferred_node usages...
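
For readers less familiar with the code, the representational change the
changelog describes boils down to widening the preferred member of the
policy value union from a single node id to a full nodemask. A rough
sketch of the before/after (the field layout loosely follows struct
mempolicy in include/linux/mempolicy.h at the time; the comments are
editorial):

	struct mempolicy {
		atomic_t refcnt;
		unsigned short mode;	/* MPOL_PREFERRED, MPOL_BIND, ... */
		unsigned short flags;	/* MPOL_F_* mode flags */
		union {
			short      preferred_node;	/* before: MPOL_PREFERRED */
			nodemask_t nodes;		/* MPOL_BIND / MPOL_INTERLEAVE */
		} v;
		union {
			nodemask_t cpuset_mems_allowed;	/* relative to these nodes */
			nodemask_t user_nodemask;	/* nodemask passed by the user */
		} w;
	};

	/* after the patch the preferred member becomes a nodemask as well: */
	union {
		nodemask_t preferred_nodes;	/* after: MPOL_PREFERRED */
		nodemask_t nodes;		/* MPOL_BIND / MPOL_INTERLEAVE */
	} v;

Every place that read or wrote v.preferred_node then has to be converted
to operate on v.preferred_nodes, which is exactly the kind of
mechanical-but-easy-to-miss change worried about above.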
 
[...]
> @@ -345,22 +345,26 @@ static void mpol_rebind_preferred(struct mempolicy *pol,
>  						const nodemask_t *nodes)
>  {
>  	nodemask_t tmp;
> +	nodemask_t preferred_node;

This is rather wasteful. Some distribution kernels use a high NODES_SHIFT
(SLES has 10 for x86), so this adds another 1024-bit mask, i.e. 128 bytes,
on the stack. Unless I am missing something this shouldn't be called in
deep call chains, but still.
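
To spell the cost out, nodemask_t is a fixed-size bitmap of MAX_NUMNODES
bits, so its on-stack footprint scales directly with NODES_SHIFT. A
minimal sketch of the arithmetic (the typedef mirrors the kernel's
DECLARE_BITMAP-based definition; the numbers assume NODES_SHIFT=10 and a
64-bit kernel):

	#define NODES_SHIFT	10			/* e.g. the SLES x86 config */
	#define MAX_NUMNODES	(1 << NODES_SHIFT)	/* 1024 possible node ids */

	typedef struct {
		unsigned long bits[MAX_NUMNODES / (8 * sizeof(unsigned long))];
	} nodemask_t;					/* 16 longs == 128 bytes */

	/*
	 * sizeof(nodemask_t) == 1024 / 8 == 128 bytes, so the new
	 * 'preferred_node' local costs another 128 bytes on top of the
	 * 'tmp' mask this function already keeps on the stack.
	 */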

> +
> +	/* MPOL_PREFERRED uses only the first node in the mask */
> +	preferred_node = nodemask_of_node(first_node(*nodes));
>  
>  	if (pol->flags & MPOL_F_STATIC_NODES) {
>  		int node = first_node(pol->w.user_nodemask);
>  
>  		if (node_isset(node, *nodes)) {
> -			pol->v.preferred_node = node;
> +			pol->v.preferred_nodes = nodemask_of_node(node);
>  			pol->flags &= ~MPOL_F_LOCAL;
>  		} else
>  			pol->flags |= MPOL_F_LOCAL;
>  	} else if (pol->flags & MPOL_F_RELATIVE_NODES) {
>  		mpol_relative_nodemask(&tmp, &pol->w.user_nodemask, nodes);
> -		pol->v.preferred_node = first_node(tmp);
> +		pol->v.preferred_nodes = tmp;
>  	} else if (!(pol->flags & MPOL_F_LOCAL)) {
> -		pol->v.preferred_node = node_remap(pol->v.preferred_node,
> -						   pol->w.cpuset_mems_allowed,
> -						   *nodes);
> +		nodes_remap(tmp, pol->v.preferred_nodes,
> +			    pol->w.cpuset_mems_allowed, preferred_node);
> +		pol->v.preferred_nodes = tmp;
>  		pol->w.cpuset_mems_allowed = *nodes;
>  	}

I have to say that, while I already disliked the original code (because it
fiddles with user provided input behind the user's back), I got completely
lost here. What the heck is going on?
a) why do we even care about remapping a hint which is overridden by the
cpuset at the page allocator level anyway, and b) why do we need _two_
potentially large temporary bitmaps on the stack for that here?
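
For reference, the remap in the last branch is an ordinal-position
translation: nodes_remap() (and node_remap() for a single node) maps the
n-th set bit of the old mask onto the n-th set bit of the new mask. A
rough, simplified sketch of the single-node case, using the kernel's
first_node()/next_node() helpers (the helper name is made up here, and
the wrap-around handling of the real bitmap_bitremap() is omitted):

	/* map 'node' from its position among the set bits of 'from' to the
	 * node at the same position among the set bits of 'to' */
	static int remap_one_node(int node, const nodemask_t *from,
				  const nodemask_t *to)
	{
		int ordinal = 0, n, out;

		for (n = first_node(*from); n < node; n = next_node(n, *from))
			ordinal++;		/* set bits of 'from' below 'node' */

		out = first_node(*to);
		while (ordinal--)
			out = next_node(out, *to); /* same position within 'to' */

		return out;
	}

Whether that translation is worth doing at all for a _hint_ that the
cpuset code will clamp again at allocation time is the question above.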

I haven't spotted anything unexpected in the rest.
-- 
Michal Hocko
SUSE Labs


Thread overview: 32+ messages
2021-03-17  3:39 [PATCH v4 00/13] Introduced multi-preference mempolicy Feng Tang
2021-03-17  3:39 ` [PATCH v4 01/13] mm/mempolicy: Add comment for missing LOCAL Feng Tang
2021-03-17  3:39 ` [PATCH v4 02/13] mm/mempolicy: convert single preferred_node to full nodemask Feng Tang
2021-04-14 12:17   ` Michal Hocko [this message]
2021-03-17  3:40 ` [PATCH v4 03/13] mm/mempolicy: Add MPOL_PREFERRED_MANY for multiple preferred nodes Feng Tang
2021-04-14 12:50   ` Michal Hocko
2021-04-20  7:16     ` Feng Tang
2021-05-13  7:23       ` Feng Tang
2021-05-13  7:25       ` [RFC PATCH 2/2] mempolicy: kill MPOL_F_LOCAL bit Feng Tang
2021-05-13 13:55         ` Andi Kleen
2021-03-17  3:40 ` [PATCH v4 04/13] mm/mempolicy: allow preferred code to take a nodemask Feng Tang
2021-04-14 12:55   ` Michal Hocko
2021-04-19  8:49     ` Feng Tang
2021-03-17  3:40 ` [PATCH v4 05/13] mm/mempolicy: refactor rebind code for PREFERRED_MANY Feng Tang
2021-04-14 12:57   ` Michal Hocko
2021-03-17  3:40 ` [PATCH v4 06/13] mm/mempolicy: kill v.preferred_nodes Feng Tang
2021-04-14 12:58   ` Michal Hocko
2021-03-17  3:40 ` [PATCH v4 07/13] mm/mempolicy: handle MPOL_PREFERRED_MANY like BIND Feng Tang
2021-04-14 13:01   ` Michal Hocko
2021-03-17  3:40 ` [PATCH v4 08/13] mm/mempolicy: Create a page allocator for policy Feng Tang
2021-04-14 13:08   ` Michal Hocko
2021-04-15  8:17     ` Feng Tang
2021-03-17  3:40 ` [PATCH v4 09/13] mm/mempolicy: Thread allocation for many preferred Feng Tang
2021-03-17  3:40 ` [PATCH v4 10/13] mm/mempolicy: VMA " Feng Tang
2021-04-14 13:14   ` Michal Hocko
2021-03-17  3:40 ` [PATCH v4 11/13] mm/mempolicy: huge-page " Feng Tang
2021-03-17  7:19   ` kernel test robot
2021-04-14 13:25   ` Michal Hocko
2021-04-15  7:41     ` Feng Tang
2021-03-17  3:40 ` [PATCH v4 12/13] mm/mempolicy: Advertise new MPOL_PREFERRED_MANY Feng Tang
2021-03-17  3:40 ` [PATCH v4 13/13] mem/mempolicy: unify mpol_new_preferred() and mpol_new_preferred_many() Feng Tang
2021-04-14 11:21 ` [PATCH v4 00/13] Introduced multi-preference mempolicy Michal Hocko
