From: Jagdish Gediya <jvgediya@linux.ibm.com>
To: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	akpm@linux-foundation.org, aneesh.kumar@linux.ibm.com,
	baolin.wang@linux.alibaba.com, dave.hansen@linux.intel.com,
	ying.huang@intel.com
Subject: Re: [PATCH v2 1/5] mm: demotion: Set demotion list differently
Date: Thu, 14 Apr 2022 16:10:24 +0530	[thread overview]
Message-ID: <Ylf6GI1J5cIXagyl@li-6e1fa1cc-351b-11b2-a85c-b897023bb5f3.ibm.com> (raw)
In-Reply-To: <20220414100214.00005ad8@Huawei.com>

On Thu, Apr 14, 2022 at 10:02:14AM +0100, Jonathan Cameron wrote:
> On Wed, 13 Apr 2022 14:52:02 +0530
> Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
> 
> > Sharing used_targets between multiple nodes in a single
> > pass limits some of the opportunities for demotion target
> > sharing.
> > 
> > Instead of sharing the used targets between multiple nodes
> > in a single pass, accumulate the source nodes seen across
> > all passes, and reset 'used_targets' to those source nodes
> > while finding demotion targets for each new node.
> > 
> > This results in more opportunities to share demotion targets
> > between multiple source nodes, e.g. with the NUMA topology
> > below, where nodes 0 & 1 are cpu + dram nodes, nodes 2 & 3
> > are equally slower memory-only nodes, and node 4 is the
> > slowest memory-only node,
> > 
> > available: 5 nodes (0-4)
> > node 0 cpus: 0 1
> > node 0 size: n MB
> > node 0 free: n MB
> > node 1 cpus: 2 3
> > node 1 size: n MB
> > node 1 free: n MB
> > node 2 cpus:
> > node 2 size: n MB
> > node 2 free: n MB
> > node 3 cpus:
> > node 3 size: n MB
> > node 3 free: n MB
> > node 4 cpus:
> > node 4 size: n MB
> > node 4 free: n MB
> > node distances:
> > node   0   1   2   3   4
> >   0:  10  20  40  40  80
> >   1:  20  10  40  40  80
> >   2:  40  40  10  40  80
> >   3:  40  40  40  10  80
> >   4:  80  80  80  80  10
> > 
> > The existing implementation gives the demotion targets below,
> > 
> > node    demotion_target
> >  0              3, 2
> >  1              4
> >  2              X
> >  3              X
> >  4              X
> > 
> > With this patch applied, below are the demotion targets,
> > 
> > node    demotion_target
> >  0              3, 2
> >  1              3, 2
> 
> Is there an easy way to make the allocation stateful enough so
> that when it sees two identical choices, it alternates between
> them?  Whilst it's going to be workload dependent, my view
> of 'ideal' for this would be:
> 
>    0              3
>    1              2
> 
> Maybe we'll just have to make do with most systems this affects
> needing some fun userspace code that does cleverer
> balancing - possibly using HMAT info rather than just SLIT
> to give us visibility of interconnect bottlenecks that make
> some migration paths 'unwise'.
>   
> I'm not sure the current HMAT presentation via sysfs gives
> us enough info though so we'll probably need to extend that.
> 
> Jonathan

This patch series also has support to override the default
demotion targets found by the kernel. However, the current
implementation of this user-space interface doesn't support
setting per-node demotion targets. I am going to modify the
user-space interface according to Huang's suggestion so that it
can control the exact desired targets for specific nodes.
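
As for alternating between identical choices when multiple targets
sit at the same distance, something stateful along the lines of the
toy sketch below could work (purely illustrative Python, not kernel
code and not what this series implements; the names are made up):

from itertools import cycle

def alternate_targets(sources, equidistant_targets):
    # Rotate through the equally-distant targets so that each
    # source node prefers a different one, e.g. sources {0, 1}
    # and targets {2, 3} give {0: 2, 1: 3}.
    rotation = cycle(sorted(equidistant_targets))
    return {src: next(rotation) for src in sorted(sources)}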

> >  2              4
> >  3              4
> >  4              X
> > 
> > e.g. with the NUMA topology below, where nodes 0, 1 & 2 are
> > cpu + dram nodes and node 3 is a slow memory node,
> > 
> > available: 4 nodes (0-3)
> > node 0 cpus: 0 1
> > node 0 size: n MB
> > node 0 free: n MB
> > node 1 cpus: 2 3
> > node 1 size: n MB
> > node 1 free: n MB
> > node 2 cpus: 4 5
> > node 2 size: n MB
> > node 2 free: n MB
> > node 3 cpus:
> > node 3 size: n MB
> > node 3 free: n MB
> > node distances:
> > node   0   1   2   3
> >   0:  10  20  20  40
> >   1:  20  10  20  40
> >   2:  20  20  10  40
> >   3:  40  40  40  10
> > 
> > The existing implementation gives the demotion targets below,
> > 
> > node    demotion_target
> >  0              3
> >  1              X
> >  2              X
> >  3              X
> > 
> > With this patch applied, below are the demotion targets,
> > 
> > node    demotion_target
> >  0              3
> >  1              3
> >  2              3
> >  3              X
> > 
> > Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> > Signed-off-by: Jagdish Gediya <jvgediya@linux.ibm.com>
> > ---
> >  mm/migrate.c | 25 ++++++++++++++-----------
> >  1 file changed, 14 insertions(+), 11 deletions(-)
> > 
> > diff --git a/mm/migrate.c b/mm/migrate.c
> > index de175e2fdba5..516f4e1348c1 100644
> > --- a/mm/migrate.c
> > +++ b/mm/migrate.c
> > @@ -2383,7 +2383,7 @@ static void __set_migration_target_nodes(void)
> >  {
> >  	nodemask_t next_pass	= NODE_MASK_NONE;
> >  	nodemask_t this_pass	= NODE_MASK_NONE;
> > -	nodemask_t used_targets = NODE_MASK_NONE;
> > +	nodemask_t source_nodes = NODE_MASK_NONE;
> >  	int node, best_distance;
> >  
> >  	/*
> > @@ -2401,20 +2401,23 @@ static void __set_migration_target_nodes(void)
> >  again:
> >  	this_pass = next_pass;
> >  	next_pass = NODE_MASK_NONE;
> > +
> >  	/*
> > -	 * To avoid cycles in the migration "graph", ensure
> > -	 * that migration sources are not future targets by
> > -	 * setting them in 'used_targets'.  Do this only
> > -	 * once per pass so that multiple source nodes can
> > -	 * share a target node.
> > -	 *
> > -	 * 'used_targets' will become unavailable in future
> > -	 * passes.  This limits some opportunities for
> > -	 * multiple source nodes to share a destination.
> > +	 * Accumulate source nodes to avoid the cycle in migration
> > +	 * list.
> >  	 */
> > -	nodes_or(used_targets, used_targets, this_pass);
> > +	nodes_or(source_nodes, source_nodes, this_pass);
> >  
> >  	for_each_node_mask(node, this_pass) {
> > +		/*
> > +		 * To avoid cycles in the migration "graph", ensure
> > +		 * that migration sources are not future targets by
> > +		 * setting them in 'used_targets'. Reset used_targets
> > +		 * to source nodes for each node in this pass so that
> > +		 * multiple source nodes can share a target node.
> > +		 */
> > +		nodemask_t used_targets = source_nodes;
> > +
> >  		best_distance = -1;
> >  
> >  		/*
> 
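
For anyone who wants to model the before/after behaviour outside
the kernel, below is a toy Python sketch of the pass logic the
changelog describes (a simplification, not kernel code: node_demotion,
hotplug locking and the N_MEMORY/N_CPU node states are left out, and
the function and parameter names are made up for illustration):

def build_demotion_targets(cpu_nodes, all_nodes, distance):
    # cpu_nodes: nodes with CPUs (the first pass of source nodes).
    # all_nodes: set of every node with memory.
    # distance: dict mapping (src, dst) -> SLIT distance.
    targets = {n: set() for n in all_nodes}
    source_nodes = set()                 # accumulated across passes
    this_pass = set(cpu_nodes)
    while this_pass:
        next_pass = set()
        # Sources seen so far can never become future targets.
        source_nodes |= this_pass
        for node in this_pass:
            # Reset the exclusion set per source node, so nodes in
            # the same pass may share a target (the point of this
            # patch).
            used_targets = set(source_nodes)
            candidates = all_nodes - used_targets
            if not candidates:
                continue
            best = min(distance[(node, c)] for c in candidates)
            targets[node] = {c for c in candidates
                             if distance[(node, c)] == best}
            next_pass |= targets[node]   # chosen targets become the
                                         # next pass's sources
        this_pass = next_pass
    return targets

With the first topology above fed in as the distance table, this
returns {0: {2, 3}, 1: {2, 3}, 2: {4}, 3: {4}, 4: set()}, matching
the "with this patch applied" table in the changelog.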

Thread overview: 67+ messages
2022-04-13  9:22 [PATCH v2 0/5] mm: demotion: Introduce new node state N_DEMOTION_TARGETS Jagdish Gediya
2022-04-13  9:22 ` [PATCH v2 1/5] mm: demotion: Set demotion list differently Jagdish Gediya
2022-04-14  7:09   ` ying.huang
2022-04-14  8:48     ` Jagdish Gediya
2022-04-14  8:57       ` ying.huang
2022-04-14  8:55   ` Baolin Wang
2022-04-14  9:02   ` Jonathan Cameron
2022-04-14 10:40     ` Jagdish Gediya [this message]
2022-04-21  6:13   ` ying.huang
2022-04-13  9:22 ` [PATCH v2 2/5] mm: demotion: Add new node state N_DEMOTION_TARGETS Jagdish Gediya
2022-04-21  4:33   ` Wei Xu
2022-04-13  9:22 ` [PATCH v2 3/5] mm: demotion: Add support to set targets from userspace Jagdish Gediya
2022-04-21  4:26   ` Wei Xu
2022-04-22  9:13     ` Jagdish Gediya
2022-04-21  5:31   ` Wei Xu
2022-04-13  9:22 ` [PATCH v2 4/5] device-dax/kmem: Set node state as N_DEMOTION_TARGETS Jagdish Gediya
2022-04-13  9:22 ` [PATCH v2 5/5] mm: demotion: Build demotion list based on N_DEMOTION_TARGETS Jagdish Gediya
2022-04-13 21:44 ` [PATCH v2 0/5] mm: demotion: Introduce new node state N_DEMOTION_TARGETS Andrew Morton
2022-04-14 10:16   ` Jagdish Gediya
2022-04-14  7:00 ` ying.huang
2022-04-14 10:19   ` Jagdish Gediya
2022-04-21  3:11   ` Yang Shi
2022-04-21  5:41     ` Wei Xu
2022-04-21  6:24       ` ying.huang
2022-04-21  6:49         ` Wei Xu
2022-04-21  7:08           ` ying.huang
2022-04-21  7:29             ` Wei Xu
2022-04-21  7:45               ` ying.huang
2022-04-21 18:26                 ` Wei Xu
2022-04-22  0:58                   ` ying.huang
2022-04-22  4:46                     ` Wei Xu
2022-04-22  5:40                       ` ying.huang
2022-04-22  6:13                         ` Wei Xu
2022-04-22  6:21                           ` ying.huang
2022-04-22 11:00                             ` Jagdish Gediya
2022-04-22 16:43                               ` Wei Xu
2022-04-22 17:29                                 ` Yang Shi
2022-04-24  3:02                               ` ying.huang
2022-04-25  3:50                                 ` Aneesh Kumar K.V
2022-04-25  6:10                                   ` ying.huang
2022-04-25  8:09                                     ` Aneesh Kumar K V
2022-04-25  8:54                                       ` Aneesh Kumar K V
2022-04-25 20:17                                       ` Davidlohr Bueso
2022-04-26  8:42                                       ` ying.huang
2022-04-26  9:02                                         ` Aneesh Kumar K V
2022-04-26  9:44                                           ` ying.huang
2022-04-27  4:27                                         ` Wei Xu
2022-04-25  7:26                                 ` Jagdish Gediya
2022-04-25 16:56                                 ` Wei Xu
2022-04-27  5:06                                   ` Aneesh Kumar K V
2022-04-27 18:27                                     ` Wei Xu
2022-04-28  0:56                                       ` ying.huang
2022-04-28  4:11                                         ` Wei Xu
2022-04-28 17:14                                           ` Yang Shi
2022-04-29  1:27                                             ` Alistair Popple
2022-04-29  2:21                                               ` ying.huang
2022-04-29  2:58                                                 ` Wei Xu
2022-04-29  3:27                                                   ` ying.huang
2022-04-29  4:45                                                     ` Alistair Popple
2022-04-29 18:53                                                       ` Yang Shi
2022-04-29 18:52                                                   ` Yang Shi
2022-04-27  7:11                                   ` ying.huang
2022-04-27 16:27                                     ` Wei Xu
2022-04-28  8:37                                       ` ying.huang
     [not found]                                         ` <DM6PR11MB4107867291AFE0C210D9052ADCFD9@DM6PR11MB4107.namprd11.prod.outlook.com>
2022-04-30  2:21                                           ` Wei Xu
2022-04-21 17:56       ` Yang Shi
2022-04-21 23:48         ` ying.huang
