From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Dave Hansen <dave.hansen@linux.intel.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	dan.j.williams@intel.com, keith.busch@intel.com
Subject: Re: [PATCH 1/4] node: Define and export memory migration path
Date: Thu, 17 Oct 2019 14:12:05 +0300
Message-ID: <20191017111205.krurdatuv7d4brs4@box>
In-Reply-To: <20191016221149.74AE222C@viggo.jf.intel.com>

On Wed, Oct 16, 2019 at 03:11:49PM -0700, Dave Hansen wrote:
> 
> From: Keith Busch <keith.busch@intel.com>
> 
> Prepare for the kernel to auto-migrate pages to other memory nodes
> with a user-defined node migration table. This allows creating a single
> migration target for each NUMA node to enable the kernel to do NUMA
> page migrations instead of simply reclaiming colder pages. A node
> with no target is a "terminal node", so reclaim acts normally there.
> The migration target does not fundamentally _need_ to be a single node,
> but this implementation starts there to limit complexity.
> 
> If you consider the migration path as a graph, cycles (loops) in the
> graph are disallowed.  This avoids wasting resources by constantly
> migrating (A->B, B->A, A->B ...).  The expectation is that cycles will
> never be allowed, and this rule is enforced if the user tries to make
> such a cycle.
> 
> Signed-off-by: Keith Busch <keith.busch@intel.com>
> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
> ---
> 
>  b/drivers/base/node.c  |   73 +++++++++++++++++++++++++++++++++++++++++++++++++
>  b/include/linux/node.h |    6 ++++
>  2 files changed, 79 insertions(+)
> 
> diff -puN drivers/base/node.c~0003-node-Define-and-export-memory-migration-path drivers/base/node.c
> --- a/drivers/base/node.c~0003-node-Define-and-export-memory-migration-path	2019-10-16 15:06:55.895952599 -0700
> +++ b/drivers/base/node.c	2019-10-16 15:06:55.902952599 -0700
> @@ -101,6 +101,10 @@ static const struct attribute_group *nod
>  	NULL,
>  };
>  
> +#define TERMINAL_NODE -1

Wouldn't this be confused with NUMA_NO_NODE, which is also -1?
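
One option (untested sketch) would be to reuse the existing sentinel
instead of introducing a second -1:

	/* Sketch: make "no migration target" the same thing as "no node". */
	#define TERMINAL_NODE	NUMA_NO_NODE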

> +static int node_migration[MAX_NUMNODES] = {[0 ...  MAX_NUMNODES - 1] = TERMINAL_NODE};

This is the first time I have seen a range initializer in kernel code. It
is a GCC extension. Do we use it anywhere already?

Many distributions compile the kernel with NODES_SHIFT==10, which means
this array will take 4k (1024 entries * sizeof(int)) even on a
single-node machine.

Should it be dynamic?
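
Something like this would size the table by nr_node_ids instead
(completely untested sketch; the init hook-up is a guess):

	static int *node_migration;

	static int __init node_migration_init(void)
	{
		unsigned int i;

		/*
		 * nr_node_ids is the number of possible node ids, which is
		 * typically 1 on a single-node machine even with
		 * NODES_SHIFT==10.  Needs <linux/slab.h> for kcalloc().
		 */
		node_migration = kcalloc(nr_node_ids, sizeof(*node_migration),
					 GFP_KERNEL);
		if (!node_migration)
			return -ENOMEM;

		for (i = 0; i < nr_node_ids; i++)
			node_migration[i] = TERMINAL_NODE;

		return 0;
	}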

> +static DEFINE_SPINLOCK(node_migration_lock);
> +
>  static void node_remove_accesses(struct node *node)
>  {
>  	struct node_access_nodes *c, *cnext;
> @@ -530,6 +534,74 @@ static ssize_t node_read_distance(struct
>  }
>  static DEVICE_ATTR(distance, S_IRUGO, node_read_distance, NULL);
>  
> +static ssize_t migration_path_show(struct device *dev,
> +				   struct device_attribute *attr,
> +				   char *buf)
> +{
> +	return sprintf(buf, "%d\n", node_migration[dev->id]);
> +}
> +
> +static ssize_t migration_path_store(struct device *dev,
> +				    struct device_attribute *attr,
> +				    const char *buf, size_t count)
> +{
> +	int i, err, nid = dev->id;
> +	nodemask_t visited = NODE_MASK_NONE;
> +	long next;
> +
> +	err = kstrtol(buf, 0, &next);
> +	if (err)
> +		return -EINVAL;
> +
> +	if (next < 0) {

Any negative number sets it to a terminal node? Why not limit it to -1?
We may find a use for the other negative values later.
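
A stricter version might look like this (untested sketch):

	/*
	 * Only -1 clears the path; other negative values stay -EINVAL so
	 * they can be given a meaning later.
	 */
	if (next == -1) {
		spin_lock(&node_migration_lock);
		WRITE_ONCE(node_migration[nid], TERMINAL_NODE);
		spin_unlock(&node_migration_lock);
		return count;
	}
	if (next < 0 || next >= MAX_NUMNODES || !node_online(next))
		return -EINVAL;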

> +		spin_lock(&node_migration_lock);
> +		WRITE_ONCE(node_migration[nid], TERMINAL_NODE);
> +		spin_unlock(&node_migration_lock);
> +		return count;
> +	}
> +	if (next >= MAX_NUMNODES || !node_online(next))
> +		return -EINVAL;

What prevents offlining after the check?
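
One way to at least narrow that window (untested sketch; it assumes the
memory hotplug lock is what keeps a node from going offline, which I have
not checked) would be to take get_online_mems() around the check and the
store:

	get_online_mems();	/* hold off memory/node hotplug */
	spin_lock(&node_migration_lock);
	if (!node_online(next)) {
		spin_unlock(&node_migration_lock);
		put_online_mems();
		return -EINVAL;
	}
	/* ... cycle check and WRITE_ONCE() as below ... */
	spin_unlock(&node_migration_lock);
	put_online_mems();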

> +	/*
> +	 * Follow the entire migration path from 'nid' through the point where
> +	 * we hit a TERMINAL_NODE.
> +	 *
> +	 * Don't allow loops (migration cycles) in the path.
> +	 */
> +	node_set(nid, visited);
> +	spin_lock(&node_migration_lock);
> +	for (i = next; node_migration[i] != TERMINAL_NODE;
> +	     i = node_migration[i]) {
> +		/* Fail if we have visited this node already */
> +		if (node_test_and_set(i, visited)) {
> +			spin_unlock(&node_migration_lock);
> +			return -EINVAL;
> +		}
> +	}
> +	WRITE_ONCE(node_migration[nid], next);
> +	spin_unlock(&node_migration_lock);
> +
> +	return count;
> +}
> +static DEVICE_ATTR_RW(migration_path);
> +
> +/**
> + * next_migration_node() - Get the next node in the migration path
> + * @current_node: The starting node to lookup the next node
> + *
> + * @returns: node id for next memory node in the migration path hierarchy from
> + * 	     @current_node; -1 if @current_node is terminal or its migration
> + * 	     node is not online.
> + */
> +int next_migration_node(int current_node)
> +{
> +	int nid = READ_ONCE(node_migration[current_node]);
> +
> +	if (nid >= 0 && node_online(nid))
> +		return nid;
> +	return TERMINAL_NODE;
> +}
> +
>  static struct attribute *node_dev_attrs[] = {
>  	&dev_attr_cpumap.attr,
>  	&dev_attr_cpulist.attr,
> @@ -537,6 +609,7 @@ static struct attribute *node_dev_attrs[
>  	&dev_attr_numastat.attr,
>  	&dev_attr_distance.attr,
>  	&dev_attr_vmstat.attr,
> +	&dev_attr_migration_path.attr,
>  	NULL
>  };
>  ATTRIBUTE_GROUPS(node_dev);
> diff -puN include/linux/node.h~0003-node-Define-and-export-memory-migration-path include/linux/node.h
> --- a/include/linux/node.h~0003-node-Define-and-export-memory-migration-path	2019-10-16 15:06:55.898952599 -0700
> +++ b/include/linux/node.h	2019-10-16 15:06:55.902952599 -0700
> @@ -134,6 +134,7 @@ static inline int register_one_node(int
>  	return error;
>  }
>  
> +extern int next_migration_node(int current_node);
>  extern void unregister_one_node(int nid);
>  extern int register_cpu_under_node(unsigned int cpu, unsigned int nid);
>  extern int unregister_cpu_under_node(unsigned int cpu, unsigned int nid);
> @@ -186,6 +187,11 @@ static inline void register_hugetlbfs_wi
>  						node_registration_func_t unreg)
>  {
>  }
> +
> +static inline int next_migration_node(int current_node)
> +{
> +	return -1;
> +}
>  #endif
>  
>  #define to_node(device) container_of(device, struct node, dev)
> _
> 

-- 
 Kirill A. Shutemov
