linux-mm.kvack.org archive mirror
* [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
@ 2022-04-22 19:55 Jagdish Gediya
  2022-04-22 19:55 ` [PATCH v3 1/7] mm: demotion: Fix demotion targets sharing among sources Jagdish Gediya
                   ` (7 more replies)
  0 siblings, 8 replies; 35+ messages in thread
From: Jagdish Gediya @ 2022-04-22 19:55 UTC (permalink / raw)
  To: linux-mm, linux-kernel, akpm
  Cc: baolin.wang, dave.hansen, ying.huang, aneesh.kumar, shy828301,
	weixugc, gthelen, dan.j.williams, Jagdish Gediya

Some systems (e.g. PowerVM) can have both DRAM (fast memory) only
NUMA nodes which are N_MEMORY and slow memory (persistent memory)
only NUMA nodes which are also N_MEMORY. As the current demotion
target finding algorithm works based on N_MEMORY and best distance,
it will choose a DRAM-only NUMA node as the demotion target instead
of a persistent memory node on such systems. If the DRAM-only NUMA
node is filled with demoted pages, then at some point new
allocations can start falling back to persistent memory, so cold
pages end up in fast memory (due to demotion) while new pages land
in slow memory. This is why persistent memory nodes should be
utilized for demotion and DRAM nodes should be avoided, so that
they can be used for new allocations.

The current implementation works fine on systems where memory-only
NUMA nodes can only be persistent/slow memory, but it is not
suitable for systems like the ones mentioned above.

This patch series introduces a new node state, N_DEMOTION_TARGETS,
which is used to distinguish the nodes which can be used as demotion
targets; node_states[N_DEMOTION_TARGETS] holds the list of nodes
which can be used as demotion targets.

The node state N_DEMOTION_TARGETS is also set from the dax kmem
driver. Certain types of memory which register through dax kmem
(e.g. HBM) may not be the right choice for demotion, so in the
future they should be distinguished based on certain attributes
and the dax kmem driver should avoid setting them as
N_DEMOTION_TARGETS. However, the current implementation doesn't
distinguish any such memory and considers all N_MEMORY nodes as
demotion targets, so this patch series doesn't modify the current
behavior.

The below command can be used to view the available demotion
targets in the system:

$ cat /sys/devices/system/node/demotion_targets

This patch series sets N_DEMOTION_TARGETS from the dax device kmem
driver. It is possible that some memory node desired as a demotion
target is not detected from the dax-device kmem probe path. It is
also possible that some of the dax-devices are not preferred as
demotion targets, e.g. HBM; for such devices, the node shouldn't be
set to N_DEMOTION_TARGETS. Support is therefore also added to set
the demotion target list from user space, so that the default
behavior can be overridden to manually add or remove specific
nodes from the demotion targets.

Override the demotion targets in the system (which sets
node_states[N_DEMOTION_TARGETS] in the kernel):
$ echo <node list> > /sys/devices/system/node/demotion_targets

As node attributes under /sys/devices/system/node/ are read-only
by default, support is added to write node_states[] via sysfs so
that node_states[N_DEMOTION_TARGETS] can be modified from user
space.

It is also helpful to know the per-node demotion target path
prepared by the kernel to understand the demotion behaviour during
reclaim, so this patch series also adds a
/sys/devices/system/node/nodeX/demotion_targets interface to view
per-node demotion targets via sysfs.

The current code which sets migration targets is modified in this
patch series to avoid some of the limitations on demotion target
sharing and to use only N_DEMOTION_TARGETS nodes while finding
demotion targets.

Changelog
----------

v2:
In v1, only the 1st patch of this series was sent; it was
implemented to avoid some of the limitations on demotion target
sharing. However, for certain NUMA topologies, the demotion
targets found by that patch were not optimal, so the 1st patch in
this series is modified according to suggestions from Huang and
Baolin. Different examples comparing demotion lists between the
existing and the changed implementation can be found in the commit
message of the 1st patch.

v3:
- Modify patch 1 subject to make it more specific
- Remove /sys/kernel/mm/numa/demotion_targets interface, use
  /sys/devices/system/node/demotion_targets instead and make
  it writable to override node_states[N_DEMOTION_TARGETS].
- Add support to view per node demotion targets via sysfs

Jagdish Gediya (7):
  mm: demotion: Fix demotion targets sharing among sources
  mm: demotion: Add new node state N_DEMOTION_TARGETS
  drivers/base/node: Add support to write node_states[] via sysfs
  device-dax/kmem: Set node state as N_DEMOTION_TARGETS
  mm: demotion: Build demotion list based on N_DEMOTION_TARGETS
  mm: demotion: expose per-node demotion targets via sysfs
  docs: numa: Add documentation for demotion

 Documentation/admin-guide/mm/index.rst        |  1 +
 .../admin-guide/mm/numa_demotion.rst          | 57 +++++++++++++++
 drivers/base/node.c                           | 70 ++++++++++++++++---
 drivers/dax/kmem.c                            |  2 +
 include/linux/migrate.h                       |  1 +
 include/linux/nodemask.h                      |  1 +
 mm/migrate.c                                  | 54 ++++++++++----
 7 files changed, 162 insertions(+), 24 deletions(-)
 create mode 100644 Documentation/admin-guide/mm/numa_demotion.rst

-- 
2.35.1




* [PATCH v3 1/7] mm: demotion: Fix demotion targets sharing among sources
  2022-04-22 19:55 [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS Jagdish Gediya
@ 2022-04-22 19:55 ` Jagdish Gediya
  2022-04-24  3:25   ` ying.huang
  2022-04-22 19:55 ` [PATCH v3 2/7] mm: demotion: Add new node state N_DEMOTION_TARGETS Jagdish Gediya
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 35+ messages in thread
From: Jagdish Gediya @ 2022-04-22 19:55 UTC (permalink / raw)
  To: linux-mm, linux-kernel, akpm
  Cc: baolin.wang, dave.hansen, ying.huang, aneesh.kumar, shy828301,
	weixugc, gthelen, dan.j.williams, Jagdish Gediya

Sharing 'used_targets' between multiple nodes in a single pass
limits some of the opportunities for demotion target sharing.

Don't share the used targets between multiple nodes in a single
pass. Instead, accumulate all the source nodes seen across passes,
and reset 'used_targets' to the accumulated source nodes while
finding demotion targets for each new node.

This results in some more opportunities to share demotion targets
between multiple source nodes, e.g. with the below NUMA topology,
where nodes 0 & 1 are cpu + dram nodes, nodes 2 & 3 are equally
slower memory-only nodes, and node 4 is the slowest memory-only
node:

available: 5 nodes (0-4)
node 0 cpus: 0 1
node 0 size: n MB
node 0 free: n MB
node 1 cpus: 2 3
node 1 size: n MB
node 1 free: n MB
node 2 cpus:
node 2 size: n MB
node 2 free: n MB
node 3 cpus:
node 3 size: n MB
node 3 free: n MB
node 4 cpus:
node 4 size: n MB
node 4 free: n MB
node distances:
node   0   1   2   3   4
  0:  10  20  40  40  80
  1:  20  10  40  40  80
  2:  40  40  10  40  80
  3:  40  40  40  10  80
  4:  80  80  80  80  10

The existing implementation gives below demotion targets,

node    demotion_target
 0              3, 2
 1              4
 2              X
 3              X
 4              X

With this patch applied, below are the demotion targets,

node    demotion_target
 0              3, 2
 1              3, 2
 2              4
 3              4
 4              X

e.g. with the below NUMA topology, where nodes 0, 1 & 2 are
cpu + dram nodes and node 3 is a slow memory node:

available: 4 nodes (0-3)
node 0 cpus: 0 1
node 0 size: n MB
node 0 free: n MB
node 1 cpus: 2 3
node 1 size: n MB
node 1 free: n MB
node 2 cpus: 4 5
node 2 size: n MB
node 2 free: n MB
node 3 cpus:
node 3 size: n MB
node 3 free: n MB
node distances:
node   0   1   2   3
  0:  10  20  20  40
  1:  20  10  20  40
  2:  20  20  10  40
  3:  40  40  40  10

The existing implementation gives below demotion targets,

node    demotion_target
 0              3
 1              X
 2              X
 3              X

With this patch applied, below are the demotion targets,

node    demotion_target
 0              3
 1              3
 2              3
 3              X

Fixes: 79c28a416722 ("mm/numa: automatically generate node migration order")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Jagdish Gediya <jvgediya@linux.ibm.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Acked-by: "Huang, Ying" <ying.huang@intel.com>
---
 mm/migrate.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 6c31ee1e1c9b..8bbe1e478122 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2355,7 +2355,7 @@ static void __set_migration_target_nodes(void)
 {
 	nodemask_t next_pass	= NODE_MASK_NONE;
 	nodemask_t this_pass	= NODE_MASK_NONE;
-	nodemask_t used_targets = NODE_MASK_NONE;
+	nodemask_t source_nodes = NODE_MASK_NONE;
 	int node, best_distance;
 
 	/*
@@ -2373,20 +2373,23 @@ static void __set_migration_target_nodes(void)
 again:
 	this_pass = next_pass;
 	next_pass = NODE_MASK_NONE;
+
 	/*
-	 * To avoid cycles in the migration "graph", ensure
-	 * that migration sources are not future targets by
-	 * setting them in 'used_targets'.  Do this only
-	 * once per pass so that multiple source nodes can
-	 * share a target node.
-	 *
-	 * 'used_targets' will become unavailable in future
-	 * passes.  This limits some opportunities for
-	 * multiple source nodes to share a destination.
+	 * Accumulate source nodes to avoid the cycle in migration
+	 * list.
 	 */
-	nodes_or(used_targets, used_targets, this_pass);
+	nodes_or(source_nodes, source_nodes, this_pass);
 
 	for_each_node_mask(node, this_pass) {
+		/*
+		 * To avoid cycles in the migration "graph", ensure
+		 * that migration sources are not future targets by
+		 * setting them in 'used_targets'. Reset used_targets
+		 * to source nodes for each node in this pass so that
+		 * multiple source nodes can share a target node.
+		 */
+		nodemask_t used_targets = source_nodes;
+
 		best_distance = -1;
 
 		/*
-- 
2.35.1




* [PATCH v3 2/7] mm: demotion: Add new node state N_DEMOTION_TARGETS
  2022-04-22 19:55 [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS Jagdish Gediya
  2022-04-22 19:55 ` [PATCH v3 1/7] mm: demotion: Fix demotion targets sharing among sources Jagdish Gediya
@ 2022-04-22 19:55 ` Jagdish Gediya
  2022-04-22 20:29   ` Wei Xu
  2022-04-22 19:55 ` [PATCH v3 3/7] drivers/base/node: Add support to write node_states[] via sysfs Jagdish Gediya
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 35+ messages in thread
From: Jagdish Gediya @ 2022-04-22 19:55 UTC (permalink / raw)
  To: linux-mm, linux-kernel, akpm
  Cc: baolin.wang, dave.hansen, ying.huang, aneesh.kumar, shy828301,
	weixugc, gthelen, dan.j.williams, Jagdish Gediya

Some systems (e.g. PowerVM) have DRAM (fast memory) only NUMA nodes
which are N_MEMORY as well as slow memory (persistent memory) only
NUMA nodes which are also N_MEMORY. As the current demotion target
finding algorithm works based on N_MEMORY and best distance, it can
choose a DRAM-only NUMA node as the demotion target instead of a
persistent memory node on such systems. If the DRAM-only NUMA node
is filled with demoted pages, then at some point new allocations
can start falling back to persistent memory, so cold pages end up
in fast memory (due to demotion) while new pages land in slow
memory. This is why persistent memory nodes should be utilized for
demotion and DRAM nodes should be avoided, so that they can be
used for new allocations.

Add the new state N_DEMOTION_TARGETS;
node_states[N_DEMOTION_TARGETS] can then be used to hold the list
of nodes which can be used as demotion targets. Later patches in
the series build demotion targets based on the nodes available in
node_states[N_DEMOTION_TARGETS].

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Jagdish Gediya <jvgediya@linux.ibm.com>
---
 drivers/base/node.c      | 4 ++++
 include/linux/nodemask.h | 1 +
 2 files changed, 5 insertions(+)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index ec8bb24a5a22..6eef22e6413e 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -1038,6 +1038,9 @@ static struct node_attr node_state_attr[] = {
 	[N_CPU] = _NODE_ATTR(has_cpu, N_CPU),
 	[N_GENERIC_INITIATOR] = _NODE_ATTR(has_generic_initiator,
 					   N_GENERIC_INITIATOR),
+	[N_DEMOTION_TARGETS] = _NODE_ATTR(demotion_targets,
+					  N_DEMOTION_TARGETS),
+
 };
 
 static struct attribute *node_state_attrs[] = {
@@ -1050,6 +1053,7 @@ static struct attribute *node_state_attrs[] = {
 	&node_state_attr[N_MEMORY].attr.attr,
 	&node_state_attr[N_CPU].attr.attr,
 	&node_state_attr[N_GENERIC_INITIATOR].attr.attr,
+	&node_state_attr[N_DEMOTION_TARGETS].attr.attr,
 	NULL
 };
 
diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
index 567c3ddba2c4..17844300fd57 100644
--- a/include/linux/nodemask.h
+++ b/include/linux/nodemask.h
@@ -400,6 +400,7 @@ enum node_states {
 	N_MEMORY,		/* The node has memory(regular, high, movable) */
 	N_CPU,		/* The node has one or more cpus */
 	N_GENERIC_INITIATOR,	/* The node has one or more Generic Initiators */
+	N_DEMOTION_TARGETS,	/* Nodes that should be considered as demotion targets */
 	NR_NODE_STATES
 };
 
-- 
2.35.1




* [PATCH v3 3/7] drivers/base/node: Add support to write node_states[] via sysfs
  2022-04-22 19:55 [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS Jagdish Gediya
  2022-04-22 19:55 ` [PATCH v3 1/7] mm: demotion: Fix demotion targets sharing among sources Jagdish Gediya
  2022-04-22 19:55 ` [PATCH v3 2/7] mm: demotion: Add new node state N_DEMOTION_TARGETS Jagdish Gediya
@ 2022-04-22 19:55 ` Jagdish Gediya
  2022-04-22 20:32   ` Wei Xu
                     ` (2 more replies)
  2022-04-22 19:55 ` [PATCH v3 4/7] device-dax/kmem: Set node state as N_DEMOTION_TARGETS Jagdish Gediya
                   ` (4 subsequent siblings)
  7 siblings, 3 replies; 35+ messages in thread
From: Jagdish Gediya @ 2022-04-22 19:55 UTC (permalink / raw)
  To: linux-mm, linux-kernel, akpm
  Cc: baolin.wang, dave.hansen, ying.huang, aneesh.kumar, shy828301,
	weixugc, gthelen, dan.j.williams, Jagdish Gediya

The current /sys/devices/system/node/* interface doesn't support
writing node_states[]; however, write support is needed when users
want to set them manually, e.g. to override the default
N_DEMOTION_TARGETS found by the kernel.

Rename the existing _NODE_ATTR to _NODE_ATTR_RO and introduce a
new _NODE_ATTR_RW which can be used for node_states[] entries that
can be written from sysfs.

It may be necessary to validate written values and act on them in
a state-specific way, so a 'write' callback is introduced in
'struct node_attr'.

A new function, demotion_targets_write(), is added to validate
that the input nodes for N_DEMOTION_TARGETS are a subset of
N_MEMORY, and to build a new demotion list based on the new nodes.

Signed-off-by: Jagdish Gediya <jvgediya@linux.ibm.com>
---
 drivers/base/node.c | 62 +++++++++++++++++++++++++++++++++++----------
 1 file changed, 49 insertions(+), 13 deletions(-)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index 6eef22e6413e..e03eedbc421b 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -20,6 +20,7 @@
 #include <linux/pm_runtime.h>
 #include <linux/swap.h>
 #include <linux/slab.h>
+#include <linux/migrate.h>
 
 static struct bus_type node_subsys = {
 	.name = "node",
@@ -1013,6 +1014,7 @@ void unregister_one_node(int nid)
 struct node_attr {
 	struct device_attribute attr;
 	enum node_states state;
+	int (*write)(nodemask_t nodes);
 };
 
 static ssize_t show_node_state(struct device *dev,
@@ -1024,23 +1026,57 @@ static ssize_t show_node_state(struct device *dev,
 			  nodemask_pr_args(&node_states[na->state]));
 }
 
-#define _NODE_ATTR(name, state) \
-	{ __ATTR(name, 0444, show_node_state, NULL), state }
+static ssize_t store_node_state(struct device *s,
+				struct device_attribute *attr,
+				const char *buf, size_t count)
+{
+	nodemask_t nodes;
+	struct node_attr *na = container_of(attr, struct node_attr, attr);
+
+	if (nodelist_parse(buf, nodes))
+		return -EINVAL;
+
+	if (na->write) {
+		if (na->write(nodes))
+			return -EINVAL;
+	} else {
+		node_states[na->state] = nodes;
+	}
+
+	return count;
+}
+
+static int demotion_targets_write(nodemask_t nodes)
+{
+	if (nodes_subset(nodes, node_states[N_MEMORY])) {
+		node_states[N_DEMOTION_TARGETS] = nodes;
+		set_migration_target_nodes();
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+#define _NODE_ATTR_RO(name, state) \
+	{ __ATTR(name, 0444, show_node_state, NULL), state, NULL }
+
+#define _NODE_ATTR_RW(name, state, write_fn) \
+	{ __ATTR(name, 0644, show_node_state, store_node_state), state, write_fn }
 
 static struct node_attr node_state_attr[] = {
-	[N_POSSIBLE] = _NODE_ATTR(possible, N_POSSIBLE),
-	[N_ONLINE] = _NODE_ATTR(online, N_ONLINE),
-	[N_NORMAL_MEMORY] = _NODE_ATTR(has_normal_memory, N_NORMAL_MEMORY),
+	[N_POSSIBLE] = _NODE_ATTR_RO(possible, N_POSSIBLE),
+	[N_ONLINE] = _NODE_ATTR_RO(online, N_ONLINE),
+	[N_NORMAL_MEMORY] = _NODE_ATTR_RO(has_normal_memory, N_NORMAL_MEMORY),
 #ifdef CONFIG_HIGHMEM
-	[N_HIGH_MEMORY] = _NODE_ATTR(has_high_memory, N_HIGH_MEMORY),
+	[N_HIGH_MEMORY] = _NODE_ATTR_RO(has_high_memory, N_HIGH_MEMORY),
 #endif
-	[N_MEMORY] = _NODE_ATTR(has_memory, N_MEMORY),
-	[N_CPU] = _NODE_ATTR(has_cpu, N_CPU),
-	[N_GENERIC_INITIATOR] = _NODE_ATTR(has_generic_initiator,
-					   N_GENERIC_INITIATOR),
-	[N_DEMOTION_TARGETS] = _NODE_ATTR(demotion_targets,
-					  N_DEMOTION_TARGETS),
-
+	[N_MEMORY] = _NODE_ATTR_RO(has_memory, N_MEMORY),
+	[N_CPU] = _NODE_ATTR_RO(has_cpu, N_CPU),
+	[N_GENERIC_INITIATOR] = _NODE_ATTR_RO(has_generic_initiator,
+					      N_GENERIC_INITIATOR),
+	[N_DEMOTION_TARGETS] = _NODE_ATTR_RW(demotion_targets,
+					     N_DEMOTION_TARGETS,
+					     demotion_targets_write),
 };
 
 static struct attribute *node_state_attrs[] = {
-- 
2.35.1




* [PATCH v3 4/7] device-dax/kmem: Set node state as N_DEMOTION_TARGETS
  2022-04-22 19:55 [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS Jagdish Gediya
                   ` (2 preceding siblings ...)
  2022-04-22 19:55 ` [PATCH v3 3/7] drivers/base/node: Add support to write node_states[] via sysfs Jagdish Gediya
@ 2022-04-22 19:55 ` Jagdish Gediya
  2022-04-22 20:34   ` Wei Xu
  2022-04-22 19:55 ` [PATCH v3 5/7] mm: demotion: Build demotion list based on N_DEMOTION_TARGETS Jagdish Gediya
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 35+ messages in thread
From: Jagdish Gediya @ 2022-04-22 19:55 UTC (permalink / raw)
  To: linux-mm, linux-kernel, akpm
  Cc: baolin.wang, dave.hansen, ying.huang, aneesh.kumar, shy828301,
	weixugc, gthelen, dan.j.williams, Jagdish Gediya

Set the dax-device node as N_DEMOTION_TARGETS so that it can be
used as a demotion target.

In the future, support should be added to distinguish the
dax-devices which are not preferred as demotion targets, e.g. HBM;
for such devices, the node shouldn't be set to N_DEMOTION_TARGETS.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Jagdish Gediya <jvgediya@linux.ibm.com>
---
 drivers/dax/kmem.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
index a37622060fff..f42ab9d04bdf 100644
--- a/drivers/dax/kmem.c
+++ b/drivers/dax/kmem.c
@@ -147,6 +147,8 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
 
 	dev_set_drvdata(dev, data);
 
+	node_set_state(numa_node, N_DEMOTION_TARGETS);
+
 	return 0;
 
 err_request_mem:
-- 
2.35.1




* [PATCH v3 5/7] mm: demotion: Build demotion list based on N_DEMOTION_TARGETS
  2022-04-22 19:55 [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS Jagdish Gediya
                   ` (3 preceding siblings ...)
  2022-04-22 19:55 ` [PATCH v3 4/7] device-dax/kmem: Set node state as N_DEMOTION_TARGETS Jagdish Gediya
@ 2022-04-22 19:55 ` Jagdish Gediya
  2022-04-22 20:39   ` Wei Xu
  2022-04-22 19:55 ` [PATCH v3 6/7] mm: demotion: expose per-node demotion targets via sysfs Jagdish Gediya
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 35+ messages in thread
From: Jagdish Gediya @ 2022-04-22 19:55 UTC (permalink / raw)
  To: linux-mm, linux-kernel, akpm
  Cc: baolin.wang, dave.hansen, ying.huang, aneesh.kumar, shy828301,
	weixugc, gthelen, dan.j.williams, Jagdish Gediya

Only nodes which have the state N_DEMOTION_TARGETS should be used
as demotion targets. Make the nodes which are not demotion targets
the source nodes while building the demotion target list, so that
demotion targets are only chosen from N_DEMOTION_TARGETS.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Jagdish Gediya <jvgediya@linux.ibm.com>
---
 mm/migrate.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 8bbe1e478122..5b92a09fbe4a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2366,10 +2366,10 @@ static void __set_migration_target_nodes(void)
 	disable_all_migrate_targets();
 
 	/*
-	 * Allocations go close to CPUs, first.  Assume that
-	 * the migration path starts at the nodes with CPUs.
+	 * Some systems can have DRAM(fast memory) only NUMA nodes, demotion targets
+	 * need to be found for them as well.
 	 */
-	next_pass = node_states[N_CPU];
+	nodes_andnot(next_pass, node_states[N_ONLINE], node_states[N_DEMOTION_TARGETS]);
 again:
 	this_pass = next_pass;
 	next_pass = NODE_MASK_NONE;
-- 
2.35.1




* [PATCH v3 6/7] mm: demotion: expose per-node demotion targets via sysfs
  2022-04-22 19:55 [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS Jagdish Gediya
                   ` (4 preceding siblings ...)
  2022-04-22 19:55 ` [PATCH v3 5/7] mm: demotion: Build demotion list based on N_DEMOTION_TARGETS Jagdish Gediya
@ 2022-04-22 19:55 ` Jagdish Gediya
  2022-04-22 20:47   ` Wei Xu
                     ` (2 more replies)
  2022-04-22 19:55 ` [PATCH v3 7/7] docs: numa: Add documentation for demotion Jagdish Gediya
  2022-04-24  3:19 ` [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS ying.huang
  7 siblings, 3 replies; 35+ messages in thread
From: Jagdish Gediya @ 2022-04-22 19:55 UTC (permalink / raw)
  To: linux-mm, linux-kernel, akpm
  Cc: baolin.wang, dave.hansen, ying.huang, aneesh.kumar, shy828301,
	weixugc, gthelen, dan.j.williams, Jagdish Gediya

The kernel prepares the per-node demotion target list based on
node_states[N_DEMOTION_TARGETS]. If enabled through sysfs,
demotion kicks in during reclaim, and pages get migrated according
to the demotion target list prepared by the kernel.

It is helpful to know the demotion target list prepared by the
kernel to understand the demotion behaviour, so add the interface
/sys/devices/system/node/nodeX/demotion_targets to view per-node
demotion targets via sysfs.

Signed-off-by: Jagdish Gediya <jvgediya@linux.ibm.com>
---
 drivers/base/node.c     | 10 ++++++++++
 include/linux/migrate.h |  1 +
 mm/migrate.c            | 17 +++++++++++++++++
 3 files changed, 28 insertions(+)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index e03eedbc421b..92326219aac2 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -561,11 +561,21 @@ static ssize_t node_read_distance(struct device *dev,
 }
 static DEVICE_ATTR(distance, 0444, node_read_distance, NULL);
 
+static ssize_t demotion_targets_show(struct device *dev,
+				     struct device_attribute *attr, char *buf)
+{
+	nodemask_t demotion_targets = node_get_demotion_targets(dev->id);
+
+	return sysfs_emit(buf, "%*pbl\n", nodemask_pr_args(&demotion_targets));
+}
+static DEVICE_ATTR_RO(demotion_targets);
+
 static struct attribute *node_dev_attrs[] = {
 	&dev_attr_meminfo.attr,
 	&dev_attr_numastat.attr,
 	&dev_attr_distance.attr,
 	&dev_attr_vmstat.attr,
+	&dev_attr_demotion_targets.attr,
 	NULL
 };
 
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 90e75d5a54d6..072019441a24 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -173,6 +173,7 @@ int migrate_vma_setup(struct migrate_vma *args);
 void migrate_vma_pages(struct migrate_vma *migrate);
 void migrate_vma_finalize(struct migrate_vma *migrate);
 int next_demotion_node(int node);
+nodemask_t node_get_demotion_targets(int node);
 
 #else /* CONFIG_MIGRATION disabled: */
 
diff --git a/mm/migrate.c b/mm/migrate.c
index 5b92a09fbe4a..da864831bc0c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2187,6 +2187,23 @@ struct demotion_nodes {
 
 static struct demotion_nodes *node_demotion __read_mostly;
 
+nodemask_t node_get_demotion_targets(int node)
+{
+	nodemask_t demotion_targets = NODE_MASK_NONE;
+	unsigned short target_nr;
+
+	if (!node_demotion)
+		return NODE_MASK_NONE;
+
+	rcu_read_lock();
+	target_nr = READ_ONCE(node_demotion[node].nr);
+	for (int i = 0; i < target_nr; i++)
+		node_set(READ_ONCE(node_demotion[node].nodes[i]), demotion_targets);
+	rcu_read_unlock();
+
+	return demotion_targets;
+}
+
 /**
  * next_demotion_node() - Get the next node in the demotion path
  * @node: The starting node to lookup the next node
-- 
2.35.1




* [PATCH v3 7/7] docs: numa: Add documentation for demotion
  2022-04-22 19:55 [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS Jagdish Gediya
                   ` (5 preceding siblings ...)
  2022-04-22 19:55 ` [PATCH v3 6/7] mm: demotion: expose per-node demotion targets via sysfs Jagdish Gediya
@ 2022-04-22 19:55 ` Jagdish Gediya
  2022-04-24  3:19 ` [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS ying.huang
  7 siblings, 0 replies; 35+ messages in thread
From: Jagdish Gediya @ 2022-04-22 19:55 UTC (permalink / raw)
  To: linux-mm, linux-kernel, akpm
  Cc: baolin.wang, dave.hansen, ying.huang, aneesh.kumar, shy828301,
	weixugc, gthelen, dan.j.williams, Jagdish Gediya

Add documentation for demotion, explaining why it is required and
describing all the related sysfs interfaces.

Signed-off-by: Jagdish Gediya <jvgediya@linux.ibm.com>
---
 Documentation/admin-guide/mm/index.rst        |  1 +
 .../admin-guide/mm/numa_demotion.rst          | 57 +++++++++++++++++++
 2 files changed, 58 insertions(+)
 create mode 100644 Documentation/admin-guide/mm/numa_demotion.rst

diff --git a/Documentation/admin-guide/mm/index.rst b/Documentation/admin-guide/mm/index.rst
index c21b5823f126..4bd0ed3de9c5 100644
--- a/Documentation/admin-guide/mm/index.rst
+++ b/Documentation/admin-guide/mm/index.rst
@@ -34,6 +34,7 @@ the Linux memory management.
    memory-hotplug
    nommu-mmap
    numa_memory_policy
+   numa_demotion
    numaperf
    pagemap
    soft-dirty
diff --git a/Documentation/admin-guide/mm/numa_demotion.rst b/Documentation/admin-guide/mm/numa_demotion.rst
new file mode 100644
index 000000000000..252be9dc0517
--- /dev/null
+++ b/Documentation/admin-guide/mm/numa_demotion.rst
@@ -0,0 +1,57 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==================
+NUMA Demotion
+==================
+
+Why is demotion required?
+=========================
+
+With the advent of various new memory types, systems can have
+multiple types of memory, e.g. DRAM and PMEM (persistent memory).
+The memory subsystem of such systems can be called a memory
+tiering system, because the performance of the different types of
+memory usually differs.
+
+In a system with some DRAM and some persistent memory, once DRAM
+fills up, reclaim will start and some of the DRAM contents will be
+thrown out to swap even if there is space in persistent memory.
+Allocations will, at some point, start falling over to the slower
+persistent memory.
+
+Instead of a page being discarded during reclaim, it can be moved to
+persistent memory. Allowing page migration during reclaim enables
+these systems to migrate pages from fast tiers to slow tiers when
+the fast tier is under pressure.
+
+SYSFS interface
+======================
+
+Enable/Disable demotion
+------------------------
+
+By default demotion is disabled; it can be enabled/disabled using
+the below sysfs interface:
+
+echo 0/1 or false/true > /sys/kernel/mm/numa/demotion_enabled
+
+Read system demotion targets
+-----------------------------
+cat /sys/devices/system/node/demotion_targets
+
+The kernel shows node_states[N_DEMOTION_TARGETS] when this command
+is run.
+
+Override default demotion targets
+---------------------------------
+echo <nodelist> > /sys/devices/system/node/demotion_targets
+
+If the nodelist is valid and a subset of N_MEMORY, then
+node_states[N_DEMOTION_TARGETS] is set to this new nodelist, and
+the kernel builds the new demotion list based on it.
+
+Read per node demotion targets
+-------------------------------
+cat /sys/devices/system/node/nodeX/demotion_targets
+
+It shows the per-node demotion targets configured by the kernel.
-- 
2.35.1




* Re: [PATCH v3 2/7] mm: demotion: Add new node state N_DEMOTION_TARGETS
  2022-04-22 19:55 ` [PATCH v3 2/7] mm: demotion: Add new node state N_DEMOTION_TARGETS Jagdish Gediya
@ 2022-04-22 20:29   ` Wei Xu
  0 siblings, 0 replies; 35+ messages in thread
From: Wei Xu @ 2022-04-22 20:29 UTC (permalink / raw)
  To: Jagdish Gediya
  Cc: linux-mm, linux-kernel, akpm, baolin.wang, dave.hansen,
	ying.huang, aneesh.kumar, shy828301, gthelen, dan.j.williams

On Fri, Apr 22, 2022 at 12:55 PM Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
>
> Some systems(e.g. PowerVM) have DRAM(fast memory) only NUMA node
> which are N_MEMORY as well as slow memory(persistent memory) only
> NUMA node which are also N_MEMORY. As the current demotion target
> finding algorithm works based on N_MEMORY and best distance, it can
> choose DRAM only NUMA node as demotion target instead of persistent
> memory node on such systems. If DRAM only NUMA node is filled with
> demoted pages then at some point new allocations can start falling
> to persistent memory, so basically cold pages are in fast memory
> (due to demotion) and new pages are in slow memory, this is why
> persistent memory nodes should be utilized for demotion and dram node
> should be avoided for demotion so that they can be used for new
> allocations.
>
> Add new state N_DEMOTION_TARGETS, node_states[N_DEMOTION_TARGETS]
> then can be used to hold the list of nodes which can be used
> as demotion targets, later patches in the series builds demotion
> targets based on nodes available in node_states[N_DEMOTION_TARGETS].
>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> Signed-off-by: Jagdish Gediya <jvgediya@linux.ibm.com>
> ---
>  drivers/base/node.c      | 4 ++++
>  include/linux/nodemask.h | 1 +
>  2 files changed, 5 insertions(+)
>

Acked-by: Wei Xu <weixugc@google.com>

> diff --git a/drivers/base/node.c b/drivers/base/node.c
> index ec8bb24a5a22..6eef22e6413e 100644
> --- a/drivers/base/node.c
> +++ b/drivers/base/node.c
> @@ -1038,6 +1038,9 @@ static struct node_attr node_state_attr[] = {
>         [N_CPU] = _NODE_ATTR(has_cpu, N_CPU),
>         [N_GENERIC_INITIATOR] = _NODE_ATTR(has_generic_initiator,
>                                            N_GENERIC_INITIATOR),
> +       [N_DEMOTION_TARGETS] = _NODE_ATTR(demotion_targets,
> +                                         N_DEMOTION_TARGETS),
> +
>  };
>
>  static struct attribute *node_state_attrs[] = {
> @@ -1050,6 +1053,7 @@ static struct attribute *node_state_attrs[] = {
>         &node_state_attr[N_MEMORY].attr.attr,
>         &node_state_attr[N_CPU].attr.attr,
>         &node_state_attr[N_GENERIC_INITIATOR].attr.attr,
> +       &node_state_attr[N_DEMOTION_TARGETS].attr.attr,
>         NULL
>  };
>
> diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
> index 567c3ddba2c4..17844300fd57 100644
> --- a/include/linux/nodemask.h
> +++ b/include/linux/nodemask.h
> @@ -400,6 +400,7 @@ enum node_states {
>         N_MEMORY,               /* The node has memory(regular, high, movable) */
>         N_CPU,          /* The node has one or more cpus */
>         N_GENERIC_INITIATOR,    /* The node has one or more Generic Initiators */
> +       N_DEMOTION_TARGETS,     /* Nodes that should be considered as demotion targets */
>         NR_NODE_STATES
>  };
>
> --
> 2.35.1
>
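
[Editor's note: the demotion_targets file added above prints its mask with
the kernel's %*pbl node-list format. The standalone Python sketch below
(nodemasks modeled as Python sets, purely for illustration) mimics that
formatting so the sysfs output is easy to interpret.]

```python
def nodemask_to_list(nodes):
    """Format a set of node IDs the way the kernel's %*pbl does, e.g. '0-2,5'."""
    ids = sorted(nodes)
    ranges = []
    start = prev = None
    for n in ids:
        if start is None:
            start = prev = n          # open a new run
        elif n == prev + 1:
            prev = n                  # extend the current run
        else:
            ranges.append((start, prev))
            start = prev = n
    if start is not None:
        ranges.append((start, prev))
    return ",".join(f"{a}-{b}" if a != b else f"{a}" for a, b in ranges)

# A node_states[N_DEMOTION_TARGETS] holding nodes 0, 1, 2 and 5 would
# read back from sysfs as:
print(nodemask_to_list({0, 1, 2, 5}))  # 0-2,5
```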


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 3/7] drivers/base/node: Add support to write node_states[] via sysfs
  2022-04-22 19:55 ` [PATCH v3 3/7] drivers/base/node: Add support to write node_states[] via sysfs Jagdish Gediya
@ 2022-04-22 20:32   ` Wei Xu
  2022-04-24  6:25   ` Aneesh Kumar K.V
  2022-04-24  6:29   ` ying.huang
  2 siblings, 0 replies; 35+ messages in thread
From: Wei Xu @ 2022-04-22 20:32 UTC (permalink / raw)
  To: Jagdish Gediya
  Cc: linux-mm, linux-kernel, akpm, baolin.wang, dave.hansen,
	ying.huang, aneesh.kumar, shy828301, gthelen, dan.j.williams

On Fri, Apr 22, 2022 at 12:55 PM Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
>
> The current /sys/devices/system/node/* interface doesn't support
> writing node_states[]; however, write support is needed in case
> users want to set them manually, e.g. when a user wants to override
> the default N_DEMOTION_TARGETS found by the kernel.
>
> Rename the existing _NODE_ATTR to _NODE_ATTR_RO and introduce a new
> _NODE_ATTR_RW which can be used for the node_states[] entries that
> can be written from sysfs.
>
> It may be necessary to validate written values and take action
> based on them in a state-specific way, so a 'write' callback is
> introduced in 'struct node_attr'.
>
> A new function demotion_targets_write() is added to validate that
> the input nodes for N_DEMOTION_TARGETS are a subset of N_MEMORY,
> and to build a new demotion list based on the new nodes.
>
> Signed-off-by: Jagdish Gediya <jvgediya@linux.ibm.com>

Acked-by: Wei Xu <weixugc@google.com>

> ---
>  drivers/base/node.c | 62 +++++++++++++++++++++++++++++++++++----------
>  1 file changed, 49 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/base/node.c b/drivers/base/node.c
> index 6eef22e6413e..e03eedbc421b 100644
> --- a/drivers/base/node.c
> +++ b/drivers/base/node.c
> @@ -20,6 +20,7 @@
>  #include <linux/pm_runtime.h>
>  #include <linux/swap.h>
>  #include <linux/slab.h>
> +#include <linux/migrate.h>
>
>  static struct bus_type node_subsys = {
>         .name = "node",
> @@ -1013,6 +1014,7 @@ void unregister_one_node(int nid)
>  struct node_attr {
>         struct device_attribute attr;
>         enum node_states state;
> +       int (*write)(nodemask_t nodes);
>  };
>
>  static ssize_t show_node_state(struct device *dev,
> @@ -1024,23 +1026,57 @@ static ssize_t show_node_state(struct device *dev,
>                           nodemask_pr_args(&node_states[na->state]));
>  }
>
> -#define _NODE_ATTR(name, state) \
> -       { __ATTR(name, 0444, show_node_state, NULL), state }
> +static ssize_t store_node_state(struct device *s,
> +                               struct device_attribute *attr,
> +                               const char *buf, size_t count)
> +{
> +       nodemask_t nodes;
> +       struct node_attr *na = container_of(attr, struct node_attr, attr);
> +
> +       if (nodelist_parse(buf, nodes))
> +               return -EINVAL;
> +
> +       if (na->write) {
> +               if (na->write(nodes))
> +                       return -EINVAL;
> +       } else {
> +               node_states[na->state] = nodes;
> +       }
> +
> +       return count;
> +}
> +
> +static int demotion_targets_write(nodemask_t nodes)
> +{
> +       if (nodes_subset(nodes, node_states[N_MEMORY])) {
> +               node_states[N_DEMOTION_TARGETS] = nodes;
> +               set_migration_target_nodes();
> +               return 0;
> +       }
> +
> +       return -EINVAL;
> +}
> +
> +#define _NODE_ATTR_RO(name, state) \
> +       { __ATTR(name, 0444, show_node_state, NULL), state, NULL }
> +
> +#define _NODE_ATTR_RW(name, state, write_fn) \
> +       { __ATTR(name, 0644, show_node_state, store_node_state), state, write_fn }
>
>  static struct node_attr node_state_attr[] = {
> -       [N_POSSIBLE] = _NODE_ATTR(possible, N_POSSIBLE),
> -       [N_ONLINE] = _NODE_ATTR(online, N_ONLINE),
> -       [N_NORMAL_MEMORY] = _NODE_ATTR(has_normal_memory, N_NORMAL_MEMORY),
> +       [N_POSSIBLE] = _NODE_ATTR_RO(possible, N_POSSIBLE),
> +       [N_ONLINE] = _NODE_ATTR_RO(online, N_ONLINE),
> +       [N_NORMAL_MEMORY] = _NODE_ATTR_RO(has_normal_memory, N_NORMAL_MEMORY),
>  #ifdef CONFIG_HIGHMEM
> -       [N_HIGH_MEMORY] = _NODE_ATTR(has_high_memory, N_HIGH_MEMORY),
> +       [N_HIGH_MEMORY] = _NODE_ATTR_RO(has_high_memory, N_HIGH_MEMORY),
>  #endif
> -       [N_MEMORY] = _NODE_ATTR(has_memory, N_MEMORY),
> -       [N_CPU] = _NODE_ATTR(has_cpu, N_CPU),
> -       [N_GENERIC_INITIATOR] = _NODE_ATTR(has_generic_initiator,
> -                                          N_GENERIC_INITIATOR),
> -       [N_DEMOTION_TARGETS] = _NODE_ATTR(demotion_targets,
> -                                         N_DEMOTION_TARGETS),
> -
> +       [N_MEMORY] = _NODE_ATTR_RO(has_memory, N_MEMORY),
> +       [N_CPU] = _NODE_ATTR_RO(has_cpu, N_CPU),
> +       [N_GENERIC_INITIATOR] = _NODE_ATTR_RO(has_generic_initiator,
> +                                             N_GENERIC_INITIATOR),
> +       [N_DEMOTION_TARGETS] = _NODE_ATTR_RW(demotion_targets,
> +                                            N_DEMOTION_TARGETS,
> +                                            demotion_targets_write),
>  };
>
>  static struct attribute *node_state_attrs[] = {
> --
> 2.35.1
>
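
[Editor's note: the store path in the patch above can be modeled outside
the kernel. This Python sketch (nodemasks as sets — a deliberate
simplification, not the kernel implementation) mirrors nodelist_parse()
followed by the subset-of-N_MEMORY check in demotion_targets_write().]

```python
EINVAL = 22

def nodelist_parse(buf):
    """Parse a Linux-style node list such as '0-2,5' into a set of node IDs."""
    nodes = set()
    for part in buf.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            nodes.update(range(int(lo), int(hi) + 1))
        elif part:
            nodes.add(int(part))
    return nodes

def demotion_targets_write(nodes, n_memory):
    """Mirror of the kernel check: targets must be a subset of N_MEMORY."""
    if nodes <= n_memory:
        # The kernel would also rebuild the demotion lists here via
        # set_migration_target_nodes().
        return 0
    return -EINVAL

n_memory = {0, 1, 2, 3}
print(demotion_targets_write(nodelist_parse("2-3"), n_memory))  # 0
print(demotion_targets_write(nodelist_parse("4"), n_memory))    # -22
```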


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 4/7] device-dax/kmem: Set node state as N_DEMOTION_TARGETS
  2022-04-22 19:55 ` [PATCH v3 4/7] device-dax/kmem: Set node state as N_DEMOTION_TARGETS Jagdish Gediya
@ 2022-04-22 20:34   ` Wei Xu
  0 siblings, 0 replies; 35+ messages in thread
From: Wei Xu @ 2022-04-22 20:34 UTC (permalink / raw)
  To: Jagdish Gediya
  Cc: linux-mm, linux-kernel, akpm, baolin.wang, dave.hansen,
	ying.huang, aneesh.kumar, shy828301, gthelen, dan.j.williams

On Fri, Apr 22, 2022 at 12:55 PM Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
>
> Set the dax-device node as N_DEMOTION_TARGETS so that it
> can be used as a demotion target.
>
> In the future, support should be added to distinguish the
> dax-devices which are not preferred as demotion targets,
> e.g. HBM; for such devices, the node shouldn't be set to
> N_DEMOTION_TARGETS.
>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> Signed-off-by: Jagdish Gediya <jvgediya@linux.ibm.com>
> ---

Acked-by: Wei Xu <weixugc@google.com>

>  drivers/dax/kmem.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
> index a37622060fff..f42ab9d04bdf 100644
> --- a/drivers/dax/kmem.c
> +++ b/drivers/dax/kmem.c
> @@ -147,6 +147,8 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
>
>         dev_set_drvdata(dev, data);
>
> +       node_set_state(numa_node, N_DEMOTION_TARGETS);
> +
>         return 0;
>
>  err_request_mem:
> --
> 2.35.1
>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 5/7] mm: demotion: Build demotion list based on N_DEMOTION_TARGETS
  2022-04-22 19:55 ` [PATCH v3 5/7] mm: demotion: Build demotion list based on N_DEMOTION_TARGETS Jagdish Gediya
@ 2022-04-22 20:39   ` Wei Xu
  0 siblings, 0 replies; 35+ messages in thread
From: Wei Xu @ 2022-04-22 20:39 UTC (permalink / raw)
  To: Jagdish Gediya
  Cc: linux-mm, linux-kernel, akpm, baolin.wang, dave.hansen,
	ying.huang, aneesh.kumar, shy828301, gthelen, dan.j.williams

On Fri, Apr 22, 2022 at 12:55 PM Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
>
> Only nodes which have the state N_DEMOTION_TARGETS should be
> used as demotion targets. Make nodes which are not demotion
> targets act as source nodes while building the demotion target
> list, so that demotion targets are only chosen from N_DEMOTION_TARGETS.
>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> Signed-off-by: Jagdish Gediya <jvgediya@linux.ibm.com>
> ---

Acked-by: Wei Xu <weixugc@google.com>

>  mm/migrate.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 8bbe1e478122..5b92a09fbe4a 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2366,10 +2366,10 @@ static void __set_migration_target_nodes(void)
>         disable_all_migrate_targets();
>
>         /*
> -        * Allocations go close to CPUs, first.  Assume that
> -        * the migration path starts at the nodes with CPUs.
> +        * Some systems can have DRAM(fast memory) only NUMA nodes, demotion targets
> +        * need to be found for them as well.
>          */
> -       next_pass = node_states[N_CPU];
> +       nodes_andnot(next_pass, node_states[N_ONLINE], node_states[N_DEMOTION_TARGETS]);
>  again:
>         this_pass = next_pass;
>         next_pass = NODE_MASK_NONE;
> --
> 2.35.1
>
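
[Editor's note: to see why the one-line change above matters, the
pass-based target selection can be simulated. The sketch below is a
much-simplified model of __set_migration_target_nodes() — one target per
source, no tie sharing, and an invented 4-node topology where nodes 0-1
are DRAM and nodes 2-3 are PMEM. With the patched starting mask, the
DRAM-only node 1 also gets a demotion target.]

```python
def build_demotion_order(online, targets, dist):
    """Very simplified model of __set_migration_target_nodes(): each pass's
    sources pick their nearest not-yet-used demotion target; the targets
    chosen in one pass become the sources of the next pass."""
    order = {n: None for n in sorted(online)}
    used = set()
    next_pass = online - targets  # the patched starting mask
    while next_pass:
        this_pass, next_pass = next_pass, set()
        avail = targets - used - this_pass
        for src in sorted(this_pass):
            if avail:
                order[src] = min(avail, key=lambda t: dist[src][t])
        chosen = {order[s] for s in this_pass if order[s] is not None}
        used |= chosen
        next_pass = chosen
    return order

# Node 0: DRAM + CPU, node 1: DRAM only, nodes 2-3: PMEM (distances invented).
dist = {0: {2: 20, 3: 30}, 1: {2: 30, 3: 20}, 2: {3: 20}, 3: {2: 20}}
print(build_demotion_order({0, 1, 2, 3}, {2, 3}, dist))
# {0: 2, 1: 3, 2: None, 3: None}
```

With the old starting mask (N_CPU only), node 1 would never appear in a
pass and would get no demotion target.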


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 6/7] mm: demotion: expose per-node demotion targets via sysfs
  2022-04-22 19:55 ` [PATCH v3 6/7] mm: demotion: expose per-node demotion targets via sysfs Jagdish Gediya
@ 2022-04-22 20:47   ` Wei Xu
  2022-04-23  7:30   ` kernel test robot
  2022-04-23  8:38   ` kernel test robot
  2 siblings, 0 replies; 35+ messages in thread
From: Wei Xu @ 2022-04-22 20:47 UTC (permalink / raw)
  To: Jagdish Gediya
  Cc: linux-mm, linux-kernel, akpm, baolin.wang, dave.hansen,
	ying.huang, aneesh.kumar, shy828301, gthelen, dan.j.williams

On Fri, Apr 22, 2022 at 12:55 PM Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
>
> The kernel prepares a per-node demotion target list based on
> node_states[N_DEMOTION_TARGETS]. If enabled through sysfs,
> demotion kicks in during reclaim, and pages get migrated
> according to the demotion target list prepared by the kernel.
>
> It is helpful to know the demotion target list prepared by the
> kernel in order to understand the demotion behaviour, so add an
> interface, /sys/devices/system/node/nodeX/demotion_targets,
> to view per-node demotion targets via sysfs.
>
> Signed-off-by: Jagdish Gediya <jvgediya@linux.ibm.com>
> ---
>  drivers/base/node.c     | 10 ++++++++++
>  include/linux/migrate.h |  1 +
>  mm/migrate.c            | 17 +++++++++++++++++
>  3 files changed, 28 insertions(+)
>
> diff --git a/drivers/base/node.c b/drivers/base/node.c
> index e03eedbc421b..92326219aac2 100644
> --- a/drivers/base/node.c
> +++ b/drivers/base/node.c
> @@ -561,11 +561,21 @@ static ssize_t node_read_distance(struct device *dev,
>  }
>  static DEVICE_ATTR(distance, 0444, node_read_distance, NULL);
>
> +static ssize_t demotion_targets_show(struct device *dev,
> +                                    struct device_attribute *attr, char *buf)
> +{
> +       nodemask_t demotion_targets = node_get_demotion_targets(dev->id);
> +
> +       return sysfs_emit(buf, "%*pbl\n", nodemask_pr_args(&demotion_targets));
> +}
> +static DEVICE_ATTR_RO(demotion_targets);
> +
>  static struct attribute *node_dev_attrs[] = {
>         &dev_attr_meminfo.attr,
>         &dev_attr_numastat.attr,
>         &dev_attr_distance.attr,
>         &dev_attr_vmstat.attr,
> +       &dev_attr_demotion_targets.attr,
>         NULL
>  };
>
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 90e75d5a54d6..072019441a24 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -173,6 +173,7 @@ int migrate_vma_setup(struct migrate_vma *args);
>  void migrate_vma_pages(struct migrate_vma *migrate);
>  void migrate_vma_finalize(struct migrate_vma *migrate);
>  int next_demotion_node(int node);
> +nodemask_t node_get_demotion_targets(int node);

I think a stub implementation for !CONFIG_MIGRATION is also needed.

>  #else /* CONFIG_MIGRATION disabled: */
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 5b92a09fbe4a..da864831bc0c 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2187,6 +2187,23 @@ struct demotion_nodes {
>
>  static struct demotion_nodes *node_demotion __read_mostly;
>
> +nodemask_t node_get_demotion_targets(int node)
> +{
> +       nodemask_t demotion_targets = NODE_MASK_NONE;
> +       unsigned short target_nr;
> +
> +       if (!node_demotion)
> +               return NODE_MASK_NONE;
> +
> +       rcu_read_lock();
> +       target_nr = READ_ONCE(node_demotion[node].nr);
> +       for (int i = 0; i < target_nr; i++)
> +               node_set(READ_ONCE(node_demotion[node].nodes[i]), demotion_targets);
> +       rcu_read_unlock();
> +
> +       return demotion_targets;
> +}
> +
>  /**
>   * next_demotion_node() - Get the next node in the demotion path
>   * @node: The starting node to lookup the next node
> --
> 2.35.1
>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 6/7] mm: demotion: expose per-node demotion targets via sysfs
  2022-04-22 19:55 ` [PATCH v3 6/7] mm: demotion: expose per-node demotion targets via sysfs Jagdish Gediya
  2022-04-22 20:47   ` Wei Xu
@ 2022-04-23  7:30   ` kernel test robot
  2022-04-23  8:38   ` kernel test robot
  2 siblings, 0 replies; 35+ messages in thread
From: kernel test robot @ 2022-04-23  7:30 UTC (permalink / raw)
  To: Jagdish Gediya, linux-mm, linux-kernel, akpm
  Cc: llvm, kbuild-all, baolin.wang, dave.hansen, ying.huang,
	aneesh.kumar, shy828301, weixugc, gthelen, dan.j.williams,
	Jagdish Gediya

Hi Jagdish,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on driver-core/driver-core-testing]
[also build test ERROR on linus/master linux/master v5.18-rc3]
[cannot apply to hnaz-mm/master next-20220422]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Jagdish-Gediya/mm-demotion-Introduce-new-node-state-N_DEMOTION_TARGETS/20220423-035714
base:   https://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git 4e224719f5d9b92abf1e0edfb2a83053208f3026
config: x86_64-randconfig-a016 (https://download.01.org/0day-ci/archive/20220423/202204231557.Jmw6QI8T-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project 5bd87350a5ae429baf8f373cb226a57b62f87280)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/0717e30f61ac83bd6ec65395bf46fdb5131cb83f
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Jagdish-Gediya/mm-demotion-Introduce-new-node-state-N_DEMOTION_TARGETS/20220423-035714
        git checkout 0717e30f61ac83bd6ec65395bf46fdb5131cb83f
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> drivers/base/node.c:567:32: error: call to undeclared function 'node_get_demotion_targets'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
           nodemask_t demotion_targets = node_get_demotion_targets(dev->id);
                                         ^
>> drivers/base/node.c:567:13: error: initializing 'nodemask_t' with an expression of incompatible type 'int'
           nodemask_t demotion_targets = node_get_demotion_targets(dev->id);
                      ^                  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   2 errors generated.


vim +/node_get_demotion_targets +567 drivers/base/node.c

   563	
   564	static ssize_t demotion_targets_show(struct device *dev,
   565					     struct device_attribute *attr, char *buf)
   566	{
 > 567		nodemask_t demotion_targets = node_get_demotion_targets(dev->id);
   568	
   569		return sysfs_emit(buf, "%*pbl\n", nodemask_pr_args(&demotion_targets));
   570	}
   571	static DEVICE_ATTR_RO(demotion_targets);
   572	

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 6/7] mm: demotion: expose per-node demotion targets via sysfs
  2022-04-22 19:55 ` [PATCH v3 6/7] mm: demotion: expose per-node demotion targets via sysfs Jagdish Gediya
  2022-04-22 20:47   ` Wei Xu
  2022-04-23  7:30   ` kernel test robot
@ 2022-04-23  8:38   ` kernel test robot
  2 siblings, 0 replies; 35+ messages in thread
From: kernel test robot @ 2022-04-23  8:38 UTC (permalink / raw)
  To: Jagdish Gediya, linux-mm, linux-kernel, akpm
  Cc: kbuild-all, baolin.wang, dave.hansen, ying.huang, aneesh.kumar,
	shy828301, weixugc, gthelen, dan.j.williams, Jagdish Gediya

Hi Jagdish,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on driver-core/driver-core-testing]
[also build test ERROR on linus/master linux/master v5.18-rc3]
[cannot apply to hnaz-mm/master next-20220422]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Jagdish-Gediya/mm-demotion-Introduce-new-node-state-N_DEMOTION_TARGETS/20220423-035714
base:   https://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git 4e224719f5d9b92abf1e0edfb2a83053208f3026
config: sparc-randconfig-r034-20220422 (https://download.01.org/0day-ci/archive/20220423/202204231602.2DeA4BIa-lkp@intel.com/config)
compiler: sparc64-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/0717e30f61ac83bd6ec65395bf46fdb5131cb83f
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Jagdish-Gediya/mm-demotion-Introduce-new-node-state-N_DEMOTION_TARGETS/20220423-035714
        git checkout 0717e30f61ac83bd6ec65395bf46fdb5131cb83f
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross W=1 O=build_dir ARCH=sparc SHELL=/bin/bash drivers/base/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   drivers/base/node.c: In function 'demotion_targets_show':
>> drivers/base/node.c:567:39: error: implicit declaration of function 'node_get_demotion_targets' [-Werror=implicit-function-declaration]
     567 |         nodemask_t demotion_targets = node_get_demotion_targets(dev->id);
         |                                       ^~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/base/node.c:567:39: error: invalid initializer
   cc1: some warnings being treated as errors


vim +/node_get_demotion_targets +567 drivers/base/node.c

   563	
   564	static ssize_t demotion_targets_show(struct device *dev,
   565					     struct device_attribute *attr, char *buf)
   566	{
 > 567		nodemask_t demotion_targets = node_get_demotion_targets(dev->id);
   568	
   569		return sysfs_emit(buf, "%*pbl\n", nodemask_pr_args(&demotion_targets));
   570	}
   571	static DEVICE_ATTR_RO(demotion_targets);
   572	

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
  2022-04-22 19:55 [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS Jagdish Gediya
                   ` (6 preceding siblings ...)
  2022-04-22 19:55 ` [PATCH v3 7/7] docs: numa: Add documentation for demotion Jagdish Gediya
@ 2022-04-24  3:19 ` ying.huang
  2022-04-25 11:15   ` Jagdish Gediya
  7 siblings, 1 reply; 35+ messages in thread
From: ying.huang @ 2022-04-24  3:19 UTC (permalink / raw)
  To: Jagdish Gediya, linux-mm, linux-kernel, akpm
  Cc: baolin.wang, dave.hansen, aneesh.kumar, shy828301, weixugc,
	gthelen, dan.j.williams

On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> Some systems (e.g. PowerVM) can have both DRAM (fast memory) only
> NUMA nodes which are N_MEMORY and slow memory (persistent memory)
> only NUMA nodes which are also N_MEMORY. As the current demotion
> target finding algorithm works based on N_MEMORY and best distance,
> it will choose a DRAM-only NUMA node as the demotion target instead
> of a persistent memory node on such systems. If the DRAM-only NUMA
> node is filled with demoted pages, then at some point new allocations
> can start falling through to persistent memory; cold pages then sit
> in fast memory (due to demotion) while new pages land in slow memory.
> This is why persistent memory nodes should be utilized for demotion
> and DRAM nodes avoided for demotion, so that they can be used for
> new allocations.
> 
> The current implementation can work fine on systems where memory-only
> NUMA nodes are possible only for persistent/slow memory, but it is
> not suitable for the kinds of systems mentioned above.

Can you share the NUMA topology information of your machine?  And the
demotion order before and after your change?

Whether it's good to use the PMEM nodes as the demotion targets of the
DRAM-only node too?

Best Regards,
Huang, Ying

> This patch series introduces the new node state N_DEMOTION_TARGETS,
> which is used to distinguish the nodes which can be used as demotion
> targets; node_states[N_DEMOTION_TARGETS] holds the list of nodes
> which can be used as demotion targets.
> 
> The node state N_DEMOTION_TARGETS is also set from the dax kmem
> driver. Certain types of memory which register through dax kmem
> (e.g. HBM) may not be the right choices for demotion, so in the
> future they should be distinguished based on certain attributes, and
> the dax kmem driver should avoid setting them as N_DEMOTION_TARGETS.
> However, the current implementation doesn't distinguish any such
> memory; it considers all N_MEMORY as demotion targets, so this patch
> series doesn't modify the current behavior.
> 
> The below command can be used to view the available demotion targets
> in the system:
> 
> $ cat /sys/devices/system/node/demotion_targets
> 
> This patch series sets N_DEMOTION_TARGETS from the dax-device kmem
> driver. It may be possible that some memory node desired as a
> demotion target is not detected in the system from the dax-device
> kmem probe path. It is also possible that some of the dax-devices
> are not preferred as demotion targets, e.g. HBM; for such devices,
> the node shouldn't be set to N_DEMOTION_TARGETS. So support is also
> added to set the demotion target list from user space, so that the
> default behavior can be overridden to avoid or add specific nodes to
> the demotion targets manually.
> 
> Override the demotion targets in the system (which sets
> node_states[N_DEMOTION_TARGETS] in the kernel):
> $ echo <node list> > /sys/devices/system/node/demotion_targets
> 
> As node attributes under /sys/devices/system/node/ are read-only
> by default, support is added to write node_states[] via sysfs so
> that node_states[N_DEMOTION_TARGETS] can be modified from user
> space.
> 
> It is also helpful to know the per-node demotion target path prepared
> by the kernel to understand the demotion behaviour during reclaim, so
> this patch series also adds a /sys/devices/system/node/nodeX/demotion_targets
> interface to view per-node demotion targets via sysfs.
> 
> The current code which sets migration targets is modified in
> this patch series to avoid some of the limitations on demotion
> target sharing and to use only N_DEMOTION_TARGETS nodes while
> finding demotion targets.
> 
> Changelog
> ----------
> 
> v2:
> In v1, only the 1st patch of this patch series was sent, which was
> implemented to avoid some of the limitations on demotion
> target sharing; however, for certain NUMA topologies, the demotion
> targets found by that patch were not the most optimal, so the 1st
> patch in this series is modified according to suggestions from Huang
> and Baolin. Different examples of demotion list comparison
> between the existing implementation and the changed implementation
> can be found in the commit message of the 1st patch.
> 
> v3:
> - Modify patch 1 subject to make it more specific
> - Remove /sys/kernel/mm/numa/demotion_targets interface, use
>   /sys/devices/system/node/demotion_targets instead and make
>   it writable to override node_states[N_DEMOTION_TARGETS].
> - Add support to view per node demotion targets via sysfs
> 
> Jagdish Gediya (7):
>   mm: demotion: Fix demotion targets sharing among sources
>   mm: demotion: Add new node state N_DEMOTION_TARGETS
>   drivers/base/node: Add support to write node_states[] via sysfs
>   device-dax/kmem: Set node state as N_DEMOTION_TARGETS
>   mm: demotion: Build demotion list based on N_DEMOTION_TARGETS
>   mm: demotion: expose per-node demotion targets via sysfs
>   docs: numa: Add documentation for demotion
> 
>  Documentation/admin-guide/mm/index.rst        |  1 +
>  .../admin-guide/mm/numa_demotion.rst          | 57 +++++++++++++++
>  drivers/base/node.c                           | 70 ++++++++++++++++---
>  drivers/dax/kmem.c                            |  2 +
>  include/linux/migrate.h                       |  1 +
>  include/linux/nodemask.h                      |  1 +
>  mm/migrate.c                                  | 54 ++++++++++----
>  7 files changed, 162 insertions(+), 24 deletions(-)
>  create mode 100644 Documentation/admin-guide/mm/numa_demotion.rst
> 




^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/7] mm: demotion: Fix demotion targets sharing among sources
  2022-04-22 19:55 ` [PATCH v3 1/7] mm: demotion: Fix demotion targets sharing among sources Jagdish Gediya
@ 2022-04-24  3:25   ` ying.huang
  2022-04-25  9:32     ` Jagdish Gediya
  0 siblings, 1 reply; 35+ messages in thread
From: ying.huang @ 2022-04-24  3:25 UTC (permalink / raw)
  To: Jagdish Gediya, linux-mm, linux-kernel, akpm
  Cc: baolin.wang, dave.hansen, aneesh.kumar, shy828301, weixugc,
	gthelen, dan.j.williams

> Subject: [PATCH v3 1/7] mm: demotion: Fix demotion targets sharing among sources

IMHO, this patch doesn't fix any bug in the original code.  Instead,
it just enhances the original code.  For example, the subject could be,

  mm: demotion: support to share demotion targets among sources

Best Regards,
Huang, Ying

[snip]




^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 3/7] drivers/base/node: Add support to write node_states[] via sysfs
  2022-04-22 19:55 ` [PATCH v3 3/7] drivers/base/node: Add support to write node_states[] via sysfs Jagdish Gediya
  2022-04-22 20:32   ` Wei Xu
@ 2022-04-24  6:25   ` Aneesh Kumar K.V
  2022-04-25  9:42     ` Jagdish Gediya
  2022-04-24  6:29   ` ying.huang
  2 siblings, 1 reply; 35+ messages in thread
From: Aneesh Kumar K.V @ 2022-04-24  6:25 UTC (permalink / raw)
  To: Jagdish Gediya, linux-mm, linux-kernel, akpm
  Cc: baolin.wang, dave.hansen, ying.huang, shy828301, weixugc,
	gthelen, dan.j.williams, Jagdish Gediya

Jagdish Gediya <jvgediya@linux.ibm.com> writes:

> The current /sys/devices/system/node/* interface doesn't support
> writing node_states[]; however, write support is needed in case
> users want to set them manually, e.g. when a user wants to override
> the default N_DEMOTION_TARGETS found by the kernel.
>
> Rename the existing _NODE_ATTR to _NODE_ATTR_RO and introduce a new
> _NODE_ATTR_RW which can be used for the node_states[] entries that
> can be written from sysfs.
>
> It may be necessary to validate written values and take action
> based on them in a state-specific way, so a 'write' callback is
> introduced in 'struct node_attr'.
>
> A new function demotion_targets_write() is added to validate that
> the input nodes for N_DEMOTION_TARGETS are a subset of N_MEMORY,
> and to build a new demotion list based on the new nodes.
>
> Signed-off-by: Jagdish Gediya <jvgediya@linux.ibm.com>
> ---
>  drivers/base/node.c | 62 +++++++++++++++++++++++++++++++++++----------
>  1 file changed, 49 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/base/node.c b/drivers/base/node.c
> index 6eef22e6413e..e03eedbc421b 100644
> --- a/drivers/base/node.c
> +++ b/drivers/base/node.c
> @@ -20,6 +20,7 @@
>  #include <linux/pm_runtime.h>
>  #include <linux/swap.h>
>  #include <linux/slab.h>
> +#include <linux/migrate.h>
>  
>  static struct bus_type node_subsys = {
>  	.name = "node",
> @@ -1013,6 +1014,7 @@ void unregister_one_node(int nid)
>  struct node_attr {
>  	struct device_attribute attr;
>  	enum node_states state;
> +	int (*write)(nodemask_t nodes);
>  };
>  
>  static ssize_t show_node_state(struct device *dev,
> @@ -1024,23 +1026,57 @@ static ssize_t show_node_state(struct device *dev,
>  			  nodemask_pr_args(&node_states[na->state]));
>  }
>  
> -#define _NODE_ATTR(name, state) \
> -	{ __ATTR(name, 0444, show_node_state, NULL), state }
> +static ssize_t store_node_state(struct device *s,
> +				struct device_attribute *attr,
> +				const char *buf, size_t count)
> +{
> +	nodemask_t nodes;
> +	struct node_attr *na = container_of(attr, struct node_attr, attr);
> +
> +	if (nodelist_parse(buf, nodes))
> +		return -EINVAL;
> +
> +	if (na->write) {
> +		if (na->write(nodes))
> +			return -EINVAL;
> +	} else {
> +		node_states[na->state] = nodes;
> +	}
> +
> +	return count;
> +}
> +
> +static int demotion_targets_write(nodemask_t nodes)
> +{
> +	if (nodes_subset(nodes, node_states[N_MEMORY])) {
> +		node_states[N_DEMOTION_TARGETS] = nodes;
> +		set_migration_target_nodes();
> +		return 0;
> +	}

Does this require locking to avoid updating
node_states[N_DEMOTION_TARGETS] while a parallel
set_migration_target_nodes() is running?

> +
> +	return -EINVAL;
> +}
> +
> +#define _NODE_ATTR_RO(name, state) \
> +	{ __ATTR(name, 0444, show_node_state, NULL), state, NULL }
> +
> +#define _NODE_ATTR_RW(name, state, write_fn) \
> +	{ __ATTR(name, 0644, show_node_state, store_node_state), state, write_fn }
>  
>  static struct node_attr node_state_attr[] = {
> -	[N_POSSIBLE] = _NODE_ATTR(possible, N_POSSIBLE),
> -	[N_ONLINE] = _NODE_ATTR(online, N_ONLINE),
> -	[N_NORMAL_MEMORY] = _NODE_ATTR(has_normal_memory, N_NORMAL_MEMORY),
> +	[N_POSSIBLE] = _NODE_ATTR_RO(possible, N_POSSIBLE),
> +	[N_ONLINE] = _NODE_ATTR_RO(online, N_ONLINE),
> +	[N_NORMAL_MEMORY] = _NODE_ATTR_RO(has_normal_memory, N_NORMAL_MEMORY),
>  #ifdef CONFIG_HIGHMEM
> -	[N_HIGH_MEMORY] = _NODE_ATTR(has_high_memory, N_HIGH_MEMORY),
> +	[N_HIGH_MEMORY] = _NODE_ATTR_RO(has_high_memory, N_HIGH_MEMORY),
>  #endif
> -	[N_MEMORY] = _NODE_ATTR(has_memory, N_MEMORY),
> -	[N_CPU] = _NODE_ATTR(has_cpu, N_CPU),
> -	[N_GENERIC_INITIATOR] = _NODE_ATTR(has_generic_initiator,
> -					   N_GENERIC_INITIATOR),
> -	[N_DEMOTION_TARGETS] = _NODE_ATTR(demotion_targets,
> -					  N_DEMOTION_TARGETS),
> -
> +	[N_MEMORY] = _NODE_ATTR_RO(has_memory, N_MEMORY),
> +	[N_CPU] = _NODE_ATTR_RO(has_cpu, N_CPU),
> +	[N_GENERIC_INITIATOR] = _NODE_ATTR_RO(has_generic_initiator,
> +					      N_GENERIC_INITIATOR),
> +	[N_DEMOTION_TARGETS] = _NODE_ATTR_RW(demotion_targets,
> +					     N_DEMOTION_TARGETS,
> +					     demotion_targets_write),
>  };
>  
>  static struct attribute *node_state_attrs[] = {
> -- 
> 2.35.1


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 3/7] drivers/base/node: Add support to write node_states[] via sysfs
  2022-04-22 19:55 ` [PATCH v3 3/7] drivers/base/node: Add support to write node_states[] via sysfs Jagdish Gediya
  2022-04-22 20:32   ` Wei Xu
  2022-04-24  6:25   ` Aneesh Kumar K.V
@ 2022-04-24  6:29   ` ying.huang
  2 siblings, 0 replies; 35+ messages in thread
From: ying.huang @ 2022-04-24  6:29 UTC (permalink / raw)
  To: Jagdish Gediya, linux-mm, linux-kernel, akpm
  Cc: baolin.wang, dave.hansen, aneesh.kumar, shy828301, weixugc,
	gthelen, dan.j.williams

On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> Current /sys/devices/system/node/* interface doesn't support
> to write node_states[], however write support is needed in case
> users want to set them manually e.g. when user want to override
> default N_DEMOTION_TARGETS found by the kernel.
> 
> Rename existing _NODE_ATTR to _NODE_ATTR_RO and introduce new
> _NODE_ATTR_RW which can be used for node_states[] which can
> be written from sysfs.
> 
> It may be necessary to validate written values and take action
> based on them in a state specific way so a callback 'write' is
> introduced in 'struct node_attr'.
> 
> A new function demotion_targets_write() is added to validate
> the input nodes for N_DEMOTION_TARGETS which should be subset
> of N_MEMORY and to build new demotion list based on new nodes.
> 
> Signed-off-by: Jagdish Gediya <jvgediya@linux.ibm.com>

How about discussing the ABI design firstly?  I started a discussion in
the following thread.  I think we can start with the requirements.

https://lore.kernel.org/lkml/200e95cf36c1642512d99431014db8943fed715d.camel@intel.com/

Best Regards,
Huang, Ying

> ---
>  drivers/base/node.c | 62 +++++++++++++++++++++++++++++++++++----------
>  1 file changed, 49 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/base/node.c b/drivers/base/node.c
> index 6eef22e6413e..e03eedbc421b 100644
> --- a/drivers/base/node.c
> +++ b/drivers/base/node.c
> @@ -20,6 +20,7 @@
>  #include <linux/pm_runtime.h>
>  #include <linux/swap.h>
>  #include <linux/slab.h>
> +#include <linux/migrate.h>
>  
>  static struct bus_type node_subsys = {
>  	.name = "node",
> @@ -1013,6 +1014,7 @@ void unregister_one_node(int nid)
>  struct node_attr {
>  	struct device_attribute attr;
>  	enum node_states state;
> +	int (*write)(nodemask_t nodes);
>  };
>  
>  static ssize_t show_node_state(struct device *dev,
> @@ -1024,23 +1026,57 @@ static ssize_t show_node_state(struct device *dev,
>  			  nodemask_pr_args(&node_states[na->state]));
>  }
>  
> -#define _NODE_ATTR(name, state) \
> -	{ __ATTR(name, 0444, show_node_state, NULL), state }
> +static ssize_t store_node_state(struct device *s,
> +				struct device_attribute *attr,
> +				const char *buf, size_t count)
> +{
> +	nodemask_t nodes;
> +	struct node_attr *na = container_of(attr, struct node_attr, attr);
> +
> +	if (nodelist_parse(buf, nodes))
> +		return -EINVAL;
> +
> +	if (na->write) {
> +		if (na->write(nodes))
> +			return -EINVAL;
> +	} else {
> +		node_states[na->state] = nodes;
> +	}
> +
> +	return count;
> +}
> +
> +static int demotion_targets_write(nodemask_t nodes)
> +{
> +	if (nodes_subset(nodes, node_states[N_MEMORY])) {
> +		node_states[N_DEMOTION_TARGETS] = nodes;
> +		set_migration_target_nodes();
> +		return 0;
> +	}
> +
> +	return -EINVAL;
> +}
> +
> +#define _NODE_ATTR_RO(name, state) \
> +	{ __ATTR(name, 0444, show_node_state, NULL), state, NULL }
> +
> +#define _NODE_ATTR_RW(name, state, write_fn) \
> +	{ __ATTR(name, 0644, show_node_state, store_node_state), state, write_fn }
>  
>  static struct node_attr node_state_attr[] = {
> -	[N_POSSIBLE] = _NODE_ATTR(possible, N_POSSIBLE),
> -	[N_ONLINE] = _NODE_ATTR(online, N_ONLINE),
> -	[N_NORMAL_MEMORY] = _NODE_ATTR(has_normal_memory, N_NORMAL_MEMORY),
> +	[N_POSSIBLE] = _NODE_ATTR_RO(possible, N_POSSIBLE),
> +	[N_ONLINE] = _NODE_ATTR_RO(online, N_ONLINE),
> +	[N_NORMAL_MEMORY] = _NODE_ATTR_RO(has_normal_memory, N_NORMAL_MEMORY),
>  #ifdef CONFIG_HIGHMEM
> -	[N_HIGH_MEMORY] = _NODE_ATTR(has_high_memory, N_HIGH_MEMORY),
> +	[N_HIGH_MEMORY] = _NODE_ATTR_RO(has_high_memory, N_HIGH_MEMORY),
>  #endif
> -	[N_MEMORY] = _NODE_ATTR(has_memory, N_MEMORY),
> -	[N_CPU] = _NODE_ATTR(has_cpu, N_CPU),
> -	[N_GENERIC_INITIATOR] = _NODE_ATTR(has_generic_initiator,
> -					   N_GENERIC_INITIATOR),
> -	[N_DEMOTION_TARGETS] = _NODE_ATTR(demotion_targets,
> -					  N_DEMOTION_TARGETS),
> -
> +	[N_MEMORY] = _NODE_ATTR_RO(has_memory, N_MEMORY),
> +	[N_CPU] = _NODE_ATTR_RO(has_cpu, N_CPU),
> +	[N_GENERIC_INITIATOR] = _NODE_ATTR_RO(has_generic_initiator,
> +					      N_GENERIC_INITIATOR),
> +	[N_DEMOTION_TARGETS] = _NODE_ATTR_RW(demotion_targets,
> +					     N_DEMOTION_TARGETS,
> +					     demotion_targets_write),
>  };
>  
>  static struct attribute *node_state_attrs[] = {





* Re: [PATCH v3 1/7] mm: demotion: Fix demotion targets sharing among sources
  2022-04-24  3:25   ` ying.huang
@ 2022-04-25  9:32     ` Jagdish Gediya
  2022-04-26  7:26       ` ying.huang
  0 siblings, 1 reply; 35+ messages in thread
From: Jagdish Gediya @ 2022-04-25  9:32 UTC (permalink / raw)
  To: ying.huang
  Cc: linux-mm, linux-kernel, akpm, baolin.wang, dave.hansen,
	aneesh.kumar, shy828301, weixugc, gthelen, dan.j.williams

On Sun, Apr 24, 2022 at 11:25:50AM +0800, ying.huang@intel.com wrote:
> > Subject: [PATCH v3 1/7] mm: demotion: Fix demotion targets sharing among sources
> 
> IMHO, this patch doesn't fix some bugs in the original code.  Instead,
> it just enhances the original code.  For example, the subject could be,

I think it is fixing a bug. There is a comment in the code which
mentions that 'used_targets will become unavailable in future passes.
This limits some opportunities for multiple source nodes to share a
destination'. As per this comment, the intent was to share a node as a
demotion target among sources with some limits, but the current code
removes not just some but all such opportunities: no common node can be
a demotion target for multiple source nodes.
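An illustrative userspace model of the difference (this is not the
actual mm/migrate.c code; the node mask, distance table, and function
name are made up for this sketch, using the 3-node topology discussed
later in this thread):

```c
/* Illustrative model of demotion-target sharing (hypothetical names,
 * not the real mm/migrate.c implementation): each source node picks
 * the nearest node still present in the @allowed mask. */
#include <assert.h>

#define NNODES 3
#define NO_TARGET -1

/* distance matrix: nodes 0 and 1 are DRAM, node 2 is slow memory */
static const int dist[NNODES][NNODES] = {
	{ 10, 40, 80 },
	{ 40, 10, 80 },
	{ 80, 80, 10 },
};

/* Pick a demotion target for @src among nodes set in @allowed.
 * Under the old scheme the caller clears a chosen target from
 * @allowed, so a later source can no longer share it. */
static int pick_target(int src, unsigned *allowed)
{
	int best = NO_TARGET, best_dist = 0, n;

	for (n = 0; n < NNODES; n++) {
		if (n == src || !(*allowed & (1u << n)))
			continue;
		if (best == NO_TARGET || dist[src][n] < best_dist) {
			best = n;
			best_dist = dist[src][n];
		}
	}
	return best;
}
```

With targets restricted to the slow-memory node (mask 0x4): if the
caller consumes the target after node 0 picks it (the old behavior),
node 1 is left with no target; if the target stays available, both
sources share node 2.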

>   mm: demotion: support to share demotion targets among sources
> 
> Best Regards,
> Huang, Ying
> 
> [snip]
> 
> 
> 



* Re: [PATCH v3 3/7] drivers/base/node: Add support to write node_states[] via sysfs
  2022-04-24  6:25   ` Aneesh Kumar K.V
@ 2022-04-25  9:42     ` Jagdish Gediya
  0 siblings, 0 replies; 35+ messages in thread
From: Jagdish Gediya @ 2022-04-25  9:42 UTC (permalink / raw)
  To: Aneesh Kumar K.V
  Cc: linux-mm, linux-kernel, akpm, baolin.wang, dave.hansen,
	ying.huang, shy828301, weixugc, gthelen, dan.j.williams

On Sun, Apr 24, 2022 at 11:55:45AM +0530, Aneesh Kumar K.V wrote:
> Jagdish Gediya <jvgediya@linux.ibm.com> writes:
> 
> > Current /sys/devices/system/node/* interface doesn't support
> > to write node_states[], however write support is needed in case
> > users want to set them manually e.g. when user want to override
> > default N_DEMOTION_TARGETS found by the kernel.
> >
> > Rename existing _NODE_ATTR to _NODE_ATTR_RO and introduce new
> > _NODE_ATTR_RW which can be used for node_states[] which can
> > be written from sysfs.
> >
> > It may be necessary to validate written values and take action
> > based on them in a state specific way so a callback 'write' is
> > introduced in 'struct node_attr'.
> >
> > A new function demotion_targets_write() is added to validate
> > the input nodes for N_DEMOTION_TARGETS which should be subset
> > of N_MEMORY and to build new demotion list based on new nodes.
> >
> > Signed-off-by: Jagdish Gediya <jvgediya@linux.ibm.com>
> > ---
> >  drivers/base/node.c | 62 +++++++++++++++++++++++++++++++++++----------
> >  1 file changed, 49 insertions(+), 13 deletions(-)
> >
> > diff --git a/drivers/base/node.c b/drivers/base/node.c
> > index 6eef22e6413e..e03eedbc421b 100644
> > --- a/drivers/base/node.c
> > +++ b/drivers/base/node.c
> > @@ -20,6 +20,7 @@
> >  #include <linux/pm_runtime.h>
> >  #include <linux/swap.h>
> >  #include <linux/slab.h>
> > +#include <linux/migrate.h>
> >  
> >  static struct bus_type node_subsys = {
> >  	.name = "node",
> > @@ -1013,6 +1014,7 @@ void unregister_one_node(int nid)
> >  struct node_attr {
> >  	struct device_attribute attr;
> >  	enum node_states state;
> > +	int (*write)(nodemask_t nodes);
> >  };
> >  
> >  static ssize_t show_node_state(struct device *dev,
> > @@ -1024,23 +1026,57 @@ static ssize_t show_node_state(struct device *dev,
> >  			  nodemask_pr_args(&node_states[na->state]));
> >  }
> >  
> > -#define _NODE_ATTR(name, state) \
> > -	{ __ATTR(name, 0444, show_node_state, NULL), state }
> > +static ssize_t store_node_state(struct device *s,
> > +				struct device_attribute *attr,
> > +				const char *buf, size_t count)
> > +{
> > +	nodemask_t nodes;
> > +	struct node_attr *na = container_of(attr, struct node_attr, attr);
> > +
> > +	if (nodelist_parse(buf, nodes))
> > +		return -EINVAL;
> > +
> > +	if (na->write) {
> > +		if (na->write(nodes))
> > +			return -EINVAL;
> > +	} else {
> > +		node_states[na->state] = nodes;
> > +	}
> > +
> > +	return count;
> > +}
> > +
> > +static int demotion_targets_write(nodemask_t nodes)
> > +{
> > +	if (nodes_subset(nodes, node_states[N_MEMORY])) {
> > +		node_states[N_DEMOTION_TARGETS] = nodes;
> > +		set_migration_target_nodes();
> > +		return 0;
> > +	}
> 
> Does this require locking to avoid updating
> node_states[N_DEMOTION_TARGETS] while a parallel
> set_migration_target_nodes() is running?

I think locking is needed if set_migration_target_nodes() is called
from here, because mutual exclusion is currently provided by memory
hotplug events being single-threaded. As this path is not a hotplug
event, a separate lock is needed.
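A minimal userspace sketch of that locking pattern (the mutex and all
names here are hypothetical; the in-kernel version would use the
kernel's mutex API rather than pthreads, and a real nodemask_t rather
than a single word):

```c
/* Userspace model of serializing writes to the demotion-targets mask
 * against set_migration_target_nodes(). Hypothetical names throughout. */
#include <assert.h>
#include <pthread.h>

typedef unsigned long nodemask_t;          /* simplified one-word mask */

static nodemask_t node_memory_mask = 0x7;  /* nodes 0-2 have memory */
static nodemask_t demotion_targets;
static pthread_mutex_t demotion_lock = PTHREAD_MUTEX_INITIALIZER;

static void set_migration_target_nodes(void)
{
	/* placeholder: rebuild per-node demotion lists from
	 * demotion_targets; must not run concurrently with a writer */
}

static int demotion_targets_write(nodemask_t nodes)
{
	int ret = -1;                      /* stands in for -EINVAL */

	pthread_mutex_lock(&demotion_lock);
	if ((nodes & ~node_memory_mask) == 0) {  /* subset of N_MEMORY */
		demotion_targets = nodes;
		set_migration_target_nodes();
		ret = 0;
	}
	pthread_mutex_unlock(&demotion_lock);
	return ret;
}
```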

> > +
> > +	return -EINVAL;
> > +}
> > +
> > +#define _NODE_ATTR_RO(name, state) \
> > +	{ __ATTR(name, 0444, show_node_state, NULL), state, NULL }
> > +
> > +#define _NODE_ATTR_RW(name, state, write_fn) \
> > +	{ __ATTR(name, 0644, show_node_state, store_node_state), state, write_fn }
> >  
> >  static struct node_attr node_state_attr[] = {
> > -	[N_POSSIBLE] = _NODE_ATTR(possible, N_POSSIBLE),
> > -	[N_ONLINE] = _NODE_ATTR(online, N_ONLINE),
> > -	[N_NORMAL_MEMORY] = _NODE_ATTR(has_normal_memory, N_NORMAL_MEMORY),
> > +	[N_POSSIBLE] = _NODE_ATTR_RO(possible, N_POSSIBLE),
> > +	[N_ONLINE] = _NODE_ATTR_RO(online, N_ONLINE),
> > +	[N_NORMAL_MEMORY] = _NODE_ATTR_RO(has_normal_memory, N_NORMAL_MEMORY),
> >  #ifdef CONFIG_HIGHMEM
> > -	[N_HIGH_MEMORY] = _NODE_ATTR(has_high_memory, N_HIGH_MEMORY),
> > +	[N_HIGH_MEMORY] = _NODE_ATTR_RO(has_high_memory, N_HIGH_MEMORY),
> >  #endif
> > -	[N_MEMORY] = _NODE_ATTR(has_memory, N_MEMORY),
> > -	[N_CPU] = _NODE_ATTR(has_cpu, N_CPU),
> > -	[N_GENERIC_INITIATOR] = _NODE_ATTR(has_generic_initiator,
> > -					   N_GENERIC_INITIATOR),
> > -	[N_DEMOTION_TARGETS] = _NODE_ATTR(demotion_targets,
> > -					  N_DEMOTION_TARGETS),
> > -
> > +	[N_MEMORY] = _NODE_ATTR_RO(has_memory, N_MEMORY),
> > +	[N_CPU] = _NODE_ATTR_RO(has_cpu, N_CPU),
> > +	[N_GENERIC_INITIATOR] = _NODE_ATTR_RO(has_generic_initiator,
> > +					      N_GENERIC_INITIATOR),
> > +	[N_DEMOTION_TARGETS] = _NODE_ATTR_RW(demotion_targets,
> > +					     N_DEMOTION_TARGETS,
> > +					     demotion_targets_write),
> >  };
> >  
> >  static struct attribute *node_state_attrs[] = {
> > -- 
> > 2.35.1



* Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
  2022-04-24  3:19 ` [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS ying.huang
@ 2022-04-25 11:15   ` Jagdish Gediya
  2022-04-25 13:57     ` Jonathan Cameron
  2022-04-26  7:55     ` ying.huang
  0 siblings, 2 replies; 35+ messages in thread
From: Jagdish Gediya @ 2022-04-25 11:15 UTC (permalink / raw)
  To: ying.huang
  Cc: linux-mm, linux-kernel, akpm, baolin.wang, dave.hansen,
	aneesh.kumar, shy828301, weixugc, gthelen, dan.j.williams

On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
> On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> > Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
> > NUMA node which are N_MEMORY and slow memory(persistent memory)
> > only NUMA node which are also N_MEMORY. As the current demotion
> > target finding algorithm works based on N_MEMORY and best distance,
> > it will choose DRAM only NUMA node as demotion target instead of
> > persistent memory node on such systems. If DRAM only NUMA node is
> > filled with demoted pages then at some point new allocations can
> > start falling to persistent memory, so basically cold pages are in
> > fast memor (due to demotion) and new pages are in slow memory, this
> > is why persistent memory nodes should be utilized for demotion and
> > dram node should be avoided for demotion so that they can be used
> > for new allocations.
> > 
> > Current implementation can work fine on the system where the memory
> > only numa nodes are possible only for persistent/slow memory but it
> > is not suitable for the like of systems mentioned above.
> 
> Can you share the NUMA topology information of your machine?  And the
> demotion order before and after your change?
> 
> Whether it's good to use the PMEM nodes as the demotion targets of the
> DRAM-only node too?

$ numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 14272 MB
node 0 free: 13392 MB
node 1 cpus:
node 1 size: 2028 MB
node 1 free: 1971 MB
node distances:
node   0   1
  0:  10  40
  1:  40  10

1) without N_DEMOTION_TARGETS patch series, 1 is the demotion target
   for 0 even though 1 is a DRAM node, and there are no demotion targets for 1.

$ cat /sys/bus/nd/devices/dax0.0/target_node
2
$
# cd /sys/bus/dax/drivers/
:/sys/bus/dax/drivers# ls
device_dax  kmem
:/sys/bus/dax/drivers# cd device_dax/
:/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
:/sys/bus/dax/drivers/device_dax# echo dax0.0 >  ../kmem/new_id
:/sys/bus/dax/drivers/device_dax# numactl -H
available: 3 nodes (0-2)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 14272 MB
node 0 free: 13380 MB
node 1 cpus:
node 1 size: 2028 MB
node 1 free: 1961 MB
node 2 cpus:
node 2 size: 0 MB
node 2 free: 0 MB
node distances:
node   0   1   2
  0:  10  40  80
  1:  40  10  80
  2:  80  80  10

2) Once this new node is brought online, without the N_DEMOTION_TARGETS
patch series, 1 is the demotion target for 0 and 2 is the demotion
target for 1.

With this patch series applied,
1) No demotion target for either 0 or 1 before dax device is online
2) 2 is demotion target for both 0 and 1 after dax device is online.

> Best Regards,
> Huang, Ying
> 
> > This patch series introduces the new node state N_DEMOTION_TARGETS,
> > which is used to distinguish the nodes which can be used as demotion
> > targets, node_states[N_DEMOTION_TARGETS] is used to hold the list of
> > nodes which can be used as demotion targets.
> > 
> > node state N_DEMOTION_TARGETS is also set from the dax kmem driver,
> > certain type of memory which registers through dax kmem (e.g. HBM)
> > may not be the right choices for demotion so in future they should
> > be distinguished based on certain attributes and dax kmem driver
> > should avoid setting them as N_DEMOTION_TARGETS, however current
> > implementation also doesn't distinguish any  such memory and it
> > considers all N_MEMORY as demotion targets so this patch series
> > doesn't modify the current behavior.
> > 
> > below command can be used to view the available demotion targets in
> > the system,
> > 
> > $ cat /sys/devices/system/node/demotion_targets
> > 
> > This patch series sets N_DEMOTION_TARGET from dax device kmem driver,
> > It may be possible that some memory node desired as demotion target
> > is not detected in the system from dax-device kmem probe path. It is
> > also possible that some of the dax-devices are not preferred as
> > demotion target e.g. HBM, for such devices, node shouldn't be set to
> > N_DEMOTION_TARGETS, so support is also added to set the demotion
> > target list from user space so that default behavior can be overridden
> > to avoid or add specific node to demotion targets manually.
> > 
> > Override the demotion targets in the system (which sets the
> > node_states[N_DEMOTION_TARGETS] in kernel),
> > $ echo <node list> > /sys/devices/system/node/demotion_targets
> > 
> > As by default node attributes under /sys/devices/system/node/ are read-
> > only, support is added to write node_states[] via sysfs so that
> > node_states[N_DEMOTION_TARGETS] can be modified from user space via
> > sysfs.
> > 
> > It is also helpful to know per node demotion target path prepared by
> > kernel to understand the demotion behaviour during reclaim, so this
> > patch series also adds a /sys/devices/system/node/nodeX/demotion_targets
> > interface to view per-node demotion targets via sysfs.
> > 
> > Current code which sets migration targets is modified in
> > this patch series to avoid some of the limitations on the demotion
> > target sharing and to use N_DEMOTION_TARGETS only nodes while
> > finding demotion targets.
> > 
> > Changelog
> > ----------
> > 
> > v2:
> > In v1, only 1st patch of this patch series was sent, which was
> > implemented to avoid some of the limitations on the demotion
> > target sharing, however for certain numa topology, the demotion
> > targets found by that patch was not most optimal, so 1st patch
> > in this series is modified according to suggestions from Huang
> > and Baolin. Different examples of demotion list comparasion
> > between existing implementation and changed implementation can
> > be found in the commit message of 1st patch.
> > 
> > v3:
> > - Modify patch 1 subject to make it more specific
> > - Remove /sys/kernel/mm/numa/demotion_targets interface, use
> >   /sys/devices/system/node/demotion_targets instead and make
> >   it writable to override node_states[N_DEMOTION_TARGETS].
> > - Add support to view per node demotion targets via sysfs
> > 
> > Jagdish Gediya (8):
> >   mm: demotion: Fix demotion targets sharing among sources
> >   mm: demotion: Add new node state N_DEMOTION_TARGETS
> >   drivers/base/node: Add support to write node_states[] via sysfs
> >   device-dax/kmem: Set node state as N_DEMOTION_TARGETS
> >   mm: demotion: Build demotion list based on N_DEMOTION_TARGETS
> >   mm: demotion: expose per-node demotion targets via sysfs
> >   docs: numa: Add documentation for demotion
> > 
> >  Documentation/admin-guide/mm/index.rst        |  1 +
> >  .../admin-guide/mm/numa_demotion.rst          | 57 +++++++++++++++
> >  drivers/base/node.c                           | 70 ++++++++++++++++---
> >  drivers/dax/kmem.c                            |  2 +
> >  include/linux/migrate.h                       |  1 +
> >  include/linux/nodemask.h                      |  1 +
> >  mm/migrate.c                                  | 54 ++++++++++----
> >  7 files changed, 162 insertions(+), 24 deletions(-)
> >  create mode 100644 Documentation/admin-guide/mm/numa_demotion.rst
> > 
> 
> 
> 



* Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
  2022-04-25 11:15   ` Jagdish Gediya
@ 2022-04-25 13:57     ` Jonathan Cameron
  2022-04-25 14:44       ` Aneesh Kumar K V
  2022-04-25 14:53       ` Aneesh Kumar K V
  2022-04-26  7:55     ` ying.huang
  1 sibling, 2 replies; 35+ messages in thread
From: Jonathan Cameron @ 2022-04-25 13:57 UTC (permalink / raw)
  To: Jagdish Gediya
  Cc: ying.huang, linux-mm, linux-kernel, akpm, baolin.wang,
	dave.hansen, aneesh.kumar, shy828301, weixugc, gthelen,
	dan.j.williams

On Mon, 25 Apr 2022 16:45:38 +0530
Jagdish Gediya <jvgediya@linux.ibm.com> wrote:

> On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
> > On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:  
> > > Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
> > > NUMA node which are N_MEMORY and slow memory(persistent memory)
> > > only NUMA node which are also N_MEMORY. As the current demotion
> > > target finding algorithm works based on N_MEMORY and best distance,
> > > it will choose DRAM only NUMA node as demotion target instead of
> > > persistent memory node on such systems. If DRAM only NUMA node is
> > > filled with demoted pages then at some point new allocations can
> > > start falling to persistent memory, so basically cold pages are in
> > > fast memor (due to demotion) and new pages are in slow memory, this
> > > is why persistent memory nodes should be utilized for demotion and
> > > dram node should be avoided for demotion so that they can be used
> > > for new allocations.
> > > 
> > > Current implementation can work fine on the system where the memory
> > > only numa nodes are possible only for persistent/slow memory but it
> > > is not suitable for the like of systems mentioned above.  
> > 
> > Can you share the NUMA topology information of your machine?  And the
> > demotion order before and after your change?
> > 
> > Whether it's good to use the PMEM nodes as the demotion targets of the
> > DRAM-only node too?  
> 
> $ numactl -H
> available: 2 nodes (0-1)
> node 0 cpus: 0 1 2 3 4 5 6 7
> node 0 size: 14272 MB
> node 0 free: 13392 MB
> node 1 cpus:
> node 1 size: 2028 MB
> node 1 free: 1971 MB
> node distances:
> node   0   1
>   0:  10  40
>   1:  40  10
> 
> 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
>    for 0 even when 1 is DRAM node and there is no demotion targets for 1.

I'm not convinced the distinction between DRAM and persistent memory is
valid. There will definitely be systems with a large pool
of remote DRAM (and potentially no NV memory) where the right choice
is to demote to that DRAM pool.

Basing the decision on whether the memory is from kmem or
normal DRAM doesn't provide sufficient information to make the decision.

> 
> $ cat /sys/bus/nd/devices/dax0.0/target_node
> 2
> $
> # cd /sys/bus/dax/drivers/
> :/sys/bus/dax/drivers# ls
> device_dax  kmem
> :/sys/bus/dax/drivers# cd device_dax/
> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
> :/sys/bus/dax/drivers/device_dax# echo dax0.0 >  ../kmem/new_id
> :/sys/bus/dax/drivers/device_dax# numactl -H
> available: 3 nodes (0-2)
> node 0 cpus: 0 1 2 3 4 5 6 7
> node 0 size: 14272 MB
> node 0 free: 13380 MB
> node 1 cpus:
> node 1 size: 2028 MB
> node 1 free: 1961 MB
> node 2 cpus:
> node 2 size: 0 MB
> node 2 free: 0 MB
> node distances:
> node   0   1   2
>   0:  10  40  80
>   1:  40  10  80
>   2:  80  80  10
> 
> 2) Once this new node brought online,  without N_DEMOTION_TARGETS
> patch series, 1 is demotion target for 0 and 2 is demotion target
> for 1.
> 
> With this patch series applied,
> 1) No demotion target for either 0 or 1 before dax device is online

I'd argue that is wrong.  At this stage you have a tiered memory system,
even if it is one with just DRAM.  Using it as such is correct behavior
that we should not be preventing.  Sure, some use cases wouldn't want
that arrangement, but some do.

For your case we could add a heuristic along the lines of "the demotion
target should be at least as big as the starting point", but that would
be a bit hacky.
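A sketch of what that size heuristic might look like (purely
illustrative; the struct and function names are hypothetical, and sizes
are in MB as reported by numactl in this thread):

```c
/* Illustrative sketch of a "target at least as big as the source"
 * demotion heuristic; hypothetical names, sizes in MB. */
#include <assert.h>

struct node_info {
	int id;
	unsigned long size_mb;
};

/* Return 1 if @target is acceptable as a demotion target for @src
 * under the proposed heuristic, 0 otherwise. */
static int size_heuristic_ok(const struct node_info *src,
			     const struct node_info *target)
{
	return target->size_mb >= src->size_mb;
}
```

With the topology from this thread (node 0: 14272 MB, node 1: 2028 MB,
both DRAM), node 1 would be rejected as a demotion target for node 0,
while node 0 would be accepted as a target for node 1.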

Jonathan

> 2) 2 is demotion target for both 0 and 1 after dax device is online.
> 
> > Best Regards,
> > Huang, Ying
> >   
> > > This patch series introduces the new node state N_DEMOTION_TARGETS,
> > > which is used to distinguish the nodes which can be used as demotion
> > > targets, node_states[N_DEMOTION_TARGETS] is used to hold the list of
> > > nodes which can be used as demotion targets.
> > > 
> > > node state N_DEMOTION_TARGETS is also set from the dax kmem driver,
> > > certain type of memory which registers through dax kmem (e.g. HBM)
> > > may not be the right choices for demotion so in future they should
> > > be distinguished based on certain attributes and dax kmem driver
> > > should avoid setting them as N_DEMOTION_TARGETS, however current
> > > implementation also doesn't distinguish any  such memory and it
> > > considers all N_MEMORY as demotion targets so this patch series
> > > doesn't modify the current behavior.
> > > 
> > > below command can be used to view the available demotion targets in
> > > the system,
> > > 
> > > $ cat /sys/devices/system/node/demotion_targets
> > > 
> > > This patch series sets N_DEMOTION_TARGET from dax device kmem driver,
> > > It may be possible that some memory node desired as demotion target
> > > is not detected in the system from dax-device kmem probe path. It is
> > > also possible that some of the dax-devices are not preferred as
> > > demotion target e.g. HBM, for such devices, node shouldn't be set to
> > > N_DEMOTION_TARGETS, so support is also added to set the demotion
> > > target list from user space so that default behavior can be overridden
> > > to avoid or add specific node to demotion targets manually.
> > > 
> > > Override the demotion targets in the system (which sets the
> > > node_states[N_DEMOTION_TARGETS] in kernel),
> > > $ echo <node list> > /sys/devices/system/node/demotion_targets
> > > 
> > > As by default node attributes under /sys/devices/system/node/ are read-
> > > only, support is added to write node_states[] via sysfs so that
> > > node_states[N_DEMOTION_TARGETS] can be modified from user space via
> > > sysfs.
> > > 
> > > It is also helpful to know per node demotion target path prepared by
> > > kernel to understand the demotion behaviour during reclaim, so this
> > > patch series also adds a /sys/devices/system/node/nodeX/demotion_targets
> > > interface to view per-node demotion targets via sysfs.
> > > 
> > > Current code which sets migration targets is modified in
> > > this patch series to avoid some of the limitations on the demotion
> > > target sharing and to use N_DEMOTION_TARGETS only nodes while
> > > finding demotion targets.
> > > 
> > > Changelog
> > > ----------
> > > 
> > > v2:
> > > In v1, only 1st patch of this patch series was sent, which was
> > > implemented to avoid some of the limitations on the demotion
> > > target sharing, however for certain numa topology, the demotion
> > > targets found by that patch was not most optimal, so 1st patch
> > > in this series is modified according to suggestions from Huang
> > > and Baolin. Different examples of demotion list comparasion
> > > between existing implementation and changed implementation can
> > > be found in the commit message of 1st patch.
> > > 
> > > v3:
> > > - Modify patch 1 subject to make it more specific
> > > - Remove /sys/kernel/mm/numa/demotion_targets interface, use
> > >   /sys/devices/system/node/demotion_targets instead and make
> > >   it writable to override node_states[N_DEMOTION_TARGETS].
> > > - Add support to view per node demotion targets via sysfs
> > > 
> > > Jagdish Gediya (8):
> > >   mm: demotion: Fix demotion targets sharing among sources
> > >   mm: demotion: Add new node state N_DEMOTION_TARGETS
> > >   drivers/base/node: Add support to write node_states[] via sysfs
> > >   device-dax/kmem: Set node state as N_DEMOTION_TARGETS
> > >   mm: demotion: Build demotion list based on N_DEMOTION_TARGETS
> > >   mm: demotion: expose per-node demotion targets via sysfs
> > >   docs: numa: Add documentation for demotion
> > > 
> > >  Documentation/admin-guide/mm/index.rst        |  1 +
> > >  .../admin-guide/mm/numa_demotion.rst          | 57 +++++++++++++++
> > >  drivers/base/node.c                           | 70 ++++++++++++++++---
> > >  drivers/dax/kmem.c                            |  2 +
> > >  include/linux/migrate.h                       |  1 +
> > >  include/linux/nodemask.h                      |  1 +
> > >  mm/migrate.c                                  | 54 ++++++++++----
> > >  7 files changed, 162 insertions(+), 24 deletions(-)
> > >  create mode 100644 Documentation/admin-guide/mm/numa_demotion.rst
> > >   
> > 
> > 
> >   
> 




* Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
  2022-04-25 13:57     ` Jonathan Cameron
@ 2022-04-25 14:44       ` Aneesh Kumar K V
  2022-04-26 10:43         ` Jonathan Cameron
  2022-04-27  1:29         ` ying.huang
  2022-04-25 14:53       ` Aneesh Kumar K V
  1 sibling, 2 replies; 35+ messages in thread
From: Aneesh Kumar K V @ 2022-04-25 14:44 UTC (permalink / raw)
  To: Jonathan Cameron, Jagdish Gediya
  Cc: ying.huang, linux-mm, linux-kernel, akpm, baolin.wang,
	dave.hansen, shy828301, weixugc, gthelen, dan.j.williams

On 4/25/22 7:27 PM, Jonathan Cameron wrote:
> On Mon, 25 Apr 2022 16:45:38 +0530
> Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
> 
>> On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
>>> On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
>>>> Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
>>>> NUMA node which are N_MEMORY and slow memory(persistent memory)
>>>> only NUMA node which are also N_MEMORY. As the current demotion
>>>> target finding algorithm works based on N_MEMORY and best distance,
>>>> it will choose DRAM only NUMA node as demotion target instead of
>>>> persistent memory node on such systems. If DRAM only NUMA node is
>>>> filled with demoted pages then at some point new allocations can
>>>> start falling to persistent memory, so basically cold pages are in
>>>> fast memor (due to demotion) and new pages are in slow memory, this
>>>> is why persistent memory nodes should be utilized for demotion and
>>>> dram node should be avoided for demotion so that they can be used
>>>> for new allocations.
>>>>
>>>> Current implementation can work fine on the system where the memory
>>>> only numa nodes are possible only for persistent/slow memory but it
>>>> is not suitable for the like of systems mentioned above.
>>>
>>> Can you share the NUMA topology information of your machine?  And the
>>> demotion order before and after your change?
>>>
>>> Whether it's good to use the PMEM nodes as the demotion targets of the
>>> DRAM-only node too?
>>
>> $ numactl -H
>> available: 2 nodes (0-1)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 14272 MB
>> node 0 free: 13392 MB
>> node 1 cpus:
>> node 1 size: 2028 MB
>> node 1 free: 1971 MB
>> node distances:
>> node   0   1
>>    0:  10  40
>>    1:  40  10
>>
>> 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
>>     for 0 even when 1 is DRAM node and there is no demotion targets for 1.
> 
> I'm not convinced the distinction between DRAM and persistent memory is
> valid. There will definitely be systems with a large pool
> of remote DRAM (and potentially no NV memory) where the right choice
> is to demote to that DRAM pool.
> 
> Basing the decision on whether the memory is from kmem or
> normal DRAM doesn't provide sufficient information to make the decision.
> 

Hence the suggestion for the ability to override this from userspace.
For example, we could build a system with memory from a remote machine
(memory inception in the case of Power, which will mostly be plugged in
as regular hotpluggable memory) plus slow CXL memory or OpenCAPI memory.

In the former case, we won't consider that memory for demotion with this
series because it is not instantiated via dax kmem. So yes, we would
definitely need the ability to override this from userspace so that we
can add those remote memory NUMA nodes as demotion targets if we want.

>>
>> $ cat /sys/bus/nd/devices/dax0.0/target_node
>> 2
>> $
>> # cd /sys/bus/dax/drivers/
>> :/sys/bus/dax/drivers# ls
>> device_dax  kmem
>> :/sys/bus/dax/drivers# cd device_dax/
>> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
>> :/sys/bus/dax/drivers/device_dax# echo dax0.0 >  ../kmem/new_id
>> :/sys/bus/dax/drivers/device_dax# numactl -H
>> available: 3 nodes (0-2)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 14272 MB
>> node 0 free: 13380 MB
>> node 1 cpus:
>> node 1 size: 2028 MB
>> node 1 free: 1961 MB
>> node 2 cpus:
>> node 2 size: 0 MB
>> node 2 free: 0 MB
>> node distances:
>> node   0   1   2
>>    0:  10  40  80
>>    1:  40  10  80
>>    2:  80  80  10
>>
>> 2) Once this new node brought online,  without N_DEMOTION_TARGETS
>> patch series, 1 is demotion target for 0 and 2 is demotion target
>> for 1.
>>
>> With this patch series applied,
>> 1) No demotion target for either 0 or 1 before dax device is online
> 
> I'd argue that is wrong.  At this state you have a tiered memory system
> be it one with just DRAM.  Using it as such is correct behavior that
> we should not be preventing.  Sure some usecases wouldn't want that
> arrangement but some do want it.
> 
> For your case we could add a heuristic along the lines of the demotion
> target should be at least as big as the starting point but that would
> be a bit hacky.
> 

Hence the proposal to do a per-node demotion target override with the 
semantics that I explained here:


https://lore.kernel.org/linux-mm/8735i1zurt.fsf@linux.ibm.com/

Let me know whether that interface would be good enough to handle all 
the demotion target configurations we would want to have.

-aneesh


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
  2022-04-25 13:57     ` Jonathan Cameron
  2022-04-25 14:44       ` Aneesh Kumar K V
@ 2022-04-25 14:53       ` Aneesh Kumar K V
  2022-04-26 10:37         ` Jonathan Cameron
  1 sibling, 1 reply; 35+ messages in thread
From: Aneesh Kumar K V @ 2022-04-25 14:53 UTC (permalink / raw)
  To: Jonathan Cameron, Jagdish Gediya
  Cc: ying.huang, linux-mm, linux-kernel, akpm, baolin.wang,
	dave.hansen, shy828301, weixugc, gthelen, dan.j.williams

On 4/25/22 7:27 PM, Jonathan Cameron wrote:
> On Mon, 25 Apr 2022 16:45:38 +0530
> Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
>

....

>> $ numactl -H
>> available: 2 nodes (0-1)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 14272 MB
>> node 0 free: 13392 MB
>> node 1 cpus:
>> node 1 size: 2028 MB
>> node 1 free: 1971 MB
>> node distances:
>> node   0   1
>>    0:  10  40
>>    1:  40  10
>>
>> 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
>>     for 0 even when 1 is DRAM node and there is no demotion targets for 1.
> 
> I'm not convinced the distinction between DRAM and persistent memory is
> valid. There will definitely be systems with a large pool
> of remote DRAM (and potentially no NV memory) where the right choice
> is to demote to that DRAM pool.
> 
> Basing the decision on whether the memory is from kmem or
> normal DRAM doesn't provide sufficient information to make the decision.
> 
>>
>> $ cat /sys/bus/nd/devices/dax0.0/target_node
>> 2
>> $
>> # cd /sys/bus/dax/drivers/
>> :/sys/bus/dax/drivers# ls
>> device_dax  kmem
>> :/sys/bus/dax/drivers# cd device_dax/
>> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
>> :/sys/bus/dax/drivers/device_dax# echo dax0.0 >  ../kmem/new_id
>> :/sys/bus/dax/drivers/device_dax# numactl -H
>> available: 3 nodes (0-2)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 14272 MB
>> node 0 free: 13380 MB
>> node 1 cpus:
>> node 1 size: 2028 MB
>> node 1 free: 1961 MB
>> node 2 cpus:
>> node 2 size: 0 MB
>> node 2 free: 0 MB
>> node distances:
>> node   0   1   2
>>    0:  10  40  80
>>    1:  40  10  80
>>    2:  80  80  10
>>
>> 2) Once this new node brought online,  without N_DEMOTION_TARGETS
>> patch series, 1 is demotion target for 0 and 2 is demotion target
>> for 1.
>>
>> With this patch series applied,
>> 1) No demotion target for either 0 or 1 before dax device is online
> 
> I'd argue that is wrong.  At this state you have a tiered memory system
> be it one with just DRAM.  Using it as such is correct behavior that
> we should not be preventing.  Sure some usecases wouldn't want that
> arrangement but some do want it.
> 

I missed this in my earlier reply. Are you suggesting that we would want 
Node 1 (the DRAM-only memory NUMA node) to act as a demotion target for 
Node 0? Any reason why we would want to do that? That is clearly the 
opposite of what we are trying to do here. IMHO, not using Node 1 as a 
demotion target for Node 0 is the better default.



-aneesh



* Re: [PATCH v3 1/7] mm: demotion: Fix demotion targets sharing among sources
  2022-04-25  9:32     ` Jagdish Gediya
@ 2022-04-26  7:26       ` ying.huang
  0 siblings, 0 replies; 35+ messages in thread
From: ying.huang @ 2022-04-26  7:26 UTC (permalink / raw)
  To: Jagdish Gediya
  Cc: linux-mm, linux-kernel, akpm, baolin.wang, dave.hansen,
	aneesh.kumar, shy828301, weixugc, gthelen, dan.j.williams

On Mon, 2022-04-25 at 15:02 +0530, Jagdish Gediya wrote:
> On Sun, Apr 24, 2022 at 11:25:50AM +0800, ying.huang@intel.com wrote:
> > > Subject: [PATCH v3 1/7] mm: demotion: Fix demotion targets sharing among sources
> > 
> > IMHO, this patch doesn't fix some bugs in the original code.  Instead,
> > it just enhances the original code.  For example, the subject could be,
> 
> I think it is fixing a bug. There is a comment in the code which
> says that 'used_targets will become unavailable in future passes.
> This limits some opportunities for multiple source nodes to share a
> destination'. Per this comment, the intent was to share nodes as
> demotion targets within some limits, but the code rules out not some
> but all such opportunities: with the current code, no common node can
> be the demotion target for multiple source nodes.

IMHO, the original code was kept as simple as possible to address the
issue for the real machines at that time.  That provides a baseline for
future improvements like yours.  If the original code didn't work well
for its target machines, then we fixed a bug.  But if it doesn't work
well for some new kind of machine, then we need to improve the code and
add more features, not fix a bug.

Best Regards,
Huang, Ying

> >   mm: demotion: support to share demotion targets among sources
> > 
> > Best Regards,
> > Huang, Ying
> > 
> > [snip]
> > 
> > 
> > 





* Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
  2022-04-25 11:15   ` Jagdish Gediya
  2022-04-25 13:57     ` Jonathan Cameron
@ 2022-04-26  7:55     ` ying.huang
  2022-04-26  9:07       ` Aneesh Kumar K V
  2022-04-26  9:37       ` Jagdish Gediya
  1 sibling, 2 replies; 35+ messages in thread
From: ying.huang @ 2022-04-26  7:55 UTC (permalink / raw)
  To: Jagdish Gediya
  Cc: linux-mm, linux-kernel, akpm, baolin.wang, dave.hansen,
	aneesh.kumar, shy828301, weixugc, gthelen, dan.j.williams

On Mon, 2022-04-25 at 16:45 +0530, Jagdish Gediya wrote:
> On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
> > On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> > > Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
> > > NUMA node which are N_MEMORY and slow memory(persistent memory)
> > > only NUMA node which are also N_MEMORY. As the current demotion
> > > target finding algorithm works based on N_MEMORY and best distance,
> > > it will choose DRAM only NUMA node as demotion target instead of
> > > persistent memory node on such systems. If DRAM only NUMA node is
> > > filled with demoted pages then at some point new allocations can
> > > start falling to persistent memory, so basically cold pages are in
> > > fast memor (due to demotion) and new pages are in slow memory, this
> > > is why persistent memory nodes should be utilized for demotion and
> > > dram node should be avoided for demotion so that they can be used
> > > for new allocations.
> > > 
> > > Current implementation can work fine on the system where the memory
> > > only numa nodes are possible only for persistent/slow memory but it
> > > is not suitable for the like of systems mentioned above.
> > 
> > Can you share the NUMA topology information of your machine?  And the
> > demotion order before and after your change?
> > 
> > Whether it's good to use the PMEM nodes as the demotion targets of the
> > DRAM-only node too?
> 
> $ numactl -H
> available: 2 nodes (0-1)
> node 0 cpus: 0 1 2 3 4 5 6 7
> node 0 size: 14272 MB
> node 0 free: 13392 MB
> node 1 cpus:
> node 1 size: 2028 MB
> node 1 free: 1971 MB
> node distances:
> node   0   1
>   0:  10  40
>   1:  40  10
> 
> 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
>    for 0 even when 1 is DRAM node and there is no demotion targets for 1.
> 
> $ cat /sys/bus/nd/devices/dax0.0/target_node
> 2
> $
> # cd /sys/bus/dax/drivers/
> :/sys/bus/dax/drivers# ls
> device_dax  kmem
> :/sys/bus/dax/drivers# cd device_dax/
> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
> :/sys/bus/dax/drivers/device_dax# echo dax0.0 >  ../kmem/new_id
> :/sys/bus/dax/drivers/device_dax# numactl -H
> available: 3 nodes (0-2)
> node 0 cpus: 0 1 2 3 4 5 6 7
> node 0 size: 14272 MB
> node 0 free: 13380 MB
> node 1 cpus:
> node 1 size: 2028 MB
> node 1 free: 1961 MB
> node 2 cpus:
> node 2 size: 0 MB
> node 2 free: 0 MB
> node distances:
> node   0   1   2
>   0:  10  40  80
>   1:  40  10  80
>   2:  80  80  10
> 

This looks like a virtual machine, not a real machine.  That's
unfortunate.  I was hoping for a real issue, not a theoretically
possible one.

> 2) Once this new node brought online,  without N_DEMOTION_TARGETS
> patch series, 1 is demotion target for 0 and 2 is demotion target
> for 1.
> 
> With this patch series applied,
> 1) No demotion target for either 0 or 1 before dax device is online
> 2) 2 is demotion target for both 0 and 1 after dax device is online.
> 

So with your change, if a node doesn't have N_DEMOTION_TARGETS, it will
become a top-level demotion source even if it doesn't have N_CPU?  If
so, I cannot clear N_DEMOTION_TARGETS for a node in the middle or bottom
level?

Best Regards,
Huang, Ying

> > 
[snip]





* Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
  2022-04-26  7:55     ` ying.huang
@ 2022-04-26  9:07       ` Aneesh Kumar K V
  2022-04-26  9:10         ` ying.huang
  2022-04-26  9:37       ` Jagdish Gediya
  1 sibling, 1 reply; 35+ messages in thread
From: Aneesh Kumar K V @ 2022-04-26  9:07 UTC (permalink / raw)
  To: ying.huang, Jagdish Gediya
  Cc: linux-mm, linux-kernel, akpm, baolin.wang, dave.hansen,
	shy828301, weixugc, gthelen, dan.j.williams

On 4/26/22 1:25 PM, ying.huang@intel.com wrote:
> On Mon, 2022-04-25 at 16:45 +0530, Jagdish Gediya wrote:
>> On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
>>> On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
>>>> Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
>>>> NUMA node which are N_MEMORY and slow memory(persistent memory)
>>>> only NUMA node which are also N_MEMORY. As the current demotion
>>>> target finding algorithm works based on N_MEMORY and best distance,
>>>> it will choose DRAM only NUMA node as demotion target instead of
>>>> persistent memory node on such systems. If DRAM only NUMA node is
>>>> filled with demoted pages then at some point new allocations can
>>>> start falling to persistent memory, so basically cold pages are in
>>>> fast memor (due to demotion) and new pages are in slow memory, this
>>>> is why persistent memory nodes should be utilized for demotion and
>>>> dram node should be avoided for demotion so that they can be used
>>>> for new allocations.
>>>>
>>>> Current implementation can work fine on the system where the memory
>>>> only numa nodes are possible only for persistent/slow memory but it
>>>> is not suitable for the like of systems mentioned above.
>>>
>>> Can you share the NUMA topology information of your machine?  And the
>>> demotion order before and after your change?
>>>
>>> Whether it's good to use the PMEM nodes as the demotion targets of the
>>> DRAM-only node too?
>>
>> $ numactl -H
>> available: 2 nodes (0-1)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 14272 MB
>> node 0 free: 13392 MB
>> node 1 cpus:
>> node 1 size: 2028 MB
>> node 1 free: 1971 MB
>> node distances:
>> node   0   1
>>    0:  10  40
>>    1:  40  10
>>
>> 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
>>     for 0 even when 1 is DRAM node and there is no demotion targets for 1.
>>
>> $ cat /sys/bus/nd/devices/dax0.0/target_node
>> 2
>> $
>> # cd /sys/bus/dax/drivers/
>> :/sys/bus/dax/drivers# ls
>> device_dax  kmem
>> :/sys/bus/dax/drivers# cd device_dax/
>> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
>> :/sys/bus/dax/drivers/device_dax# echo dax0.0 >  ../kmem/new_id
>> :/sys/bus/dax/drivers/device_dax# numactl -H
>> available: 3 nodes (0-2)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 14272 MB
>> node 0 free: 13380 MB
>> node 1 cpus:
>> node 1 size: 2028 MB
>> node 1 free: 1961 MB
>> node 2 cpus:
>> node 2 size: 0 MB
>> node 2 free: 0 MB
>> node distances:
>> node   0   1   2
>>    0:  10  40  80
>>    1:  40  10  80
>>    2:  80  80  10
>>
> 
> This looks like a virtual machine, not a real machine.  That's
> unfortunate.  I am looking forward to a real issue, not a theoritical
> possible issue.
> 

This is the source of the confusion, I guess. A large class of ppc64 
systems is virtualized. The firmware includes a hypervisor (PowerVM) and 
the end user creates guests (aka LPARs) on it. That is how end users 
will use these systems. There is no bare-metal access on them (there is 
an OpenPOWER variant, but all new systems built by IBM these days do 
have PowerVM on them).

So this is not a theoretical possibility.

-aneesh



* Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
  2022-04-26  9:07       ` Aneesh Kumar K V
@ 2022-04-26  9:10         ` ying.huang
  0 siblings, 0 replies; 35+ messages in thread
From: ying.huang @ 2022-04-26  9:10 UTC (permalink / raw)
  To: Aneesh Kumar K V, Jagdish Gediya
  Cc: linux-mm, linux-kernel, akpm, baolin.wang, dave.hansen,
	shy828301, weixugc, gthelen, dan.j.williams

On Tue, 2022-04-26 at 14:37 +0530, Aneesh Kumar K V wrote:
> On 4/26/22 1:25 PM, ying.huang@intel.com wrote:
> > On Mon, 2022-04-25 at 16:45 +0530, Jagdish Gediya wrote:
> > > On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
> > > > On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> > > > > Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
> > > > > NUMA node which are N_MEMORY and slow memory(persistent memory)
> > > > > only NUMA node which are also N_MEMORY. As the current demotion
> > > > > target finding algorithm works based on N_MEMORY and best distance,
> > > > > it will choose DRAM only NUMA node as demotion target instead of
> > > > > persistent memory node on such systems. If DRAM only NUMA node is
> > > > > filled with demoted pages then at some point new allocations can
> > > > > start falling to persistent memory, so basically cold pages are in
> > > > > fast memor (due to demotion) and new pages are in slow memory, this
> > > > > is why persistent memory nodes should be utilized for demotion and
> > > > > dram node should be avoided for demotion so that they can be used
> > > > > for new allocations.
> > > > > 
> > > > > Current implementation can work fine on the system where the memory
> > > > > only numa nodes are possible only for persistent/slow memory but it
> > > > > is not suitable for the like of systems mentioned above.
> > > > 
> > > > Can you share the NUMA topology information of your machine?  And the
> > > > demotion order before and after your change?
> > > > 
> > > > Whether it's good to use the PMEM nodes as the demotion targets of the
> > > > DRAM-only node too?
> > > 
> > > $ numactl -H
> > > available: 2 nodes (0-1)
> > > node 0 cpus: 0 1 2 3 4 5 6 7
> > > node 0 size: 14272 MB
> > > node 0 free: 13392 MB
> > > node 1 cpus:
> > > node 1 size: 2028 MB
> > > node 1 free: 1971 MB
> > > node distances:
> > > node   0   1
> > >    0:  10  40
> > >    1:  40  10
> > > 
> > > 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
> > >     for 0 even when 1 is DRAM node and there is no demotion targets for 1.
> > > 
> > > $ cat /sys/bus/nd/devices/dax0.0/target_node
> > > 2
> > > $
> > > # cd /sys/bus/dax/drivers/
> > > :/sys/bus/dax/drivers# ls
> > > device_dax  kmem
> > > :/sys/bus/dax/drivers# cd device_dax/
> > > :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
> > > :/sys/bus/dax/drivers/device_dax# echo dax0.0 >  ../kmem/new_id
> > > :/sys/bus/dax/drivers/device_dax# numactl -H
> > > available: 3 nodes (0-2)
> > > node 0 cpus: 0 1 2 3 4 5 6 7
> > > node 0 size: 14272 MB
> > > node 0 free: 13380 MB
> > > node 1 cpus:
> > > node 1 size: 2028 MB
> > > node 1 free: 1961 MB
> > > node 2 cpus:
> > > node 2 size: 0 MB
> > > node 2 free: 0 MB
> > > node distances:
> > > node   0   1   2
> > >    0:  10  40  80
> > >    1:  40  10  80
> > >    2:  80  80  10
> > > 
> > 
> > This looks like a virtual machine, not a real machine.  That's
> > unfortunate.  I am looking forward to a real issue, not a theoritical
> > possible issue.
> > 
> 
> This is the source of confusion i guess. A large class of ppc64 systems 
> are virtualized. The firmware include a hypervisor (PowerVM) and end 
> user creates guest (aka LPAR) on them. That is the way end user will use 
> this system. There is no baremetal access on this (There is an openpower 
> variant, but all new systems built by IBM these days do have PowerVM on 
> them).
> 
> 
> So this is not a theoretical possibility.
> 

Now I get it.  Thanks for detailed explanation.

Best Regards,
Huang, Ying






* Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
  2022-04-26  7:55     ` ying.huang
  2022-04-26  9:07       ` Aneesh Kumar K V
@ 2022-04-26  9:37       ` Jagdish Gediya
  1 sibling, 0 replies; 35+ messages in thread
From: Jagdish Gediya @ 2022-04-26  9:37 UTC (permalink / raw)
  To: ying.huang
  Cc: linux-mm, linux-kernel, akpm, baolin.wang, dave.hansen,
	aneesh.kumar, shy828301, weixugc, gthelen, dan.j.williams

On Tue, Apr 26, 2022 at 03:55:36PM +0800, ying.huang@intel.com wrote:
> On Mon, 2022-04-25 at 16:45 +0530, Jagdish Gediya wrote:
> > On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
> > > On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> > > > Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
> > > > NUMA node which are N_MEMORY and slow memory(persistent memory)
> > > > only NUMA node which are also N_MEMORY. As the current demotion
> > > > target finding algorithm works based on N_MEMORY and best distance,
> > > > it will choose DRAM only NUMA node as demotion target instead of
> > > > persistent memory node on such systems. If DRAM only NUMA node is
> > > > filled with demoted pages then at some point new allocations can
> > > > start falling to persistent memory, so basically cold pages are in
> > > > fast memor (due to demotion) and new pages are in slow memory, this
> > > > is why persistent memory nodes should be utilized for demotion and
> > > > dram node should be avoided for demotion so that they can be used
> > > > for new allocations.
> > > > 
> > > > Current implementation can work fine on the system where the memory
> > > > only numa nodes are possible only for persistent/slow memory but it
> > > > is not suitable for the like of systems mentioned above.
> > > 
> > > Can you share the NUMA topology information of your machine?  And the
> > > demotion order before and after your change?
> > > 
> > > Whether it's good to use the PMEM nodes as the demotion targets of the
> > > DRAM-only node too?
> > 
> > $ numactl -H
> > available: 2 nodes (0-1)
> > node 0 cpus: 0 1 2 3 4 5 6 7
> > node 0 size: 14272 MB
> > node 0 free: 13392 MB
> > node 1 cpus:
> > node 1 size: 2028 MB
> > node 1 free: 1971 MB
> > node distances:
> > node   0   1
> >   0:  10  40
> >   1:  40  10
> > 
> > 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
> >    for 0 even when 1 is DRAM node and there is no demotion targets for 1.
> > 
> > $ cat /sys/bus/nd/devices/dax0.0/target_node
> > 2
> > $
> > # cd /sys/bus/dax/drivers/
> > :/sys/bus/dax/drivers# ls
> > device_dax  kmem
> > :/sys/bus/dax/drivers# cd device_dax/
> > :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
> > :/sys/bus/dax/drivers/device_dax# echo dax0.0 >  ../kmem/new_id
> > :/sys/bus/dax/drivers/device_dax# numactl -H
> > available: 3 nodes (0-2)
> > node 0 cpus: 0 1 2 3 4 5 6 7
> > node 0 size: 14272 MB
> > node 0 free: 13380 MB
> > node 1 cpus:
> > node 1 size: 2028 MB
> > node 1 free: 1961 MB
> > node 2 cpus:
> > node 2 size: 0 MB
> > node 2 free: 0 MB
> > node distances:
> > node   0   1   2
> >   0:  10  40  80
> >   1:  40  10  80
> >   2:  80  80  10
> > 
> 
> This looks like a virtual machine, not a real machine.  That's
> unfortunate.  I am looking forward to a real issue, not a theoritical
> possible issue.
> 
> > 2) Once this new node brought online,  without N_DEMOTION_TARGETS
> > patch series, 1 is demotion target for 0 and 2 is demotion target
> > for 1.
> > 
> > With this patch series applied,
> > 1) No demotion target for either 0 or 1 before dax device is online
> > 2) 2 is demotion target for both 0 and 1 after dax device is online.
> > 
> 
> So with your change, if a node hasn't N_DEMOTION_TARGETS, it will become
> a top-level demotion source even if it hasn't N_CPU?  If so, I cannot
> clear N_DEMOTION_TARGETS for a node in middle or bottom level?

Yes, a node that is only N_MEMORY also becomes a demotion source,
because it is not N_DEMOTION_TARGETS. You can clear N_DEMOTION_TARGETS
for a middle- or bottom-level node, but in that case, because the
implementation works in passes, the cleared node will not be found as a
demotion target, and hence no demotion target will be found for it
either. But does it make sense to use faster persistent memory as a
demotion target while leaving slower persistent memory out of the
demotion list? If not, then this is not an issue.

> Best Regards,
> Huang, Ying
> 
> > > 
> [snip]
> 
> 
> 



* Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
  2022-04-25 14:53       ` Aneesh Kumar K V
@ 2022-04-26 10:37         ` Jonathan Cameron
  0 siblings, 0 replies; 35+ messages in thread
From: Jonathan Cameron @ 2022-04-26 10:37 UTC (permalink / raw)
  To: Aneesh Kumar K V
  Cc: Jagdish Gediya, ying.huang, linux-mm, linux-kernel, akpm,
	baolin.wang, dave.hansen, shy828301, weixugc, gthelen,
	dan.j.williams

On Mon, 25 Apr 2022 20:23:56 +0530
Aneesh Kumar K V <aneesh.kumar@linux.ibm.com> wrote:

> On 4/25/22 7:27 PM, Jonathan Cameron wrote:
> > On Mon, 25 Apr 2022 16:45:38 +0530
> > Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
> >  
> 
> ....
> 
> >> $ numactl -H
> >> available: 2 nodes (0-1)
> >> node 0 cpus: 0 1 2 3 4 5 6 7
> >> node 0 size: 14272 MB
> >> node 0 free: 13392 MB
> >> node 1 cpus:
> >> node 1 size: 2028 MB
> >> node 1 free: 1971 MB
> >> node distances:
> >> node   0   1
> >>    0:  10  40
> >>    1:  40  10
> >>
> >> 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
> >>     for 0 even when 1 is DRAM node and there is no demotion targets for 1.  
> > 
> > I'm not convinced the distinction between DRAM and persistent memory is
> > valid. There will definitely be systems with a large pool
> > of remote DRAM (and potentially no NV memory) where the right choice
> > is to demote to that DRAM pool.
> > 
> > Basing the decision on whether the memory is from kmem or
> > normal DRAM doesn't provide sufficient information to make the decision.
> >   
> >>
> >> $ cat /sys/bus/nd/devices/dax0.0/target_node
> >> 2
> >> $
> >> # cd /sys/bus/dax/drivers/
> >> :/sys/bus/dax/drivers# ls
> >> device_dax  kmem
> >> :/sys/bus/dax/drivers# cd device_dax/
> >> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
> >> :/sys/bus/dax/drivers/device_dax# echo dax0.0 >  ../kmem/new_id
> >> :/sys/bus/dax/drivers/device_dax# numactl -H
> >> available: 3 nodes (0-2)
> >> node 0 cpus: 0 1 2 3 4 5 6 7
> >> node 0 size: 14272 MB
> >> node 0 free: 13380 MB
> >> node 1 cpus:
> >> node 1 size: 2028 MB
> >> node 1 free: 1961 MB
> >> node 2 cpus:
> >> node 2 size: 0 MB
> >> node 2 free: 0 MB
> >> node distances:
> >> node   0   1   2
> >>    0:  10  40  80
> >>    1:  40  10  80
> >>    2:  80  80  10
> >>
> >> 2) Once this new node brought online,  without N_DEMOTION_TARGETS
> >> patch series, 1 is demotion target for 0 and 2 is demotion target
> >> for 1.
> >>
> >> With this patch series applied,
> >> 1) No demotion target for either 0 or 1 before dax device is online  
> > 
> > I'd argue that is wrong.  At this state you have a tiered memory system
> > be it one with just DRAM.  Using it as such is correct behavior that
> > we should not be preventing.  Sure some usecases wouldn't want that
> > arrangement but some do want it.
> >   
> 
> I missed this in my earlier reply. Are you suggesting that we would want 
> Node 1 (DRAM only memory numa node) to act as demotion target for Node 
> 0?  Any reason why we would want to do that? That is clearly opposite of 
> what we are trying to do here. IMHO node using Node1 as demotion target 
> for Node0 is a better default?

In this case, because of the small size, that probably wouldn't make
sense.  But if that were a CXL memory pool with multiple TB of DDR, then
yes, we would want the default case to use that memory for the demotion
path.

So I don't think DDR vs. NV via kmem alone is the right basis for
deciding the default behavior.

Sure, we can make this all a userspace problem.

Jonathan

> 
> 
> 
> -aneesh




* Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
  2022-04-25 14:44       ` Aneesh Kumar K V
@ 2022-04-26 10:43         ` Jonathan Cameron
  2022-04-27  1:29         ` ying.huang
  1 sibling, 0 replies; 35+ messages in thread
From: Jonathan Cameron @ 2022-04-26 10:43 UTC (permalink / raw)
  To: Aneesh Kumar K V
  Cc: Jagdish Gediya, ying.huang, linux-mm, linux-kernel, akpm,
	baolin.wang, dave.hansen, shy828301, weixugc, gthelen,
	dan.j.williams

On Mon, 25 Apr 2022 20:14:58 +0530
Aneesh Kumar K V <aneesh.kumar@linux.ibm.com> wrote:

> On 4/25/22 7:27 PM, Jonathan Cameron wrote:
> > On Mon, 25 Apr 2022 16:45:38 +0530
> > Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
> >   
> >> On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:  
> >>> On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:  
> >>>> Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
> >>>> NUMA node which are N_MEMORY and slow memory(persistent memory)
> >>>> only NUMA node which are also N_MEMORY. As the current demotion
> >>>> target finding algorithm works based on N_MEMORY and best distance,
> >>>> it will choose DRAM only NUMA node as demotion target instead of
> >>>> persistent memory node on such systems. If DRAM only NUMA node is
> >>>> filled with demoted pages then at some point new allocations can
> >>>> start falling to persistent memory, so basically cold pages are in
> >>>> fast memor (due to demotion) and new pages are in slow memory, this
> >>>> is why persistent memory nodes should be utilized for demotion and
> >>>> dram node should be avoided for demotion so that they can be used
> >>>> for new allocations.
> >>>>
> >>>> Current implementation can work fine on the system where the memory
> >>>> only numa nodes are possible only for persistent/slow memory but it
> >>>> is not suitable for the like of systems mentioned above.  
> >>>
> >>> Can you share the NUMA topology information of your machine?  And the
> >>> demotion order before and after your change?
> >>>
> >>> Whether it's good to use the PMEM nodes as the demotion targets of the
> >>> DRAM-only node too?  
> >>
> >> $ numactl -H
> >> available: 2 nodes (0-1)
> >> node 0 cpus: 0 1 2 3 4 5 6 7
> >> node 0 size: 14272 MB
> >> node 0 free: 13392 MB
> >> node 1 cpus:
> >> node 1 size: 2028 MB
> >> node 1 free: 1971 MB
> >> node distances:
> >> node   0   1
> >>    0:  10  40
> >>    1:  40  10
> >>
> >> 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
> >>     for 0 even when 1 is DRAM node and there is no demotion targets for 1.  
> > 
> > I'm not convinced the distinction between DRAM and persistent memory is
> > valid. There will definitely be systems with a large pool
> > of remote DRAM (and potentially no NV memory) where the right choice
> > is to demote to that DRAM pool.
> > 
> > Basing the decision on whether the memory is from kmem or
> > normal DRAM doesn't provide sufficient information to make the decision.
> >   
> 
> Hence the suggestion for the ability to override this from userspace. 
> Now, for example, we could build a system with memory from the remote 
> machine (memory inception in case of power which will mostly be plugged 
> in as regular hotpluggable memory ) and a slow CXL memory or OpenCAPI 
> memory.
> 
> In the former case, we won't consider that for demotion with this series 
> because that is not instantiated via dax kmem. So yes definitely we 
> would need the ability to override this from userspace so that we could 
> put these remote memory NUMA nodes as demotion targets if we want.


Agreed.  I would like a better 'guess' at the right default, though, if
possible.  With hindsight, my instinct would have been to default to no
demotion path at all and hence ensure distros carry appropriate
userspace setup scripts.  Ah well, too late :)

> 
> >>
> >> $ cat /sys/bus/nd/devices/dax0.0/target_node
> >> 2
> >> $
> >> # cd /sys/bus/dax/drivers/
> >> :/sys/bus/dax/drivers# ls
> >> device_dax  kmem
> >> :/sys/bus/dax/drivers# cd device_dax/
> >> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
> >> :/sys/bus/dax/drivers/device_dax# echo dax0.0 >  ../kmem/new_id
> >> :/sys/bus/dax/drivers/device_dax# numactl -H
> >> available: 3 nodes (0-2)
> >> node 0 cpus: 0 1 2 3 4 5 6 7
> >> node 0 size: 14272 MB
> >> node 0 free: 13380 MB
> >> node 1 cpus:
> >> node 1 size: 2028 MB
> >> node 1 free: 1961 MB
> >> node 2 cpus:
> >> node 2 size: 0 MB
> >> node 2 free: 0 MB
> >> node distances:
> >> node   0   1   2
> >>    0:  10  40  80
> >>    1:  40  10  80
> >>    2:  80  80  10
> >>
> >> 2) Once this new node brought online,  without N_DEMOTION_TARGETS
> >> patch series, 1 is demotion target for 0 and 2 is demotion target
> >> for 1.
> >>
> >> With this patch series applied,
> >> 1) No demotion target for either 0 or 1 before dax device is online  
> > 
> > I'd argue that is wrong.  At this state you have a tiered memory system
> > be it one with just DRAM.  Using it as such is correct behavior that
> > we should not be preventing.  Sure some usecases wouldn't want that
> > arrangement but some do want it.
> > 
> > For your case we could add a heuristic along the lines of the demotion
> > target should be at least as big as the starting point but that would
> > be a bit hacky.
> >   
> 
> Hence the proposal to do a per node demotion target override with the 
> semantics that i explained here
> 
> 
> https://lore.kernel.org/linux-mm/8735i1zurt.fsf@linux.ibm.com/
> 
> Let me know if that interface would be good to handle all the possible 
> demotion target configs we would want to have.

At first glance it looks good to me.

Jonathan

> 
> -aneesh




* Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
  2022-04-25 14:44       ` Aneesh Kumar K V
  2022-04-26 10:43         ` Jonathan Cameron
@ 2022-04-27  1:29         ` ying.huang
  2022-04-27  2:57           ` Aneesh Kumar K V
  1 sibling, 1 reply; 35+ messages in thread
From: ying.huang @ 2022-04-27  1:29 UTC (permalink / raw)
  To: Aneesh Kumar K V, Jonathan Cameron, Jagdish Gediya
  Cc: linux-mm, linux-kernel, akpm, baolin.wang, dave.hansen,
	shy828301, weixugc, gthelen, dan.j.williams

On Mon, 2022-04-25 at 20:14 +0530, Aneesh Kumar K V wrote:
> On 4/25/22 7:27 PM, Jonathan Cameron wrote:
> > On Mon, 25 Apr 2022 16:45:38 +0530
> > Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
> > 
> > > On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
> > > > On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> > > > > Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
> > > > > NUMA node which are N_MEMORY and slow memory(persistent memory)
> > > > > only NUMA node which are also N_MEMORY. As the current demotion
> > > > > target finding algorithm works based on N_MEMORY and best distance,
> > > > > it will choose DRAM only NUMA node as demotion target instead of
> > > > > persistent memory node on such systems. If DRAM only NUMA node is
> > > > > filled with demoted pages then at some point new allocations can
> > > > > start falling to persistent memory, so basically cold pages are in
> > > > > fast memory (due to demotion) and new pages are in slow memory, this
> > > > > is why persistent memory nodes should be utilized for demotion and
> > > > > dram node should be avoided for demotion so that they can be used
> > > > > for new allocations.
> > > > > 
> > > > > Current implementation can work fine on the system where the memory
> > > > > only numa nodes are possible only for persistent/slow memory but it
> > > > > is not suitable for the likes of systems mentioned above.
> > > > 
> > > > Can you share the NUMA topology information of your machine?  And the
> > > > demotion order before and after your change?
> > > > 
> > > > Whether it's good to use the PMEM nodes as the demotion targets of the
> > > > DRAM-only node too?
> > > 
> > > $ numactl -H
> > > available: 2 nodes (0-1)
> > > node 0 cpus: 0 1 2 3 4 5 6 7
> > > node 0 size: 14272 MB
> > > node 0 free: 13392 MB
> > > node 1 cpus:
> > > node 1 size: 2028 MB
> > > node 1 free: 1971 MB
> > > node distances:
> > > node   0   1
> > >    0:  10  40
> > >    1:  40  10
> > > 
> > > 1) without the N_DEMOTION_TARGETS patch series, 1 is the demotion target
> > >     for 0 even when 1 is a DRAM node and there are no demotion targets for 1.
> > 
> > I'm not convinced the distinction between DRAM and persistent memory is
> > valid. There will definitely be systems with a large pool
> > of remote DRAM (and potentially no NV memory) where the right choice
> > is to demote to that DRAM pool.
> > 
> > Basing the decision on whether the memory is from kmem or
> > normal DRAM doesn't provide sufficient information to make the decision.
> > 
> 
> Hence the suggestion for the ability to override this from userspace. 
> Now, for example, we could build a system with memory from the remote 
> machine (memory inception in case of power which will mostly be plugged 
> in as regular hotpluggable memory ) and a slow CXL memory or OpenCAPI 
> memory.
> 
> In the former case, we won't consider that for demotion with this series 
> because that is not instantiated via dax kmem. So yes definitely we 
> would need the ability to override this from userspace so that we could 
> put these remote memory NUMA nodes as demotion targets if we want.
> > > 

Is there a driver for the device (memory from the remote machine)?  If
so, we can adjust demotion order for it in the driver.

In general, I think that we can adjust demotion order inside kernel from
various information sources.  In addition to ACPI SLIT, we also have
HMAT, kmem driver, other drivers, etc.
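
Just to illustrate the effect of restricting the target set (this is a
toy model, not the kernel's actual target-selection code; the
`nearest_target` helper and the node sets are made up for the example,
using the 3-node distances quoted earlier in the thread):

```python
# Toy model, NOT kernel code: pick each source node's nearest allowed
# demotion target from a SLIT-style distance matrix.  Topology is the
# 3-node example quoted above (node 0: CPU + DRAM, node 1: DRAM-only,
# node 2: dax kmem).

distances = [
    [10, 40, 80],  # from node 0
    [40, 10, 80],  # from node 1
    [80, 80, 10],  # from node 2
]

def nearest_target(src, allowed, dist):
    """Closest allowed target for `src`, or None if none qualifies."""
    candidates = [n for n in allowed if n != src]
    return min(candidates, key=lambda n: dist[src][n], default=None)

# Any other N_MEMORY node allowed (pre-series behavior): the plain
# DRAM node 1 becomes node 0's demotion target.
assert nearest_target(0, {1, 2}, distances) == 1

# Restrict the allowed set to the kmem node, as with
# node_states[N_DEMOTION_TARGETS] = {2}: DRAM stays out of the path.
assert nearest_target(0, {2}, distances) == 2
assert nearest_target(1, {2}, distances) == 2
```

The same greedy "nearest allowed node" step works whichever source
(SLIT, HMAT, a driver, or a userspace override) populates the allowed
set.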

> > 
Best Regards,
Huang, Ying




* Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
  2022-04-27  1:29         ` ying.huang
@ 2022-04-27  2:57           ` Aneesh Kumar K V
  2022-04-27  3:34             ` ying.huang
  0 siblings, 1 reply; 35+ messages in thread
From: Aneesh Kumar K V @ 2022-04-27  2:57 UTC (permalink / raw)
  To: ying.huang, Jonathan Cameron, Jagdish Gediya
  Cc: linux-mm, linux-kernel, akpm, baolin.wang, dave.hansen,
	shy828301, weixugc, gthelen, dan.j.williams

On 4/27/22 6:59 AM, ying.huang@intel.com wrote:
> On Mon, 2022-04-25 at 20:14 +0530, Aneesh Kumar K V wrote:
>> On 4/25/22 7:27 PM, Jonathan Cameron wrote:
>>> On Mon, 25 Apr 2022 16:45:38 +0530
>>> Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
>>>
>>>> On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
>>>>> On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
>>>>>> Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
>>>>>> NUMA node which are N_MEMORY and slow memory(persistent memory)
>>>>>> only NUMA node which are also N_MEMORY. As the current demotion
>>>>>> target finding algorithm works based on N_MEMORY and best distance,
>>>>>> it will choose DRAM only NUMA node as demotion target instead of
>>>>>> persistent memory node on such systems. If DRAM only NUMA node is
>>>>>> filled with demoted pages then at some point new allocations can
>>>>>> start falling to persistent memory, so basically cold pages are in
>>>>>> fast memory (due to demotion) and new pages are in slow memory, this
>>>>>> is why persistent memory nodes should be utilized for demotion and
>>>>>> dram node should be avoided for demotion so that they can be used
>>>>>> for new allocations.
>>>>>>
>>>>>> Current implementation can work fine on the system where the memory
>>>>>> only numa nodes are possible only for persistent/slow memory but it
>>>>>> is not suitable for the likes of systems mentioned above.
>>>>>
>>>>> Can you share the NUMA topology information of your machine?  And the
>>>>> demotion order before and after your change?
>>>>>
>>>>> Whether it's good to use the PMEM nodes as the demotion targets of the
>>>>> DRAM-only node too?
>>>>
>>>> $ numactl -H
>>>> available: 2 nodes (0-1)
>>>> node 0 cpus: 0 1 2 3 4 5 6 7
>>>> node 0 size: 14272 MB
>>>> node 0 free: 13392 MB
>>>> node 1 cpus:
>>>> node 1 size: 2028 MB
>>>> node 1 free: 1971 MB
>>>> node distances:
>>>> node   0   1
>>>>     0:  10  40
>>>>     1:  40  10
>>>>
>>>> 1) without the N_DEMOTION_TARGETS patch series, 1 is the demotion target
>>>>      for 0 even when 1 is a DRAM node and there are no demotion targets for 1.
>>>
>>> I'm not convinced the distinction between DRAM and persistent memory is
>>> valid. There will definitely be systems with a large pool
>>> of remote DRAM (and potentially no NV memory) where the right choice
>>> is to demote to that DRAM pool.
>>>
>>> Basing the decision on whether the memory is from kmem or
>>> normal DRAM doesn't provide sufficient information to make the decision.
>>>
>>
>> Hence the suggestion for the ability to override this from userspace.
>> Now, for example, we could build a system with memory from the remote
>> machine (memory inception in case of power which will mostly be plugged
>> in as regular hotpluggable memory ) and a slow CXL memory or OpenCAPI
>> memory.
>>
>> In the former case, we won't consider that for demotion with this series
>> because that is not instantiated via dax kmem. So yes definitely we
>> would need the ability to override this from userspace so that we could
>> put these remote memory NUMA nodes as demotion targets if we want.
>>>>
> 
> Is there a driver for the device (memory from the remote machine)?  If
> so, we can adjust demotion order for it in the driver.
> 

At this point, it is managed by the hypervisor and is hotplugged into the 
LPAR with additional properties specified via the device tree. So there 
is no inception-specific device driver.

> In general, I think that we can adjust demotion order inside kernel from
> various information sources.  In addition to ACPI SLIT, we also have
> HMAT, kmem driver, other drivers, etc.
> 

Managing inception memory will in any case require a userspace component to 
track the owner machine for the remote memory. So we should be OK to 
have userspace manage the demotion order.

-aneesh




* Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
  2022-04-27  2:57           ` Aneesh Kumar K V
@ 2022-04-27  3:34             ` ying.huang
  0 siblings, 0 replies; 35+ messages in thread
From: ying.huang @ 2022-04-27  3:34 UTC (permalink / raw)
  To: Aneesh Kumar K V, Jonathan Cameron, Jagdish Gediya
  Cc: linux-mm, linux-kernel, akpm, baolin.wang, dave.hansen,
	shy828301, weixugc, gthelen, dan.j.williams

On Wed, 2022-04-27 at 08:27 +0530, Aneesh Kumar K V wrote:
> On 4/27/22 6:59 AM, ying.huang@intel.com wrote:
> > On Mon, 2022-04-25 at 20:14 +0530, Aneesh Kumar K V wrote:
> > > On 4/25/22 7:27 PM, Jonathan Cameron wrote:
> > > > On Mon, 25 Apr 2022 16:45:38 +0530
> > > > Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
> > > > 
> > > > > On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
> > > > > > On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> > > > > > > Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
> > > > > > > NUMA node which are N_MEMORY and slow memory(persistent memory)
> > > > > > > only NUMA node which are also N_MEMORY. As the current demotion
> > > > > > > target finding algorithm works based on N_MEMORY and best distance,
> > > > > > > it will choose DRAM only NUMA node as demotion target instead of
> > > > > > > persistent memory node on such systems. If DRAM only NUMA node is
> > > > > > > filled with demoted pages then at some point new allocations can
> > > > > > > start falling to persistent memory, so basically cold pages are in
> > > > > > > fast memory (due to demotion) and new pages are in slow memory, this
> > > > > > > is why persistent memory nodes should be utilized for demotion and
> > > > > > > dram node should be avoided for demotion so that they can be used
> > > > > > > for new allocations.
> > > > > > > 
> > > > > > > Current implementation can work fine on the system where the memory
> > > > > > > only numa nodes are possible only for persistent/slow memory but it
> > > > > > > is not suitable for the likes of systems mentioned above.
> > > > > > 
> > > > > > Can you share the NUMA topology information of your machine?  And the
> > > > > > demotion order before and after your change?
> > > > > > 
> > > > > > Whether it's good to use the PMEM nodes as the demotion targets of the
> > > > > > DRAM-only node too?
> > > > > 
> > > > > $ numactl -H
> > > > > available: 2 nodes (0-1)
> > > > > node 0 cpus: 0 1 2 3 4 5 6 7
> > > > > node 0 size: 14272 MB
> > > > > node 0 free: 13392 MB
> > > > > node 1 cpus:
> > > > > node 1 size: 2028 MB
> > > > > node 1 free: 1971 MB
> > > > > node distances:
> > > > > node   0   1
> > > > >     0:  10  40
> > > > >     1:  40  10
> > > > > 
> > > > > 1) without the N_DEMOTION_TARGETS patch series, 1 is the demotion target
> > > > >      for 0 even when 1 is a DRAM node and there are no demotion targets for 1.
> > > > 
> > > > I'm not convinced the distinction between DRAM and persistent memory is
> > > > valid. There will definitely be systems with a large pool
> > > > of remote DRAM (and potentially no NV memory) where the right choice
> > > > is to demote to that DRAM pool.
> > > > 
> > > > Basing the decision on whether the memory is from kmem or
> > > > normal DRAM doesn't provide sufficient information to make the decision.
> > > > 
> > > 
> > > Hence the suggestion for the ability to override this from userspace.
> > > Now, for example, we could build a system with memory from the remote
> > > machine (memory inception in case of power which will mostly be plugged
> > > in as regular hotpluggable memory ) and a slow CXL memory or OpenCAPI
> > > memory.
> > > 
> > > In the former case, we won't consider that for demotion with this series
> > > because that is not instantiated via dax kmem. So yes definitely we
> > > would need the ability to override this from userspace so that we could
> > > put these remote memory NUMA nodes as demotion targets if we want.
> > > > > 
> > 
> > Is there a driver for the device (memory from the remote machine)?  If
> > so, we can adjust demotion order for it in the driver.
> > 
> 
> At this point, it is managed by the hypervisor and is hotplugged into the 
> LPAR with additional properties specified via the device tree. So there 
> is no inception-specific device driver.

Because there's information in the device tree, I still think it's doable in
the kernel.  But it's up to you to choose the appropriate way.

Best Regards,
Huang, Ying

> > In general, I think that we can adjust demotion order inside kernel from
> > various information sources.  In addition to ACPI SLIT, we also have
> > HMAT, kmem driver, other drivers, etc.
> > 
> 
> Managing inception memory will in any case require a userspace component to 
> track the owner machine for the remote memory. So we should be OK to 
> have userspace manage the demotion order.
> 
> -aneesh
> 





end of thread, other threads:[~2022-04-27  3:34 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-22 19:55 [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS Jagdish Gediya
2022-04-22 19:55 ` [PATCH v3 1/7] mm: demotion: Fix demotion targets sharing among sources Jagdish Gediya
2022-04-24  3:25   ` ying.huang
2022-04-25  9:32     ` Jagdish Gediya
2022-04-26  7:26       ` ying.huang
2022-04-22 19:55 ` [PATCH v3 2/7] mm: demotion: Add new node state N_DEMOTION_TARGETS Jagdish Gediya
2022-04-22 20:29   ` Wei Xu
2022-04-22 19:55 ` [PATCH v3 3/7] drivers/base/node: Add support to write node_states[] via sysfs Jagdish Gediya
2022-04-22 20:32   ` Wei Xu
2022-04-24  6:25   ` Aneesh Kumar K.V
2022-04-25  9:42     ` Jagdish Gediya
2022-04-24  6:29   ` ying.huang
2022-04-22 19:55 ` [PATCH v3 4/7] device-dax/kmem: Set node state as N_DEMOTION_TARGETS Jagdish Gediya
2022-04-22 20:34   ` Wei Xu
2022-04-22 19:55 ` [PATCH v3 5/7] mm: demotion: Build demotion list based on N_DEMOTION_TARGETS Jagdish Gediya
2022-04-22 20:39   ` Wei Xu
2022-04-22 19:55 ` [PATCH v3 6/7] mm: demotion: expose per-node demotion targets via sysfs Jagdish Gediya
2022-04-22 20:47   ` Wei Xu
2022-04-23  7:30   ` kernel test robot
2022-04-23  8:38   ` kernel test robot
2022-04-22 19:55 ` [PATCH v3 7/7] docs: numa: Add documentation for demotion Jagdish Gediya
2022-04-24  3:19 ` [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS ying.huang
2022-04-25 11:15   ` Jagdish Gediya
2022-04-25 13:57     ` Jonathan Cameron
2022-04-25 14:44       ` Aneesh Kumar K V
2022-04-26 10:43         ` Jonathan Cameron
2022-04-27  1:29         ` ying.huang
2022-04-27  2:57           ` Aneesh Kumar K V
2022-04-27  3:34             ` ying.huang
2022-04-25 14:53       ` Aneesh Kumar K V
2022-04-26 10:37         ` Jonathan Cameron
2022-04-26  7:55     ` ying.huang
2022-04-26  9:07       ` Aneesh Kumar K V
2022-04-26  9:10         ` ying.huang
2022-04-26  9:37       ` Jagdish Gediya
