* [RFC PATCH 0/4] pseudo-interleaving NUMA placement
From: riel @ 2013-11-26 22:03 UTC (permalink / raw)
  To: linux-mm; +Cc: linux-kernel, mgorman, chegu_vinod, peterz

This patch set attempts to implement a pseudo-interleaving
policy for workloads that do not fit in one NUMA node.

For each NUMA group, we track the NUMA nodes on which the
workload is actively running, and try to concentrate the
memory on those NUMA nodes.
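
Roughly speaking, the policy the later patches implement boils down
to something like the sketch below.  This is a simplification of the
should_numa_migrate() function added in patch 4/4; active_nodes is
the per-group nodemask built in patch 3/4, and group_faults_on() is
just shorthand here for a group's private plus shared fault count on
a node, not a real helper in this series:

	static bool sketch_should_migrate(struct numa_group *ng,
					  int src_nid, int dst_nid)
	{
		/* Never push shared pages outside the active set. */
		if (!node_isset(dst_nid, ng->active_nodes))
			return false;
		/* Always pull them in from outside the active set. */
		if (!node_isset(src_nid, ng->active_nodes))
			return true;
		/* Inside the set, only move from busier to less busy nodes. */
		return group_faults_on(ng, dst_nid) <
		       group_faults_on(ng, src_nid) * 3 / 4;
	}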

Unfortunately, the scheduler appears to move tasks around
quite a bit, leading to nodes being dropped from the
"active nodes" mask, and re-added a little later, causing
excessive memory migration.

I am not sure how to solve that. Hopefully somebody will
have an idea :)



* [RFC PATCH 1/4] remove p->numa_migrate_deferred
From: riel @ 2013-11-26 22:03 UTC (permalink / raw)
  To: linux-mm; +Cc: linux-kernel, mgorman, chegu_vinod, peterz

From: Rik van Riel <riel@redhat.com>

Excessive migration of pages can hurt the performance of workloads
that span multiple NUMA nodes.  However, it turns out that the
p->numa_migrate_deferred knob is a really big hammer, which does
reduce migration rates, but does not actually help performance.

It is time to rip it out, and replace it with something smarter.

Signed-off-by: Rik van Riel <riel@redhat.com>
---
 include/linux/sched.h |  1 -
 kernel/sched/fair.c   |  8 --------
 kernel/sysctl.c       |  7 -------
 mm/mempolicy.c        | 45 ---------------------------------------------
 4 files changed, 61 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 42f2baf..9e4cb598 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1345,7 +1345,6 @@ struct task_struct {
 	unsigned int numa_scan_period;
 	unsigned int numa_scan_period_max;
 	int numa_preferred_nid;
-	int numa_migrate_deferred;
 	unsigned long numa_migrate_retry;
 	u64 node_stamp;			/* migration stamp  */
 	struct callback_head numa_work;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7f9b376..410858e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -794,14 +794,6 @@ unsigned int sysctl_numa_balancing_scan_size = 256;
 /* Scan @scan_size MB every @scan_period after an initial @scan_delay in ms */
 unsigned int sysctl_numa_balancing_scan_delay = 1000;
 
-/*
- * After skipping a page migration on a shared page, skip N more numa page
- * migrations unconditionally. This reduces the number of NUMA migrations
- * in shared memory workloads, and has the effect of pulling tasks towards
- * where their memory lives, over pulling the memory towards the task.
- */
-unsigned int sysctl_numa_balancing_migrate_deferred = 16;
-
 static unsigned int task_nr_scan_windows(struct task_struct *p)
 {
 	unsigned long rss = 0;
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 14c4f51..821e3f1 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -392,13 +392,6 @@ static struct ctl_table kern_table[] = {
 		.mode           = 0644,
 		.proc_handler   = proc_dointvec,
 	},
-	{
-		.procname       = "numa_balancing_migrate_deferred",
-		.data           = &sysctl_numa_balancing_migrate_deferred,
-		.maxlen         = sizeof(unsigned int),
-		.mode           = 0644,
-		.proc_handler   = proc_dointvec,
-	},
 #endif /* CONFIG_NUMA_BALANCING */
 #endif /* CONFIG_SCHED_DEBUG */
 	{
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 9a2f6dd..0522aa2 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2247,35 +2247,6 @@ static void sp_free(struct sp_node *n)
 	kmem_cache_free(sn_cache, n);
 }
 
-#ifdef CONFIG_NUMA_BALANCING
-static bool numa_migrate_deferred(struct task_struct *p, int last_cpupid)
-{
-	/* Never defer a private fault */
-	if (cpupid_match_pid(p, last_cpupid))
-		return false;
-
-	if (p->numa_migrate_deferred) {
-		p->numa_migrate_deferred--;
-		return true;
-	}
-	return false;
-}
-
-static inline void defer_numa_migrate(struct task_struct *p)
-{
-	p->numa_migrate_deferred = sysctl_numa_balancing_migrate_deferred;
-}
-#else
-static inline bool numa_migrate_deferred(struct task_struct *p, int last_cpupid)
-{
-	return false;
-}
-
-static inline void defer_numa_migrate(struct task_struct *p)
-{
-}
-#endif /* CONFIG_NUMA_BALANCING */
-
 /**
  * mpol_misplaced - check whether current page node is valid in policy
  *
@@ -2378,24 +2349,8 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		 */
 		last_cpupid = page_cpupid_xchg_last(page, this_cpupid);
 		if (!cpupid_pid_unset(last_cpupid) && cpupid_to_nid(last_cpupid) != thisnid) {
-
-			/* See sysctl_numa_balancing_migrate_deferred comment */
-			if (!cpupid_match_pid(current, last_cpupid))
-				defer_numa_migrate(current);
-
 			goto out;
 		}
-
-		/*
-		 * The quadratic filter above reduces extraneous migration
-		 * of shared pages somewhat. This code reduces it even more,
-		 * reducing the overhead of page migrations of shared pages.
-		 * This makes workloads with shared pages rely more on
-		 * "move task near its memory", and less on "move memory
-		 * towards its task", which is exactly what we want.
-		 */
-		if (numa_migrate_deferred(current, last_cpupid))
-			goto out;
 	}
 
 	if (curnid != polnid)
-- 
1.8.3.1



* [RFC PATCH 2/4] track from which nodes NUMA faults are triggered
From: riel @ 2013-11-26 22:03 UTC (permalink / raw)
  To: linux-mm; +Cc: linux-kernel, mgorman, chegu_vinod, peterz

From: Rik van Riel <riel@redhat.com>

Track which nodes NUMA faults are triggered from, i.e. the nodes a
task was running on when it took the fault. This uses a mechanism
similar to the one already used to track where the memory involved
in NUMA faults resides.

This is used, in the next patch, to build up a bitmap of which nodes
a workload is actively running on.
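
For reference, after this patch the single kzalloc() done in
task_numa_fault() covers 8*nr_node_ids unsigned longs, split up
roughly like this (each region holds one private and one shared
counter per node, with base = p->numa_faults):

	p->numa_faults             = base                   (decayed, per memory node)
	p->numa_faults_from        = base + 2*nr_node_ids   (decayed, per faulting node)
	p->numa_faults_buffer      = base + 4*nr_node_ids   (this scan window, per memory node)
	p->numa_faults_from_buffer = base + 6*nr_node_ids   (this scan window, per faulting node)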

Signed-off-by: Rik van Riel <riel@redhat.com>
---
 include/linux/sched.h | 10 ++++++++--
 kernel/sched/fair.c   | 30 +++++++++++++++++++++++-------
 2 files changed, 31 insertions(+), 9 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 9e4cb598..e4b00d8 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1368,6 +1368,14 @@ struct task_struct {
 	unsigned long *numa_faults_buffer;
 
 	/*
+	 * Track the nodes where faults are incurred. This is not very
+	 * interesting on a per-task basis, but it helps with smarter
+	 * numa memory placement for groups of processes.
+	 */
+	unsigned long *numa_faults_from;
+	unsigned long *numa_faults_from_buffer;
+
+	/*
 	 * numa_faults_locality tracks if faults recorded during the last
 	 * scan window were remote/local. The task scan period is adapted
 	 * based on the locality of the faults with different weights
@@ -1467,8 +1475,6 @@ extern void task_numa_fault(int last_node, int node, int pages, int flags);
 extern pid_t task_numa_group_id(struct task_struct *p);
 extern void set_numabalancing_state(bool enabled);
 extern void task_numa_free(struct task_struct *p);
-
-extern unsigned int sysctl_numa_balancing_migrate_deferred;
 #else
 static inline void task_numa_fault(int last_node, int node, int pages,
 				   int flags)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 410858e..89b5217 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -870,6 +870,7 @@ struct numa_group {
 
 	struct rcu_head rcu;
 	unsigned long total_faults;
+	unsigned long *faults_from;
 	unsigned long faults[0];
 };
 
@@ -1327,10 +1328,11 @@ static void task_numa_placement(struct task_struct *p)
 		int priv, i;
 
 		for (priv = 0; priv < 2; priv++) {
-			long diff;
+			long diff, f_diff;
 
 			i = task_faults_idx(nid, priv);
 			diff = -p->numa_faults[i];
+			f_diff = -p->numa_faults_from[i];
 
 			/* Decay existing window, copy faults since last scan */
 			p->numa_faults[i] >>= 1;
@@ -1338,12 +1340,18 @@ static void task_numa_placement(struct task_struct *p)
 			fault_types[priv] += p->numa_faults_buffer[i];
 			p->numa_faults_buffer[i] = 0;
 
+			p->numa_faults_from[i] >>= 1;
+			p->numa_faults_from[i] += p->numa_faults_from_buffer[i];
+			p->numa_faults_from_buffer[i] = 0;
+
 			faults += p->numa_faults[i];
 			diff += p->numa_faults[i];
+			f_diff += p->numa_faults_from[i];
 			p->total_numa_faults += diff;
 			if (p->numa_group) {
 				/* safe because we can only change our own group */
 				p->numa_group->faults[i] += diff;
+				p->numa_group->faults_from[i] += f_diff;
 				p->numa_group->total_faults += diff;
 				group_faults += p->numa_group->faults[i];
 			}
@@ -1412,7 +1420,7 @@ static void task_numa_group(struct task_struct *p, int cpupid, int flags,
 
 	if (unlikely(!p->numa_group)) {
 		unsigned int size = sizeof(struct numa_group) +
-				    2*nr_node_ids*sizeof(unsigned long);
+				    4*nr_node_ids*sizeof(unsigned long);
 
 		grp = kzalloc(size, GFP_KERNEL | __GFP_NOWARN);
 		if (!grp)
@@ -1422,8 +1430,10 @@ static void task_numa_group(struct task_struct *p, int cpupid, int flags,
 		spin_lock_init(&grp->lock);
 		INIT_LIST_HEAD(&grp->task_list);
 		grp->gid = p->pid;
+		/* Second half of the array tracks where faults come from */
+		grp->faults_from = grp->faults + 2 * nr_node_ids;
 
-		for (i = 0; i < 2*nr_node_ids; i++)
+		for (i = 0; i < 4*nr_node_ids; i++)
 			grp->faults[i] = p->numa_faults[i];
 
 		grp->total_faults = p->total_numa_faults;
@@ -1482,7 +1492,7 @@ static void task_numa_group(struct task_struct *p, int cpupid, int flags,
 
 	double_lock(&my_grp->lock, &grp->lock);
 
-	for (i = 0; i < 2*nr_node_ids; i++) {
+	for (i = 0; i < 4*nr_node_ids; i++) {
 		my_grp->faults[i] -= p->numa_faults[i];
 		grp->faults[i] += p->numa_faults[i];
 	}
@@ -1509,7 +1519,7 @@ void task_numa_free(struct task_struct *p)
 
 	if (grp) {
 		spin_lock(&grp->lock);
-		for (i = 0; i < 2*nr_node_ids; i++)
+		for (i = 0; i < 4*nr_node_ids; i++)
 			grp->faults[i] -= p->numa_faults[i];
 		grp->total_faults -= p->total_numa_faults;
 
@@ -1522,6 +1532,8 @@ void task_numa_free(struct task_struct *p)
 
 	p->numa_faults = NULL;
 	p->numa_faults_buffer = NULL;
+	p->numa_faults_from = NULL;
+	p->numa_faults_from_buffer = NULL;
 	kfree(numa_faults);
 }
 
@@ -1532,6 +1544,7 @@ void task_numa_fault(int last_cpupid, int node, int pages, int flags)
 {
 	struct task_struct *p = current;
 	bool migrated = flags & TNF_MIGRATED;
+	int this_node = task_node(current);
 	int priv;
 
 	if (!numabalancing_enabled)
@@ -1547,7 +1560,7 @@ void task_numa_fault(int last_cpupid, int node, int pages, int flags)
 
 	/* Allocate buffer to track faults on a per-node basis */
 	if (unlikely(!p->numa_faults)) {
-		int size = sizeof(*p->numa_faults) * 2 * nr_node_ids;
+		int size = sizeof(*p->numa_faults) * 4 * nr_node_ids;
 
 		/* numa_faults and numa_faults_buffer share the allocation */
 		p->numa_faults = kzalloc(size * 2, GFP_KERNEL|__GFP_NOWARN);
@@ -1555,7 +1568,9 @@ void task_numa_fault(int last_cpupid, int node, int pages, int flags)
 			return;
 
 		BUG_ON(p->numa_faults_buffer);
-		p->numa_faults_buffer = p->numa_faults + (2 * nr_node_ids);
+		p->numa_faults_from = p->numa_faults + (2 * nr_node_ids);
+		p->numa_faults_buffer = p->numa_faults + (4 * nr_node_ids);
+		p->numa_faults_from_buffer = p->numa_faults + (6 * nr_node_ids);
 		p->total_numa_faults = 0;
 		memset(p->numa_faults_locality, 0, sizeof(p->numa_faults_locality));
 	}
@@ -1585,6 +1600,7 @@ void task_numa_fault(int last_cpupid, int node, int pages, int flags)
 		p->numa_pages_migrated += pages;
 
 	p->numa_faults_buffer[task_faults_idx(node, priv)] += pages;
+	p->numa_faults_from_buffer[task_faults_idx(this_node, priv)] += pages;
 	p->numa_faults_locality[!!(flags & TNF_FAULT_LOCAL)] += pages;
 }
 
-- 
1.8.3.1



* [RFC PATCH 3/4] build per numa_group active node mask from faults_from statistics
From: riel @ 2013-11-26 22:03 UTC (permalink / raw)
  To: linux-mm; +Cc: linux-kernel, mgorman, chegu_vinod, peterz

From: Rik van Riel <riel@redhat.com>

The faults_from statistics are used to maintain an active_nodes nodemask
per numa_group. This allows us to be smarter about when to do numa migrations.
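
As a (hypothetical) example of the threshold below: on a four node
system where a group's per-node faults_from counts are 1000, 900,
350 and 50, max_faults is 1000 and the cut-off is 400, so the first
two nodes end up in active_nodes and the last two do not.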

Signed-off-by: Rik van Riel <riel@redhat.com>
---
 kernel/sched/fair.c | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 89b5217..91b8f11 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -869,6 +869,7 @@ struct numa_group {
 	struct list_head task_list;
 
 	struct rcu_head rcu;
+	nodemask_t active_nodes;
 	unsigned long total_faults;
 	unsigned long *faults_from;
 	unsigned long faults[0];
@@ -1228,6 +1229,34 @@ static void numa_migrate_preferred(struct task_struct *p)
 	task_numa_migrate(p);
 }
 
+static void update_numa_active_node_mask(struct task_struct *p)
+{
+	unsigned long faults, max_faults = 0;
+	struct numa_group *numa_group = p->numa_group;
+	int nid;
+
+	for_each_online_node(nid) {
+		faults = numa_group->faults_from[task_faults_idx(nid, 0)] +
+			 numa_group->faults_from[task_faults_idx(nid, 1)];
+		if (faults > max_faults)
+			max_faults = faults;
+	}
+
+	/*
+	 * Mark any node that sees more than 40% of the maximum
+	 * number of faults (half, minus some hysteresis) as
+	 * part of this group's active nodes.
+	 */
+	for_each_online_node(nid) {
+		faults = numa_group->faults_from[task_faults_idx(nid, 0)] +
+			 numa_group->faults_from[task_faults_idx(nid, 1)];
+		if (faults > max_faults * 4 / 10)
+			node_set(nid, numa_group->active_nodes);
+		else
+			node_clear(nid, numa_group->active_nodes);
+	}
+}
+
 /*
  * When adapting the scan rate, the period is divided into NUMA_PERIOD_SLOTS
  * increments. The more local the fault statistics are, the higher the scan
@@ -1387,6 +1416,8 @@ static void task_numa_placement(struct task_struct *p)
 			}
 		}
 
+		update_numa_active_node_mask(p);
+
 		spin_unlock(group_lock);
 	}
 
@@ -1433,6 +1464,8 @@ static void task_numa_group(struct task_struct *p, int cpupid, int flags,
 		/* Second half of the array tracks where faults come from */
 		grp->faults_from = grp->faults + 2 * nr_node_ids;
 
+		node_set(task_node(current), grp->active_nodes);
+
 		for (i = 0; i < 4*nr_node_ids; i++)
 			grp->faults[i] = p->numa_faults[i];
 
-- 
1.8.3.1



* [RFC PATCH 4/4] use active_nodes nodemask to decide on numa migrations
From: riel @ 2013-11-26 22:03 UTC (permalink / raw)
  To: linux-mm; +Cc: linux-kernel, mgorman, chegu_vinod, peterz

From: Rik van Riel <riel@redhat.com>

Use the active_nodes nodemask to make smarter decisions on NUMA migrations.

In order to maximize performance of workloads that do not fit in one NUMA
node, we want to satisfy the following criteria:
1) keep private memory local to each thread
2) avoid excessive NUMA migration of pages
3) distribute shared memory across the active nodes, to
   maximize memory bandwidth available to the workload

This patch accomplishes that by implementing the following policy for
NUMA migrations:
1) always migrate on a private fault
2) never migrate to a node that is not in the set of active nodes
   for the numa_group
3) always migrate from a node outside of the set of active nodes,
   to a node that is in that set
4) within the set of active nodes in the numa_group, only migrate
   from a node with more NUMA page faults, to a node with fewer
   NUMA page faults, with a 25% margin to avoid ping-ponging
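
(A hypothetical example of rule 4: with 800 group faults on the source
node and 500 on the destination, 500 < 800 * 3/4 = 600, so the page is
migrated; with 700 faults on the destination it stays where it is.)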

This should result in most pages of a workload ending up on the
actively used nodes, with minimal ping-ponging of pages between
those nodes.

Unfortunately, it appears that something (scheduler idle balancer?)
is moving tasks around enough that nodes get dropped and added to
the set of active nodes semi-randomly...

Not-yet-signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
---
 include/linux/sched.h |  7 +++++++
 kernel/sched/fair.c   | 38 ++++++++++++++++++++++++++++++++++++++
 mm/mempolicy.c        |  3 +++
 3 files changed, 48 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index e4b00d8..ee17c28 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1475,6 +1475,8 @@ extern void task_numa_fault(int last_node, int node, int pages, int flags);
 extern pid_t task_numa_group_id(struct task_struct *p);
 extern void set_numabalancing_state(bool enabled);
 extern void task_numa_free(struct task_struct *p);
+extern bool should_numa_migrate(struct task_struct *p, int last_cpupid,
+				int src_nid, int dst_nid);
 #else
 static inline void task_numa_fault(int last_node, int node, int pages,
 				   int flags)
@@ -1490,6 +1492,11 @@ static inline void set_numabalancing_state(bool enabled)
 static inline void task_numa_free(struct task_struct *p)
 {
 }
+static inline bool should_numa_migrate(struct task_struct *p, int last_cpupid,
+				       int src_nid, int dst_nid)
+{
+	return true;
+}
 #endif
 
 static inline struct pid *task_pid(struct task_struct *task)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 91b8f11..8906aa4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -931,6 +931,44 @@ static inline unsigned long group_weight(struct task_struct *p, int nid)
 	return 1000 * group_faults(p, nid) / p->numa_group->total_faults;
 }
 
+bool should_numa_migrate(struct task_struct *p, int last_cpupid,
+			 int src_nid, int dst_nid)
+{
+	struct numa_group *ng = p->numa_group;
+	unsigned long src_faults, dst_faults;
+
+	/* Always allow migrate on private faults */
+	if (cpupid_match_pid(p, last_cpupid))
+		return true;
+
+	/* A shared fault, but p->numa_group has not been set up yet. */
+	if (!ng)
+		return true;
+
+	/*
+	 * Do not migrate if the destination is not a node that
+	 * is actively used by this numa group.
+	 */
+	if (!node_isset(dst_nid, ng->active_nodes))
+		return false;
+
+	/*
+	 * Source is a node that is not actively used by this
+	 * numa group, while the destination is. Migrate.
+	 */
+	if (!node_isset(src_nid, ng->active_nodes))
+		return true;
+
+	/*
+	 * Both source and destination are nodes in active
+	 * use by this numa group. Maximize memory bandwidth
+	 * by migrating from more heavily used nodes, to less
+	 * heavily used ones, spreading the load around.
+	 * Use a 1/4 hysteresis to avoid spurious page movement.
+	 */
+	return group_faults(p, dst_nid) < group_faults(p, src_nid) * 3 / 4;
+}
+
 static unsigned long weighted_cpuload(const int cpu);
 static unsigned long source_load(int cpu, int type);
 static unsigned long target_load(int cpu, int type);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 0522aa2..e314338 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2351,6 +2351,9 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		if (!cpupid_pid_unset(last_cpupid) && cpupid_to_nid(last_cpupid) != thisnid) {
 			goto out;
 		}
+
+		if (!should_numa_migrate(current, last_cpupid, curnid, polnid))
+			goto out;
 	}
 
 	if (curnid != polnid)
-- 
1.8.3.1


