From: Mel Gorman <mgorman@suse.de>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>,
	Ingo Molnar <mingo@kernel.org>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Linux-MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 18/18] sched: Swap tasks when reschuling if a CPU on a target node is imbalanced
Date: Tue, 16 Jul 2013 10:41:31 +0100
Message-ID: <20130716094131.GG5055@suse.de>
In-Reply-To: <20130715201110.GO17211@twins.programming.kicks-ass.net>

On Mon, Jul 15, 2013 at 10:11:10PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 15, 2013 at 04:20:20PM +0100, Mel Gorman wrote:
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 53d8465..d679b01 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -4857,10 +4857,13 @@ fail:
> >  
> >  #ifdef CONFIG_NUMA_BALANCING
> >  /* Migrate current task p to target_cpu */
> > -int migrate_task_to(struct task_struct *p, int target_cpu)
> > +int migrate_task_to(struct task_struct *p, int target_cpu,
> > +		    struct task_struct *swap_p)
> >  {
> >  	struct migration_arg arg = { p, target_cpu };
> >  	int curr_cpu = task_cpu(p);
> > +	struct rq *rq;
> > +	int retval;
> >  
> >  	if (curr_cpu == target_cpu)
> >  		return 0;
> > @@ -4868,7 +4871,39 @@ int migrate_task_to(struct task_struct *p, int target_cpu)
> >  	if (!cpumask_test_cpu(target_cpu, tsk_cpus_allowed(p)))
> >  		return -EINVAL;
> >  
> > -	return stop_one_cpu(curr_cpu, migration_cpu_stop, &arg);
> > +	if (swap_p == NULL)
> > +		return stop_one_cpu(curr_cpu, migration_cpu_stop, &arg);
> > +
> > +	/* Make sure the target is still running the expected task */
> > +	rq = cpu_rq(target_cpu);
> > +	local_irq_disable();
> > +	raw_spin_lock(&rq->lock);
> 
> raw_spin_lock_irq() :-)
> 

damnit!

> > +	if (rq->curr != swap_p) {
> > +		raw_spin_unlock(&rq->lock);
> > +		local_irq_enable();
> > +		return -EINVAL;
> > +	}
> > +
> > +	/* Take a reference on the running task on the target cpu */
> > +	get_task_struct(swap_p);
> > +	raw_spin_unlock(&rq->lock);
> > +	local_irq_enable();
> 
> raw_spin_unlock_irq()
> 

Fixed.
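
For reference, that section with Peter's suggestions applied would look
something like this (still untested, only the lock/unlock calls change):

	/* Make sure the target is still running the expected task */
	rq = cpu_rq(target_cpu);
	raw_spin_lock_irq(&rq->lock);
	if (rq->curr != swap_p) {
		raw_spin_unlock_irq(&rq->lock);
		return -EINVAL;
	}

	/* Take a reference on the running task on the target cpu */
	get_task_struct(swap_p);
	raw_spin_unlock_irq(&rq->lock);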

> > +
> > +	/* Move current running task to target CPU */
> > +	retval = stop_one_cpu(curr_cpu, migration_cpu_stop, &arg);
> > +	if (raw_smp_processor_id() != target_cpu) {
> > +		put_task_struct(swap_p);
> > +		return retval;
> > +	}
> 
> (1)
> 
> > +	/* Move the remote task to the CPU just vacated */
> > +	local_irq_disable();
> > +	if (raw_smp_processor_id() == target_cpu)
> > +		__migrate_task(swap_p, target_cpu, curr_cpu);
> > +	local_irq_enable();
> > +
> > +	put_task_struct(swap_p);
> > +	return retval;
> >  }
> 
> So I know this is very much like what Ingo did in his patches, but
> there's a whole heap of 'problems' with this approach to task flipping.
> 
> So at (1) we just moved ourselves to the remote cpu. This might have
> left our original cpu idle and we might have done a newidle balance,
> even though we intend another task to run here.
> 

True. At a minimum, a task taking a parallel numa hinting fault that
selected the source nid as its preferred nid might pass the idle_cpu
check and move there immediately.

> At (1) we just moved ourselves to the remote cpu, however we might not
> be eligible to run, so moving the other task to our original CPU might
> take a while -- exacerbating the previously mentioned issue.
> 

Also true.

> Since (1) might take a whole lot of time, it might become rather
> unlikely that our task @swap_p is still queued on the cpu where we
> expected him to be.
> 

Which would hurt the intentions of patch 17.

hmm.

I did not want to do this lazily via the active load balancer because it
might never happen, or by the time it did happen it might no longer be
the correct decision. This applies whether I set numa_preferred_nid or
add a numa_preferred_cpu.

What I think I can do is set a preferred CPU, wait until the next
wakeup and then move the task during select_task_rq as long as load
balance permits. I cannot test it right now as all my test machines are
unplugged as part of a move, but the patch against patch 17 is below.
Once p->numa_preferred_cpu exists, I should be able to lazily swap tasks
by setting p->numa_preferred_cpu.

Obviously this is untested and I need to give it more thought, but it
shows the general idea of what I mean.

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 454ad2e..f388673 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1503,9 +1503,9 @@ struct task_struct {
 #ifdef CONFIG_NUMA_BALANCING
 	int numa_scan_seq;
 	int numa_migrate_seq;
+	int numa_preferred_cpu;
 	unsigned int numa_scan_period;
 	unsigned int numa_scan_period_max;
-	unsigned long numa_migrate_retry;
 	u64 node_stamp;			/* migration stamp  */
 	struct callback_head numa_work;
 
@@ -1604,6 +1604,14 @@ struct task_struct {
 #ifdef CONFIG_NUMA_BALANCING
 extern void task_numa_fault(int last_node, int node, int pages, bool migrated);
 extern void set_numabalancing_state(bool enabled);
+static inline int numa_preferred_cpu(struct task_struct *p)
+{
+	return p->numa_preferred_cpu;
+}
+static inline void reset_numa_preferred_cpu(struct task_struct *p)
+{
+	p->numa_preferred_cpu = -1;
+}
 #else
 static inline void task_numa_fault(int last_node, int node, int pages,
 				   bool migrated)
@@ -1612,6 +1620,14 @@ static inline void task_numa_fault(int last_node, int node, int pages,
 static inline void set_numabalancing_state(bool enabled)
 {
 }
+static inline int numa_preferred_cpu(struct task_struct *p)
+{
+	return -1;
+}
+
+static inline void reset_numa_preferred_cpu(struct task_struct *p)
+{
+}
 #endif
 
 static inline struct pid *task_pid(struct task_struct *task)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 53d8465..309a27d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1553,6 +1553,9 @@ int wake_up_state(struct task_struct *p, unsigned int state)
  */
 static void __sched_fork(struct task_struct *p)
 {
+#ifdef CONFIG_NUMA_BALANCING
+	p->numa_preferred_cpu		= -1;
+#endif
 	p->on_rq			= 0;
 
 	p->se.on_rq			= 0;
@@ -1591,6 +1594,7 @@ static void __sched_fork(struct task_struct *p)
 	p->node_stamp = 0ULL;
 	p->numa_scan_seq = p->mm ? p->mm->numa_scan_seq : 0;
 	p->numa_migrate_seq = 0;
+	p->numa_preferred_cpu = -1;
 	p->numa_scan_period = sysctl_numa_balancing_scan_delay;
 	p->numa_preferred_nid = -1;
 	p->numa_work.next = &p->numa_work;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 07a9f40..21806b5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -940,14 +940,14 @@ static void numa_migrate_preferred(struct task_struct *p)
 	int preferred_cpu = task_cpu(p);
 
 	/* Success if task is already running on preferred CPU */
-	p->numa_migrate_retry = 0;
+	p->numa_preferred_cpu = -1;
 	if (cpu_to_node(preferred_cpu) == p->numa_preferred_nid)
 		return;
 
 	/* Otherwise, try migrate to a CPU on the preferred node */
 	preferred_cpu = task_numa_find_cpu(p, p->numa_preferred_nid);
 	if (migrate_task_to(p, preferred_cpu) != 0)
-		p->numa_migrate_retry = jiffies + HZ*5;
+		p->numa_preferred_cpu = preferred_cpu;
 }
 
 static void task_numa_placement(struct task_struct *p)
@@ -1052,10 +1052,6 @@ void task_numa_fault(int last_nidpid, int node, int pages, bool migrated)
 
 	task_numa_placement(p);
 
-	/* Retry task to preferred node migration if it previously failed */
-	if (p->numa_migrate_retry && time_after(jiffies, p->numa_migrate_retry))
-		numa_migrate_preferred(p);
-
 	/* Record the fault, double the weight if pages were migrated */
 	p->numa_faults_buffer[task_faults_idx(node, priv)] += pages << migrated;
 }
@@ -3538,10 +3534,25 @@ select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flags)
 	int new_cpu = cpu;
 	int want_affine = 0;
 	int sync = wake_flags & WF_SYNC;
+	int numa_cpu;
 
 	if (p->nr_cpus_allowed == 1)
 		return prev_cpu;
 
+	/*
+	 * If a previous NUMA CPU migration failed then recheck now and use a
+	 * CPU near the preferred CPU if it would not introduce load imbalance.
+	 */
+	numa_cpu = numa_preferred_cpu(p);
+	if (numa_cpu != -1 && cpumask_test_cpu(numa_cpu, tsk_cpus_allowed(p))) {
+		int least_loaded_cpu;
+
+		reset_numa_preferred_cpu(p);
+		least_loaded_cpu = task_numa_find_cpu(p, cpu_to_node(numa_cpu));
+		if (least_loaded_cpu != prev_cpu)
+			return least_loaded_cpu;
+	}
+
 	if (sd_flag & SD_BALANCE_WAKE) {
 		if (cpumask_test_cpu(cpu, tsk_cpus_allowed(p)))
 			want_affine = 1;
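
The matching change to migrate_task_to() is not included above. The idea
would be to drop the synchronous double move and only record the vacated
CPU as a hint for swap_p, leaving the select_task_rq_fair() hook above to
act on it at the next wakeup. Roughly the following, equally untested and
glossing over how patch 17 pins swap_p and any serialisation on the hint:

	/* Migrate current task p to target_cpu, lazily swapping with swap_p */
	int migrate_task_to(struct task_struct *p, int target_cpu,
			    struct task_struct *swap_p)
	{
		struct migration_arg arg = { p, target_cpu };
		int curr_cpu = task_cpu(p);
		int retval;

		if (curr_cpu == target_cpu)
			return 0;

		if (!cpumask_test_cpu(target_cpu, tsk_cpus_allowed(p)))
			return -EINVAL;

		/* Move the current task to the target CPU as before */
		retval = stop_one_cpu(curr_cpu, migration_cpu_stop, &arg);

		/*
		 * Instead of stopping a second CPU to pull swap_p over,
		 * hint that it should move to the CPU just vacated. The
		 * hint is checked against the load balance and cleared in
		 * select_task_rq_fair() on the next wakeup.
		 */
		if (!retval && swap_p &&
		    cpumask_test_cpu(curr_cpu, tsk_cpus_allowed(swap_p)))
			swap_p->numa_preferred_cpu = curr_cpu;

		return retval;
	}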
