linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/8] correct load_balance()
@ 2013-02-14  5:48 Joonsoo Kim
  2013-02-14  5:48 ` [PATCH 1/8] sched: change position of resched_cpu() in load_balance() Joonsoo Kim
                   ` (8 more replies)
  0 siblings, 9 replies; 30+ messages in thread
From: Joonsoo Kim @ 2013-02-14  5:48 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: Srivatsa Vaddagiri, linux-kernel, Joonsoo Kim

Commit 88b8dac0 makes load_balance() consider other cpus in its group.
But, there are some missing parts for this feature to work properly.
This patchset corrects these things and makes load_balance() more robust.

The other patches are related to LBF_ALL_PINNED. This is a fallback mechanism
used when no task can be moved because of cpu affinity. But currently,
if the imbalance is too small compared to a task's load, we leave the
LBF_ALL_PINNED flag set and a 'redo' is triggered. This is not our intention,
so correct it.
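
For reference, the fallback path in question looks roughly like this
(a simplified sketch of load_balance() in v3.8-rc7, not the verbatim code):

	/* All tasks on the busiest runqueue were pinned by CPU affinity */
	if (unlikely(env.flags & LBF_ALL_PINNED)) {
		/* drop the busiest cpu from the candidate mask and retry */
		cpumask_clear_cpu(cpu_of(busiest), cpus);
		if (!cpumask_empty(cpus)) {
			env.loop = 0;
			env.loop_break = sched_nr_migrate_break;
			goto redo;
		}
		goto out_balanced;
	}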

These are based on v3.8-rc7.

Joonsoo Kim (8):
  sched: change position of resched_cpu() in load_balance()
  sched: explicitly cpu_idle_type checking in rebalance_domains()
  sched: don't consider other cpus in our group in case of NEWLY_IDLE
  sched: clean up move_task() and move_one_task()
  sched: move up affinity check to mitigate useless redoing overhead
  sched: rename load_balance_tmpmask to load_balance_cpu_active
  sched: prevent to re-select dst-cpu in load_balance()
  sched: reset lb_env when redo in load_balance()

 kernel/sched/core.c |    9 +++--
 kernel/sched/fair.c |  107 +++++++++++++++++++++++++++++----------------------
 2 files changed, 67 insertions(+), 49 deletions(-)

-- 
1.7.9.5



* [PATCH 1/8] sched: change position of resched_cpu() in load_balance()
  2013-02-14  5:48 [PATCH 0/8] correct load_balance() Joonsoo Kim
@ 2013-02-14  5:48 ` Joonsoo Kim
  2013-03-19 12:59   ` Peter Zijlstra
  2013-02-14  5:48 ` [PATCH 2/8] sched: explicitly cpu_idle_type checking in rebalance_domains() Joonsoo Kim
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 30+ messages in thread
From: Joonsoo Kim @ 2013-02-14  5:48 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: Srivatsa Vaddagiri, linux-kernel, Joonsoo Kim

cur_ld_moved is overwritten by the next move_tasks() pass when env.flags
has LBF_NEED_BREAK set, so there is a possibility that we miss doing
resched_cpu(). Correct it by moving resched_cpu() above the
LBF_NEED_BREAK check.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 81fa536..6f72851 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5070,17 +5070,17 @@ more_balance:
 		double_rq_unlock(env.dst_rq, busiest);
 		local_irq_restore(flags);
 
-		if (env.flags & LBF_NEED_BREAK) {
-			env.flags &= ~LBF_NEED_BREAK;
-			goto more_balance;
-		}
-
 		/*
 		 * some other cpu did the load balance for us.
 		 */
 		if (cur_ld_moved && env.dst_cpu != smp_processor_id())
 			resched_cpu(env.dst_cpu);
 
+		if (env.flags & LBF_NEED_BREAK) {
+			env.flags &= ~LBF_NEED_BREAK;
+			goto more_balance;
+		}
+
 		/*
 		 * Revisit (affine) tasks on src_cpu that couldn't be moved to
 		 * us and move them to an alternate dst_cpu in our sched_group
-- 
1.7.9.5



* [PATCH 2/8] sched: explicitly cpu_idle_type checking in rebalance_domains()
  2013-02-14  5:48 [PATCH 0/8] correct load_balance() Joonsoo Kim
  2013-02-14  5:48 ` [PATCH 1/8] sched: change position of resched_cpu() in load_balance() Joonsoo Kim
@ 2013-02-14  5:48 ` Joonsoo Kim
  2013-03-19 14:02   ` Peter Zijlstra
  2013-02-14  5:48 ` [PATCH 3/8] sched: don't consider other cpus in our group in case of NEWLY_IDLE Joonsoo Kim
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 30+ messages in thread
From: Joonsoo Kim @ 2013-02-14  5:48 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: Srivatsa Vaddagiri, linux-kernel, Joonsoo Kim

After commit 88b8dac0, dst-cpu can be changed in load_balance(), so when
load_balance() returns a positive value we cannot assume that this cpu is
no longer idle: the tasks may have been pulled to another cpu in our group.
So add an explicit cpu_idle_type check.

Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6f72851..0c6aaf6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5515,10 +5515,10 @@ static void rebalance_domains(int cpu, enum cpu_idle_type idle)
 		if (time_after_eq(jiffies, sd->last_balance + interval)) {
 			if (load_balance(cpu, rq, sd, idle, &balance)) {
 				/*
-				 * We've pulled tasks over so either we're no
-				 * longer idle.
+				 * We've pulled tasks over so we may
+				 * no longer be idle.
 				 */
-				idle = CPU_NOT_IDLE;
+				idle = idle_cpu(cpu) ? CPU_IDLE : CPU_NOT_IDLE;
 			}
 			sd->last_balance = jiffies;
 		}
-- 
1.7.9.5



* [PATCH 3/8] sched: don't consider other cpus in our group in case of NEWLY_IDLE
  2013-02-14  5:48 [PATCH 0/8] correct load_balance() Joonsoo Kim
  2013-02-14  5:48 ` [PATCH 1/8] sched: change position of resched_cpu() in load_balance() Joonsoo Kim
  2013-02-14  5:48 ` [PATCH 2/8] sched: explicitly cpu_idle_type checking in rebalance_domains() Joonsoo Kim
@ 2013-02-14  5:48 ` Joonsoo Kim
  2013-03-19 14:20   ` Peter Zijlstra
  2013-02-14  5:48 ` [PATCH 4/8] sched: clean up move_task() and move_one_task() Joonsoo Kim
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 30+ messages in thread
From: Joonsoo Kim @ 2013-02-14  5:48 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: Srivatsa Vaddagiri, linux-kernel, Joonsoo Kim

Commit 88b8dac0 makes load_balance() consider other cpus in its group,
regardless of idle type. When we do NEWLY_IDLE balancing, we should not
consider them, because the motivation of NEWLY_IDLE balancing is to turn
this cpu into a non-idle state if needed. This is not the case for other cpus.
So, change the code not to consider other cpus for NEWLY_IDLE balancing.

With this patch, the assignment 'if (pulled_task) this_rq->idle_stamp = 0'
in idle_balance() is also corrected: since NEWLY_IDLE balancing no longer
considers other cpus, a positive return really means this cpu pulled a task,
so assigning to 'this_rq->idle_stamp' is now valid.
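
For reference, the idle_balance() fragment in question looks roughly like
this (a simplified sketch, not the verbatim v3.8 code):

	for_each_domain(this_cpu, sd) {
		...
		if (sd->flags & SD_BALANCE_NEWIDLE)
			pulled_task = load_balance(this_cpu, this_rq, sd,
						   CPU_NEWLY_IDLE, &balance);
		...
		if (pulled_task) {
			/* only meaningful if this_cpu itself pulled a task */
			this_rq->idle_stamp = 0;
			break;
		}
	}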

Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0c6aaf6..97498f4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5016,8 +5016,15 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 		.cpus		= cpus,
 	};
 
+	/* For NEWLY_IDLE load_balancing, we don't need to consider
+	 * other cpus in our group */
+	if (idle == CPU_NEWLY_IDLE) {
+		env.dst_grpmask = NULL;
+		max_lb_iterations = 0;
+	} else {
+		max_lb_iterations = cpumask_weight(env.dst_grpmask);
+	}
 	cpumask_copy(cpus, cpu_active_mask);
-	max_lb_iterations = cpumask_weight(env.dst_grpmask);
 
 	schedstat_inc(sd, lb_count[idle]);
 
-- 
1.7.9.5



* [PATCH 4/8] sched: clean up move_task() and move_one_task()
  2013-02-14  5:48 [PATCH 0/8] correct load_balance() Joonsoo Kim
                   ` (2 preceding siblings ...)
  2013-02-14  5:48 ` [PATCH 3/8] sched: don't consider other cpus in our group in case of NEWLY_IDLE Joonsoo Kim
@ 2013-02-14  5:48 ` Joonsoo Kim
  2013-03-19 14:30   ` Peter Zijlstra
  2013-02-14  5:48 ` [PATCH 5/8] sched: move up affinity check to mitigate useless redoing overhead Joonsoo Kim
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 30+ messages in thread
From: Joonsoo Kim @ 2013-02-14  5:48 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: Srivatsa Vaddagiri, linux-kernel, Joonsoo Kim

Some validation for task moving is performed in move_tasks() and
move_one_task(). We can move this code into can_migrate_task(),
which already exists for this purpose.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 97498f4..849bc8e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3874,19 +3874,40 @@ task_hot(struct task_struct *p, u64 now, struct sched_domain *sd)
 	return delta < (s64)sysctl_sched_migration_cost;
 }
 
+static unsigned long task_h_load(struct task_struct *p);
+
 /*
  * can_migrate_task - may task p from runqueue rq be migrated to this_cpu?
+ * @load is only meaningful when !@lb_active and return value is true
  */
 static
-int can_migrate_task(struct task_struct *p, struct lb_env *env)
+int can_migrate_task(struct task_struct *p, struct lb_env *env,
+				bool lb_active, unsigned long *load)
 {
 	int tsk_cache_hot = 0;
 	/*
 	 * We do not migrate tasks that are:
-	 * 1) running (obviously), or
-	 * 2) cannot be migrated to this CPU due to cpus_allowed, or
-	 * 3) are cache-hot on their current CPU.
+	 * 1) throttled_lb_pair, or
+	 * 2) task's load is too low, or
+	 * 3) task's too large to imbalance, or
+	 * 4) cannot be migrated to this CPU due to cpus_allowed, or
+	 * 5) running (obviously), or
+	 * 6) are cache-hot on their current CPU.
 	 */
+
+	if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
+		return 0;
+
+	if (!lb_active) {
+		*load = task_h_load(p);
+		if (sched_feat(LB_MIN) &&
+			*load < 16 && !env->sd->nr_balance_failed)
+			return 0;
+
+		if ((*load / 2) > env->imbalance)
+			return 0;
+	}
+
 	if (!cpumask_test_cpu(env->dst_cpu, tsk_cpus_allowed(p))) {
 		int new_dst_cpu;
 
@@ -3957,10 +3978,7 @@ static int move_one_task(struct lb_env *env)
 	struct task_struct *p, *n;
 
 	list_for_each_entry_safe(p, n, &env->src_rq->cfs_tasks, se.group_node) {
-		if (throttled_lb_pair(task_group(p), env->src_rq->cpu, env->dst_cpu))
-			continue;
-
-		if (!can_migrate_task(p, env))
+		if (!can_migrate_task(p, env, true, NULL))
 			continue;
 
 		move_task(p, env);
@@ -3975,8 +3993,6 @@ static int move_one_task(struct lb_env *env)
 	return 0;
 }
 
-static unsigned long task_h_load(struct task_struct *p);
-
 static const unsigned int sched_nr_migrate_break = 32;
 
 /*
@@ -4011,18 +4027,7 @@ static int move_tasks(struct lb_env *env)
 			break;
 		}
 
-		if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
-			goto next;
-
-		load = task_h_load(p);
-
-		if (sched_feat(LB_MIN) && load < 16 && !env->sd->nr_balance_failed)
-			goto next;
-
-		if ((load / 2) > env->imbalance)
-			goto next;
-
-		if (!can_migrate_task(p, env))
+		if (!can_migrate_task(p, env, false, &load))
 			goto next;
 
 		move_task(p, env);
-- 
1.7.9.5



* [PATCH 5/8] sched: move up affinity check to mitigate useless redoing overhead
  2013-02-14  5:48 [PATCH 0/8] correct load_balance() Joonsoo Kim
                   ` (3 preceding siblings ...)
  2013-02-14  5:48 ` [PATCH 4/8] sched: clean up move_task() and move_one_task() Joonsoo Kim
@ 2013-02-14  5:48 ` Joonsoo Kim
  2013-02-14  5:48 ` [PATCH 6/8] sched: rename load_balance_tmpmask to load_balance_cpu_active Joonsoo Kim
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 30+ messages in thread
From: Joonsoo Kim @ 2013-02-14  5:48 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: Srivatsa Vaddagiri, linux-kernel, Joonsoo Kim

Currently, LBF_ALL_PINNED is cleared only after the affinity check has passed.
So, if can_migrate_task() fails because of a small load value or a small
imbalance value, we don't clear LBF_ALL_PINNED, and in the end we trigger
a 'redo' in load_balance().

The imbalance value is often so small that no task can be moved to
other cpus and, of course, this situation may continue after we change
the target cpu. So this patch clears LBF_ALL_PINNED before those checks,
in order to mitigate the useless redoing overhead when can_migrate_task()
fails for the above reason.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 849bc8e..bb373f4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3888,9 +3888,9 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env,
 	/*
 	 * We do not migrate tasks that are:
 	 * 1) throttled_lb_pair, or
-	 * 2) task's load is too low, or
-	 * 3) task's too large to imbalance, or
-	 * 4) cannot be migrated to this CPU due to cpus_allowed, or
+	 * 2) cannot be migrated to this CPU due to cpus_allowed, or
+	 * 3) task's load is too low, or
+	 * 4) task's too large to imbalance, or
 	 * 5) running (obviously), or
 	 * 6) are cache-hot on their current CPU.
 	 */
@@ -3898,16 +3898,6 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env,
 	if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
 		return 0;
 
-	if (!lb_active) {
-		*load = task_h_load(p);
-		if (sched_feat(LB_MIN) &&
-			*load < 16 && !env->sd->nr_balance_failed)
-			return 0;
-
-		if ((*load / 2) > env->imbalance)
-			return 0;
-	}
-
 	if (!cpumask_test_cpu(env->dst_cpu, tsk_cpus_allowed(p))) {
 		int new_dst_cpu;
 
@@ -3936,6 +3926,16 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env,
 	/* Record that we found atleast one task that could run on dst_cpu */
 	env->flags &= ~LBF_ALL_PINNED;
 
+	if (!lb_active) {
+		*load = task_h_load(p);
+		if (sched_feat(LB_MIN) &&
+			*load < 16 && !env->sd->nr_balance_failed)
+			return 0;
+
+		if ((*load / 2) > env->imbalance)
+			return 0;
+	}
+
 	if (task_running(env->src_rq, p)) {
 		schedstat_inc(p, se.statistics.nr_failed_migrations_running);
 		return 0;
-- 
1.7.9.5



* [PATCH 6/8] sched: rename load_balance_tmpmask to load_balance_cpu_active
  2013-02-14  5:48 [PATCH 0/8] correct load_balance() Joonsoo Kim
                   ` (4 preceding siblings ...)
  2013-02-14  5:48 ` [PATCH 5/8] sched: move up affinity check to mitigate useless redoing overhead Joonsoo Kim
@ 2013-02-14  5:48 ` Joonsoo Kim
  2013-03-19 15:01   ` Peter Zijlstra
  2013-02-14  5:48 ` [PATCH 7/8] sched: prevent to re-select dst-cpu in load_balance() Joonsoo Kim
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 30+ messages in thread
From: Joonsoo Kim @ 2013-02-14  5:48 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: Srivatsa Vaddagiri, linux-kernel, Joonsoo Kim

This name doesn't convey any specific meaning.
So rename it to imply its purpose.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 26058d0..e6f8783 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6814,7 +6814,7 @@ struct task_group root_task_group;
 LIST_HEAD(task_groups);
 #endif
 
-DECLARE_PER_CPU(cpumask_var_t, load_balance_tmpmask);
+DECLARE_PER_CPU(cpumask_var_t, load_balance_cpu_active);
 
 void __init sched_init(void)
 {
@@ -6851,7 +6851,7 @@ void __init sched_init(void)
 #endif /* CONFIG_RT_GROUP_SCHED */
 #ifdef CONFIG_CPUMASK_OFFSTACK
 		for_each_possible_cpu(i) {
-			per_cpu(load_balance_tmpmask, i) = (void *)ptr;
+			per_cpu(load_balance_cpu_active, i) = (void *)ptr;
 			ptr += cpumask_size();
 		}
 #endif /* CONFIG_CPUMASK_OFFSTACK */
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bb373f4..7382fa5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4974,7 +4974,7 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 #define MAX_PINNED_INTERVAL	512
 
 /* Working cpumask for load_balance and load_balance_newidle. */
-DEFINE_PER_CPU(cpumask_var_t, load_balance_tmpmask);
+DEFINE_PER_CPU(cpumask_var_t, load_balance_cpu_active);
 
 static int need_active_balance(struct lb_env *env)
 {
@@ -5009,7 +5009,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 	struct sched_group *group;
 	struct rq *busiest;
 	unsigned long flags;
-	struct cpumask *cpus = __get_cpu_var(load_balance_tmpmask);
+	struct cpumask *cpus = __get_cpu_var(load_balance_cpu_active);
 
 	struct lb_env env = {
 		.sd		= sd,
-- 
1.7.9.5



* [PATCH 7/8] sched: prevent to re-select dst-cpu in load_balance()
  2013-02-14  5:48 [PATCH 0/8] correct load_balance() Joonsoo Kim
                   ` (5 preceding siblings ...)
  2013-02-14  5:48 ` [PATCH 6/8] sched: rename load_balance_tmpmask to load_balance_cpu_active Joonsoo Kim
@ 2013-02-14  5:48 ` Joonsoo Kim
  2013-03-19 15:05   ` Peter Zijlstra
  2013-02-14  5:48 ` [PATCH 8/8] sched: reset lb_env when redo " Joonsoo Kim
  2013-02-25  4:56 ` [PATCH 0/8] correct load_balance() Joonsoo Kim
  8 siblings, 1 reply; 30+ messages in thread
From: Joonsoo Kim @ 2013-02-14  5:48 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: Srivatsa Vaddagiri, linux-kernel, Joonsoo Kim

Commit 88b8dac0 makes load_balance() consider other cpus in its group.
But there is no code in it to prevent re-selecting the same dst-cpu,
so the same dst-cpu can be selected over and over.

This patch adds functionality to load_balance() in order to exclude
a cpu once it has been selected.
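
The re-selection happens where can_migrate_task() computes new_dst_cpu,
which in v3.8 looks roughly like this (paraphrased sketch, not verbatim):

	if (!cpumask_test_cpu(env->dst_cpu, tsk_cpus_allowed(p))) {
		int new_dst_cpu;
		...
		/* dst_grpmask is never narrowed, so the same cpu can be
		 * picked again and again on later passes */
		new_dst_cpu = cpumask_first_and(env->dst_grpmask,
						tsk_cpus_allowed(p));
		if (new_dst_cpu < nr_cpu_ids) {
			env->flags |= LBF_SOME_PINNED;
			env->new_dst_cpu = new_dst_cpu;
		}
		return 0;
	}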

Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e6f8783..d4c6ed0 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6814,6 +6814,7 @@ struct task_group root_task_group;
 LIST_HEAD(task_groups);
 #endif
 
+DECLARE_PER_CPU(cpumask_var_t, load_balance_dst_grp);
 DECLARE_PER_CPU(cpumask_var_t, load_balance_cpu_active);
 
 void __init sched_init(void)
@@ -6828,7 +6829,7 @@ void __init sched_init(void)
 	alloc_size += 2 * nr_cpu_ids * sizeof(void **);
 #endif
 #ifdef CONFIG_CPUMASK_OFFSTACK
-	alloc_size += num_possible_cpus() * cpumask_size();
+	alloc_size += num_possible_cpus() * cpumask_size() * 2;
 #endif
 	if (alloc_size) {
 		ptr = (unsigned long)kzalloc(alloc_size, GFP_NOWAIT);
@@ -6851,6 +6852,8 @@ void __init sched_init(void)
 #endif /* CONFIG_RT_GROUP_SCHED */
 #ifdef CONFIG_CPUMASK_OFFSTACK
 		for_each_possible_cpu(i) {
+			per_cpu(load_balance_dst_grp, i) = (void *)ptr;
+			ptr += cpumask_size();
 			per_cpu(load_balance_cpu_active, i) = (void *)ptr;
 			ptr += cpumask_size();
 		}
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7382fa5..70631e8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4974,6 +4974,7 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 #define MAX_PINNED_INTERVAL	512
 
 /* Working cpumask for load_balance and load_balance_newidle. */
+DEFINE_PER_CPU(cpumask_var_t, load_balance_dst_grp);
 DEFINE_PER_CPU(cpumask_var_t, load_balance_cpu_active);
 
 static int need_active_balance(struct lb_env *env)
@@ -5005,17 +5006,17 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 			int *balance)
 {
 	int ld_moved, cur_ld_moved, active_balance = 0;
-	int lb_iterations, max_lb_iterations;
 	struct sched_group *group;
 	struct rq *busiest;
 	unsigned long flags;
+	struct cpumask *dst_grp = __get_cpu_var(load_balance_dst_grp);
 	struct cpumask *cpus = __get_cpu_var(load_balance_cpu_active);
 
 	struct lb_env env = {
 		.sd		= sd,
 		.dst_cpu	= this_cpu,
 		.dst_rq		= this_rq,
-		.dst_grpmask    = sched_group_cpus(sd->groups),
+		.dst_grpmask    = dst_grp,
 		.idle		= idle,
 		.loop_break	= sched_nr_migrate_break,
 		.cpus		= cpus,
@@ -5025,9 +5026,9 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 	 * other cpus in our group */
 	if (idle == CPU_NEWLY_IDLE) {
 		env.dst_grpmask = NULL;
-		max_lb_iterations = 0;
 	} else {
-		max_lb_iterations = cpumask_weight(env.dst_grpmask);
+		cpumask_copy(dst_grp, sched_group_cpus(sd->groups));
+		cpumask_clear_cpu(env.dst_cpu, env.dst_grpmask);
 	}
 	cpumask_copy(cpus, cpu_active_mask);
 
@@ -5055,7 +5056,6 @@ redo:
 	schedstat_add(sd, lb_imbalance[idle], env.imbalance);
 
 	ld_moved = 0;
-	lb_iterations = 1;
 	if (busiest->nr_running > 1) {
 		/*
 		 * Attempt to move tasks. If find_busiest_group has found
@@ -5112,14 +5112,17 @@ more_balance:
 		 * moreover subsequent load balance cycles should correct the
 		 * excess load moved.
 		 */
-		if ((env.flags & LBF_SOME_PINNED) && env.imbalance > 0 &&
-				lb_iterations++ < max_lb_iterations) {
+		if ((env.flags & LBF_SOME_PINNED) && env.imbalance > 0) {
 
 			env.dst_rq	 = cpu_rq(env.new_dst_cpu);
 			env.dst_cpu	 = env.new_dst_cpu;
 			env.flags	&= ~LBF_SOME_PINNED;
 			env.loop	 = 0;
 			env.loop_break	 = sched_nr_migrate_break;
+
+			/* Prevent to re-select dst_cpu */
+			cpumask_clear_cpu(env.dst_cpu, env.dst_grpmask);
+
 			/*
 			 * Go back to "more_balance" rather than "redo" since we
 			 * need to continue with same src_cpu.
-- 
1.7.9.5



* [PATCH 8/8] sched: reset lb_env when redo in load_balance()
  2013-02-14  5:48 [PATCH 0/8] correct load_balance() Joonsoo Kim
                   ` (6 preceding siblings ...)
  2013-02-14  5:48 ` [PATCH 7/8] sched: prevent to re-select dst-cpu in load_balance() Joonsoo Kim
@ 2013-02-14  5:48 ` Joonsoo Kim
  2013-03-19 15:21   ` Peter Zijlstra
  2013-02-25  4:56 ` [PATCH 0/8] correct load_balance() Joonsoo Kim
  8 siblings, 1 reply; 30+ messages in thread
From: Joonsoo Kim @ 2013-02-14  5:48 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: Srivatsa Vaddagiri, linux-kernel, Joonsoo Kim

Commit 88b8dac0 makes load_balance() consider other cpus in its group.
So now, when we 'redo' in load_balance(), we should reset some fields of
lb_env to ensure that load_balance() works for the initial cpu, not for
other cpus in its group. Correct it accordingly.

Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 70631e8..25c798c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5014,14 +5014,20 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 
 	struct lb_env env = {
 		.sd		= sd,
-		.dst_cpu	= this_cpu,
-		.dst_rq		= this_rq,
 		.dst_grpmask    = dst_grp,
 		.idle		= idle,
-		.loop_break	= sched_nr_migrate_break,
 		.cpus		= cpus,
 	};
 
+	schedstat_inc(sd, lb_count[idle]);
+	cpumask_copy(cpus, cpu_active_mask);
+
+redo:
+	env.dst_cpu = this_cpu;
+	env.dst_rq = this_rq;
+	env.loop = 0;
+	env.loop_break = sched_nr_migrate_break;
+
 	/* For NEWLY_IDLE load_balancing, we don't need to consider
 	 * other cpus in our group */
 	if (idle == CPU_NEWLY_IDLE) {
@@ -5030,11 +5036,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 		cpumask_copy(dst_grp, sched_group_cpus(sd->groups));
 		cpumask_clear_cpu(env.dst_cpu, env.dst_grpmask);
 	}
-	cpumask_copy(cpus, cpu_active_mask);
 
-	schedstat_inc(sd, lb_count[idle]);
-
-redo:
 	group = find_busiest_group(&env, balance);
 
 	if (*balance == 0)
@@ -5133,11 +5135,9 @@ more_balance:
 		/* All tasks on this runqueue were pinned by CPU affinity */
 		if (unlikely(env.flags & LBF_ALL_PINNED)) {
 			cpumask_clear_cpu(cpu_of(busiest), cpus);
-			if (!cpumask_empty(cpus)) {
-				env.loop = 0;
-				env.loop_break = sched_nr_migrate_break;
+			if (!cpumask_empty(cpus))
 				goto redo;
-			}
+
 			goto out_balanced;
 		}
 	}
-- 
1.7.9.5



* Re: [PATCH 0/8] correct load_balance()
  2013-02-14  5:48 [PATCH 0/8] correct load_balance() Joonsoo Kim
                   ` (7 preceding siblings ...)
  2013-02-14  5:48 ` [PATCH 8/8] sched: reset lb_env when redo " Joonsoo Kim
@ 2013-02-25  4:56 ` Joonsoo Kim
  2013-03-19  5:11   ` Joonsoo Kim
  8 siblings, 1 reply; 30+ messages in thread
From: Joonsoo Kim @ 2013-02-25  4:56 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: Srivatsa Vaddagiri, linux-kernel

On Thu, Feb 14, 2013 at 02:48:33PM +0900, Joonsoo Kim wrote:
> Commit 88b8dac0 makes load_balance() consider other cpus in its group.
> But, there are some missing parts for this feature to work properly.
> This patchset correct these things and make load_balance() robust.
> 
> Others are related to LBF_ALL_PINNED. This is fallback functionality
> when all tasks can't be moved as cpu affinity. But, currently,
> if imbalance is not large enough to task's load, we leave LBF_ALL_PINNED
> flag and 'redo' is triggered. This is not our intention, so correct it.
> 
> These are based on v3.8-rc7.
> 
> Joonsoo Kim (8):
>   sched: change position of resched_cpu() in load_balance()
>   sched: explicitly cpu_idle_type checking in rebalance_domains()
>   sched: don't consider other cpus in our group in case of NEWLY_IDLE
>   sched: clean up move_task() and move_one_task()
>   sched: move up affinity check to mitigate useless redoing overhead
>   sched: rename load_balance_tmpmask to load_balance_cpu_active
>   sched: prevent to re-select dst-cpu in load_balance()
>   sched: reset lb_env when redo in load_balance()
> 
>  kernel/sched/core.c |    9 +++--
>  kernel/sched/fair.c |  107 +++++++++++++++++++++++++++++----------------------
>  2 files changed, 67 insertions(+), 49 deletions(-)

Hello, Ingo and Peter.

Could you review this patch set?
Please let me know what I should do to get this merged.

Thanks.

> -- 
> 1.7.9.5
> 


* Re: [PATCH 0/8] correct load_balance()
  2013-02-25  4:56 ` [PATCH 0/8] correct load_balance() Joonsoo Kim
@ 2013-03-19  5:11   ` Joonsoo Kim
  0 siblings, 0 replies; 30+ messages in thread
From: Joonsoo Kim @ 2013-03-19  5:11 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: Srivatsa Vaddagiri, linux-kernel

On Mon, Feb 25, 2013 at 01:56:59PM +0900, Joonsoo Kim wrote:
> On Thu, Feb 14, 2013 at 02:48:33PM +0900, Joonsoo Kim wrote:
> > Commit 88b8dac0 makes load_balance() consider other cpus in its group.
> > But, there are some missing parts for this feature to work properly.
> > This patchset correct these things and make load_balance() robust.
> > 
> > Others are related to LBF_ALL_PINNED. This is fallback functionality
> > when all tasks can't be moved as cpu affinity. But, currently,
> > if imbalance is not large enough to task's load, we leave LBF_ALL_PINNED
> > flag and 'redo' is triggered. This is not our intention, so correct it.
> > 
> > These are based on v3.8-rc7.
> > 
> > Joonsoo Kim (8):
> >   sched: change position of resched_cpu() in load_balance()
> >   sched: explicitly cpu_idle_type checking in rebalance_domains()
> >   sched: don't consider other cpus in our group in case of NEWLY_IDLE
> >   sched: clean up move_task() and move_one_task()
> >   sched: move up affinity check to mitigate useless redoing overhead
> >   sched: rename load_balance_tmpmask to load_balance_cpu_active
> >   sched: prevent to re-select dst-cpu in load_balance()
> >   sched: reset lb_env when redo in load_balance()
> > 
> >  kernel/sched/core.c |    9 +++--
> >  kernel/sched/fair.c |  107 +++++++++++++++++++++++++++++----------------------
> >  2 files changed, 67 insertions(+), 49 deletions(-)
> 
> Hello, Ingo and Peter.
> 
> Could you review this patch set?
> Please let me know what I should do for merging this?

Hello.
One more ping :)

> 
> Thanks.
> 
> > -- 
> > 1.7.9.5
> > 


* Re: [PATCH 1/8] sched: change position of resched_cpu() in load_balance()
  2013-02-14  5:48 ` [PATCH 1/8] sched: change position of resched_cpu() in load_balance() Joonsoo Kim
@ 2013-03-19 12:59   ` Peter Zijlstra
  0 siblings, 0 replies; 30+ messages in thread
From: Peter Zijlstra @ 2013-03-19 12:59 UTC (permalink / raw)
  To: Joonsoo Kim; +Cc: Ingo Molnar, Srivatsa Vaddagiri, linux-kernel

On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> cur_ld_moved is reset if env.flags hit LBF_NEED_BREAK.
> So, there is possibility that we miss doing resched_cpu().
> Correct it as changing position of resched_cpu()
> before checking LBF_NEED_BREAK.
> 
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>



* Re: [PATCH 2/8] sched: explicitly cpu_idle_type checking in rebalance_domains()
  2013-02-14  5:48 ` [PATCH 2/8] sched: explicitly cpu_idle_type checking in rebalance_domains() Joonsoo Kim
@ 2013-03-19 14:02   ` Peter Zijlstra
  2013-03-20  6:48     ` Joonsoo Kim
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2013-03-19 14:02 UTC (permalink / raw)
  To: Joonsoo Kim; +Cc: Ingo Molnar, Srivatsa Vaddagiri, linux-kernel

On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> After commit 88b8dac0, dst-cpu can be changed in load_balance(),
> then we can't know cpu_idle_type of dst-cpu when load_balance()
> return positive. So, add explicit cpu_idle_type checking.

No real objection I suppose, but did you actually see this go wrong?




* Re: [PATCH 3/8] sched: don't consider other cpus in our group in case of NEWLY_IDLE
  2013-02-14  5:48 ` [PATCH 3/8] sched: don't consider other cpus in our group in case of NEWLY_IDLE Joonsoo Kim
@ 2013-03-19 14:20   ` Peter Zijlstra
  2013-03-20  6:52     ` Joonsoo Kim
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2013-03-19 14:20 UTC (permalink / raw)
  To: Joonsoo Kim; +Cc: Ingo Molnar, Srivatsa Vaddagiri, linux-kernel

On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> Commit 88b8dac0 makes load_balance() consider other cpus in its group,
> regardless of idle type. When we do NEWLY_IDLE balancing, we should not
> consider it, because a motivation of NEWLY_IDLE balancing is to turn
> this cpu to non idle state if needed. This is not the case of other cpus.
> So, change code not to consider other cpus for NEWLY_IDLE balancing.
> 
> With this patch, assign 'if (pulled_task) this_rq->idle_stamp = 0'
> in idle_balance() is corrected, because NEWLY_IDLE balancing doesn't
> consider other cpus. Assigning to 'this_rq->idle_stamp' is now valid.
> 
> Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Fair enough, good catch.

Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0c6aaf6..97498f4 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5016,8 +5016,15 @@ static int load_balance(int this_cpu, struct rq *this_rq,
>  		.cpus		= cpus,
>  	};
>  
> +	/* For NEWLY_IDLE load_balancing, we don't need to consider
> +	 * other cpus in our group */
> +	if (idle == CPU_NEWLY_IDLE) {
> +		env.dst_grpmask = NULL;
> +		max_lb_iterations = 0;

Just a small nit; I don't think we'll ever get to evaluate
max_lb_iterations when !dst_grpmask. So strictly speaking it's
superfluous to touch it.

> +	} else {
> +		max_lb_iterations = cpumask_weight(env.dst_grpmask);
> +	}
>  	cpumask_copy(cpus, cpu_active_mask);
> -	max_lb_iterations = cpumask_weight(env.dst_grpmask);
>  
>  	schedstat_inc(sd, lb_count[idle]);
>  




* Re: [PATCH 4/8] sched: clean up move_task() and move_one_task()
  2013-02-14  5:48 ` [PATCH 4/8] sched: clean up move_task() and move_one_task() Joonsoo Kim
@ 2013-03-19 14:30   ` Peter Zijlstra
  2013-03-20  7:33     ` Joonsoo Kim
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2013-03-19 14:30 UTC (permalink / raw)
  To: Joonsoo Kim; +Cc: Ingo Molnar, Srivatsa Vaddagiri, linux-kernel

On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> Some validation for task moving is performed in move_tasks() and
> move_one_task(). We can move these code to can_migrate_task()
> which is already exist for this purpose.

> @@ -4011,18 +4027,7 @@ static int move_tasks(struct lb_env *env)
>  			break;
>  		}
>  
> -		if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
> -			goto next;
> -
> -		load = task_h_load(p);
> -
> -		if (sched_feat(LB_MIN) && load < 16 && !env->sd->nr_balance_failed)
> -			goto next;
> -
> -		if ((load / 2) > env->imbalance)
> -			goto next;
> -
> -		if (!can_migrate_task(p, env))
> +		if (!can_migrate_task(p, env, false, &load))
>  			goto next;
>  
>  		move_task(p, env);

Right, so I'm not so taken with this one. The whole load stuff really
is a balance heuristic that's part of move_tasks(), move_one_task()
really doesn't care about that.

So why did you include it? Purely so you didn't have to re-order the
tests? I don't see any reason not to flip the tests around.



* Re: [PATCH 6/8] sched: rename load_balance_tmpmask to load_balance_cpu_active
  2013-02-14  5:48 ` [PATCH 6/8] sched: rename load_balance_tmpmask to load_balance_cpu_active Joonsoo Kim
@ 2013-03-19 15:01   ` Peter Zijlstra
  2013-03-20  7:35     ` Joonsoo Kim
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2013-03-19 15:01 UTC (permalink / raw)
  To: Joonsoo Kim; +Cc: Ingo Molnar, Srivatsa Vaddagiri, linux-kernel

On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> This name doesn't represent specific meaning.
> So rename it to imply it's purpose.
> 
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 26058d0..e6f8783 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6814,7 +6814,7 @@ struct task_group root_task_group;
>  LIST_HEAD(task_groups);
>  #endif
>  
> -DECLARE_PER_CPU(cpumask_var_t, load_balance_tmpmask);
> +DECLARE_PER_CPU(cpumask_var_t, load_balance_cpu_active);

That's not much better; how about we call it: load_balance_mask.



* Re: [PATCH 7/8] sched: prevent to re-select dst-cpu in load_balance()
  2013-02-14  5:48 ` [PATCH 7/8] sched: prevent to re-select dst-cpu in load_balance() Joonsoo Kim
@ 2013-03-19 15:05   ` Peter Zijlstra
  2013-03-20  7:43     ` Joonsoo Kim
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2013-03-19 15:05 UTC (permalink / raw)
  To: Joonsoo Kim; +Cc: Ingo Molnar, Srivatsa Vaddagiri, linux-kernel

On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> Commit 88b8dac0 makes load_balance() consider other cpus in its group.
> But, in that, there is no code for preventing to re-select dst-cpu.
> So, same dst-cpu can be selected over and over.
> 
> This patch add functionality to load_balance() in order to exclude
> cpu which is selected once.

Oh man.. seriously? Did you see this happen? Also, can't we simply
remove it from lb->cpus?



* Re: [PATCH 8/8] sched: reset lb_env when redo in load_balance()
  2013-02-14  5:48 ` [PATCH 8/8] sched: reset lb_env when redo " Joonsoo Kim
@ 2013-03-19 15:21   ` Peter Zijlstra
  2013-03-20  8:13     ` Joonsoo Kim
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2013-03-19 15:21 UTC (permalink / raw)
  To: Joonsoo Kim; +Cc: Ingo Molnar, Srivatsa Vaddagiri, linux-kernel

On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> Commit 88b8dac0 makes load_balance() consider other cpus in its group.
> So, now, When we redo in load_balance(), we should reset some fields of
> lb_env to ensure that load_balance() works for initial cpu, not for other
> cpus in its group. So correct it.
> 
> Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 70631e8..25c798c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5014,14 +5014,20 @@ static int load_balance(int this_cpu, struct rq *this_rq,
>  
>  	struct lb_env env = {
>  		.sd		= sd,
> -		.dst_cpu	= this_cpu,
> -		.dst_rq		= this_rq,
>  		.dst_grpmask    = dst_grp,
>  		.idle		= idle,
> -		.loop_break	= sched_nr_migrate_break,
>  		.cpus		= cpus,
>  	};
>  
> +	schedstat_inc(sd, lb_count[idle]);
> +	cpumask_copy(cpus, cpu_active_mask);
> +
> +redo:
> +	env.dst_cpu = this_cpu;
> +	env.dst_rq = this_rq;
> +	env.loop = 0;
> +	env.loop_break = sched_nr_migrate_break;
> +
>  	/* For NEWLY_IDLE load_balancing, we don't need to consider
>  	 * other cpus in our group */
>  	if (idle == CPU_NEWLY_IDLE) {

OK, so this is the case where we tried to balance !this_cpu and found
ALL_PINNED, right?

You can only get here in very weird cases where people love their
sched_setaffinity() waaaaay too much, do we care? Why not give up?

Also, looking at this, shouldn't we consider env->cpus in
can_migrate_task() where we compute new_dst_cpu?



* Re: [PATCH 2/8] sched: explicitly cpu_idle_type checking in rebalance_domains()
  2013-03-19 14:02   ` Peter Zijlstra
@ 2013-03-20  6:48     ` Joonsoo Kim
  0 siblings, 0 replies; 30+ messages in thread
From: Joonsoo Kim @ 2013-03-20  6:48 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Ingo Molnar, Srivatsa Vaddagiri, linux-kernel

Hello, Peter.

On Tue, Mar 19, 2013 at 03:02:21PM +0100, Peter Zijlstra wrote:
> On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> > After commit 88b8dac0, dst-cpu can be changed in load_balance(),
> > then we can't know cpu_idle_type of dst-cpu when load_balance()
> > return positive. So, add explicit cpu_idle_type checking.
> 
> No real objection I suppose, but did you actually see this go wrong?

No, I found it while reviewing the whole scheduler code.
Thanks.

> 


* Re: [PATCH 3/8] sched: don't consider other cpus in our group in case of NEWLY_IDLE
  2013-03-19 14:20   ` Peter Zijlstra
@ 2013-03-20  6:52     ` Joonsoo Kim
  0 siblings, 0 replies; 30+ messages in thread
From: Joonsoo Kim @ 2013-03-20  6:52 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Ingo Molnar, Srivatsa Vaddagiri, linux-kernel

On Tue, Mar 19, 2013 at 03:20:57PM +0100, Peter Zijlstra wrote:
> On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> > Commit 88b8dac0 makes load_balance() consider other cpus in its group,
> > regardless of idle type. When we do NEWLY_IDLE balancing, we should not
> > consider it, because a motivation of NEWLY_IDLE balancing is to turn
> > this cpu to non idle state if needed. This is not the case of other cpus.
> > So, change code not to consider other cpus for NEWLY_IDLE balancing.
> > 
> > With this patch, assign 'if (pulled_task) this_rq->idle_stamp = 0'
> > in idle_balance() is corrected, because NEWLY_IDLE balancing doesn't
> > consider other cpus. Assigning to 'this_rq->idle_stamp' is now valid.
> > 
> > Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> Fair enough, good catch.
> 
> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 0c6aaf6..97498f4 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -5016,8 +5016,15 @@ static int load_balance(int this_cpu, struct rq *this_rq,
> >  		.cpus		= cpus,
> >  	};
> >  
> > +	/* For NEWLY_IDLE load_balancing, we don't need to consider
> > +	 * other cpus in our group */
> > +	if (idle == CPU_NEWLY_IDLE) {
> > +		env.dst_grpmask = NULL;
> > +		max_lb_iterations = 0;
> 
> Just a small nit; I don't think we'll ever get to evaluate
> max_lb_iterations when !dst_grpmask. So strictly speaking its
> superfluous to touch it.

Okay. In the next spin, I will remove it and add a comment here.

Thanks.

> 
> > +	} else {
> > +		max_lb_iterations = cpumask_weight(env.dst_grpmask);
> > +	}
> >  	cpumask_copy(cpus, cpu_active_mask);
> > -	max_lb_iterations = cpumask_weight(env.dst_grpmask);
> >  
> >  	schedstat_inc(sd, lb_count[idle]);
> >  
> 
> 


* Re: [PATCH 4/8] sched: clean up move_task() and move_one_task()
  2013-03-19 14:30   ` Peter Zijlstra
@ 2013-03-20  7:33     ` Joonsoo Kim
  2013-03-20 11:11       ` Peter Zijlstra
  2013-03-20 11:16       ` Peter Zijlstra
  0 siblings, 2 replies; 30+ messages in thread
From: Joonsoo Kim @ 2013-03-20  7:33 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Ingo Molnar, Srivatsa Vaddagiri, linux-kernel

On Tue, Mar 19, 2013 at 03:30:15PM +0100, Peter Zijlstra wrote:
> On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> > Some validation for task moving is performed in move_tasks() and
> > move_one_task(). We can move these code to can_migrate_task()
> > which is already exist for this purpose.
> 
> > @@ -4011,18 +4027,7 @@ static int move_tasks(struct lb_env *env)
> >  			break;
> >  		}
> >  
> > -		if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
> > -			goto next;
> > -
> > -		load = task_h_load(p);
> > -
> > -		if (sched_feat(LB_MIN) && load < 16 && !env->sd->nr_balance_failed)
> > -			goto next;
> > -
> > -		if ((load / 2) > env->imbalance)
> > -			goto next;
> > -
> > -		if (!can_migrate_task(p, env))
> > +		if (!can_migrate_task(p, env, false, &load))
> >  			goto next;
> >  
> >  		move_task(p, env);
> 
> Right, so I'm not so taken with this one. The whole load stuff really
> is a balance heuristic that's part of move_tasks(), move_one_task()
> really doesn't care about that.
> 
> So why did you include it? Purely so you didn't have to re-order the
> tests? I don't see any reason not to flip a tests around.

I think I don't fully understand what you are concerned about, because of
my poor English. If possible, please elaborate on the problem in more detail.

First of all, I will do my best to answer your question.

Patches 4/8 and 5/8 are for mitigating the useless redoing overhead caused
by LBF_ALL_PINNED. For this purpose, we should check 'cpu affinity'
before evaluating the load. Just moving can_migrate_task() up above the
load evaluation code may have a side effect, because can_migrate_task()
has other checks, such as 'cache hotness'. I don't want a side effect. So
I embedded the load evaluation into can_migrate_task(), re-ordered the
checks, and disabled the load evaluation for move_one_task().

If your recommendation is to move can_migrate_task() up above the
load evaluation code, yes, I can, and will do that. :)

Please let me know what I misunderstand.

Thanks.



* Re: [PATCH 6/8] sched: rename load_balance_tmpmask to load_balance_cpu_active
  2013-03-19 15:01   ` Peter Zijlstra
@ 2013-03-20  7:35     ` Joonsoo Kim
  0 siblings, 0 replies; 30+ messages in thread
From: Joonsoo Kim @ 2013-03-20  7:35 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Ingo Molnar, Srivatsa Vaddagiri, linux-kernel

On Tue, Mar 19, 2013 at 04:01:01PM +0100, Peter Zijlstra wrote:
> On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> > This name doesn't represent specific meaning.
> > So rename it to imply it's purpose.
> > 
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > 
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 26058d0..e6f8783 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -6814,7 +6814,7 @@ struct task_group root_task_group;
> >  LIST_HEAD(task_groups);
> >  #endif
> >  
> > -DECLARE_PER_CPU(cpumask_var_t, load_balance_tmpmask);
> > +DECLARE_PER_CPU(cpumask_var_t, load_balance_cpu_active);
> 
> That's not much better; how about we call it: load_balance_mask.

Okay, in the next spin, I will call it load_balance_mask.

Thanks.

> 


* Re: [PATCH 7/8] sched: prevent to re-select dst-cpu in load_balance()
  2013-03-19 15:05   ` Peter Zijlstra
@ 2013-03-20  7:43     ` Joonsoo Kim
  2013-03-20 12:38       ` Peter Zijlstra
  0 siblings, 1 reply; 30+ messages in thread
From: Joonsoo Kim @ 2013-03-20  7:43 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Ingo Molnar, Srivatsa Vaddagiri, linux-kernel

On Tue, Mar 19, 2013 at 04:05:46PM +0100, Peter Zijlstra wrote:
> On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> > Commit 88b8dac0 makes load_balance() consider other cpus in its group.
> > But, in that, there is no code for preventing to re-select dst-cpu.
> > So, same dst-cpu can be selected over and over.
> > 
> > This patch add functionality to load_balance() in order to exclude
> > cpu which is selected once.
> 
> Oh man.. seriously? Did you see this happen? Also, can't we simply
> remove it from lb->cpus?

I didn't see it, this is just from logical thinking. :)
lb->cpus is for source cpus and dst-cpu is for destination cpus, so removing
it from lb->cpus doesn't work.

Thanks.

> 


* Re: [PATCH 8/8] sched: reset lb_env when redo in load_balance()
  2013-03-19 15:21   ` Peter Zijlstra
@ 2013-03-20  8:13     ` Joonsoo Kim
  0 siblings, 0 replies; 30+ messages in thread
From: Joonsoo Kim @ 2013-03-20  8:13 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Ingo Molnar, Srivatsa Vaddagiri, linux-kernel

On Tue, Mar 19, 2013 at 04:21:23PM +0100, Peter Zijlstra wrote:
> On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> > Commit 88b8dac0 makes load_balance() consider other cpus in its group.
> > So, now, When we redo in load_balance(), we should reset some fields of
> > lb_env to ensure that load_balance() works for initial cpu, not for other
> > cpus in its group. So correct it.
> > 
> > Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 70631e8..25c798c 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -5014,14 +5014,20 @@ static int load_balance(int this_cpu, struct rq *this_rq,
> >  
> >  	struct lb_env env = {
> >  		.sd		= sd,
> > -		.dst_cpu	= this_cpu,
> > -		.dst_rq		= this_rq,
> >  		.dst_grpmask    = dst_grp,
> >  		.idle		= idle,
> > -		.loop_break	= sched_nr_migrate_break,
> >  		.cpus		= cpus,
> >  	};
> >  
> > +	schedstat_inc(sd, lb_count[idle]);
> > +	cpumask_copy(cpus, cpu_active_mask);
> > +
> > +redo:
> > +	env.dst_cpu = this_cpu;
> > +	env.dst_rq = this_rq;
> > +	env.loop = 0;
> > +	env.loop_break = sched_nr_migrate_break;
> > +
> >  	/* For NEWLY_IDLE load_balancing, we don't need to consider
> >  	 * other cpus in our group */
> >  	if (idle == CPU_NEWLY_IDLE) {
> 
> OK, so this is the case where we tried to balance !this_cpu and found
> ALL_PINNED, right?
> 
> You can only get here in very weird cases where people love their
> sched_setaffinity() waaaaay too much, do we care? Why not give up?

Now that you mentioned it, I don't have enough reason for this patch.
I think that giving up is preferable.
I will omit this patch in the next spin.

> 
> Also, looking at this, shouldn't we consider env->cpus in
> can_migrate_task() where we compute new_dst_cpu?

As previously stated, env->cpus is for src cpus, so when we decide dst_cpu,
it doesn't matter.

Many thanks for the detailed review of this whole patchset.
Thanks.

> 


* Re: [PATCH 4/8] sched: clean up move_task() and move_one_task()
  2013-03-20  7:33     ` Joonsoo Kim
@ 2013-03-20 11:11       ` Peter Zijlstra
  2013-03-20 13:42         ` JoonSoo Kim
  2013-03-20 11:16       ` Peter Zijlstra
  1 sibling, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2013-03-20 11:11 UTC (permalink / raw)
  To: Joonsoo Kim; +Cc: Ingo Molnar, Srivatsa Vaddagiri, linux-kernel

On Wed, 2013-03-20 at 16:33 +0900, Joonsoo Kim wrote:

> > Right, so I'm not so taken with this one. The whole load stuff really
> > is a balance heuristic that's part of move_tasks(), move_one_task()
> > really doesn't care about that.
> > 
> > So why did you include it? Purely so you didn't have to re-order the
> > tests? I don't see any reason not to flip a tests around.
> 
> I think that I'm not fully understand what you are concerning, because of
> my poor English. If possible, please elaborate on a problem in more detail.

OK, so your initial Changelog said it wanted to remove some code
duplication between move_tasks() and move_one_task(); but then you put
in the load heuristics and add a boolean argument to only enable those
for move_tasks() -- so clearly that wasn't duplicated.

So why move that code... I proposed that this was due to a reluctance to
re-arrange the various tests that stop the migration from happening.

Now you say:

> ... Just moving up can_migrate_task() above
> load evaluation code may raise side effect, because can_migrate_task() have
> other checking which is 'cache hottness'. I don't want a side effect. So
> embedding load evaluation to can_migrate_task() and re-order checking and
> makes load evaluation disabled for move_one_task().

Which pretty much affirms this. However I also said that I don't think
the order really matters that much; each test will cancel the migration
of this task; the order of these tests seems immaterial.

> If your recommandation is to move up can_mirate_task() above
> load evaluation code, yes, I can, and will do that. :)

I would actually propose moving the throttled test into
can_migrate_task() and leave it at that.



* Re: [PATCH 4/8] sched: clean up move_task() and move_one_task()
  2013-03-20  7:33     ` Joonsoo Kim
  2013-03-20 11:11       ` Peter Zijlstra
@ 2013-03-20 11:16       ` Peter Zijlstra
  2013-03-20 12:07         ` Peter Zijlstra
  1 sibling, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2013-03-20 11:16 UTC (permalink / raw)
  To: Joonsoo Kim; +Cc: Ingo Molnar, Srivatsa Vaddagiri, linux-kernel


On Wed, 2013-03-20 at 16:33 +0900, Joonsoo Kim wrote:

> > Right, so I'm not so taken with this one. The whole load stuff really
> > is a balance heuristic that's part of move_tasks(), move_one_task()
> > really doesn't care about that.
> > 
> > So why did you include it? Purely so you didn't have to re-order the
> > tests? I don't see any reason not to flip a tests around.
> 
> I think that I'm not fully understand what you are concerning, because of
> my poor English. If possible, please elaborate on a problem in more detail.

OK, so your initial Changelog said it wanted to remove some code
duplication between move_tasks() and move_one_task(); but then you put
in the load heuristics and add a boolean argument to only enable those
for move_tasks() -- so clearly that wasn't duplicated.

So why move that code... I proposed that this was due to a reluctance to
re-arrange the various tests that stop the migration from happening.

Now you say:

> ... Just moving up can_migrate_task() above
> load evaluation code may raise side effect, because can_migrate_task() have
> other checking which is 'cache hottness'. I don't want a side effect. So
> embedding load evaluation to can_migrate_task() and re-order checking and
> makes load evaluation disabled for move_one_task().

Which pretty much affirms this. However I also said that I don't think
the order really matters that much; each test will cancel the migration
of this task; the order of these tests seems immaterial.

> If your recommandation is to move up can_mirate_task() above
> load evaluation code, yes, I can, and will do that. :)

I would actually propose 




* Re: [PATCH 4/8] sched: clean up move_task() and move_one_task()
  2013-03-20 11:16       ` Peter Zijlstra
@ 2013-03-20 12:07         ` Peter Zijlstra
  0 siblings, 0 replies; 30+ messages in thread
From: Peter Zijlstra @ 2013-03-20 12:07 UTC (permalink / raw)
  To: Joonsoo Kim; +Cc: Ingo Molnar, Srivatsa Vaddagiri, linux-kernel

On Wed, 2013-03-20 at 12:16 +0100, Peter Zijlstra wrote:
> > If your recommandation is to move up can_mirate_task() above
> > load evaluation code, yes, I can, and will do that. :)
> 
> I would actually propose 

... to move the throttled test into can_migrate_task().

(damn evo crashed on me... *again*).



* Re: [PATCH 7/8] sched: prevent to re-select dst-cpu in load_balance()
  2013-03-20  7:43     ` Joonsoo Kim
@ 2013-03-20 12:38       ` Peter Zijlstra
  2013-03-20 13:48         ` JoonSoo Kim
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2013-03-20 12:38 UTC (permalink / raw)
  To: Joonsoo Kim; +Cc: Ingo Molnar, Srivatsa Vaddagiri, linux-kernel

On Wed, 2013-03-20 at 16:43 +0900, Joonsoo Kim wrote:
> On Tue, Mar 19, 2013 at 04:05:46PM +0100, Peter Zijlstra wrote:
> > On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> > > Commit 88b8dac0 makes load_balance() consider other cpus in its group.
> > > But, in that, there is no code for preventing to re-select dst-cpu.
> > > So, same dst-cpu can be selected over and over.
> > > 
> > > This patch add functionality to load_balance() in order to exclude
> > > cpu which is selected once.
> > 
> > Oh man.. seriously? Did you see this happen? Also, can't we simply
> > remove it from lb->cpus?
> 
> I didn't see it, I do just logical thinking. :)
> lb->cpus is for source cpus and dst-cpu is for dest cpus. So, it doesn't
> works to remove it from lb->cpus.

How about we interpret ->cpus as the total mask to balance; so both
source and destination. That way clearing a cpu means we won't take nor
put tasks on it.
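
A rough sketch of that idea (untested, just to illustrate the suggestion):

	/* in load_balance(): giving up on a dst_cpu takes it out of play */
	cpumask_clear_cpu(env.dst_cpu, env.cpus);

	/* in can_migrate_task(): pick new_dst_cpu only from cpus in play */
	for_each_cpu_and(cpu, env->dst_grpmask, env->cpus) {
		if (cpumask_test_cpu(cpu, tsk_cpus_allowed(p))) {
			env->flags |= LBF_SOME_PINNED;
			env->new_dst_cpu = cpu;
			break;
		}
	}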



* Re: [PATCH 4/8] sched: clean up move_task() and move_one_task()
  2013-03-20 11:11       ` Peter Zijlstra
@ 2013-03-20 13:42         ` JoonSoo Kim
  0 siblings, 0 replies; 30+ messages in thread
From: JoonSoo Kim @ 2013-03-20 13:42 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Joonsoo Kim, Ingo Molnar, Srivatsa Vaddagiri, linux-kernel

2013/3/20 Peter Zijlstra <peterz@infradead.org>:
> On Wed, 2013-03-20 at 16:33 +0900, Joonsoo Kim wrote:
>
>> > Right, so I'm not so taken with this one. The whole load stuff really
>> > is a balance heuristic that's part of move_tasks(), move_one_task()
>> > really doesn't care about that.
>> >
>> > So why did you include it? Purely so you didn't have to re-order the
>> > tests? I don't see any reason not to flip a tests around.
>>
>> I think that I'm not fully understand what you are concerning, because of
>> my poor English. If possible, please elaborate on a problem in more detail.
>
> OK, so your initial Changelog said it wanted to remove some code
> duplication between move_tasks() and move_one_task(); but then you put
> in the load heuristics and add a boolean argument to only enable those
> for move_tasks() -- so clearly that wasn't duplicated.
>
> So why move that code.. I proposed that this was due a reluctance to
> re-arrange the various tests that stop the migration from happening.
>
> Now you say:
>
>> ... Just moving up can_migrate_task() above
>> load evaluation code may raise side effect, because can_migrate_task() have
>> other checking which is 'cache hottness'. I don't want a side effect. So
>> embedding load evaluation to can_migrate_task() and re-order checking and
>> makes load evaluation disabled for move_one_task().
>
> Which pretty much affirms this. However I also said that I don't think
> the order really matters that much; each test will cancel the migration
> of this task; the order of these tests seem immaterial.
>
>> If your recommandation is to move up can_mirate_task() above
>> load evaluation code, yes, I can, and will do that. :)
>
> I would actually propose moving the throttled test into
> can_migrate_task() and leave it at that.

Okay. I will do that in the next spin.

Thanks!!

>


* Re: [PATCH 7/8] sched: prevent to re-select dst-cpu in load_balance()
  2013-03-20 12:38       ` Peter Zijlstra
@ 2013-03-20 13:48         ` JoonSoo Kim
  0 siblings, 0 replies; 30+ messages in thread
From: JoonSoo Kim @ 2013-03-20 13:48 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Joonsoo Kim, Ingo Molnar, Srivatsa Vaddagiri, linux-kernel

2013/3/20 Peter Zijlstra <peterz@infradead.org>:
> On Wed, 2013-03-20 at 16:43 +0900, Joonsoo Kim wrote:
>> On Tue, Mar 19, 2013 at 04:05:46PM +0100, Peter Zijlstra wrote:
>> > On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
>> > > Commit 88b8dac0 makes load_balance() consider other cpus in its group.
>> > > But, in that, there is no code for preventing to re-select dst-cpu.
>> > > So, same dst-cpu can be selected over and over.
>> > >
>> > > This patch add functionality to load_balance() in order to exclude
>> > > cpu which is selected once.
>> >
>> > Oh man.. seriously? Did you see this happen? Also, can't we simply
>> > remove it from lb->cpus?
>>
>> I didn't see it, I do just logical thinking. :)
>> lb->cpus is for source cpus and dst-cpu is for dest cpus. So, it doesn't
>> works to remove it from lb->cpus.
>
> How about we interpret ->cpus as the total mask to balance; so both
> source and destination. That way clearing a cpu means we won't take nor
> put tasks on it.

My first thought is that it may be possible.
I will try to do it.

Thanks.


