* [RFC PATCH 0/2] sched: simplify the select_task_rq_fair()
@ 2013-01-11  8:15 Michael Wang
  2013-01-11  8:17 ` [RFC PATCH 1/2] sched: schedule balance map foundation Michael Wang
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Michael Wang @ 2013-01-11  8:15 UTC (permalink / raw)
  To: LKML
  Cc: Ingo Molnar, Peter Zijlstra, Paul Turner, Tejun Heo,
	Mike Galbraith, Andrew Morton

This patch set is trying to simplify select_task_rq_fair() with a
schedule balance map.

After getting rid of the complex code and reorganizing the logic, pgbench
shows the improvement.

	Prev:
		| db_size | clients |  tps  |
		+---------+---------+-------+
		| 22 MB   |       1 |  4437 |
		| 22 MB   |      16 | 51351 |
		| 22 MB   |      32 | 49959 |
		| 7484 MB |       1 |  4078 |
		| 7484 MB |      16 | 44681 |
		| 7484 MB |      32 | 42463 |
		| 15 GB   |       1 |  3992 |
		| 15 GB   |      16 | 44107 |
		| 15 GB   |      32 | 41797 |

	Post:
		| db_size | clients |  tps  |
		+---------+---------+-------+
		| 22 MB   |       1 | 11053 |		+149.11%
		| 22 MB   |      16 | 55671 |		+8.41%
		| 22 MB   |      32 | 52596 |		+5.28%
		| 7483 MB |       1 |  8180 |		+100.59%
		| 7483 MB |      16 | 48392 |		+8.31%
		| 7483 MB |      32 | 44185 |		+0.18%
		| 15 GB   |       1 |  8127 |		+103.58%
		| 15 GB   |      16 | 48156 |		+9.18%
		| 15 GB   |      32 | 43387 |		+3.8%

Please check the patches for more details about the schedule balance map.
They are currently based on linux-next 3.7.0-rc6; I will rebase them onto
the tip tree in a follow-up version.

Comments are very welcome.

Tested with:
	a 12-cpu x86 server and linux-next 3.7.0-rc6.

Michael Wang (2):
	[PATCH 1/2] sched: schedule balance map foundation
	[PATCH 2/2] sched: simplify select_task_rq_fair() with schedule balance map

Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
---
 core.c  |   61 +++++++++++++++++++++++++++++
 fair.c  |  133 +++++++++++++++++++++++++++++++++-------------------------------
 sched.h |   28 +++++++++++++
 3 files changed, 159 insertions(+), 63 deletions(-)



* [RFC PATCH 1/2] sched: schedule balance map foundation
  2013-01-11  8:15 [RFC PATCH 0/2] sched: simplify the select_task_rq_fair() Michael Wang
@ 2013-01-11  8:17 ` Michael Wang
  2013-01-14  8:26   ` Namhyung Kim
  2013-01-11  8:18 ` [RFC PATCH 2/2] sched: simplify select_task_rq_fair() with schedule balance map Michael Wang
  2013-01-11 10:13 ` [RFC PATCH 0/2] sched: simplify the select_task_rq_fair() Nikunj A Dadhania
  2 siblings, 1 reply; 9+ messages in thread
From: Michael Wang @ 2013-01-11  8:17 UTC (permalink / raw)
  To: LKML
  Cc: Ingo Molnar, Peter Zijlstra, Paul Turner, Tejun Heo,
	Mike Galbraith, Andrew Morton

In order to get rid of the complex code in select_task_rq_fair(), an
approach is needed to directly get the sd at each level with the proper
flag set.

The schedule balance map is the solution; it records each sd according
to its flags and level.

For example, cpu_sbm->sd[wake][l] will locate the sd of the cpu which
supports wake-up balancing at level l.

In order to quickly locate the lower sd while changing the base cpu, any
level with an empty sd in the map is filled with the lower-level sd.
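
As a rough illustration of the intended usage (the names are the ones
introduced by this patch; the snippet itself is only a sketch, not part
of the change):

	/* old style: walk the whole domain tree, testing flags on each level */
	for_each_domain(cpu, tmp) {
		if (tmp->flags & SD_BALANCE_WAKE)
			sd = tmp;
	}

	/* with the map: directly look up the top wake-balance domain of cpu */
	sbm = cpu_rq(cpu)->sbm;
	sd = sbm->sd[SBM_WAKE_TYPE][sbm->top_level[SBM_WAKE_TYPE]];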

Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
---
 kernel/sched/core.c  |   61 ++++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h |   28 +++++++++++++++++++++++
 2 files changed, 89 insertions(+), 0 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2d8927f..80810a3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5497,6 +5497,55 @@ static void update_top_cache_domain(int cpu)
 	per_cpu(sd_llc_id, cpu) = id;
 }
 
+DEFINE_PER_CPU_SHARED_ALIGNED(struct sched_balance_map, sbm_array);
+
+static void build_sched_balance_map(int cpu)
+{
+	struct sched_balance_map *sbm = &per_cpu(sbm_array, cpu);
+	struct sched_domain *sd = cpu_rq(cpu)->sd;
+	struct sched_domain *top_sd = NULL;
+	int i, type, level = 0;
+
+	while (sd) {
+		if (sd->flags & SD_LOAD_BALANCE) {
+			if (sd->flags & SD_BALANCE_EXEC) {
+				sbm->top_level[SBM_EXEC_TYPE] = sd->level;
+				sbm->sd[SBM_EXEC_TYPE][sd->level] = sd;
+			}
+
+			if (sd->flags & SD_BALANCE_FORK) {
+				sbm->top_level[SBM_FORK_TYPE] = sd->level;
+				sbm->sd[SBM_FORK_TYPE][sd->level] = sd;
+			}
+
+			if (sd->flags & SD_BALANCE_WAKE) {
+				sbm->top_level[SBM_WAKE_TYPE] = sd->level;
+				sbm->sd[SBM_WAKE_TYPE][sd->level] = sd;
+			}
+
+			if (sd->flags & SD_WAKE_AFFINE) {
+				for_each_cpu(i, sched_domain_span(sd)) {
+					if (!sbm->affine_map[i])
+						sbm->affine_map[i] = sd;
+				}
+			}
+		}
+		sd = sd->parent;
+	}
+
+	/*
+	 * fill the hole to get lower level sd easily.
+	 */
+	for (type = 0; type < SBM_MAX_TYPE; type++) {
+		level = sbm->top_level[type];
+		top_sd = sbm->sd[type][level];
+		if ((++level != SBM_MAX_LEVEL) && top_sd) {
+			for (; level < SBM_MAX_LEVEL; level++)
+				sbm->sd[type][level] = top_sd;
+		}
+	}
+}
+
 /*
  * Attach the domain 'sd' to 'cpu' as its base domain. Callers must
  * hold the hotplug lock.
@@ -5506,6 +5555,9 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 	struct sched_domain *tmp;
+	struct sched_balance_map *sbm = &per_cpu(sbm_array, cpu);
+
+	rcu_assign_pointer(rq->sbm, NULL);
 
 	/* Remove the sched domains which do not contribute to scheduling. */
 	for (tmp = sd; tmp; ) {
@@ -5538,6 +5590,15 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
 	destroy_sched_domains(tmp, cpu);
 
 	update_top_cache_domain(cpu);
+
+	/*
+	 * synchronize_rcu() is unnecessary here since
+	 * destroy_sched_domains() already does the work.
+	 */
+	memset(sbm, 0, sizeof(*sbm));
+
+	build_sched_balance_map(cpu);
+	rcu_assign_pointer(rq->sbm, sbm);
 }
 
 /* cpus with isolated domains */
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 7a7db09..c91c6c7 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -336,6 +336,33 @@ struct root_domain {
 
 extern struct root_domain def_root_domain;
 
+#ifdef CONFIG_SCHED_SMT
+#define SBM_MAX_LEVEL	4
+#else
+#ifdef CONFIG_SCHED_MC
+#define SBM_MAX_LEVEL	3
+#else
+#ifdef CONFIG_SCHED_BOOK
+#define SBM_MAX_LEVEL	2
+#else
+#define SBM_MAX_LEVEL	1
+#endif
+#endif
+#endif
+
+enum {
+	SBM_EXEC_TYPE,
+	SBM_FORK_TYPE,
+	SBM_WAKE_TYPE,
+	SBM_MAX_TYPE
+};
+
+struct sched_balance_map {
+	struct sched_domain *sd[SBM_MAX_TYPE][SBM_MAX_LEVEL];
+	int top_level[SBM_MAX_TYPE];
+	struct sched_domain *affine_map[NR_CPUS];
+};
+
 #endif /* CONFIG_SMP */
 
 /*
@@ -403,6 +430,7 @@ struct rq {
 #ifdef CONFIG_SMP
 	struct root_domain *rd;
 	struct sched_domain *sd;
+	struct sched_balance_map *sbm;
 
 	unsigned long cpu_power;
 
-- 
1.7.4.1



* [RFC PATCH 2/2] sched: simplify select_task_rq_fair() with schedule balance map
  2013-01-11  8:15 [RFC PATCH 0/2] sched: simplify the select_task_rq_fair() Michael Wang
  2013-01-11  8:17 ` [RFC PATCH 1/2] sched: schedule balance map foundation Michael Wang
@ 2013-01-11  8:18 ` Michael Wang
  2013-01-14  8:27   ` Namhyung Kim
  2013-01-11 10:13 ` [RFC PATCH 0/2] sched: simplify the select_task_rq_fair() Nikunj A Dadhania
  2 siblings, 1 reply; 9+ messages in thread
From: Michael Wang @ 2013-01-11  8:18 UTC (permalink / raw)
  To: LKML
  Cc: Ingo Molnar, Peter Zijlstra, Paul Turner, Tejun Heo,
	Mike Galbraith, Andrew Morton

Since the schedule balance map provides a way to get the proper sd
directly, it is possible to simplify the code of select_task_rq_fair().

The new code is designed to preserve most of the old logic, but gets rid
of the 'for' loops by using the schedule balance map to locate the proper
sd directly.
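
Roughly, the balance path becomes a plain walk down the map; a skeleton
only, the wake-affine fast path and the other details are in the patch
below:

	sd = sbm->sd[type][sbm->top_level[type]];
	while (sd) {
		/* balance on this level, possibly moving to another cpu */
		if (!sd->level)
			break;
		sbm = cpu_rq(cpu)->sbm;
		sd = sbm->sd[type][sd->level - 1];
	}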

Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
---
 kernel/sched/fair.c |  133 +++++++++++++++++++++++++++------------------------
 1 files changed, 70 insertions(+), 63 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6b800a1..20b6f5b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2682,100 +2682,107 @@ done:
 }
 
 /*
- * sched_balance_self: balance the current task (running on cpu) in domains
- * that have the 'flag' flag set. In practice, this is SD_BALANCE_FORK and
- * SD_BALANCE_EXEC.
+ * select_task_rq_fair()
+ *		select a proper cpu for the task to run on.
  *
- * Balance, ie. select the least loaded group.
- *
- * Returns the target CPU number, or the same CPU if no balancing is needed.
- *
- * preempt must be disabled.
+ *	p		-- the task we are going to select a cpu for
+ *	sd_flag		-- indicates the context: WAKE, EXEC or FORK.
+ *	wake_flags	-- we only care about WF_SYNC currently
  */
 static int
 select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flags)
 {
-	struct sched_domain *tmp, *affine_sd = NULL, *sd = NULL;
+	struct sched_domain *sd = NULL;
 	int cpu = smp_processor_id();
 	int prev_cpu = task_cpu(p);
 	int new_cpu = cpu;
-	int want_affine = 0;
 	int sync = wake_flags & WF_SYNC;
+	struct sched_balance_map *sbm = NULL;
+	int type = 0;
 
 	if (p->nr_cpus_allowed == 1)
 		return prev_cpu;
 
-	if (sd_flag & SD_BALANCE_WAKE) {
-		if (cpumask_test_cpu(cpu, tsk_cpus_allowed(p)))
-			want_affine = 1;
-		new_cpu = prev_cpu;
-	}
+	if (sd_flag & SD_BALANCE_EXEC)
+		type = SBM_EXEC_TYPE;
+	else if (sd_flag & SD_BALANCE_FORK)
+		type = SBM_FORK_TYPE;
+	else if (sd_flag & SD_BALANCE_WAKE)
+		type = SBM_WAKE_TYPE;
 
 	rcu_read_lock();
-	for_each_domain(cpu, tmp) {
-		if (!(tmp->flags & SD_LOAD_BALANCE))
-			continue;
 
+	sbm = cpu_rq(cpu)->sbm;
+	if (!sbm)
+		goto unlock;
+
+	if (sd_flag & SD_BALANCE_WAKE) {
 		/*
-		 * If both cpu and prev_cpu are part of this domain,
-		 * cpu is a valid SD_WAKE_AFFINE target.
+		 * Tasks to be woken are special: the memory they rely on
+		 * may already be cached on prev_cpu, and they usually
+		 * require low latency.
+		 *
+		 * So first try to locate an idle cpu sharing a cache with
+		 * prev_cpu; this has a chance of disturbing the load
+		 * balance, but fortunately select_idle_sibling() searches
+		 * from top to bottom, which helps to reduce that chance in
+		 * some cases.
 		 */
-		if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
-		    cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
-			affine_sd = tmp;
-			break;
-		}
+		new_cpu = select_idle_sibling(p, prev_cpu);
+		if (idle_cpu(new_cpu))
+			goto unlock;
 
-		if (tmp->flags & sd_flag)
-			sd = tmp;
-	}
+		/*
+		 * No idle cpu could be found in the topology of prev_cpu.
+		 * Before jumping into the slow balance_path, try searching
+		 * again in the topology of the current cpu if it is an
+		 * affine target of prev_cpu.
+		 */
+		if (!sbm->affine_map[prev_cpu] &&
+				!cpumask_test_cpu(cpu, tsk_cpus_allowed(p)))
+			goto balance_path;
 
-	if (affine_sd) {
-		if (cpu != prev_cpu && wake_affine(affine_sd, p, sync))
-			prev_cpu = cpu;
+		new_cpu = select_idle_sibling(p, cpu);
+		if (!idle_cpu(new_cpu))
+			goto balance_path;
 
-		new_cpu = select_idle_sibling(p, prev_cpu);
-		goto unlock;
+		/*
+		 * Invoke wake_affine() last, since it is without doubt a
+		 * performance killer.
+		 */
+		if (wake_affine(sbm->affine_map[prev_cpu], p, sync))
+			goto unlock;
 	}
 
+balance_path:
+	new_cpu = cpu;
+	sd = sbm->sd[type][sbm->top_level[type]];
+
 	while (sd) {
 		int load_idx = sd->forkexec_idx;
-		struct sched_group *group;
-		int weight;
-
-		if (!(sd->flags & sd_flag)) {
-			sd = sd->child;
-			continue;
-		}
+		struct sched_group *sg = NULL;
 
 		if (sd_flag & SD_BALANCE_WAKE)
 			load_idx = sd->wake_idx;
 
-		group = find_idlest_group(sd, p, cpu, load_idx);
-		if (!group) {
-			sd = sd->child;
-			continue;
-		}
+		sg = find_idlest_group(sd, p, cpu, load_idx);
+		if (!sg)
+			goto next_sd;
 
-		new_cpu = find_idlest_cpu(group, p, cpu);
-		if (new_cpu == -1 || new_cpu == cpu) {
-			/* Now try balancing at a lower domain level of cpu */
-			sd = sd->child;
-			continue;
-		}
+		new_cpu = find_idlest_cpu(sg, p, cpu);
+		if (new_cpu != -1)
+			cpu = new_cpu;
+next_sd:
+		if (!sd->level)
+			break;
+
+		sbm = cpu_rq(cpu)->sbm;
+		if (!sbm)
+			break;
+
+		sd = sbm->sd[type][sd->level - 1];
+	};
 
-		/* Now try balancing at a lower domain level of new_cpu */
-		cpu = new_cpu;
-		weight = sd->span_weight;
-		sd = NULL;
-		for_each_domain(cpu, tmp) {
-			if (weight <= tmp->span_weight)
-				break;
-			if (tmp->flags & sd_flag)
-				sd = tmp;
-		}
-		/* while loop will break here if sd == NULL */
-	}
 unlock:
 	rcu_read_unlock();
 
-- 
1.7.4.1



* Re: [RFC PATCH 0/2] sched: simplify the select_task_rq_fair()
  2013-01-11  8:15 [RFC PATCH 0/2] sched: simplify the select_task_rq_fair() Michael Wang
  2013-01-11  8:17 ` [RFC PATCH 1/2] sched: schedule balance map foundation Michael Wang
  2013-01-11  8:18 ` [RFC PATCH 2/2] sched: simplify select_task_rq_fair() with schedule balance map Michael Wang
@ 2013-01-11 10:13 ` Nikunj A Dadhania
  2013-01-15  2:20   ` Michael Wang
  2 siblings, 1 reply; 9+ messages in thread
From: Nikunj A Dadhania @ 2013-01-11 10:13 UTC (permalink / raw)
  To: Michael Wang, LKML
  Cc: Ingo Molnar, Peter Zijlstra, Paul Turner, Tejun Heo,
	Mike Galbraith, Andrew Morton

Hi Michael,

Michael Wang <wangyun@linux.vnet.ibm.com> writes:
> 	Prev:
> 		+---------+---------+-------+
> 		| 7484 MB |      32 | 42463 |
> 	Post:
> 		| 7483 MB |      32 | 44185 |		+0.18%
That should be +4.05%
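For reference, (44185 - 42463) / 42463 ≈ +4.05%.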

Regards
Nikunj



* Re: [RFC PATCH 1/2] sched: schedule balance map foundation
  2013-01-11  8:17 ` [RFC PATCH 1/2] sched: schedule balance map foundation Michael Wang
@ 2013-01-14  8:26   ` Namhyung Kim
  2013-01-15  2:33     ` Michael Wang
  0 siblings, 1 reply; 9+ messages in thread
From: Namhyung Kim @ 2013-01-14  8:26 UTC (permalink / raw)
  To: Michael Wang
  Cc: LKML, Ingo Molnar, Peter Zijlstra, Paul Turner, Tejun Heo,
	Mike Galbraith, Andrew Morton

Hi Michael,

On Fri, 11 Jan 2013 16:17:43 +0800, Michael Wang wrote:
> In order to get rid of the complex code in select_task_rq_fair(), an
> approach is needed to directly get the sd at each level with the proper
> flag set.
>
> The schedule balance map is the solution; it records each sd according
> to its flags and level.
>
> For example, cpu_sbm->sd[wake][l] will locate the sd of the cpu which
> supports wake-up balancing at level l.
>
> In order to quickly locate the lower sd while changing the base cpu, any
> level with an empty sd in the map is filled with the lower-level sd.
>
> Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
> ---
>  kernel/sched/core.c  |   61 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  kernel/sched/sched.h |   28 +++++++++++++++++++++++
>  2 files changed, 89 insertions(+), 0 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 2d8927f..80810a3 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -5497,6 +5497,55 @@ static void update_top_cache_domain(int cpu)
>  	per_cpu(sd_llc_id, cpu) = id;
>  }
>  
> +DEFINE_PER_CPU_SHARED_ALIGNED(struct sched_balance_map, sbm_array);
> +
> +static void build_sched_balance_map(int cpu)
> +{
> +	struct sched_balance_map *sbm = &per_cpu(sbm_array, cpu);
> +	struct sched_domain *sd = cpu_rq(cpu)->sd;
> +	struct sched_domain *top_sd = NULL;
> +	int i, type, level = 0;
> +
> +	while (sd) {
> +		if (sd->flags & SD_LOAD_BALANCE) {
> +			if (sd->flags & SD_BALANCE_EXEC) {
> +				sbm->top_level[SBM_EXEC_TYPE] = sd->level;
> +				sbm->sd[SBM_EXEC_TYPE][sd->level] = sd;
> +			}
> +
> +			if (sd->flags & SD_BALANCE_FORK) {
> +				sbm->top_level[SBM_FORK_TYPE] = sd->level;
> +				sbm->sd[SBM_FORK_TYPE][sd->level] = sd;
> +			}
> +
> +			if (sd->flags & SD_BALANCE_WAKE) {
> +				sbm->top_level[SBM_WAKE_TYPE] = sd->level;
> +				sbm->sd[SBM_WAKE_TYPE][sd->level] = sd;
> +			}
> +
> +			if (sd->flags & SD_WAKE_AFFINE) {
> +				for_each_cpu(i, sched_domain_span(sd)) {
> +					if (!sbm->affine_map[i])
> +						sbm->affine_map[i] = sd;
> +				}
> +			}
> +		}
> +		sd = sd->parent;
> +	}

It seems that it can be done like:

	for_each_domain(cpu, sd) {
		if (!(sd->flags & SD_LOAD_BALANCE))
			continue;

		if (sd->flags & SD_BALANCE_EXEC)
		...
	}


> +
> +	/*
> +	 * fill the hole to get lower level sd easily.
> +	 */
> +	for (type = 0; type < SBM_MAX_TYPE; type++) {
> +		level = sbm->top_level[type];
> +		top_sd = sbm->sd[type][level];
> +		if ((++level != SBM_MAX_LEVEL) && top_sd) {
> +			for (; level < SBM_MAX_LEVEL; level++)
> +				sbm->sd[type][level] = top_sd;
> +		}
> +	}
> +}
[snip]
> +#ifdef CONFIG_SCHED_SMT
> +#define SBM_MAX_LEVEL	4
> +#else
> +#ifdef CONFIG_SCHED_MC
> +#define SBM_MAX_LEVEL	3
> +#else
> +#ifdef CONFIG_SCHED_BOOK
> +#define SBM_MAX_LEVEL	2
> +#else
> +#define SBM_MAX_LEVEL	1
> +#endif
> +#endif
> +#endif

Looks like these fixed level constants do not consider NUMA domains.
Doesn't accessing sbm->sd[type][level] in the above while loop cause a
problem on big NUMA machines?

Thanks,
Namhyung

> +
> +enum {
> +	SBM_EXEC_TYPE,
> +	SBM_FORK_TYPE,
> +	SBM_WAKE_TYPE,
> +	SBM_MAX_TYPE
> +};
> +
> +struct sched_balance_map {
> +	struct sched_domain *sd[SBM_MAX_TYPE][SBM_MAX_LEVEL];
> +	int top_level[SBM_MAX_TYPE];
> +	struct sched_domain *affine_map[NR_CPUS];
> +};
> +
>  #endif /* CONFIG_SMP */
>  
>  /*
> @@ -403,6 +430,7 @@ struct rq {
>  #ifdef CONFIG_SMP
>  	struct root_domain *rd;
>  	struct sched_domain *sd;
> +	struct sched_balance_map *sbm;
>  
>  	unsigned long cpu_power;


* Re: [RFC PATCH 2/2] sched: simplify select_task_rq_fair() with schedule balance map
  2013-01-11  8:18 ` [RFC PATCH 2/2] sched: simplify select_task_rq_fair() with schedule balance map Michael Wang
@ 2013-01-14  8:27   ` Namhyung Kim
  2013-01-15  2:33     ` Michael Wang
  0 siblings, 1 reply; 9+ messages in thread
From: Namhyung Kim @ 2013-01-14  8:27 UTC (permalink / raw)
  To: Michael Wang
  Cc: LKML, Ingo Molnar, Peter Zijlstra, Paul Turner, Tejun Heo,
	Mike Galbraith, Andrew Morton

On Fri, 11 Jan 2013 16:18:33 +0800, Michael Wang wrote:
> +next_sd:
> +		if (!sd->level)
> +			break;
> +
> +		sbm = cpu_rq(cpu)->sbm;
> +		if (!sbm)
> +			break;
> +
> +		sd = sbm->sd[type][sd->level - 1];
> +	};

An unnecessary semicolon here.

Thanks,
Namhyung


* Re: [RFC PATCH 0/2] sched: simplify the select_task_rq_fair()
  2013-01-11 10:13 ` [RFC PATCH 0/2] sched: simplify the select_task_rq_fair() Nikunj A Dadhania
@ 2013-01-15  2:20   ` Michael Wang
  0 siblings, 0 replies; 9+ messages in thread
From: Michael Wang @ 2013-01-15  2:20 UTC (permalink / raw)
  To: Nikunj A Dadhania
  Cc: LKML, Ingo Molnar, Peter Zijlstra, Paul Turner, Tejun Heo,
	Mike Galbraith, Andrew Morton

On 01/11/2013 06:13 PM, Nikunj A Dadhania wrote:
> Hi Michael,
> 
> Michael Wang <wangyun@linux.vnet.ibm.com> writes:
>> 	Prev:
>> 		+---------+---------+-------+
>> 		| 7484 MB |      32 | 42463 |
>> 	Post:
>> 		| 7483 MB |      32 | 44185 |		+0.18%
> That should be +4.05%

Hi, Nikunj

Thanks for pointing that out; it's my mistake in the calculation, I will
correct it.

Regards,
Michael Wang

> 
> Regards
> Nikunj
> 



* Re: [RFC PATCH 1/2] sched: schedule balance map foundation
  2013-01-14  8:26   ` Namhyung Kim
@ 2013-01-15  2:33     ` Michael Wang
  0 siblings, 0 replies; 9+ messages in thread
From: Michael Wang @ 2013-01-15  2:33 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: LKML, Ingo Molnar, Peter Zijlstra, Paul Turner, Tejun Heo,
	Mike Galbraith, Andrew Morton

Hi, Namhyung

Thanks for your reply.

On 01/14/2013 04:26 PM, Namhyung Kim wrote:
> Hi Michael,
> 

[snip]

>> +	while (sd) {
>> +		if (sd->flags & SD_LOAD_BALANCE) {
>> +			if (sd->flags & SD_BALANCE_EXEC) {
>> +				sbm->top_level[SBM_EXEC_TYPE] = sd->level;
>> +				sbm->sd[SBM_EXEC_TYPE][sd->level] = sd;
>> +			}
>> +
>> +			if (sd->flags & SD_BALANCE_FORK) {
>> +				sbm->top_level[SBM_FORK_TYPE] = sd->level;
>> +				sbm->sd[SBM_FORK_TYPE][sd->level] = sd;
>> +			}
>> +
>> +			if (sd->flags & SD_BALANCE_WAKE) {
>> +				sbm->top_level[SBM_WAKE_TYPE] = sd->level;
>> +				sbm->sd[SBM_WAKE_TYPE][sd->level] = sd;
>> +			}
>> +
>> +			if (sd->flags & SD_WAKE_AFFINE) {
>> +				for_each_cpu(i, sched_domain_span(sd)) {
>> +					if (!sbm->affine_map[i])
>> +						sbm->affine_map[i] = sd;
>> +				}
>> +			}
>> +		}
>> +		sd = sd->parent;
>> +	}
> 
> It seems that it can be done like:
> 
> 	for_each_domain(cpu, sd) {
> 		if (!(sd->flags & SD_LOAD_BALANCE))
>                 	continue;
> 
> 		if (sd->flags & SD_BALANCE_EXEC)
> 		...
> 	}
> 
> 

That's right, will correct it.

>> +
>> +	/*
>> +	 * fill the hole to get lower level sd easily.
>> +	 */
>> +	for (type = 0; type < SBM_MAX_TYPE; type++) {
>> +		level = sbm->top_level[type];
>> +		top_sd = sbm->sd[type][level];
>> +		if ((++level != SBM_MAX_LEVEL) && top_sd) {
>> +			for (; level < SBM_MAX_LEVEL; level++)
>> +				sbm->sd[type][level] = top_sd;
>> +		}
>> +	}
>> +}
> [snip]
>> +#ifdef CONFIG_SCHED_SMT
>> +#define SBM_MAX_LEVEL	4
>> +#else
>> +#ifdef CONFIG_SCHED_MC
>> +#define SBM_MAX_LEVEL	3
>> +#else
>> +#ifdef CONFIG_SCHED_BOOK
>> +#define SBM_MAX_LEVEL	2
>> +#else
>> +#define SBM_MAX_LEVEL	1
>> +#endif
>> +#endif
>> +#endif
> 
> Looks like this fixed level constants does not consider NUMA domains.
> Doesn't accessing sbm->sd[type][level] in the above while loop cause a
> problem on big NUMA machines?

Yes, that's true. This patch is based on 3.7.0-rc6 without the NUMA
changes merged, in order to make the topic a little easier to start
with. I will take NUMA into account in the next version; please let me
know if you have any suggestions.

Regards,
Michael Wang

> 
> Thanks,
> Namhyung
> 
>> +
>> +enum {
>> +	SBM_EXEC_TYPE,
>> +	SBM_FORK_TYPE,
>> +	SBM_WAKE_TYPE,
>> +	SBM_MAX_TYPE
>> +};
>> +
>> +struct sched_balance_map {
>> +	struct sched_domain *sd[SBM_MAX_TYPE][SBM_MAX_LEVEL];
>> +	int top_level[SBM_MAX_TYPE];
>> +	struct sched_domain *affine_map[NR_CPUS];
>> +};
>> +
>>  #endif /* CONFIG_SMP */
>>  
>>  /*
>> @@ -403,6 +430,7 @@ struct rq {
>>  #ifdef CONFIG_SMP
>>  	struct root_domain *rd;
>>  	struct sched_domain *sd;
>> +	struct sched_balance_map *sbm;
>>  
>>  	unsigned long cpu_power;



* Re: [RFC PATCH 2/2] sched: simplify select_task_rq_fair() with schedule balance map
  2013-01-14  8:27   ` Namhyung Kim
@ 2013-01-15  2:33     ` Michael Wang
  0 siblings, 0 replies; 9+ messages in thread
From: Michael Wang @ 2013-01-15  2:33 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: LKML, Ingo Molnar, Peter Zijlstra, Paul Turner, Tejun Heo,
	Mike Galbraith, Andrew Morton

On 01/14/2013 04:27 PM, Namhyung Kim wrote:
> On Fri, 11 Jan 2013 16:18:33 +0800, Michael Wang wrote:
>> +next_sd:
>> +		if (!sd->level)
>> +			break;
>> +
>> +		sbm = cpu_rq(cpu)->sbm;
>> +		if (!sbm)
>> +			break;
>> +
>> +		sd = sbm->sd[type][sd->level - 1];
>> +	};
> 
> An unnessary semicolone here.

Thanks for pointing it out, I will correct it.

Regards,
Michael Wang

> 
> Thanks,
> Namhyung
> 



end of thread

Thread overview: 9+ messages
2013-01-11  8:15 [RFC PATCH 0/2] sched: simplify the select_task_rq_fair() Michael Wang
2013-01-11  8:17 ` [RFC PATCH 1/2] sched: schedule balance map foundation Michael Wang
2013-01-14  8:26   ` Namhyung Kim
2013-01-15  2:33     ` Michael Wang
2013-01-11  8:18 ` [RFC PATCH 2/2] sched: simplify select_task_rq_fair() with schedule balance map Michael Wang
2013-01-14  8:27   ` Namhyung Kim
2013-01-15  2:33     ` Michael Wang
2013-01-11 10:13 ` [RFC PATCH 0/2] sched: simplify the select_task_rq_fair() Nikunj A Dadhania
2013-01-15  2:20   ` Michael Wang
