* [patch] sched: minimalist select_idle_sibling() bouncing cow syndrome fix
@ 2013-01-27 7:50 Mike Galbraith
2013-01-28 10:53 ` Ingo Molnar
0 siblings, 1 reply; 6+ messages in thread
From: Mike Galbraith @ 2013-01-27 7:50 UTC (permalink / raw)
To: LKML; +Cc: Ingo Molnar, Peter Zijlstra
If the previous CPU is cache affine and idle, select it.
Signed-off-by: Mike Galbraith <bitbucket@online.de>
---
kernel/sched/fair.c | 21 +++++++--------------
1 file changed, 7 insertions(+), 14 deletions(-)
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3270,25 +3270,18 @@ find_idlest_cpu(struct sched_group *grou
*/
static int select_idle_sibling(struct task_struct *p, int target)
{
- int cpu = smp_processor_id();
- int prev_cpu = task_cpu(p);
struct sched_domain *sd;
struct sched_group *sg;
- int i;
+ int i = task_cpu(p);
- /*
- * If the task is going to be woken-up on this cpu and if it is
- * already idle, then it is the right target.
- */
- if (target == cpu && idle_cpu(cpu))
- return cpu;
+ if (idle_cpu(target))
+ return target;
/*
- * If the task is going to be woken-up on the cpu where it previously
- * ran and if it is currently idle, then it the right target.
+ * If the previous cpu is cache affine and idle, don't be stupid.
*/
- if (target == prev_cpu && idle_cpu(prev_cpu))
- return prev_cpu;
+ if (i != target && cpus_share_cache(i, target) && idle_cpu(i))
+ return i;
/*
 * Otherwise, iterate the domains and find an eligible idle cpu.
@@ -3302,7 +3295,7 @@ static int select_idle_sibling(struct ta
goto next;
for_each_cpu(i, sched_group_cpus(sg)) {
- if (!idle_cpu(i))
+ if (i == target || !idle_cpu(i))
goto next;
}
* Re: [patch] sched: minimalist select_idle_sibling() bouncing cow syndrome fix
2013-01-27 7:50 [patch] sched: minimalist select_idle_sibling() bouncing cow syndrome fix Mike Galbraith
@ 2013-01-28 10:53 ` Ingo Molnar
2013-01-28 11:19 ` Mike Galbraith
0 siblings, 1 reply; 6+ messages in thread
From: Ingo Molnar @ 2013-01-28 10:53 UTC (permalink / raw)
To: Mike Galbraith; +Cc: LKML, Ingo Molnar, Peter Zijlstra
* Mike Galbraith <bitbucket@online.de> wrote:
> If the previous CPU is cache affine and idle, select it.
No objections in principle - but would be nice to have a
changelog with numbers, % of improvement included and so?
Thanks,
Ingo
* Re: [patch] sched: minimalist select_idle_sibling() bouncing cow syndrome fix
2013-01-28 10:53 ` Ingo Molnar
@ 2013-01-28 11:19 ` Mike Galbraith
2013-01-28 11:21 ` Ingo Molnar
2013-02-04 19:25 ` [tip:sched/core] sched: Fix select_idle_sibling() bouncing cow syndrome tip-bot for Mike Galbraith
0 siblings, 2 replies; 6+ messages in thread
From: Mike Galbraith @ 2013-01-28 11:19 UTC (permalink / raw)
To: Ingo Molnar; +Cc: LKML, Ingo Molnar, Peter Zijlstra
On Mon, 2013-01-28 at 11:53 +0100, Ingo Molnar wrote:
> * Mike Galbraith <bitbucket@online.de> wrote:
>
> > If the previous CPU is cache affine and idle, select it.
>
> No objections in principle - but would be nice to have a
> changelog with numbers, % of improvement included and so?
Well, that line was my changelog; guess it needs improvement.
Take 2.
sched: minimalist select_idle_sibling() bouncing cow syndrome fix
If the previous CPU is cache affine and idle, select it.
The current implementation simply traverses the sd_llc domain,
taking the first idle CPU encountered, which walks buddy pairs
hand in hand over the package, inflicting excruciating pain.
1 tbench pair (worst case) in a 10 core + SMT package:
pre 15.22 MB/sec 1 procs
post 252.01 MB/sec 1 procs
Signed-off-by: Mike Galbraith <bitbucket@online.de>
---
kernel/sched/fair.c | 21 +++++++--------------
1 file changed, 7 insertions(+), 14 deletions(-)
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3270,25 +3270,18 @@ find_idlest_cpu(struct sched_group *grou
*/
static int select_idle_sibling(struct task_struct *p, int target)
{
- int cpu = smp_processor_id();
- int prev_cpu = task_cpu(p);
struct sched_domain *sd;
struct sched_group *sg;
- int i;
+ int i = task_cpu(p);
- /*
- * If the task is going to be woken-up on this cpu and if it is
- * already idle, then it is the right target.
- */
- if (target == cpu && idle_cpu(cpu))
- return cpu;
+ if (idle_cpu(target))
+ return target;
/*
- * If the task is going to be woken-up on the cpu where it previously
- * ran and if it is currently idle, then it the right target.
+ * If the previous cpu is cache affine and idle, don't be stupid.
*/
- if (target == prev_cpu && idle_cpu(prev_cpu))
- return prev_cpu;
+ if (i != target && cpus_share_cache(i, target) && idle_cpu(i))
+ return i;
/*
 * Otherwise, iterate the domains and find an eligible idle cpu.
@@ -3302,7 +3295,7 @@ static int select_idle_sibling(struct ta
goto next;
for_each_cpu(i, sched_group_cpus(sg)) {
- if (!idle_cpu(i))
+ if (i == target || !idle_cpu(i))
goto next;
}
* Re: [patch] sched: minimalist select_idle_sibling() bouncing cow syndrome fix
2013-01-28 11:19 ` Mike Galbraith
@ 2013-01-28 11:21 ` Ingo Molnar
2013-01-28 11:28 ` Mike Galbraith
2013-02-04 19:25 ` [tip:sched/core] sched: Fix select_idle_sibling() bouncing cow syndrome tip-bot for Mike Galbraith
1 sibling, 1 reply; 6+ messages in thread
From: Ingo Molnar @ 2013-01-28 11:21 UTC (permalink / raw)
To: Mike Galbraith; +Cc: LKML, Ingo Molnar, Peter Zijlstra
* Mike Galbraith <bitbucket@online.de> wrote:
> On Mon, 2013-01-28 at 11:53 +0100, Ingo Molnar wrote:
> > * Mike Galbraith <bitbucket@online.de> wrote:
> >
> > > If the previous CPU is cache affine and idle, select it.
> >
> > No objections in principle - but would be nice to have a
> > changelog with numbers, % of improvement included and so?
>
> Well, that line was my changelog; guess it needs improvement.
>
> Take 2.
>
> sched: minimalist select_idle_sibling() bouncing cow syndrome fix
>
> If the previous CPU is cache affine and idle, select it.
>
> The current implementation simply traverses the sd_llc domain,
> taking the first idle CPU encountered, which walks buddy pairs
> hand in hand over the package, inflicting excruciating pain.
>
> 1 tbench pair (worst case) in a 10 core + SMT package:
>
> pre 15.22 MB/sec 1 procs
> post 252.01 MB/sec 1 procs
Drool ... :-)
What would be a 'contrarian' test - i.e. a test where this could
hurt most?
Thanks,
Ingo
* Re: [patch] sched: minimalist select_idle_sibling() bouncing cow syndrome fix
2013-01-28 11:21 ` Ingo Molnar
@ 2013-01-28 11:28 ` Mike Galbraith
0 siblings, 0 replies; 6+ messages in thread
From: Mike Galbraith @ 2013-01-28 11:28 UTC (permalink / raw)
To: Ingo Molnar; +Cc: LKML, Ingo Molnar, Peter Zijlstra
On Mon, 2013-01-28 at 12:21 +0100, Ingo Molnar wrote:
> * Mike Galbraith <bitbucket@online.de> wrote:
>
> > On Mon, 2013-01-28 at 11:53 +0100, Ingo Molnar wrote:
> > > * Mike Galbraith <bitbucket@online.de> wrote:
> > >
> > > > If the previous CPU is cache affine and idle, select it.
> > >
> > > No objections in principle - but would be nice to have a
> > > changelog with numbers, % of improvement included and so?
> >
> > Well, that line was my changelog; guess it needs improvement.
> >
> > Take 2.
> >
> > sched: minimalist select_idle_sibling() bouncing cow syndrome fix
> >
> > If the previous CPU is cache affine and idle, select it.
> >
> > The current implementation simply traverses the sd_llc domain,
> > taking the first idle CPU encountered, which walks buddy pairs
> > hand in hand over the package, inflicting excruciating pain.
> >
> > 1 tbench pair (worst case) in a 10 core + SMT package:
> >
> > pre 15.22 MB/sec 1 procs
> > post 252.01 MB/sec 1 procs
>
> Drool ... :-)
Yeah, it really is _that_ fugly as is.
> What would be a 'contrarian' test - i.e. a test where this could
> hurt most?
There is none. It cuts off no preemption escape routes for pgbench,
still has both good and evil faces for all to see, and some to turn to
stone ;-) It only does one thing: don't do the unspeakable.
-Mike
* [tip:sched/core] sched: Fix select_idle_sibling() bouncing cow syndrome
2013-01-28 11:19 ` Mike Galbraith
2013-01-28 11:21 ` Ingo Molnar
@ 2013-02-04 19:25 ` tip-bot for Mike Galbraith
1 sibling, 0 replies; 6+ messages in thread
From: tip-bot for Mike Galbraith @ 2013-02-04 19:25 UTC (permalink / raw)
To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, a.p.zijlstra, bitbucket, tglx
Commit-ID: e0a79f529d5ba2507486d498b25da40911d95cf6
Gitweb: http://git.kernel.org/tip/e0a79f529d5ba2507486d498b25da40911d95cf6
Author: Mike Galbraith <bitbucket@online.de>
AuthorDate: Mon, 28 Jan 2013 12:19:25 +0100
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 4 Feb 2013 20:07:24 +0100
sched: Fix select_idle_sibling() bouncing cow syndrome
If the previous CPU is cache affine and idle, select it.
The current implementation simply traverses the sd_llc domain,
taking the first idle CPU encountered, which walks buddy pairs
hand in hand over the package, inflicting excruciating pain.
1 tbench pair (worst case) in a 10 core + SMT package:
pre 15.22 MB/sec 1 procs
post 252.01 MB/sec 1 procs
Signed-off-by: Mike Galbraith <bitbucket@online.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1359371965.5783.127.camel@marge.simpson.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
kernel/sched/fair.c | 21 +++++++--------------
1 file changed, 7 insertions(+), 14 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8dbee9f..ed18c74 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3252,25 +3252,18 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
*/
static int select_idle_sibling(struct task_struct *p, int target)
{
- int cpu = smp_processor_id();
- int prev_cpu = task_cpu(p);
struct sched_domain *sd;
struct sched_group *sg;
- int i;
+ int i = task_cpu(p);
- /*
- * If the task is going to be woken-up on this cpu and if it is
- * already idle, then it is the right target.
- */
- if (target == cpu && idle_cpu(cpu))
- return cpu;
+ if (idle_cpu(target))
+ return target;
/*
- * If the task is going to be woken-up on the cpu where it previously
- * ran and if it is currently idle, then it the right target.
+ * If the previous cpu is cache affine and idle, don't be stupid.
*/
- if (target == prev_cpu && idle_cpu(prev_cpu))
- return prev_cpu;
+ if (i != target && cpus_share_cache(i, target) && idle_cpu(i))
+ return i;
/*
 * Otherwise, iterate the domains and find an eligible idle cpu.
@@ -3284,7 +3277,7 @@ static int select_idle_sibling(struct task_struct *p, int target)
goto next;
for_each_cpu(i, sched_group_cpus(sg)) {
- if (!idle_cpu(i))
+ if (i == target || !idle_cpu(i))
goto next;
}
end of thread, other threads:[~2013-02-04 19:26 UTC | newest]
Thread overview: 6+ messages
2013-01-27 7:50 [patch] sched: minimalist select_idle_sibling() bouncing cow syndrome fix Mike Galbraith
2013-01-28 10:53 ` Ingo Molnar
2013-01-28 11:19 ` Mike Galbraith
2013-01-28 11:21 ` Ingo Molnar
2013-01-28 11:28 ` Mike Galbraith
2013-02-04 19:25 ` [tip:sched/core] sched: Fix select_idle_sibling() bouncing cow syndrome tip-bot for Mike Galbraith