sched: beef up wake_wide()

Message ID 1436336026.3767.53.camel@gmail.com
State New, archived

Commit Message

Mike Galbraith July 8, 2015, 6:13 a.m. UTC
Josef Bacik reported that Facebook sees better performance with their
1:N load (1 dispatch/node, N workers/node) when carrying an old patch
to try very hard to wake to an idle CPU.  While looking at wake_wide(),
I noticed that it doesn't pay attention to the wakeup of the 1:N waker,
returning 1 only when the 1:N waker is waking one of its minions.

Correct that, and don't bother doing domain traversal when we know
that all we need to do is check for an idle cpu.

While at it, adjust task_struct bits, we don't need a 64bit counter.

Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Tested-by: Josef Bacik <jbacik@fb.com>
---
 include/linux/sched.h |    4 +--
 kernel/sched/fair.c   |   56 ++++++++++++++++++++++++--------------------------
 2 files changed, 29 insertions(+), 31 deletions(-)
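
(For illustration of the "adjust task_struct bits" note above: shrinking wakee_flips
to unsigned int and moving last_wakee below wakee_flip_decay_ts presumably lets the
counter pair up with on_cpu instead of leaving a padding hole.  That is an assumption
rather than something the changelog states; the user-space sketch below copies only
the field order from the diff, everything else is hypothetical.)

/* Illustration only, not kernel code; sizes assume a typical LP64 target. */
#include <stdio.h>

struct smp_bits_before {
	int on_cpu;
	struct smp_bits_before *last_wakee;	/* 4 bytes of padding precede this */
	unsigned long wakee_flips;
	unsigned long wakee_flip_decay_ts;
};

struct smp_bits_after {
	int on_cpu;
	unsigned int wakee_flips;		/* pairs with on_cpu, no hole */
	unsigned long wakee_flip_decay_ts;
	struct smp_bits_after *last_wakee;
};

int main(void)
{
	printf("before: %zu bytes, after: %zu bytes\n",
	       sizeof(struct smp_bits_before), sizeof(struct smp_bits_after));
	return 0;
}

(On LP64 that should print 32 vs 24 bytes: same information, one fewer word per task.)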




Comments

Peter Zijlstra July 9, 2015, 1:26 p.m. UTC | #1
On Wed, Jul 08, 2015 at 08:13:46AM +0200, Mike Galbraith wrote:
> 
> +/*
> + * Detect 1:N waker/wakee relationship via a switching-frequency heuristic.
> + * A waker of many should wake a different task than the one last awakened
> + * at a frequency roughly N times higher than one of its wakees.  In order
> + * to determine whether we should let the load spread vs consolidating to
> + * shared cache, we look for a minimum 'flip' frequency of llc_size in one
> + * partner, and a factor of llc_size higher frequency in the other.  With
> + * both conditions met, we can be relatively sure that we are seeing a 1:N
> + * relationship, and that load size exceeds socket size.
> + */
>  static int wake_wide(struct task_struct *p)
>  {
> +	unsigned int waker_flips = current->wakee_flips;
> +	unsigned int wakee_flips = p->wakee_flips;
>  	int factor = this_cpu_read(sd_llc_size);
>  
> +	if (waker_flips < wakee_flips)
> +		swap(waker_flips, wakee_flips);

This makes the wakee/waker names useless, the end result is more like
wakee_flips := client_flips, waker_flips := server_flips.

> +	if (wakee_flips < factor || waker_flips < wakee_flips * factor)
> +		return 0;

I don't get the first condition... why would the client ever flip? It
only talks to that one server.

> +	return 1;
>  }


> @@ -5021,14 +5015,17 @@ select_task_rq_fair(struct task_struct *
>  {
>  	struct sched_domain *tmp, *affine_sd = NULL, *sd = NULL;
>  	int cpu = smp_processor_id();
> +	int new_cpu = prev_cpu;
>  	int want_affine = 0;
>  	int sync = wake_flags & WF_SYNC;
>  
>  	rcu_read_lock();
> +	if (sd_flag & SD_BALANCE_WAKE) {
> +		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
> +		if (!want_affine)
> +			goto select_idle;
> +	}

So this preserves/makes worse the bug Morten spotted, even without
want_affine we should still attempt SD_BALANCE_WAKE if set.

> +
>  	for_each_domain(cpu, tmp) {
>  		if (!(tmp->flags & SD_LOAD_BALANCE))
>  			continue;
Mike Galbraith July 9, 2015, 2:07 p.m. UTC | #2
On Thu, 2015-07-09 at 15:26 +0200, Peter Zijlstra wrote:
> On Wed, Jul 08, 2015 at 08:13:46AM +0200, Mike Galbraith wrote:
> > 
> > +/*
> > + * Detect 1:N waker/wakee relationship via a switching-frequency heuristic.
> > + * A waker of many should wake a different task than the one last awakened
> > + * at a frequency roughly N times higher than one of its wakees.  In order
> > + * to determine whether we should let the load spread vs consolidating to
> > + * shared cache, we look for a minimum 'flip' frequency of llc_size in one
> > + * partner, and a factor of llc_size higher frequency in the other.  With
> > + * both conditions met, we can be relatively sure that we are seeing a 1:N
> > + * relationship, and that load size exceeds socket size.
> > + */
> >  static int wake_wide(struct task_struct *p)
> >  {
> > +	unsigned int waker_flips = current->wakee_flips;
> > +	unsigned int wakee_flips = p->wakee_flips;
> >  	int factor = this_cpu_read(sd_llc_size);
> >  
> > +	if (waker_flips < wakee_flips)
> > +		swap(waker_flips, wakee_flips);
> 
> This makes the wakee/waker names useless, the end result is more like
> wakee_flips := client_flips, waker_flips := server_flips.

True, perhaps a rename is in order.

> > +	if (wakee_flips < factor || waker_flips < wakee_flips * factor)
> > +		return 0;
> 
> I don't get the first condition... why would the client ever flip? It
> only talks to that one server.

So I was thinking too, and I initially cemented the relationship by
flipping both.  However, the thing works in virgin source, ie clients do
in fact flip, so I removed that cementing based on the hard evidence.

> > @@ -5021,14 +5015,17 @@ select_task_rq_fair(struct task_struct *
> >  {
> >  	struct sched_domain *tmp, *affine_sd = NULL, *sd = NULL;
> >  	int cpu = smp_processor_id();
> > +	int new_cpu = prev_cpu;
> >  	int want_affine = 0;
> >  	int sync = wake_flags & WF_SYNC;
> >  
> >  	rcu_read_lock();
> > +	if (sd_flag & SD_BALANCE_WAKE) {
> > +		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
> > +		if (!want_affine)
> > +			goto select_idle;
> > +	}
> 
> So this preserves/makes worse the bug Morten spotted, even without
> want_affine we should still attempt SD_BALANCE_WAKE if set.

Yeah.  I can redo it if you want, but it seems a shame to traverse for
nothing given we know SD_BALANCE_WAKE is so painful that nobody really
really wants to do that.  One has to override the other in any case, no?

	-Mike

Mike Galbraith July 9, 2015, 2:46 p.m. UTC | #3
On Thu, 2015-07-09 at 16:07 +0200, Mike Galbraith wrote:
> On Thu, 2015-07-09 at 15:26 +0200, Peter Zijlstra wrote:
> > On Wed, Jul 08, 2015 at 08:13:46AM +0200, Mike Galbraith wrote:
> > > 
> > > +/*
> > > + * Detect 1:N waker/wakee relationship via a switching-frequency heuristic.
> > > + * A waker of many should wake a different task than the one last awakened
> > > + * at a frequency roughly N times higher than one of its wakees.  In order
> > > + * to determine whether we should let the load spread vs consolidating to
> > > + * shared cache, we look for a minimum 'flip' frequency of llc_size in one
> > > + * partner, and a factor of llc_size higher frequency in the other.  With
> > > + * both conditions met, we can be relatively sure that we are seeing a 1:N
> > > + * relationship, and that load size exceeds socket size.
> > > + */
> > >  static int wake_wide(struct task_struct *p)
> > >  {
> > > +	unsigned int waker_flips = current->wakee_flips;
> > > +	unsigned int wakee_flips = p->wakee_flips;
> > >  	int factor = this_cpu_read(sd_llc_size);
> > >  
> > > +	if (waker_flips < wakee_flips)
> > > +		swap(waker_flips, wakee_flips);
> > 
> > This makes the wakee/waker names useless, the end result is more like
> > wakee_flips := client_flips, waker_flips := server_flips.
> 
> True, perhaps a rename is in order.

I should perhaps add that waker/wakee_flips does make sense to me in the
sense that the task whose flips end up in the waker_flips bucket is our
waker of many vs being one of its many wakees.

	-Mike
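
(For a concrete feel for the thresholds being discussed, a minimal user-space sketch
mirroring the proposed wake_wide() test follows; the flip counts and llc_size are
made-up numbers for illustration, not measurements.)

#include <stdio.h>

/*
 * Mirrors the patch's heuristic: treat the higher-flipping task as the
 * waker of many, the other as one of its partners, regardless of which
 * of the two happens to be current at wakeup time.
 */
static int wake_wide_sketch(unsigned int a_flips, unsigned int b_flips,
			    unsigned int llc_size)
{
	unsigned int master = a_flips, slave = b_flips, tmp;

	if (master < slave) {
		tmp = master;
		master = slave;
		slave = tmp;
	}
	/* need llc_size flips in the slave, and llc_size times that in the master */
	if (slave < llc_size || master < slave * llc_size)
		return 0;
	return 1;
}

int main(void)
{
	/* hypothetical 8-CPU LLC: dispatcher at 200 flips, worker at 10 */
	printf("%d\n", wake_wide_sketch(200, 10, 8));	/* 1: spread the load      */
	printf("%d\n", wake_wide_sketch(40, 10, 8));	/* 0: ratio below 8x       */
	printf("%d\n", wake_wide_sketch(200, 4, 8));	/* 0: slave below llc_size */
	return 0;
}

(Either argument order gives the same answer, which is what the swap() buys and why
the waker/wakee names read oddly after it.)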

Mike Galbraith July 10, 2015, 5:19 a.m. UTC | #4
On Thu, 2015-07-09 at 15:26 +0200, Peter Zijlstra wrote:
> On Wed, Jul 08, 2015 at 08:13:46AM +0200, Mike Galbraith wrote:
> >  static int wake_wide(struct task_struct *p)
> >  {
> > +	unsigned int waker_flips = current->wakee_flips;
> > +	unsigned int wakee_flips = p->wakee_flips;
> >  	int factor = this_cpu_read(sd_llc_size);
> >  
> > +	if (waker_flips < wakee_flips)
> > +		swap(waker_flips, wakee_flips);
> 
> This makes the wakee/waker names useless, the end result is more like
> wakee_flips := client_flips, waker_flips := server_flips.

I settled on master/slave plus hopefully improved comment block.

> > +	if (wakee_flips < factor || waker_flips < wakee_flips * factor)
> > +		return 0;
> 
> I don't get the first condition... why would the client ever flip? It
> only talks to that one server.

(tightening the heuristic up a bit by one means or another would be good,
but "if it ain't broke, don't fix it" applies for this patchlet)

> > @@ -5021,14 +5015,17 @@ select_task_rq_fair(struct task_struct *
> >  {
> >  	struct sched_domain *tmp, *affine_sd = NULL, *sd = NULL;
> >  	int cpu = smp_processor_id();
> > +	int new_cpu = prev_cpu;
> >  	int want_affine = 0;
> >  	int sync = wake_flags & WF_SYNC;
> >  
> >  	rcu_read_lock();
> > +	if (sd_flag & SD_BALANCE_WAKE) {
> > +		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
> > +		if (!want_affine)
> > +			goto select_idle;
> > +	}
> 
> So this preserves/makes worse the bug Morten spotted, even without
> want_affine we should still attempt SD_BALANCE_WAKE if set.

Fixed.  wake_wide() may override want_affine as before, want_affine may
override other ->flags as before, but a surviving domain selection now
results in a full balance instead of a select_idle_sibling() call.

sched: beef up wake_wide()

Josef Bacik reported that Facebook sees better performance with their
1:N load (1 dispatch/node, N workers/node) when carrying an old patch
to try very hard to wake to an idle CPU.  While looking at wake_wide(),
I noticed that it doesn't pay attention to the wakeup of a many partner
waker, returning 1 only when waking one of its many partners.

Correct that, letting explicit domain flags override the heuristic.

While at it, adjust task_struct bits, we don't need a 64bit counter.

Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Tested-by: Josef Bacik <jbacik@fb.com>
---
 include/linux/sched.h |    4 +--
 kernel/sched/fair.c   |   57 ++++++++++++++++++++++----------------------------
 2 files changed, 28 insertions(+), 33 deletions(-)

--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1351,9 +1351,9 @@ struct task_struct {
 #ifdef CONFIG_SMP
 	struct llist_node wake_entry;
 	int on_cpu;
-	struct task_struct *last_wakee;
-	unsigned long wakee_flips;
+	unsigned int wakee_flips;
 	unsigned long wakee_flip_decay_ts;
+	struct task_struct *last_wakee;
 
 	int wake_cpu;
 #endif
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4730,26 +4730,29 @@ static long effective_load(struct task_g
 
 #endif
 
+/*
+ * Detect M:N waker/wakee relationships via a switching-frequency heuristic.
+ * A waker of many should wake a different task than the one last awakened
+ * at a frequency roughly N times higher than one of its wakees.  In order
+ * to determine whether we should let the load spread vs consolidating to
+ * shared cache, we look for a minimum 'flip' frequency of llc_size in one
+ * partner, and a factor of llc_size higher frequency in the other.  With
+ * both conditions met, we can be relatively sure that the relationship is
+ * non-monogamous, with partner count exceeding socket size.  Waker/wakee
+ * being client/server, worker/dispatcher, interrupt source or whatever is
+ * irrelevant, spread criteria is apparent partner count exceeds socket size.
+ */
 static int wake_wide(struct task_struct *p)
 {
+	unsigned int master = current->wakee_flips;
+	unsigned int slave = p->wakee_flips;
 	int factor = this_cpu_read(sd_llc_size);
 
-	/*
-	 * Yeah, it's the switching-frequency, could means many wakee or
-	 * rapidly switch, use factor here will just help to automatically
-	 * adjust the loose-degree, so bigger node will lead to more pull.
-	 */
-	if (p->wakee_flips > factor) {
-		/*
-		 * wakee is somewhat hot, it needs certain amount of cpu
-		 * resource, so if waker is far more hot, prefer to leave
-		 * it alone.
-		 */
-		if (current->wakee_flips > (factor * p->wakee_flips))
-			return 1;
-	}
-
-	return 0;
+	if (master < slave)
+		swap(master, slave);
+	if (slave < factor || master < slave * factor)
+		return 0;
+	return 1;
 }
 
 static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
@@ -4761,13 +4764,6 @@ static int wake_affine(struct sched_doma
 	unsigned long weight;
 	int balanced;
 
-	/*
-	 * If we wake multiple tasks be careful to not bounce
-	 * ourselves around too much.
-	 */
-	if (wake_wide(p))
-		return 0;
-
 	idx	  = sd->wake_idx;
 	this_cpu  = smp_processor_id();
 	prev_cpu  = task_cpu(p);
@@ -5021,12 +5017,12 @@ select_task_rq_fair(struct task_struct *
 {
 	struct sched_domain *tmp, *affine_sd = NULL, *sd = NULL;
 	int cpu = smp_processor_id();
-	int new_cpu = cpu;
+	int new_cpu = prev_cpu;
 	int want_affine = 0;
 	int sync = wake_flags & WF_SYNC;
 
 	if (sd_flag & SD_BALANCE_WAKE)
-		want_affine = cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
+		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
 
 	rcu_read_lock();
 	for_each_domain(cpu, tmp) {
@@ -5040,6 +5036,8 @@ select_task_rq_fair(struct task_struct *
 		if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
 		    cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
 			affine_sd = tmp;
+			/* Prefer affinity over any other flags */
+			sd = NULL;
 			break;
 		}
 
@@ -5048,12 +5046,10 @@ select_task_rq_fair(struct task_struct *
 	}
 
 	if (affine_sd && cpu != prev_cpu && wake_affine(affine_sd, p, sync))
-		prev_cpu = cpu;
+		new_cpu = cpu;
 
-	if (sd_flag & SD_BALANCE_WAKE) {
-		new_cpu = select_idle_sibling(p, prev_cpu);
-		goto unlock;
-	}
+	if ((sd_flag & SD_BALANCE_WAKE) && (!sd || (!(sd->flags & SD_BALANCE_WAKE))))
+		new_cpu = select_idle_sibling(p, new_cpu);
 
 	while (sd) {
 		struct sched_group *group;
@@ -5089,7 +5085,6 @@ select_task_rq_fair(struct task_struct *
 		}
 		/* while loop will break here if sd == NULL */
 	}
-unlock:
 	rcu_read_unlock();
 
 	return new_cpu;


Josef Bacik July 10, 2015, 1:41 p.m. UTC | #5
On 07/10/2015 01:19 AM, Mike Galbraith wrote:
> On Thu, 2015-07-09 at 15:26 +0200, Peter Zijlstra wrote:
>> On Wed, Jul 08, 2015 at 08:13:46AM +0200, Mike Galbraith wrote:
>>>   static int wake_wide(struct task_struct *p)
>>>   {
>>> +	unsigned int waker_flips = current->wakee_flips;
>>> +	unsigned int wakee_flips = p->wakee_flips;
>>>   	int factor = this_cpu_read(sd_llc_size);
>>>
>>> +	if (waker_flips < wakee_flips)
>>> +		swap(waker_flips, wakee_flips);
>>
>> This makes the wakee/waker names useless, the end result is more like
>> wakee_flips := client_flips, waker_flips := server_flips.
>
> I settled on master/slave plus hopefully improved comment block.
>
>>> +	if (wakee_flips < factor || waker_flips < wakee_flips * factor)
>>> +		return 0;
>>
>> I don't get the first condition... why would the client ever flip? It
>> only talks to that one server.
>
> (tightening the heuristic up a bit by one means or another would be good,
> but "if it ain't broke, don't fix it" applies for this patchlet)
>
>>> @@ -5021,14 +5015,17 @@ select_task_rq_fair(struct task_struct *
>>>   {
>>>   	struct sched_domain *tmp, *affine_sd = NULL, *sd = NULL;
>>>   	int cpu = smp_processor_id();
>>> +	int new_cpu = prev_cpu;
>>>   	int want_affine = 0;
>>>   	int sync = wake_flags & WF_SYNC;
>>>
>>>   	rcu_read_lock();
>>> +	if (sd_flag & SD_BALANCE_WAKE) {
>>> +		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
>>> +		if (!want_affine)
>>> +			goto select_idle;
>>> +	}
>>
>> So this preserves/makes worse the bug Morten spotted, even without
>> want_affine we should still attempt SD_BALANCE_WAKE if set.
>
> Fixed.  wake_wide() may override want_affine as before, want_affine may
> override other ->flags as before, but a surviving domain selection now
> results in a full balance instead of a select_idle_sibling() call.
>
> sched: beef up wake_wide()
>
> Josef Bacik reported that Facebook sees better performance with their
> 1:N load (1 dispatch/node, N workers/node) when carrying an old patch
> to try very hard to wake to an idle CPU.  While looking at wake_wide(),
> I noticed that it doesn't pay attention to the wakeup of a many partner
> waker, returning 1 only when waking one of its many partners.
>
> Correct that, letting explicit domain flags override the heuristic.
>
> While at it, adjust task_struct bits, we don't need a 64bit counter.
>
> Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
> Tested-by: Josef Bacik <jbacik@fb.com>


I'll give this new one a whirl and let you know how it goes.  Thanks,

Josef

Josef Bacik July 10, 2015, 8:59 p.m. UTC | #6
On 07/10/2015 01:19 AM, Mike Galbraith wrote:
> On Thu, 2015-07-09 at 15:26 +0200, Peter Zijlstra wrote:
>> On Wed, Jul 08, 2015 at 08:13:46AM +0200, Mike Galbraith wrote:
>>>   static int wake_wide(struct task_struct *p)
>>>   {
>>> +	unsigned int waker_flips = current->wakee_flips;
>>> +	unsigned int wakee_flips = p->wakee_flips;
>>>   	int factor = this_cpu_read(sd_llc_size);
>>>
>>> +	if (waker_flips < wakee_flips)
>>> +		swap(waker_flips, wakee_flips);
>>
>> This makes the wakee/waker names useless, the end result is more like
>> wakee_flips := client_flips, waker_flips := server_flips.
>
> I settled on master/slave plus hopefully improved comment block.
>
>>> +	if (wakee_flips < factor || waker_flips < wakee_flips * factor)
>>> +		return 0;
>>
>> I don't get the first condition... why would the client ever flip? It
>> only talks to that one server.
>
> (tightening the heuristic up a bit by one means or another would be good,
> but "if it ain't broke, don't fix it" applies for this patchlet)
>
>>> @@ -5021,14 +5015,17 @@ select_task_rq_fair(struct task_struct *
>>>   {
>>>   	struct sched_domain *tmp, *affine_sd = NULL, *sd = NULL;
>>>   	int cpu = smp_processor_id();
>>> +	int new_cpu = prev_cpu;
>>>   	int want_affine = 0;
>>>   	int sync = wake_flags & WF_SYNC;
>>>
>>>   	rcu_read_lock();
>>> +	if (sd_flag & SD_BALANCE_WAKE) {
>>> +		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
>>> +		if (!want_affine)
>>> +			goto select_idle;
>>> +	}
>>
>> So this preserves/makes worse the bug Morten spotted, even without
>> want_affine we should still attempt SD_BALANCE_WAKE if set.
>
> Fixed.  wake_wide() may override want_affine as before, want_affine may
> override other ->flags as before, but a surviving domain selection now
> results in a full balance instead of a select_idle_sibling() call.
>
> sched: beef up wake_wide()
>
> Josef Bacik reported that Facebook sees better performance with their
> 1:N load (1 dispatch/node, N workers/node) when carrying an old patch
> to try very hard to wake to an idle CPU.  While looking at wake_wide(),
> I noticed that it doesn't pay attention to the wakeup of a many partner
> waker, returning 1 only when waking one of its many partners.
>
> Correct that, letting explicit domain flags override the heuristic.
>
> While at it, adjust task_struct bits, we don't need a 64bit counter.
>
> Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
> Tested-by: Josef Bacik <jbacik@fb.com>

Not quite as awesome but still better than the baseline so we're good. 
Thanks,

Josef

Mike Galbraith July 11, 2015, 3:11 a.m. UTC | #7
On Fri, 2015-07-10 at 16:59 -0400, Josef Bacik wrote:

> Not quite as awesome but still better than the baseline so we're good. 
> Thanks,

I personally like the other much better.  We're not doing the user any
favor by making the thing balance when SD_BALANCE_WAKE is set.  Until
such time as it ceases to be horribly painful, we're just paying a few
cycles to help a theoretically existing masochist torture his box.

	-Mike

Josef Bacik July 13, 2015, 1:53 p.m. UTC | #8
On 07/10/2015 11:11 PM, Mike Galbraith wrote:
> On Fri, 2015-07-10 at 16:59 -0400, Josef Bacik wrote:
>
>> Not quite as awesome but still better than the baseline so we're good.
>> Thanks,
>
> I personally like the other much better.  We're not doing the user any
> favor by making the thing balance when SD_BALANCE_WAKE is set.  Until
> such time as it ceases to be horribly painful, we're just paying a few
> cycles to help a theoretically existing masochist torture his box.
>

Agreed, I'm all for the faster version ;),

Josef

Peter Zijlstra July 14, 2015, 11:19 a.m. UTC | #9
On Sat, Jul 11, 2015 at 05:11:51AM +0200, Mike Galbraith wrote:
> On Fri, 2015-07-10 at 16:59 -0400, Josef Bacik wrote:
> 
> > Not quite as awesome but still better than the baseline so we're good. 
> > Thanks,
> 
> I personally like the other much better.  We're not doing the user any
> favor by making the thing balance when SD_BALANCE_WAKE is set.  Until
> such time as it ceases to be horribly painful, we're just paying a few
> cycles to help a theoretically existing masochist torture his box.

OK, how about something like the below; it tightens things up by
applying two rules:

 - We really should not continue looking for a balancing domain once
   SD_LOAD_BALANCE is not set.

 - SD (balance) flags should really be set in a single contiguous range,
   always starting at the bottom.

The latter means that if !want_affine and the (first) sd doesn't have
BALANCE_WAKE set, we're done. Getting rid of (most of) that iteration
junk you didn't like..

Hmm?
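
(As a toy model of what those two rules buy in the domain walk; the flag values and
topology below are invented for the example, not the kernel's real constants.)

#include <stdio.h>

#define TOY_SD_LOAD_BALANCE	0x1	/* toy values, not the kernel's */
#define TOY_SD_WAKE_AFFINE	0x2
#define TOY_SD_BALANCE_WAKE	0x4

int main(void)
{
	/* hypothetical topology, bottom up: balance flags only at the bottom */
	int flags[] = {
		TOY_SD_LOAD_BALANCE | TOY_SD_WAKE_AFFINE | TOY_SD_BALANCE_WAKE, /* SMT  */
		TOY_SD_LOAD_BALANCE | TOY_SD_WAKE_AFFINE,			 /* MC   */
		TOY_SD_LOAD_BALANCE,						 /* NUMA */
	};
	int want_affine = 0;	/* say wake_wide() told us to go wide */
	int sd_flag = TOY_SD_BALANCE_WAKE;
	int sd = -1, i;

	for (i = 0; i < 3; i++) {
		if (!(flags[i] & TOY_SD_LOAD_BALANCE))
			break;		/* rule 1: stop rather than skip */
		if (flags[i] & sd_flag)
			sd = i;
		else if (!want_affine)
			break;		/* rule 2: flags are contiguous from the
					 * bottom, nothing higher up will match */
	}
	printf("last matching level: %d\n", sd);	/* 0: only SMT has BALANCE_WAKE */
	return 0;
}

(With contiguous flags, the first level missing the wanted flag ends the walk, so the
!want_affine case costs at most one extra iteration.)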

---
Subject: sched: Beef up wake_wide()
From: Mike Galbraith <umgwanakikbuti@gmail.com>
Date: Fri, 10 Jul 2015 07:19:26 +0200

Josef Bacik reported that Facebook sees better performance with their
1:N load (1 dispatch/node, N workers/node) when carrying an old patch
to try very hard to wake to an idle CPU.  While looking at wake_wide(),
I noticed that it doesn't pay attention to the wakeup of a many partner
waker, returning 1 only when waking one of its many partners.

Correct that, letting explicit domain flags override the heuristic.

While at it, adjust task_struct bits, we don't need a 64bit counter.

Cc: morten.rasmussen@arm.com
Cc: riel@redhat.com
Cc: mingo@redhat.com
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Tested-by: Josef Bacik <jbacik@fb.com>
[peterz: frobbings]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1436505566.5715.50.camel@gmail.com
---
 include/linux/sched.h |    4 +-
 kernel/sched/fair.c   |   72 +++++++++++++++++++++++---------------------------
 2 files changed, 36 insertions(+), 40 deletions(-)

--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1351,9 +1351,9 @@ struct task_struct {
 #ifdef CONFIG_SMP
 	struct llist_node wake_entry;
 	int on_cpu;
-	struct task_struct *last_wakee;
-	unsigned long wakee_flips;
+	unsigned int wakee_flips;
 	unsigned long wakee_flip_decay_ts;
+	struct task_struct *last_wakee;
 
 	int wake_cpu;
 #endif
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4726,26 +4726,29 @@ static long effective_load(struct task_g
 
 #endif
 
+/*
+ * Detect M:N waker/wakee relationships via a switching-frequency heuristic.
+ * A waker of many should wake a different task than the one last awakened
+ * at a frequency roughly N times higher than one of its wakees.  In order
+ * to determine whether we should let the load spread vs consolidating to
+ * shared cache, we look for a minimum 'flip' frequency of llc_size in one
+ * partner, and a factor of llc_size higher frequency in the other.  With
+ * both conditions met, we can be relatively sure that the relationship is
+ * non-monogamous, with partner count exceeding socket size.  Waker/wakee
+ * being client/server, worker/dispatcher, interrupt source or whatever is
+ * irrelevant, spread criteria is apparent partner count exceeds socket size.
+ */
 static int wake_wide(struct task_struct *p)
 {
+	unsigned int master = current->wakee_flips;
+	unsigned int slave = p->wakee_flips;
 	int factor = this_cpu_read(sd_llc_size);
 
-	/*
-	 * Yeah, it's the switching-frequency, could means many wakee or
-	 * rapidly switch, use factor here will just help to automatically
-	 * adjust the loose-degree, so bigger node will lead to more pull.
-	 */
-	if (p->wakee_flips > factor) {
-		/*
-		 * wakee is somewhat hot, it needs certain amount of cpu
-		 * resource, so if waker is far more hot, prefer to leave
-		 * it alone.
-		 */
-		if (current->wakee_flips > (factor * p->wakee_flips))
-			return 1;
-	}
-
-	return 0;
+	if (master < slave)
+		swap(master, slave);
+	if (slave < factor || master < slave * factor)
+		return 0;
+	return 1;
 }
 
 static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
@@ -4757,13 +4760,6 @@ static int wake_affine(struct sched_doma
 	unsigned long weight;
 	int balanced;
 
-	/*
-	 * If we wake multiple tasks be careful to not bounce
-	 * ourselves around too much.
-	 */
-	if (wake_wide(p))
-		return 0;
-
 	idx	  = sd->wake_idx;
 	this_cpu  = smp_processor_id();
 	prev_cpu  = task_cpu(p);
@@ -5015,19 +5011,19 @@ static int get_cpu_usage(int cpu)
 static int
 select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_flags)
 {
-	struct sched_domain *tmp, *affine_sd = NULL, *sd = NULL;
+	struct sched_domain *tmp, *sd = NULL;
 	int cpu = smp_processor_id();
-	int new_cpu = cpu;
+	int new_cpu = prev_cpu;
 	int want_affine = 0;
 	int sync = wake_flags & WF_SYNC;
 
 	if (sd_flag & SD_BALANCE_WAKE)
-		want_affine = cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
+		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
 
 	rcu_read_lock();
 	for_each_domain(cpu, tmp) {
 		if (!(tmp->flags & SD_LOAD_BALANCE))
-			continue;
+			break;
 
 		/*
 		 * If both cpu and prev_cpu are part of this domain,
@@ -5035,20 +5031,14 @@ select_task_rq_fair(struct task_struct *
 		 */
 		if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
 		    cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
-			affine_sd = tmp;
-			break;
+			/* Prefer affinity over any other flags */
+			goto do_affine;
 		}
 
 		if (tmp->flags & sd_flag)
 			sd = tmp;
-	}
-
-	if (affine_sd && cpu != prev_cpu && wake_affine(affine_sd, p, sync))
-		prev_cpu = cpu;
-
-	if (sd_flag & SD_BALANCE_WAKE) {
-		new_cpu = select_idle_sibling(p, prev_cpu);
-		goto unlock;
+		else if (!want_affine)
+			break;
 	}
 
 	while (sd) {
@@ -5087,8 +5077,14 @@ select_task_rq_fair(struct task_struct *
 	}
 unlock:
 	rcu_read_unlock();
-
 	return new_cpu;
+
+do_affine:
+	if (cpu != prev_cpu && wake_affine(tmp, p, sync))
+		new_cpu = cpu;
+
+	new_cpu = select_idle_sibling(p, new_cpu);
+	goto unlock;
 }
 
 /*
Mike Galbraith July 14, 2015, 1:49 p.m. UTC | #10
On Tue, 2015-07-14 at 13:19 +0200, Peter Zijlstra wrote:

> OK, how about something like the below; it tightens things up by
> applying two rules:
> 
>  - We really should not continue looking for a balancing domain once
>    SD_LOAD_BALANCE is not set.
> 
>  - SD (balance) flags should really be set in a single contiguous range,
>    always starting at the bottom.
> 
> The latter means that if !want_affine and the (first) sd doesn't have
> BALANCE_WAKE set, we're done. Getting rid of (most of) that iteration
> junk you didn't like..
> 
> Hmm?

Yeah, that's better.  It's not a big hairy deal either way, it just bugged
me to knowingly toss those cycles out the window ;-)

select_idle_sibling() looks kinda funny down there, but otoh when the
day comes (hah) that we can just balance, it's closer to the exit.

	-Mike

Peter Zijlstra July 14, 2015, 2:07 p.m. UTC | #11
On Tue, Jul 14, 2015 at 03:49:17PM +0200, Mike Galbraith wrote:
> On Tue, 2015-07-14 at 13:19 +0200, Peter Zijlstra wrote:
> 
> > OK, how about something like the below; it tightens things up by
> > applying two rules:
> > 
> >  - We really should not continue looking for a balancing domain once
> >    SD_LOAD_BALANCE is not set.
> > 
> >  - SD (balance) flags should really be set in a single contiguous range,
> >    always starting at the bottom.
> > 
> > The latter means that if !want_affine and the (first) sd doesn't have
> > BALANCE_WAKE set, we're done. Getting rid of (most of) that iteration
> > junk you didn't like..
> > 
> > Hmm?
> 
> Yeah, that's better.  It's not a big hairy deal either way, it just bugged
> me to knowingly toss those cycles out the window ;-)
> 
> select_idle_sibling() looks kinda funny down there, but otoh when the
> day comes (hah) that we can just balance, it's closer to the exit.

Right, not too pretty, does this look better?

---
Subject: sched: Beef up wake_wide()
From: Mike Galbraith <umgwanakikbuti@gmail.com>
Date: Fri, 10 Jul 2015 07:19:26 +0200

Josef Bacik reported that Facebook sees better performance with their
1:N load (1 dispatch/node, N workers/node) when carrying an old patch
to try very hard to wake to an idle CPU.  While looking at wake_wide(),
I noticed that it doesn't pay attention to the wakeup of a many partner
waker, returning 1 only when waking one of its many partners.

Correct that, letting explicit domain flags override the heuristic.

While at it, adjust task_struct bits, we don't need a 64bit counter.

Cc: mingo@redhat.com
Cc: morten.rasmussen@arm.com
Cc: riel@redhat.com
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Tested-by: Josef Bacik <jbacik@fb.com>
[peterz: frobbings]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1436505566.5715.50.camel@gmail.com
---
 include/linux/sched.h |    4 +-
 kernel/sched/fair.c   |   68 +++++++++++++++++++++++---------------------------
 2 files changed, 34 insertions(+), 38 deletions(-)

--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1359,9 +1359,9 @@ struct task_struct {
 #ifdef CONFIG_SMP
 	struct llist_node wake_entry;
 	int on_cpu;
-	struct task_struct *last_wakee;
-	unsigned long wakee_flips;
+	unsigned int wakee_flips;
 	unsigned long wakee_flip_decay_ts;
+	struct task_struct *last_wakee;
 
 	int wake_cpu;
 #endif
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4726,26 +4726,29 @@ static long effective_load(struct task_g
 
 #endif
 
+/*
+ * Detect M:N waker/wakee relationships via a switching-frequency heuristic.
+ * A waker of many should wake a different task than the one last awakened
+ * at a frequency roughly N times higher than one of its wakees.  In order
+ * to determine whether we should let the load spread vs consolidating to
+ * shared cache, we look for a minimum 'flip' frequency of llc_size in one
+ * partner, and a factor of llc_size higher frequency in the other.  With
+ * both conditions met, we can be relatively sure that the relationship is
+ * non-monogamous, with partner count exceeding socket size.  Waker/wakee
+ * being client/server, worker/dispatcher, interrupt source or whatever is
+ * irrelevant, spread criteria is apparent partner count exceeds socket size.
+ */
 static int wake_wide(struct task_struct *p)
 {
+	unsigned int master = current->wakee_flips;
+	unsigned int slave = p->wakee_flips;
 	int factor = this_cpu_read(sd_llc_size);
 
-	/*
-	 * Yeah, it's the switching-frequency, could means many wakee or
-	 * rapidly switch, use factor here will just help to automatically
-	 * adjust the loose-degree, so bigger node will lead to more pull.
-	 */
-	if (p->wakee_flips > factor) {
-		/*
-		 * wakee is somewhat hot, it needs certain amount of cpu
-		 * resource, so if waker is far more hot, prefer to leave
-		 * it alone.
-		 */
-		if (current->wakee_flips > (factor * p->wakee_flips))
-			return 1;
-	}
-
-	return 0;
+	if (master < slave)
+		swap(master, slave);
+	if (slave < factor || master < slave * factor)
+		return 0;
+	return 1;
 }
 
 static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
@@ -4757,13 +4760,6 @@ static int wake_affine(struct sched_doma
 	unsigned long weight;
 	int balanced;
 
-	/*
-	 * If we wake multiple tasks be careful to not bounce
-	 * ourselves around too much.
-	 */
-	if (wake_wide(p))
-		return 0;
-
 	idx	  = sd->wake_idx;
 	this_cpu  = smp_processor_id();
 	prev_cpu  = task_cpu(p);
@@ -5015,19 +5011,19 @@ static int get_cpu_usage(int cpu)
 static int
 select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_flags)
 {
-	struct sched_domain *tmp, *affine_sd = NULL, *sd = NULL;
+	struct sched_domain *tmp, affine_sd = NULL, *sd = NULL;
 	int cpu = smp_processor_id();
-	int new_cpu = cpu;
+	int new_cpu = prev_cpu;
 	int want_affine = 0;
 	int sync = wake_flags & WF_SYNC;
 
 	if (sd_flag & SD_BALANCE_WAKE)
-		want_affine = cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
+		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
 
 	rcu_read_lock();
 	for_each_domain(cpu, tmp) {
 		if (!(tmp->flags & SD_LOAD_BALANCE))
-			continue;
+			break;
 
 		/*
 		 * If both cpu and prev_cpu are part of this domain,
@@ -5041,17 +5037,17 @@ select_task_rq_fair(struct task_struct *
 
 		if (tmp->flags & sd_flag)
 			sd = tmp;
+		else if (!want_affine)
+			break;
 	}
 
-	if (affine_sd && cpu != prev_cpu && wake_affine(affine_sd, p, sync))
-		prev_cpu = cpu;
+	if (affine_sd) { /* Prefer affinity over any other flags */
+		if (cpu != prev_cpu && wake_affine(affine_sd, p, sync))
+			new_cpu = cpu;
 
-	if (sd_flag & SD_BALANCE_WAKE) {
-		new_cpu = select_idle_sibling(p, prev_cpu);
-		goto unlock;
-	}
+		new_cpu = select_idle_sibling(p, new_cpu);
 
-	while (sd) {
+	} else while (sd) {
 		struct sched_group *group;
 		int weight;
 
@@ -5085,10 +5081,10 @@ select_task_rq_fair(struct task_struct *
 		}
 		/* while loop will break here if sd == NULL */
 	}
-unlock:
-	rcu_read_unlock();
 
+	rcu_read_unlock();
 	return new_cpu;
+
 }
 
 /*
Mike Galbraith July 14, 2015, 2:17 p.m. UTC | #12
On Tue, 2015-07-14 at 16:07 +0200, Peter Zijlstra wrote:
> On Tue, Jul 14, 2015 at 03:49:17PM +0200, Mike Galbraith wrote:
> > On Tue, 2015-07-14 at 13:19 +0200, Peter Zijlstra wrote:
> > 
> > > OK, how about something like the below; it tightens things up by
> > > applying two rules:
> > > 
> > >  - We really should not continue looking for a balancing domain once
> > >    SD_LOAD_BALANCE is not set.
> > > 
> > >  - SD (balance) flags should really be set in a single contiguous range,
> > >    always starting at the bottom.
> > > 
> > > The latter means that if !want_affine and the (first) sd doesn't have
> > > BALANCE_WAKE set, we're done. Getting rid of (most of) that iteration
> > > junk you didn't like..
> > > 
> > > Hmm?
> > 
> > Yeah, that's better.  It's not a big hairy deal either way, it just bugged
> > me to knowingly toss those cycles out the window ;-)
> > 
> > select_idle_sibling() looks kinda funny down there, but otoh when the
> > day comes (hah) that we can just balance, it's closer to the exit.
> 
> Right, not too pretty, does this look better?

There's a buglet, I was just about to mention the inverse in the other.


> @@ -5041,17 +5037,17 @@ select_task_rq_fair(struct task_struct *
>  
>  		if (tmp->flags & sd_flag)
>  			sd = tmp;
> +		else if (!want_affine)
> +			break;
>  	}
>  
> -	if (affine_sd && cpu != prev_cpu && wake_affine(affine_sd, p, sync))
> -		prev_cpu = cpu;
> +	if (affine_sd) { /* Prefer affinity over any other flags */
> +		if (cpu != prev_cpu && wake_affine(affine_sd, p, sync))
> +			new_cpu = cpu;
>  
> -	if (sd_flag & SD_BALANCE_WAKE) {
> -		new_cpu = select_idle_sibling(p, prev_cpu);
> -		goto unlock;
> -	}
> +		new_cpu = select_idle_sibling(p, new_cpu);

We'll not look for an idle cpu when wake_wide() naks want_affine.

	-Mike

Peter Zijlstra July 14, 2015, 3:04 p.m. UTC | #13
On Tue, Jul 14, 2015 at 04:17:46PM +0200, Mike Galbraith wrote:
> There's a buglet, 

> We'll not look for an idle cpu when wake_wide() naks want_affine.

*sigh* indeed.. fixing that'll bring us very close to what we started
out with..

The one XXX there raises the question of whether we should always do
select_idle_sibling() if we do not have a suitable balance flag, or only
on wakeups.



---
Subject: sched: Beef up wake_wide()
From: Mike Galbraith <umgwanakikbuti@gmail.com>
Date: Fri, 10 Jul 2015 07:19:26 +0200

Josef Bacik reported that Facebook sees better performance with their
1:N load (1 dispatch/node, N workers/node) when carrying an old patch
to try very hard to wake to an idle CPU.  While looking at wake_wide(),
I noticed that it doesn't pay attention to the wakeup of a many partner
waker, returning 1 only when waking one of its many partners.

Correct that, letting explicit domain flags override the heuristic.

While at it, adjust task_struct bits, we don't need a 64bit counter.

Cc: morten.rasmussen@arm.com
Cc: riel@redhat.com
Cc: mingo@redhat.com
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Tested-by: Josef Bacik <jbacik@fb.com>
[peterz: frobbings]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1436505566.5715.50.camel@gmail.com
---
 include/linux/sched.h |    4 +-
 kernel/sched/fair.c   |   69 ++++++++++++++++++++++++--------------------------
 2 files changed, 36 insertions(+), 37 deletions(-)

--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1359,9 +1359,9 @@ struct task_struct {
 #ifdef CONFIG_SMP
 	struct llist_node wake_entry;
 	int on_cpu;
-	struct task_struct *last_wakee;
-	unsigned long wakee_flips;
+	unsigned int wakee_flips;
 	unsigned long wakee_flip_decay_ts;
+	struct task_struct *last_wakee;
 
 	int wake_cpu;
 #endif
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4726,26 +4726,29 @@ static long effective_load(struct task_g
 
 #endif
 
+/*
+ * Detect M:N waker/wakee relationships via a switching-frequency heuristic.
+ * A waker of many should wake a different task than the one last awakened
+ * at a frequency roughly N times higher than one of its wakees.  In order
+ * to determine whether we should let the load spread vs consolidating to
+ * shared cache, we look for a minimum 'flip' frequency of llc_size in one
+ * partner, and a factor of llc_size higher frequency in the other.  With
+ * both conditions met, we can be relatively sure that the relationship is
+ * non-monogamous, with partner count exceeding socket size.  Waker/wakee
+ * being client/server, worker/dispatcher, interrupt source or whatever is
+ * irrelevant, spread criteria is apparent partner count exceeds socket size.
+ */
 static int wake_wide(struct task_struct *p)
 {
+	unsigned int master = current->wakee_flips;
+	unsigned int slave = p->wakee_flips;
 	int factor = this_cpu_read(sd_llc_size);
 
-	/*
-	 * Yeah, it's the switching-frequency, could means many wakee or
-	 * rapidly switch, use factor here will just help to automatically
-	 * adjust the loose-degree, so bigger node will lead to more pull.
-	 */
-	if (p->wakee_flips > factor) {
-		/*
-		 * wakee is somewhat hot, it needs certain amount of cpu
-		 * resource, so if waker is far more hot, prefer to leave
-		 * it alone.
-		 */
-		if (current->wakee_flips > (factor * p->wakee_flips))
-			return 1;
-	}
-
-	return 0;
+	if (master < slave)
+		swap(master, slave);
+	if (slave < factor || master < slave * factor)
+		return 0;
+	return 1;
 }
 
 static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
@@ -4757,13 +4760,6 @@ static int wake_affine(struct sched_doma
 	unsigned long weight;
 	int balanced;
 
-	/*
-	 * If we wake multiple tasks be careful to not bounce
-	 * ourselves around too much.
-	 */
-	if (wake_wide(p))
-		return 0;
-
 	idx	  = sd->wake_idx;
 	this_cpu  = smp_processor_id();
 	prev_cpu  = task_cpu(p);
@@ -5015,19 +5011,19 @@ static int get_cpu_usage(int cpu)
 static int
 select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_flags)
 {
-	struct sched_domain *tmp, *affine_sd = NULL, *sd = NULL;
+	struct sched_domain *tmp, affine_sd = NULL, *sd = NULL;
 	int cpu = smp_processor_id();
-	int new_cpu = cpu;
+	int new_cpu = prev_cpu;
 	int want_affine = 0;
 	int sync = wake_flags & WF_SYNC;
 
 	if (sd_flag & SD_BALANCE_WAKE)
-		want_affine = cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
+		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
 
 	rcu_read_lock();
 	for_each_domain(cpu, tmp) {
 		if (!(tmp->flags & SD_LOAD_BALANCE))
-			continue;
+			break;
 
 		/*
 		 * If both cpu and prev_cpu are part of this domain,
@@ -5041,17 +5037,21 @@ select_task_rq_fair(struct task_struct *
 
 		if (tmp->flags & sd_flag)
 			sd = tmp;
+		else if (!want_affine)
+			break;
 	}
 
-	if (affine_sd && cpu != prev_cpu && wake_affine(affine_sd, p, sync))
-		prev_cpu = cpu;
-
-	if (sd_flag & SD_BALANCE_WAKE) {
-		new_cpu = select_idle_sibling(p, prev_cpu);
-		goto unlock;
+	if (affine_sd) {
+		sd = NULL; /* Prefer wake_affine over balance flags */
+		if (cpu != prev_cpu && wake_affine(affine_sd, p, sync))
+			new_cpu = cpu;
 	}
 
-	while (sd) {
+	if (!sd) {
+		if (sd_flags & SD_BALANCE_WAKE) /* XXX always ? */
+			new_cpu = select_idle_sibling(p, new_cpu);
+
+	} else while (sd) {
 		struct sched_group *group;
 		int weight;
 
@@ -5085,7 +5085,6 @@ select_task_rq_fair(struct task_struct *
 		}
 		/* while loop will break here if sd == NULL */
 	}
-unlock:
 	rcu_read_unlock();
 
 	return new_cpu;
Mike Galbraith July 14, 2015, 3:39 p.m. UTC | #14
On Tue, 2015-07-14 at 17:04 +0200, Peter Zijlstra wrote:
> On Tue, Jul 14, 2015 at 04:17:46PM +0200, Mike Galbraith wrote:
> > There's a buglet, 
> 
> > We'll not look for an idle cpu when wake_wide() naks want_affine.
> 
> *sigh* indeed.. fixing that'll bring us very close to what we started
> out with..
> 
> The one XXX there raises the question of whether we should always do
> select_idle_sibling() if we do not have a suitable balance flag, or only
> on wakeups.

That's what I've been sitting here waffling over, finally convinced
myself that should the user turn FORK/EXEC off, he shouldn't find that a
substitute quietly slipped in.. though otoh.. crap, guess I'm not done
waffling after all.  Yeah, this will work just fine ;-)

(typos fixed)

---
Subject: sched: Beef up wake_wide()
From: Mike Galbraith <umgwanakikbuti@gmail.com>
Date: Fri, 10 Jul 2015 07:19:26 +0200

Josef Bacik reported that Facebook sees better performance with their
1:N load (1 dispatch/node, N workers/node) when carrying an old patch
to try very hard to wake to an idle CPU.  While looking at wake_wide(),
I noticed that it doesn't pay attention to the wakeup of a many partner
waker, returning 1 only when waking one of its many partners.

Correct that, letting explicit domain flags override the heuristic.

While at it, adjust task_struct bits, we don't need a 64bit counter.

Cc: morten.rasmussen@arm.com
Cc: riel@redhat.com
Cc: mingo@redhat.com
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Tested-by: Josef Bacik <jbacik@fb.com>
[peterz: frobbings]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1436505566.5715.50.camel@gmail.com
---
 include/linux/sched.h |    4 +-
 kernel/sched/fair.c   |   67 ++++++++++++++++++++++++--------------------------
 2 files changed, 35 insertions(+), 36 deletions(-)

--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1351,9 +1351,9 @@ struct task_struct {
 #ifdef CONFIG_SMP
 	struct llist_node wake_entry;
 	int on_cpu;
-	struct task_struct *last_wakee;
-	unsigned long wakee_flips;
+	unsigned int wakee_flips;
 	unsigned long wakee_flip_decay_ts;
+	struct task_struct *last_wakee;
 
 	int wake_cpu;
 #endif
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4730,26 +4730,29 @@ static long effective_load(struct task_g
 
 #endif
 
+/*
+ * Detect M:N waker/wakee relationships via a switching-frequency heuristic.
+ * A waker of many should wake a different task than the one last awakened
+ * at a frequency roughly N times higher than one of its wakees.  In order
+ * to determine whether we should let the load spread vs consolidating to
+ * shared cache, we look for a minimum 'flip' frequency of llc_size in one
+ * partner, and a factor of llc_size higher frequency in the other.  With
+ * both conditions met, we can be relatively sure that the relationship is
+ * non-monogamous, with partner count exceeding socket size.  Waker/wakee
+ * being client/server, worker/dispatcher, interrupt source or whatever is
+ * irrelevant, spread criteria is apparent partner count exceeds socket size.
+ */
 static int wake_wide(struct task_struct *p)
 {
+	unsigned int master = current->wakee_flips;
+	unsigned int slave = p->wakee_flips;
 	int factor = this_cpu_read(sd_llc_size);
 
-	/*
-	 * Yeah, it's the switching-frequency, could means many wakee or
-	 * rapidly switch, use factor here will just help to automatically
-	 * adjust the loose-degree, so bigger node will lead to more pull.
-	 */
-	if (p->wakee_flips > factor) {
-		/*
-		 * wakee is somewhat hot, it needs certain amount of cpu
-		 * resource, so if waker is far more hot, prefer to leave
-		 * it alone.
-		 */
-		if (current->wakee_flips > (factor * p->wakee_flips))
-			return 1;
-	}
-
-	return 0;
+	if (master < slave)
+		swap(master, slave);
+	if (slave < factor || master < slave * factor)
+		return 0;
+	return 1;
 }
 
 static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
@@ -4761,13 +4764,6 @@ static int wake_affine(struct sched_doma
 	unsigned long weight;
 	int balanced;
 
-	/*
-	 * If we wake multiple tasks be careful to not bounce
-	 * ourselves around too much.
-	 */
-	if (wake_wide(p))
-		return 0;
-
 	idx	  = sd->wake_idx;
 	this_cpu  = smp_processor_id();
 	prev_cpu  = task_cpu(p);
@@ -5021,17 +5017,17 @@ select_task_rq_fair(struct task_struct *
 {
 	struct sched_domain *tmp, *affine_sd = NULL, *sd = NULL;
 	int cpu = smp_processor_id();
-	int new_cpu = cpu;
+	int new_cpu = prev_cpu;
 	int want_affine = 0;
 	int sync = wake_flags & WF_SYNC;
 
 	if (sd_flag & SD_BALANCE_WAKE)
-		want_affine = cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
+		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
 
 	rcu_read_lock();
 	for_each_domain(cpu, tmp) {
 		if (!(tmp->flags & SD_LOAD_BALANCE))
-			continue;
+			break;
 
 		/*
 		 * If both cpu and prev_cpu are part of this domain,
@@ -5045,17 +5041,21 @@ select_task_rq_fair(struct task_struct *
 
 		if (tmp->flags & sd_flag)
 			sd = tmp;
+		else if (!want_affine)
+			break;
 	}
 
-	if (affine_sd && cpu != prev_cpu && wake_affine(affine_sd, p, sync))
-		prev_cpu = cpu;
-
-	if (sd_flag & SD_BALANCE_WAKE) {
-		new_cpu = select_idle_sibling(p, prev_cpu);
-		goto unlock;
+	if (affine_sd) {
+		sd = NULL; /* Prefer wake_affine over balance flags */
+		if (cpu != prev_cpu && wake_affine(affine_sd, p, sync))
+			new_cpu = cpu;
 	}
 
-	while (sd) {
+	if (!sd) {
+		if (sd_flag & SD_BALANCE_WAKE) /* XXX always ? */
+			new_cpu = select_idle_sibling(p, new_cpu);
+
+	} else while (sd) {
 		struct sched_group *group;
 		int weight;
 
@@ -5089,7 +5089,6 @@ select_task_rq_fair(struct task_struct *
 		}
 		/* while loop will break here if sd == NULL */
 	}
-unlock:
 	rcu_read_unlock();
 
 	return new_cpu;


Josef Bacik July 14, 2015, 4:01 p.m. UTC | #15
On 07/14/2015 11:39 AM, Mike Galbraith wrote:
> On Tue, 2015-07-14 at 17:04 +0200, Peter Zijlstra wrote:
>> On Tue, Jul 14, 2015 at 04:17:46PM +0200, Mike Galbraith wrote:
>>> There's a buglet,
>>
>>> We'll not look for an idle cpu when wake_wide() naks want_affine.
>>
>> *sigh* indeed.. fixing that'll bring us very close to what we started
>> out with..
>>
>> The one XXX there raises the question of whether we should always do
>> select_idle_sibling() if we do not have a suitable balance flag, or only
>> on wakeups.
>
> That's what I've been sitting here waffling over, finally convinced
> myself that should the user turn FORK/EXEC off, he shouldn't find that a
> substitute quietly slipped in.. though otoh.. crap, guess I'm not done
> waffling after all.  Yeah, this will work just fine ;-)
>
> (typos fixed)
>

We happy with this or should I wait for more patches to fly by before I 
test something ;)?

Josef

Mike Galbraith July 14, 2015, 5:59 p.m. UTC | #16
On Tue, 2015-07-14 at 12:01 -0400, Josef Bacik wrote:

> We happy with this or should I wait for more patches to fly by before I 
> test something ;)?

Yeah, it should be The End time.

	-Mike


Josef Bacik July 15, 2015, 5:11 p.m. UTC | #17
On 07/14/2015 01:59 PM, Mike Galbraith wrote:
> On Tue, 2015-07-14 at 12:01 -0400, Josef Bacik wrote:
>
>> We happy with this or should I wait for more patches to fly by before I
>> test something ;)?
>
> Yeah, it should be The End time.

It's showing the same thing as one of the many earlier patches: it's
better at high RPS, but at low RPS there's a 3% regression.  Thanks,

Josef


Patch

--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1351,9 +1351,9 @@  struct task_struct {
 #ifdef CONFIG_SMP
 	struct llist_node wake_entry;
 	int on_cpu;
-	struct task_struct *last_wakee;
-	unsigned long wakee_flips;
+	unsigned int wakee_flips;
 	unsigned long wakee_flip_decay_ts;
+	struct task_struct *last_wakee;
 
 	int wake_cpu;
 #endif
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4730,26 +4730,27 @@  static long effective_load(struct task_g
 
 #endif
 
+/*
+ * Detect 1:N waker/wakee relationship via a switching-frequency heuristic.
+ * A waker of many should wake a different task than the one last awakened
+ * at a frequency roughly N times higher than one of its wakees.  In order
+ * to determine whether we should let the load spread vs consolidating to
+ * shared cache, we look for a minimum 'flip' frequency of llc_size in one
+ * partner, and a factor of llc_size higher frequency in the other.  With
+ * both conditions met, we can be relatively sure that we are seeing a 1:N
+ * relationship, and that load size exceeds socket size.
+ */
 static int wake_wide(struct task_struct *p)
 {
+	unsigned int waker_flips = current->wakee_flips;
+	unsigned int wakee_flips = p->wakee_flips;
 	int factor = this_cpu_read(sd_llc_size);
 
-	/*
-	 * Yeah, it's the switching-frequency, could means many wakee or
-	 * rapidly switch, use factor here will just help to automatically
-	 * adjust the loose-degree, so bigger node will lead to more pull.
-	 */
-	if (p->wakee_flips > factor) {
-		/*
-		 * wakee is somewhat hot, it needs certain amount of cpu
-		 * resource, so if waker is far more hot, prefer to leave
-		 * it alone.
-		 */
-		if (current->wakee_flips > (factor * p->wakee_flips))
-			return 1;
-	}
-
-	return 0;
+	if (waker_flips < wakee_flips)
+		swap(waker_flips, wakee_flips);
+	if (wakee_flips < factor || waker_flips < wakee_flips * factor)
+		return 0;
+	return 1;
 }
 
 static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
@@ -4761,13 +4762,6 @@  static int wake_affine(struct sched_doma
 	unsigned long weight;
 	int balanced;
 
-	/*
-	 * If we wake multiple tasks be careful to not bounce
-	 * ourselves around too much.
-	 */
-	if (wake_wide(p))
-		return 0;
-
 	idx	  = sd->wake_idx;
 	this_cpu  = smp_processor_id();
 	prev_cpu  = task_cpu(p);
@@ -5021,14 +5015,17 @@  select_task_rq_fair(struct task_struct *
 {
 	struct sched_domain *tmp, *affine_sd = NULL, *sd = NULL;
 	int cpu = smp_processor_id();
-	int new_cpu = cpu;
+	int new_cpu = prev_cpu;
 	int want_affine = 0;
 	int sync = wake_flags & WF_SYNC;
 
-	if (sd_flag & SD_BALANCE_WAKE)
-		want_affine = cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
-
 	rcu_read_lock();
+	if (sd_flag & SD_BALANCE_WAKE) {
+		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
+		if (!want_affine)
+			goto select_idle;
+	}
+
 	for_each_domain(cpu, tmp) {
 		if (!(tmp->flags & SD_LOAD_BALANCE))
 			continue;
@@ -5048,10 +5045,11 @@  select_task_rq_fair(struct task_struct *
 	}
 
 	if (affine_sd && cpu != prev_cpu && wake_affine(affine_sd, p, sync))
-		prev_cpu = cpu;
+		new_cpu = cpu;
 
 	if (sd_flag & SD_BALANCE_WAKE) {
-		new_cpu = select_idle_sibling(p, prev_cpu);
+select_idle:
+		new_cpu = select_idle_sibling(p, new_cpu);
 		goto unlock;
 	}