linux-kernel.vger.kernel.org archive mirror
* Re: [tip:sched/core] sched/fair: Clean up scale confusion
       [not found] <tip-1be0eb2a97d756fb7dd8c9baf372d81fa9699c09@git.kernel.org>
@ 2016-05-12 19:42 ` Yuyang Du
  2016-05-13  7:23   ` Vincent Guittot
  0 siblings, 1 reply; 4+ messages in thread
From: Yuyang Du @ 2016-05-12 19:42 UTC (permalink / raw)
  To: morten.rasmussen, linux-kernel, efault, wanpeng.li, peterz, hpa,
	torvalds, tglx, mingo
  Cc: vincent.guittot, bsegall

On Thu, May 12, 2016 at 03:31:27AM -0700, tip-bot for Peter Zijlstra wrote:
> Commit-ID:  1be0eb2a97d756fb7dd8c9baf372d81fa9699c09
> Gitweb:     http://git.kernel.org/tip/1be0eb2a97d756fb7dd8c9baf372d81fa9699c09
> Author:     Peter Zijlstra <peterz@infradead.org>
> AuthorDate: Fri, 6 May 2016 12:21:23 +0200
> Committer:  Ingo Molnar <mingo@kernel.org>
> CommitDate: Thu, 12 May 2016 09:55:33 +0200
> 
> sched/fair: Clean up scale confusion
> 
> Wanpeng noted that the scale_load_down() in calculate_imbalance() was
> weird. I agree, it should be SCHED_CAPACITY_SCALE, since we're going
> to compare against busiest->group_capacity, which is in [capacity]
> units.
>
> Reported-by: Wanpeng Li <wanpeng.li@hotmail.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Mike Galbraith <efault@gmx.de>
> Cc: Morten Rasmussen <morten.rasmussen@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Yuyang Du <yuyang.du@intel.com>
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Ingo Molnar <mingo@kernel.org>

It is good that this issue is addressed and the patch merged. However, for
the record, Vincent already had a solution for this, and we had a patch
including other cleanups (latest version: https://lkml.org/lkml/2016/5/3/925).
As far as I can tell, Ben first pointed this out, and we then attempted to
address it.


* Re: [tip:sched/core] sched/fair: Clean up scale confusion
  2016-05-12 19:42 ` [tip:sched/core] sched/fair: Clean up scale confusion Yuyang Du
@ 2016-05-13  7:23   ` Vincent Guittot
  2016-05-20 10:12     ` Morten Rasmussen
  0 siblings, 1 reply; 4+ messages in thread
From: Vincent Guittot @ 2016-05-13  7:23 UTC (permalink / raw)
  To: Yuyang Du
  Cc: Morten Rasmussen, linux-kernel, Mike Galbraith, wanpeng.li,
	Peter Zijlstra, H. Peter Anvin, Linus Torvalds, Thomas Gleixner,
	Ingo Molnar, Benjamin Segall

On 12 May 2016 at 21:42, Yuyang Du <yuyang.du@intel.com> wrote:
> On Thu, May 12, 2016 at 03:31:27AM -0700, tip-bot for Peter Zijlstra wrote:
>> Commit-ID:  1be0eb2a97d756fb7dd8c9baf372d81fa9699c09
>> Gitweb:     http://git.kernel.org/tip/1be0eb2a97d756fb7dd8c9baf372d81fa9699c09
>> Author:     Peter Zijlstra <peterz@infradead.org>
>> AuthorDate: Fri, 6 May 2016 12:21:23 +0200
>> Committer:  Ingo Molnar <mingo@kernel.org>
>> CommitDate: Thu, 12 May 2016 09:55:33 +0200
>>
>> sched/fair: Clean up scale confusion
>>
>> Wanpeng noted that the scale_load_down() in calculate_imbalance() was
>> weird. I agree, it should be SCHED_CAPACITY_SCALE, since we're going
>> to compare against busiest->group_capacity, which is in [capacity]
>> units.

In fact, load_above_capacity is only about load, not about capacity.

load_above_capacity -= busiest->group_capacity is an optimization (maybe
a wrong one) of

load_above_capacity -= busiest->group_capacity * SCHED_LOAD_SCALE /
                       SCHED_CAPACITY_SCALE

so we subtract load from load.

>>
>> Reported-by: Wanpeng Li <wanpeng.li@hotmail.com>
>> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>> Cc: Linus Torvalds <torvalds@linux-foundation.org>
>> Cc: Mike Galbraith <efault@gmx.de>
>> Cc: Morten Rasmussen <morten.rasmussen@arm.com>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Thomas Gleixner <tglx@linutronix.de>
>> Cc: Yuyang Du <yuyang.du@intel.com>
>> Cc: linux-kernel@vger.kernel.org
>> Signed-off-by: Ingo Molnar <mingo@kernel.org>
>
> It is good that this issue is addressed and the patch merged. However, for
> the record, Vincent already had a solution for this, and we had a patch
> including other cleanups (latest version: https://lkml.org/lkml/2016/5/3/925).
> As far as I can tell, Ben first pointed this out, and we then attempted to
> address it.


* Re: [tip:sched/core] sched/fair: Clean up scale confusion
  2016-05-13  7:23   ` Vincent Guittot
@ 2016-05-20 10:12     ` Morten Rasmussen
  2016-05-23 10:27       ` Vincent Guittot
  0 siblings, 1 reply; 4+ messages in thread
From: Morten Rasmussen @ 2016-05-20 10:12 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Yuyang Du, linux-kernel, Mike Galbraith, wanpeng.li,
	Peter Zijlstra, H. Peter Anvin, Linus Torvalds, Thomas Gleixner,
	Ingo Molnar, Benjamin Segall

On Fri, May 13, 2016 at 09:23:50AM +0200, Vincent Guittot wrote:
> On 12 May 2016 at 21:42, Yuyang Du <yuyang.du@intel.com> wrote:
> > On Thu, May 12, 2016 at 03:31:27AM -0700, tip-bot for Peter Zijlstra wrote:
> >> Commit-ID:  1be0eb2a97d756fb7dd8c9baf372d81fa9699c09
> >> Gitweb:     http://git.kernel.org/tip/1be0eb2a97d756fb7dd8c9baf372d81fa9699c09
> >> Author:     Peter Zijlstra <peterz@infradead.org>
> >> AuthorDate: Fri, 6 May 2016 12:21:23 +0200
> >> Committer:  Ingo Molnar <mingo@kernel.org>
> >> CommitDate: Thu, 12 May 2016 09:55:33 +0200
> >>
> >> sched/fair: Clean up scale confusion
> >>
> >> Wanpeng noted that the scale_load_down() in calculate_imbalance() was
> >> weird. I agree, it should be SCHED_CAPACITY_SCALE, since we're going
> >> to compare against busiest->group_capacity, which is in [capacity]
> >> units.
> 
> In fact, load_above_capacity is only about load, not about capacity.
> 
> load_above_capacity -= busiest->group_capacity is an optimization (maybe
> a wrong one) of
> load_above_capacity -= busiest->group_capacity * SCHED_LOAD_SCALE /
> SCHED_CAPACITY_SCALE
> 
> so we subtract load from load.

I like your approach: you compute the desired minimum load, which is
essentially finding the number of NICE_0_LOAD tasks we want in the group,
and then determine how much excess load there is. So it becomes quite
clear that it is load.

While it preserves existing behaviour, I would question the whole
NICE_0_LOAD assumption. It falls apart entirely with PELT and when we
have tasks with nice != 0.

Also, it doesn't address the existing unit issue as load_above_capacity
is later multiplied by busiest->group_capacity when computing the
imbalance. As said in the other thread, we should either kill the
minimum load estimation that assumes always-running NICE_0_LOAD tasks,
or at least make sure the scaling of load_above_capacity is correct.
Patches attempting either solution are in the other thread.


* Re: [tip:sched/core] sched/fair: Clean up scale confusion
  2016-05-20 10:12     ` Morten Rasmussen
@ 2016-05-23 10:27       ` Vincent Guittot
  0 siblings, 0 replies; 4+ messages in thread
From: Vincent Guittot @ 2016-05-23 10:27 UTC (permalink / raw)
  To: Morten Rasmussen
  Cc: Yuyang Du, linux-kernel, Mike Galbraith, Wanpeng Li,
	Peter Zijlstra, H. Peter Anvin, Linus Torvalds, Thomas Gleixner,
	Ingo Molnar, Benjamin Segall

On 20 May 2016 at 12:12, Morten Rasmussen <morten.rasmussen@arm.com> wrote:
> On Fri, May 13, 2016 at 09:23:50AM +0200, Vincent Guittot wrote:
>> On 12 May 2016 at 21:42, Yuyang Du <yuyang.du@intel.com> wrote:
>> > On Thu, May 12, 2016 at 03:31:27AM -0700, tip-bot for Peter Zijlstra wrote:
>> >> Commit-ID:  1be0eb2a97d756fb7dd8c9baf372d81fa9699c09
>> >> Gitweb:     http://git.kernel.org/tip/1be0eb2a97d756fb7dd8c9baf372d81fa9699c09
>> >> Author:     Peter Zijlstra <peterz@infradead.org>
>> >> AuthorDate: Fri, 6 May 2016 12:21:23 +0200
>> >> Committer:  Ingo Molnar <mingo@kernel.org>
>> >> CommitDate: Thu, 12 May 2016 09:55:33 +0200
>> >>
>> >> sched/fair: Clean up scale confusion
>> >>
>> >> Wanpeng noted that the scale_load_down() in calculate_imbalance() was
>> >> weird. I agree, it should be SCHED_CAPACITY_SCALE, since we're going
>> >> to compare against busiest->group_capacity, which is in [capacity]
>> >> units.
>>
>> In fact, load_above_capacity is only about load, not about capacity.
>>
>> load_above_capacity -= busiest->group_capacity is an optimization (maybe
>> a wrong one) of
>> load_above_capacity -= busiest->group_capacity * SCHED_LOAD_SCALE /
>> SCHED_CAPACITY_SCALE
>>
>> so we subtract load from load.
>
>
> I like your approach: you compute the desired minimum load, which is
> essentially finding the number of NICE_0_LOAD tasks we want in the group,
> and then determine how much excess load there is. So it becomes quite
> clear that it is load.
>
> While it preserves existing behaviour, I would question the whole
> NICE_0_LOAD assumption. It falls apart entirely with PELT and when we
> have tasks with nice != 0.

Yes, I haven't yet found a simple use case that demonstrates the
usefulness of this part.

>
> Also, it doesn't address the existing unit issue as load_above_capacity
> is later multiplied by busiest->group_capacity when computing the
> imbalance. As said in the other thread, we should either kill the
> minimum load estimation that assumes always-running NICE_0_LOAD tasks,
> or at least make sure the scaling of load_above_capacity is correct.
> Patches attempting either solution are in the other thread.


