* [PATCH] sched/cfs: change initial value of runnable_avg
@ 2020-06-24 15:44 Vincent Guittot
  2020-06-24 16:32 ` Valentin Schneider
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Vincent Guittot @ 2020-06-24 15:44 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, linux-kernel, rong.a.chen
  Cc: valentin.schneider, pauld, hdanton, Vincent Guittot

Some performance regressions on the reaim benchmark have been reported with
  commit 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")

The problem comes from the initial value of runnable_avg, which is set to
the max value. This can be a problem if the newly forked task turns out to
be a short-running task, because the group of CPUs is wrongly classified as
overloaded and tasks are pulled less aggressively.

Set the initial value of runnable_avg equal to util_avg to reflect that
there is no waiting time so far.

Fixes: 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")
Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0424a0af5f87..45e467bf42fc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -806,7 +806,7 @@ void post_init_entity_util_avg(struct task_struct *p)
 		}
 	}
 
-	sa->runnable_avg = cpu_scale;
+	sa->runnable_avg = sa->util_avg;
 
 	if (p->sched_class != &fair_sched_class) {
 		/*
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH] sched/cfs: change initial value of runnable_avg
  2020-06-24 15:44 [PATCH] sched/cfs: change initial value of runnable_avg Vincent Guittot
@ 2020-06-24 16:32 ` Valentin Schneider
  2020-06-24 16:40   ` Vincent Guittot
  2020-06-25  9:24 ` Holger Hoffstätte
  2020-06-25 11:53 ` [tip: sched/urgent] " tip-bot2 for Vincent Guittot
  2 siblings, 1 reply; 9+ messages in thread
From: Valentin Schneider @ 2020-06-24 16:32 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, linux-kernel, rong.a.chen, pauld, hdanton


On 24/06/20 16:44, Vincent Guittot wrote:
> Some performance regressions on the reaim benchmark have been reported with
>   commit 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")
>
> The problem comes from the initial value of runnable_avg, which is set to
> the max value. This can be a problem if the newly forked task turns out to
> be a short-running task, because the group of CPUs is wrongly classified as
> overloaded and tasks are pulled less aggressively.
>
> Set the initial value of runnable_avg equal to util_avg to reflect that
> there is no waiting time so far.
>
> Fixes: 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")
> Reported-by: kernel test robot <rong.a.chen@intel.com>
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>  kernel/sched/fair.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0424a0af5f87..45e467bf42fc 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -806,7 +806,7 @@ void post_init_entity_util_avg(struct task_struct *p)
>  		}
>  	}
>  
> -	sa->runnable_avg = cpu_scale;
> +	sa->runnable_avg = sa->util_avg;

IIRC we didn't go for this initially because hackbench behaved slightly
worse with it. Did we end up re-evaluating this? Also, how does this reaim
benchmark behave with it? I *think* the table from that regression thread
says it behaves better, but I had a hard time parsing it (it seems to have
been damaged by line wrapping).

Conceptually I'm all for it, so as long as the tests back it up:
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>

>  
>  	if (p->sched_class != &fair_sched_class) {
>  		/*



* Re: [PATCH] sched/cfs: change initial value of runnable_avg
  2020-06-24 16:32 ` Valentin Schneider
@ 2020-06-24 16:40   ` Vincent Guittot
  0 siblings, 0 replies; 9+ messages in thread
From: Vincent Guittot @ 2020-06-24 16:40 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, linux-kernel,
	kernel test robot, Phil Auld, Hillf Danton

On Wed, 24 Jun 2020 at 18:32, Valentin Schneider
<valentin.schneider@arm.com> wrote:
>
>
> On 24/06/20 16:44, Vincent Guittot wrote:
> > Some performance regressions on the reaim benchmark have been reported with
> >   commit 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")
> >
> > The problem comes from the initial value of runnable_avg, which is set to
> > the max value. This can be a problem if the newly forked task turns out to
> > be a short-running task, because the group of CPUs is wrongly classified as
> > overloaded and tasks are pulled less aggressively.
> >
> > Set the initial value of runnable_avg equal to util_avg to reflect that
> > there is no waiting time so far.
> >
> > Fixes: 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")
> > Reported-by: kernel test robot <rong.a.chen@intel.com>
> > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > ---
> >  kernel/sched/fair.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 0424a0af5f87..45e467bf42fc 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -806,7 +806,7 @@ void post_init_entity_util_avg(struct task_struct *p)
> >               }
> >       }
> >
> > -     sa->runnable_avg = cpu_scale;
> > +     sa->runnable_avg = sa->util_avg;
>
> IIRC we didn't go for this initially because hackbench behaved slightly
> worse with it. Did we end up re-evaluating this? Also, how does this reaim

Yes. hackbench was slightly worse, and it was the only input we had at the
time, which is why we decided to keep the other init value. Since then, Rong
has reported a significant regression for reaim, which is fixed by this
patch.

> benchmark behave with it? I *think* the table from that regression thread
> says it behaves better, but I had a hard time parsing it (it seems to have
> been damaged by line wrapping).
>
> Conceptually I'm all for it, so as long as the tests back it up:
> Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
>
> >
> >       if (p->sched_class != &fair_sched_class) {
> >               /*
>


* Re: [PATCH] sched/cfs: change initial value of runnable_avg
  2020-06-24 15:44 [PATCH] sched/cfs: change initial value of runnable_avg Vincent Guittot
  2020-06-24 16:32 ` Valentin Schneider
@ 2020-06-25  9:24 ` Holger Hoffstätte
  2020-06-25  9:56   ` Vincent Guittot
  2020-06-25 11:53 ` [tip: sched/urgent] " tip-bot2 for Vincent Guittot
  2 siblings, 1 reply; 9+ messages in thread
From: Holger Hoffstätte @ 2020-06-25  9:24 UTC (permalink / raw)
  To: Vincent Guittot, mingo, peterz, juri.lelli, dietmar.eggemann,
	rostedt, bsegall, mgorman, linux-kernel, rong.a.chen
  Cc: valentin.schneider, pauld, hdanton

On 2020-06-24 17:44, Vincent Guittot wrote:
> Some performance regressions on the reaim benchmark have been reported with
>   commit 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")
>
> The problem comes from the initial value of runnable_avg, which is set to
> the max value. This can be a problem if the newly forked task turns out to
> be a short-running task, because the group of CPUs is wrongly classified as
> overloaded and tasks are pulled less aggressively.
>
> Set the initial value of runnable_avg equal to util_avg to reflect that
> there is no waiting time so far.
> 
> Fixes: 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")
> Reported-by: kernel test robot <rong.a.chen@intel.com>
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>   kernel/sched/fair.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0424a0af5f87..45e467bf42fc 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -806,7 +806,7 @@ void post_init_entity_util_avg(struct task_struct *p)
>   		}
>   	}
>   
> -	sa->runnable_avg = cpu_scale;
> +	sa->runnable_avg = sa->util_avg;
>   
>   	if (p->sched_class != &fair_sched_class) {
>   		/*
> 

Something is wrong here. I woke up my machine from suspend-to-RAM this morning
and saw that a completely idle machine had a loadavg of ~7. According to my
monitoring system this happened to be the loadavg right before I suspended.
I've reverted this, rebooted, created a loadavg >0, suspended and after wake up
loadavg again correctly ranges between 0 and whatever, as expected.

-h


* Re: [PATCH] sched/cfs: change initial value of runnable_avg
  2020-06-25  9:24 ` Holger Hoffstätte
@ 2020-06-25  9:56   ` Vincent Guittot
  2020-06-25 10:42     ` Holger Hoffstätte
  0 siblings, 1 reply; 9+ messages in thread
From: Vincent Guittot @ 2020-06-25  9:56 UTC (permalink / raw)
  To: Holger Hoffstätte
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, linux-kernel,
	kernel test robot, Valentin Schneider, Phil Auld, Hillf Danton

On Thu, 25 Jun 2020 at 11:24, Holger Hoffstätte
<holger@applied-asynchrony.com> wrote:
>
> On 2020-06-24 17:44, Vincent Guittot wrote:
> > Some performance regressions on the reaim benchmark have been reported with
> >   commit 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")
> >
> > The problem comes from the initial value of runnable_avg, which is set to
> > the max value. This can be a problem if the newly forked task turns out to
> > be a short-running task, because the group of CPUs is wrongly classified as
> > overloaded and tasks are pulled less aggressively.
> >
> > Set the initial value of runnable_avg equal to util_avg to reflect that
> > there is no waiting time so far.
> >
> > Fixes: 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")
> > Reported-by: kernel test robot <rong.a.chen@intel.com>
> > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > ---
> >   kernel/sched/fair.c | 2 +-
> >   1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 0424a0af5f87..45e467bf42fc 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -806,7 +806,7 @@ void post_init_entity_util_avg(struct task_struct *p)
> >               }
> >       }
> >
> > -     sa->runnable_avg = cpu_scale;
> > +     sa->runnable_avg = sa->util_avg;
> >
> >       if (p->sched_class != &fair_sched_class) {
> >               /*
> >
>
> Something is wrong here. I woke up my machine from suspend-to-RAM this morning
> and saw that a completely idle machine had a loadavg of ~7. According to my

Just to make sure: are you speaking about the loadavg that is output by
/proc/loadavg, or load_avg, which is the PELT load?
The output of /proc/loadavg has no link with runnable_avg. The first
monitors nr_running at a 5 sec interval, whereas the other is a
geometric series over the weight of runnable tasks with a half-life of
32ms.

> monitoring system this happened to be the loadavg right before I suspended.
> I've reverted this, rebooted, created a loadavg >0, suspended and after wake up
> loadavg again correctly ranges between 0 and whatever, as expected.

I'm not sure I catch why ~7 is bad compared to "correctly ranges
between 0 and whatever". Isn't ~7 part of the whatever?

Vincent

>
> -h


* Re: [PATCH] sched/cfs: change initial value of runnable_avg
  2020-06-25  9:56   ` Vincent Guittot
@ 2020-06-25 10:42     ` Holger Hoffstätte
  2020-06-25 12:08       ` Vincent Guittot
  2020-06-25 22:03       ` Holger Hoffstätte
  0 siblings, 2 replies; 9+ messages in thread
From: Holger Hoffstätte @ 2020-06-25 10:42 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, linux-kernel,
	kernel test robot, Valentin Schneider, Phil Auld, Hillf Danton

On 2020-06-25 11:56, Vincent Guittot wrote:
> On Thu, 25 Jun 2020 at 11:24, Holger Hoffstätte
> <holger@applied-asynchrony.com> wrote:
>>
>> On 2020-06-24 17:44, Vincent Guittot wrote:
>>> Some performance regressions on the reaim benchmark have been reported with
>>>   commit 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")
>>>
>>> The problem comes from the initial value of runnable_avg, which is set to
>>> the max value. This can be a problem if the newly forked task turns out to
>>> be a short-running task, because the group of CPUs is wrongly classified as
>>> overloaded and tasks are pulled less aggressively.
>>>
>>> Set the initial value of runnable_avg equal to util_avg to reflect that
>>> there is no waiting time so far.
>>>
>>> Fixes: 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")
>>> Reported-by: kernel test robot <rong.a.chen@intel.com>
>>> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
>>> ---
>>>    kernel/sched/fair.c | 2 +-
>>>    1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>> index 0424a0af5f87..45e467bf42fc 100644
>>> --- a/kernel/sched/fair.c
>>> +++ b/kernel/sched/fair.c
>>> @@ -806,7 +806,7 @@ void post_init_entity_util_avg(struct task_struct *p)
>>>                }
>>>        }
>>>
>>> -     sa->runnable_avg = cpu_scale;
>>> +     sa->runnable_avg = sa->util_avg;
>>>
>>>        if (p->sched_class != &fair_sched_class) {
>>>                /*
>>>
>>
>> Something is wrong here. I woke up my machine from suspend-to-RAM this morning
>> and saw that a completely idle machine had a loadavg of ~7. According to my
> 
> Just to make sure: are you speaking about the loadavg that is output by
> /proc/loadavg, or load_avg, which is the PELT load?

/proc/loadavg

>> monitoring system this happened to be the loadavg right before I suspended.
>> I've reverted this, rebooted, created a loadavg >0, suspended and after wake up
>> loadavg again correctly ranges between 0 and whatever, as expected.
> 
> I'm not sure I catch why ~7 is bad compared to "correctly ranges
> between 0 and whatever". Isn't ~7 part of the whatever?

After wakeup the _baseline_ for loadavg seemed to be the last value before suspend,
not 0. The 7 then was the base loadavg for a _mostly idle machine_ (just reading
mail etc.), i.e. it never went below said baseline again, no matter the
_actual_ load.

Here's an image: https://imgur.com/a/kd2stqO

Before 02:00 last night the load was ~7 (compiled something), then all processes
were terminated and the machine was suspended. After wakeup the machine was mostly
idle (9am..11am), yet measured loadavg continued with the same value as before.
I didn't notice this right away since my CPU meter on the desktop didn't show any
*actual* activity (because there was none). The spike at ~11am is the revert/reboot.
After that loadavg became normal again, i.e. representative of the actual load,
even after suspend/resume cycles.
I suspend/resume every night and the only thing that changed recently was this
patch, so.. :)

-h


* [tip: sched/urgent] sched/cfs: change initial value of runnable_avg
  2020-06-24 15:44 [PATCH] sched/cfs: change initial value of runnable_avg Vincent Guittot
  2020-06-24 16:32 ` Valentin Schneider
  2020-06-25  9:24 ` Holger Hoffstätte
@ 2020-06-25 11:53 ` tip-bot2 for Vincent Guittot
  2 siblings, 0 replies; 9+ messages in thread
From: tip-bot2 for Vincent Guittot @ 2020-06-25 11:53 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: kernel test robot, Vincent Guittot, Peter Zijlstra (Intel), x86, LKML

The following commit has been merged into the sched/urgent branch of tip:

Commit-ID:     68f7b5cc835de7d5b6c7696533c126018171e793
Gitweb:        https://git.kernel.org/tip/68f7b5cc835de7d5b6c7696533c126018171e793
Author:        Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate:    Wed, 24 Jun 2020 17:44:22 +02:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Thu, 25 Jun 2020 13:45:38 +02:00

sched/cfs: change initial value of runnable_avg

Some performance regressions on the reaim benchmark have been reported with
  commit 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")

The problem comes from the initial value of runnable_avg, which is set to
the max value. This can be a problem if the newly forked task turns out to
be a short-running task, because the group of CPUs is wrongly classified as
overloaded and tasks are pulled less aggressively.

Set the initial value of runnable_avg equal to util_avg to reflect that
there is no waiting time so far.

Fixes: 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")
Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200624154422.29166-1-vincent.guittot@linaro.org
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index cbcb2f7..658aa7a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -806,7 +806,7 @@ void post_init_entity_util_avg(struct task_struct *p)
 		}
 	}
 
-	sa->runnable_avg = cpu_scale;
+	sa->runnable_avg = sa->util_avg;
 
 	if (p->sched_class != &fair_sched_class) {
 		/*


* Re: [PATCH] sched/cfs: change initial value of runnable_avg
  2020-06-25 10:42     ` Holger Hoffstätte
@ 2020-06-25 12:08       ` Vincent Guittot
  2020-06-25 22:03       ` Holger Hoffstätte
  1 sibling, 0 replies; 9+ messages in thread
From: Vincent Guittot @ 2020-06-25 12:08 UTC (permalink / raw)
  To: Holger Hoffstätte
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, linux-kernel,
	kernel test robot, Valentin Schneider, Phil Auld, Hillf Danton

On Thu, 25 Jun 2020 at 12:42, Holger Hoffstätte
<holger@applied-asynchrony.com> wrote:
>
> On 2020-06-25 11:56, Vincent Guittot wrote:
> > On Thu, 25 Jun 2020 at 11:24, Holger Hoffstätte
> > <holger@applied-asynchrony.com> wrote:
> >>
> >> On 2020-06-24 17:44, Vincent Guittot wrote:
> >>> Some performance regression on reaim benchmark have been raised with
> >>>     commit 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")
> >>>
> >>> The problem comes from the init value of runnable_avg which is initialized
> >>> with max value. This can be a problem if the newly forked task is finally
> >>> a short task because the group of CPUs is wrongly set to overloaded and
> >>> tasks are pulled less agressively.
> >>>
> >>> Set initial value of runnable_avg equals to util_avg to reflect that there
> >>> is no waiting time so far.
> >>>
> >>> Fixes: 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")
> >>> Reported-by: kernel test robot <rong.a.chen@intel.com>
> >>> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> >>> ---
> >>>    kernel/sched/fair.c | 2 +-
> >>>    1 file changed, 1 insertion(+), 1 deletion(-)
> >>>
> >>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> >>> index 0424a0af5f87..45e467bf42fc 100644
> >>> --- a/kernel/sched/fair.c
> >>> +++ b/kernel/sched/fair.c
> >>> @@ -806,7 +806,7 @@ void post_init_entity_util_avg(struct task_struct *p)
> >>>                }
> >>>        }
> >>>
> >>> -     sa->runnable_avg = cpu_scale;
> >>> +     sa->runnable_avg = sa->util_avg;
> >>>
> >>>        if (p->sched_class != &fair_sched_class) {
> >>>                /*
> >>>
> >>
> >> Something is wrong here. I woke up my machine from suspend-to-RAM this morning
> >> and saw that a completely idle machine had a loadavg of ~7. According to my
> >
> > Just to make sure: are you speaking about the loadavg that is output by
> > /proc/loadavg, or load_avg, which is the PELT load?
>
> /proc/loadavg
>
> >> monitoring system this happened to be the loadavg right before I suspended.
> >> I've reverted this, rebooted, created a loadavg >0, suspended and after wake up
> >> loadavg again correctly ranges between 0 and whatever, as expected.
> >
> > I'm not sure I catch why ~7 is bad compared to "correctly ranges
> > between 0 and whatever". Isn't ~7 part of the whatever?
>
> After wakeup the _baseline_ for loadavg seemed to be the last value before suspend,
> not 0. The 7 then was the base loadavg for a _mostly idle machine_ (just reading
> mail etc.), i.e. it never went below said baseline again, no matter the
> _actual_ load.
>
> Here's an image: https://imgur.com/a/kd2stqO
>
> Before 02:00 last night the load was ~7 (compiled something), then all processes
> were terminated and the machine was suspended. After wakeup the machine was mostly
> idle (9am..11am), yet measured loadavg continued with the same value as before.
> I didn't notice this right away since my CPU meter on the desktop didn't show any
> *actual* activity (because there was none). The spike at ~11am is the revert/reboot.

Have you reverted only this patch?

TBH, there is no link between these two metrics, and I don't see how the
init value of runnable_avg can impact loadavg. As explained, loadavg
takes a snapshot of nr_running every 5 seconds, whereas the impact
of changing this init value will have disappeared in far less than
300ms most of the time.

Let me try to reproduce this on my system


> After that loadavg became normal again, i.e. representative of the actual load,
> even after suspend/resume cycles.
> I suspend/resume every night and the only thing that changed recently was this
> patch, so.. :)
>
> -h


* Re: [PATCH] sched/cfs: change initial value of runnable_avg
  2020-06-25 10:42     ` Holger Hoffstätte
  2020-06-25 12:08       ` Vincent Guittot
@ 2020-06-25 22:03       ` Holger Hoffstätte
  1 sibling, 0 replies; 9+ messages in thread
From: Holger Hoffstätte @ 2020-06-25 22:03 UTC (permalink / raw)
  To: Vincent Guittot, LKML

On 2020-06-25 12:42, Holger Hoffstätte wrote:
<loadavg weirdness>

Never mind, it turned out to be something else after all.
I'm not entirely sure *what* exactly yet, but my loadavg is high again
for no good reason.
Sorry for the false alarm.

-h


end of thread, other threads:[~2020-06-25 22:04 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-06-24 15:44 [PATCH] sched/cfs: change initial value of runnable_avg Vincent Guittot
2020-06-24 16:32 ` Valentin Schneider
2020-06-24 16:40   ` Vincent Guittot
2020-06-25  9:24 ` Holger Hoffstätte
2020-06-25  9:56   ` Vincent Guittot
2020-06-25 10:42     ` Holger Hoffstätte
2020-06-25 12:08       ` Vincent Guittot
2020-06-25 22:03       ` Holger Hoffstätte
2020-06-25 11:53 ` [tip: sched/urgent] " tip-bot2 for Vincent Guittot
