* [PATCH v2] sched/fair: fix reordering of enqueue_task_fair
From: Vincent Guittot @ 2020-03-05  7:48 UTC
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, linux-kernel
  Cc: pauld, parth, valentin.schneider, hdanton, zhout, Vincent Guittot

Even when a cgroup is throttled, the group se of a child cgroup can
still be enqueued and its gse->on_rq stays true. When a task is
enqueued on such a child, we still have to update the load_avg and to
increase the h_nr_running of the throttled cfs_rq. Nevertheless, the
1st for_each_sched_entity loop is skipped because gse->on_rq == true,
and the 2nd loop bails out because the cfs_rq is throttled, whereas we
still have to update load_avg with the old h_nr_running and to
increase h_nr_running in this case. Note that the load_avg update will
effectively happen only once, in order to sync up to the throttled
time; the next call to update load_avg will stop early because the
clock stays unchanged.
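
The clock in question is the cfs_rq pelt clock, which is held still
while the cfs_rq is throttled. Roughly, from the v5.6-era
kernel/sched/pelt.h (quoted from memory; field names may differ
slightly between versions):

	static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
	{
		/* while throttled, the pelt clock stays at the throttle point */
		if (unlikely(cfs_rq->throttle_count))
			return cfs_rq->throttled_clock_task -
			       cfs_rq->throttled_clock_task_time;

		return rq_clock_pelt(rq_of(cfs_rq)) -
		       cfs_rq->throttled_clock_task_time;
	}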

Fixes: 6d4d22468dae ("sched/fair: Reorder enqueue/dequeue_task_fair path")
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fcc968669aea..5b232d261842 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5431,16 +5431,16 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	for_each_sched_entity(se) {
 		cfs_rq = cfs_rq_of(se);
 
-		/* end evaluation on encountering a throttled cfs_rq */
-		if (cfs_rq_throttled(cfs_rq))
-			goto enqueue_throttle;
-
 		update_load_avg(cfs_rq, se, UPDATE_TG);
 		se_update_runnable(se);
 		update_cfs_group(se);
 
 		cfs_rq->h_nr_running++;
 		cfs_rq->idle_h_nr_running += idle_h_nr_running;
+
+		/* end evaluation on encountering a throttled cfs_rq */
+		if (cfs_rq_throttled(cfs_rq))
+			goto enqueue_throttle;
 	}
 
 enqueue_throttle:
-- 
2.17.1
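
For reference, here is the 1st loop of enqueue_task_fair() that the
changelog refers to, condensed from the same kernel/sched/fair.c the
patch applies to (abbreviated, not a verbatim quote):

	for_each_sched_entity(se) {
		if (se->on_rq)
			break;	/* gse->on_rq == true under a throttled parent */
		cfs_rq = cfs_rq_of(se);
		enqueue_entity(cfs_rq, se, flags);

		/* end evaluation on encountering a throttled cfs_rq */
		if (cfs_rq_throttled(cfs_rq))
			goto enqueue_throttle;

		cfs_rq->h_nr_running++;
		cfs_rq->idle_h_nr_running += idle_h_nr_running;

		flags = ENQUEUE_WAKEUP;
	}

With gse->on_rq == true under a throttled ancestor, the break fires on
the first iteration, so control falls through to the 2nd loop patched
above, which previously bailed out on the throttle check before doing
any of the accounting.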



* Re: [PATCH v2] sched/fair: fix reordering of enqueue_task_fair
From: bsegall @ 2020-03-05 18:39 UTC
  To: Vincent Guittot
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, linux-kernel, pauld, parth, valentin.schneider, hdanton,
	zhout

Vincent Guittot <vincent.guittot@linaro.org> writes:

> Even when a cgroup is throttled, the group se of a child cgroup can
> still be enqueued and its gse->on_rq stays true. When a task is
> enqueued on such a child, we still have to update the load_avg and to
> increase the h_nr_running of the throttled cfs_rq. Nevertheless, the
> 1st for_each_sched_entity loop is skipped because gse->on_rq == true,
> and the 2nd loop bails out because the cfs_rq is throttled, whereas we
> still have to update load_avg with the old h_nr_running and to
> increase h_nr_running in this case. Note that the load_avg update will
> effectively happen only once, in order to sync up to the throttled
> time; the next call to update load_avg will stop early because the
> clock stays unchanged.
>
> Fixes: 6d4d22468dae ("sched/fair: Reorder enqueue/dequeue_task_fair path")
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>  kernel/sched/fair.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index fcc968669aea..5b232d261842 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5431,16 +5431,16 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>  	for_each_sched_entity(se) {
>  		cfs_rq = cfs_rq_of(se);
>  
> -		/* end evaluation on encountering a throttled cfs_rq */
> -		if (cfs_rq_throttled(cfs_rq))
> -			goto enqueue_throttle;
> -
>  		update_load_avg(cfs_rq, se, UPDATE_TG);
>  		se_update_runnable(se);
>  		update_cfs_group(se);
>  
>  		cfs_rq->h_nr_running++;
>  		cfs_rq->idle_h_nr_running += idle_h_nr_running;
> +
> +		/* end evaluation on encountering a throttled cfs_rq */
> +		if (cfs_rq_throttled(cfs_rq))
> +			goto enqueue_throttle;
>  	}
>  
>  enqueue_throttle:


I think there's an equivalent issue on dequeue as well, though that's
much rarer to trigger (but still possible). I think the same fix works
there too?


* Re: [PATCH v2] sched/fair: fix reordering of enqueue_task_fair
From: Vincent Guittot @ 2020-03-06  7:59 UTC
  To: Ben Segall
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Mel Gorman, linux-kernel, Phil Auld, Parth Shah,
	Valentin Schneider, Hillf Danton, Tao Zhou

On Thu, 5 Mar 2020 at 19:39, <bsegall@google.com> wrote:
>
> Vincent Guittot <vincent.guittot@linaro.org> writes:
>
> > Even when a cgroup is throttled, the group se of a child cgroup can
> > still be enqueued and its gse->on_rq stays true. When a task is
> > enqueued on such a child, we still have to update the load_avg and to
> > increase the h_nr_running of the throttled cfs_rq. Nevertheless, the
> > 1st for_each_sched_entity loop is skipped because gse->on_rq == true,
> > and the 2nd loop bails out because the cfs_rq is throttled, whereas we
> > still have to update load_avg with the old h_nr_running and to
> > increase h_nr_running in this case. Note that the load_avg update will
> > effectively happen only once, in order to sync up to the throttled
> > time; the next call to update load_avg will stop early because the
> > clock stays unchanged.
> >
> > Fixes: 6d4d22468dae ("sched/fair: Reorder enqueue/dequeue_task_fair path")
> > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > ---
> >  kernel/sched/fair.c | 8 ++++----
> >  1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index fcc968669aea..5b232d261842 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -5431,16 +5431,16 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> >       for_each_sched_entity(se) {
> >               cfs_rq = cfs_rq_of(se);
> >
> > -             /* end evaluation on encountering a throttled cfs_rq */
> > -             if (cfs_rq_throttled(cfs_rq))
> > -                     goto enqueue_throttle;
> > -
> >               update_load_avg(cfs_rq, se, UPDATE_TG);
> >               se_update_runnable(se);
> >               update_cfs_group(se);
> >
> >               cfs_rq->h_nr_running++;
> >               cfs_rq->idle_h_nr_running += idle_h_nr_running;
> > +
> > +             /* end evaluation on encountering a throttled cfs_rq */
> > +             if (cfs_rq_throttled(cfs_rq))
> > +                     goto enqueue_throttle;
> >       }
> >
> >  enqueue_throttle:
>
>
> I think there's an equivalent issue on dequeue as well, though that's
> much rarer to trigger (but still possible). I think the same fix works
> there too?

I thought it was not needed because we don't have the "if (se->on_rq)
break" in dequeue, but the early exit in the 1st loop:

  /* Avoid re-evaluating load for this entity: */
  se = parent_entity(se);

creates a similar situation for the parent: the 2nd loop can start on
a throttled cfs_rq and bail out before updating its load_avg and
decreasing its h_nr_running.

I'm going to add it.
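Something like this in the 2nd loop of dequeue_task_fair(), mirroring
the enqueue hunk above (a rough, untested sketch, not the final patch):

	for_each_sched_entity(se) {
		cfs_rq = cfs_rq_of(se);

		update_load_avg(cfs_rq, se, UPDATE_TG);
		se_update_runnable(se);
		update_cfs_group(se);

		cfs_rq->h_nr_running--;
		cfs_rq->idle_h_nr_running -= idle_h_nr_running;

		/* end evaluation on encountering a throttled cfs_rq */
		if (cfs_rq_throttled(cfs_rq))
			goto dequeue_throttle;
	}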
Thanks

