* [PATCH] sched/fair: update tg->load_avg and se->load in throttle_cfs_rq()
@ 2022-04-13  4:16 Chengming Zhou

From: Chengming Zhou <zhouchengming@bytedance.com>
To: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
    rostedt, bsegall, mgorman, bristot
Cc: linux-kernel, duanxiongchun, songmuchun, zhengqi.arch, Chengming Zhou

We use update_load_avg(cfs_rq, se, 0) in throttle_cfs_rq(), so
cfs_rq->tg_load_avg_contrib and task_group->load_avg won't be updated
even when the cfs_rq's load_avg has changed.

We also don't call update_cfs_group(se), so se->load won't be
updated either.

Change to use update_load_avg(cfs_rq, se, UPDATE_TG) and add
update_cfs_group(se) in throttle_cfs_rq(), like we do in
dequeue_task_fair().

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 kernel/sched/fair.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d4bd299d67ab..b37dc1db7be7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4936,8 +4936,9 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 		if (!se->on_rq)
 			goto done;
 
-		update_load_avg(qcfs_rq, se, 0);
+		update_load_avg(qcfs_rq, se, UPDATE_TG);
 		se_update_runnable(se);
+		update_cfs_group(se);
 
 		if (cfs_rq_is_idle(group_cfs_rq(se)))
 			idle_task_delta = cfs_rq->h_nr_running;
-- 
2.35.1
* Re: [PATCH] sched/fair: update tg->load_avg and se->load in throttle_cfs_rq()
@ 2022-04-13 17:30 Benjamin Segall

From: Benjamin Segall
To: Chengming Zhou
Cc: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
    rostedt, mgorman, bristot, linux-kernel, duanxiongchun,
    songmuchun, zhengqi.arch

Chengming Zhou <zhouchengming@bytedance.com> writes:

> We use update_load_avg(cfs_rq, se, 0) in throttle_cfs_rq(), so
> cfs_rq->tg_load_avg_contrib and task_group->load_avg won't be updated
> even when the cfs_rq's load_avg has changed.
>
> We also don't call update_cfs_group(se), so se->load won't be
> updated either.
>
> Change to use update_load_avg(cfs_rq, se, UPDATE_TG) and add
> update_cfs_group(se) in throttle_cfs_rq(), like we do in
> dequeue_task_fair().

Hmm, this does look more correct; Vincent, was having this not do
UPDATE_TG deliberate, or an accident that we all missed when checking?

It looks like the unthrottle_cfs_rq side got UPDATE_TG added later in
the two-loops pass, but not the throttle_cfs_rq side.
Also, unthrottle_cfs_rq I'm guessing could still use update_cfs_group(se).
* Re: [External] Re: [PATCH] sched/fair: update tg->load_avg and se->load in throttle_cfs_rq()
@ 2022-04-15  5:42 Chengming Zhou

From: Chengming Zhou <zhouchengming@bytedance.com>
To: Benjamin Segall
Cc: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
    rostedt, mgorman, bristot, linux-kernel, duanxiongchun,
    songmuchun, zhengqi.arch

On 2022/4/14 01:30, Benjamin Segall wrote:
> Hmm, this does look more correct; Vincent, was having this not do
> UPDATE_TG deliberate, or an accident that we all missed when checking?
>
> It looks like the unthrottle_cfs_rq side got UPDATE_TG added later in
> the two-loops pass, but not the throttle_cfs_rq side.

Yes, UPDATE_TG was added in unthrottle_cfs_rq() in commit 39f23ce07b93
("sched/fair: Fix unthrottle_cfs_rq() for leaf_cfs_rq list").

> Also unthrottle_cfs_rq I'm guessing could still use update_cfs_group(se)

It looks like we should also add update_cfs_group(se) in
unthrottle_cfs_rq().

Thanks.
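As a concrete illustration of the update_cfs_group(se) suggestion for the unthrottle side, a follow-up might look roughly like the hunk below. This is an untested sketch, not a patch from this thread: the hunk location and the surrounding unthrottle_cfs_rq() loop body are assumed from the same base tree as the posted patch.

```diff
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ static void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 		update_load_avg(qcfs_rq, se, UPDATE_TG);
 		se_update_runnable(se);
+		update_cfs_group(se);
 
 		if (cfs_rq_is_idle(group_cfs_rq(se)))
```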
* Re: [External] Re: [PATCH] sched/fair: update tg->load_avg and se->load in throttle_cfs_rq()
@ 2022-04-15  7:51 Vincent Guittot

From: Vincent Guittot
To: Chengming Zhou
Cc: Benjamin Segall, mingo, peterz, juri.lelli, dietmar.eggemann,
    rostedt, mgorman, bristot, linux-kernel, duanxiongchun,
    songmuchun, zhengqi.arch

On Fri, 15 Apr 2022 at 07:42, Chengming Zhou
<zhouchengming@bytedance.com> wrote:
> On 2022/4/14 01:30, Benjamin Segall wrote:
> > Hmm, this does look more correct; Vincent, was having this not do
> > UPDATE_TG deliberate, or an accident that we all missed when checking?

The cost of UPDATE_TG/update_tg_load_avg() is not free, and the parent
cfs->load_avg should not change because of the throttling, only the
cfs->weight, so I don't see a real benefit of UPDATE_TG.

Chengming,
have you faced an issue, or is this change based on code review?

> > It looks like the unthrottle_cfs_rq side got UPDATE_TG added later in
> > the two-loops pass, but not the throttle_cfs_rq side.
>
> Yes, UPDATE_TG was added in unthrottle_cfs_rq() in commit 39f23ce07b93
> ("sched/fair: Fix unthrottle_cfs_rq() for leaf_cfs_rq list").
>
> > Also unthrottle_cfs_rq I'm guessing could still use update_cfs_group(se)
>
> It looks like we should also add update_cfs_group(se) in
> unthrottle_cfs_rq().
>
> Thanks.
* Re: [External] Re: [PATCH] sched/fair: update tg->load_avg and se->load in throttle_cfs_rq()
@ 2022-04-18 13:20 Chengming Zhou

From: Chengming Zhou <zhouchengming@bytedance.com>
To: Vincent Guittot
Cc: Benjamin Segall, mingo, peterz, juri.lelli, dietmar.eggemann,
    rostedt, mgorman, bristot, linux-kernel, duanxiongchun,
    songmuchun, zhengqi.arch

On 2022/4/15 15:51, Vincent Guittot wrote:
> The cost of UPDATE_TG/update_tg_load_avg() is not free, and the parent
> cfs->load_avg should not change because of the throttling, only the
> cfs->weight, so I don't see a real benefit of UPDATE_TG.

Hi Vincent,

If the current task was dequeued before throttle_cfs_rq() in
pick_next_task_fair(), the parent cfs_rq has to wait for
update_tg_load_avg() until unthrottle_cfs_rq() calls enqueue_entity(),
which delays the update of the parent cfs_rq->load_avg and of the
load.weight of that group se, so fairness between task_groups may be
delayed.

update_tg_load_avg() won't touch tg->load_avg if
(delta <= cfs_rq->tg_load_avg_contrib / 64), so the cost may already
be avoided when the load_avg is really unchanged?
> Chengming,
> have you faced an issue, or is this change based on code review?

Yes, this change is based on code review and git log history.

Thanks.