Date: Mon, 20 Jun 2016 11:23:39 +0200
From: Vincent Guittot
To: Peter Zijlstra
Cc: Yuyang Du, Ingo Molnar, linux-kernel, Mike Galbraith, Benjamin Segall,
 Paul Turner, Morten Rasmussen, Dietmar Eggemann, Matt Fleming
Subject: Re: [PATCH 4/4] sched,fair: Fix PELT integrity for new tasks
Message-ID: <20160620092339.GA4526@vingu-laptop>
References: <20160617120136.064100812@infradead.org>
 <20160617120454.150630859@infradead.org>
 <20160617142814.GT30154@twins.programming.kicks-ass.net>
 <20160617160239.GL30927@twins.programming.kicks-ass.net>
 <20160617161831.GM30927@twins.programming.kicks-ass.net>
In-Reply-To: <20160617161831.GM30927@twins.programming.kicks-ass.net>

On Friday 17 Jun 2016 at 18:18:31 (+0200), Peter Zijlstra wrote:
> On Fri, Jun 17, 2016 at 06:02:39PM +0200, Peter Zijlstra wrote:
> > So yes, ho-humm, how to go about doing that bestest. Lemme have a play.
>
> This is what I came up with, not entirely pretty, but I suppose it'll
> have to do.
>
> ---
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -724,6 +724,7 @@ void post_init_entity_util_avg(struct sc
>  	struct cfs_rq *cfs_rq = cfs_rq_of(se);
>  	struct sched_avg *sa = &se->avg;
>  	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
> +	u64 now = cfs_rq_clock_task(cfs_rq);
>
>  	if (cap > 0) {
>  		if (cfs_rq->avg.util_avg != 0) {
> @@ -738,7 +739,20 @@ void post_init_entity_util_avg(struct sc
>  		sa->util_sum = sa->util_avg * LOAD_AVG_MAX;
>  	}
>
> -	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq, false);
> +	if (entity_is_task(se)) {

Why only for a task?

> +		struct task_struct *p = task_of(se);
> +		if (p->sched_class != &fair_sched_class) {
> +			/*
> +			 * For !fair tasks do attach_entity_load_avg()
> +			 * followed by detach_entity_load_avg() as per
> +			 * switched_from_fair().
> +			 */
> +			se->avg.last_update_time = now;
> +			return;
> +		}
> +	}
> +
> +	update_cfs_rq_load_avg(now, cfs_rq, false);
>  	attach_entity_load_avg(cfs_rq, se);

Don't we have to do a complete attach with attach_task_cfs_rq() instead
of just attaching the load_avg, so that the depth is also set? What
about something like below?
---
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -723,6 +723,8 @@ void post_init_entity_util_avg(struct sched_entity *se)
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	struct sched_avg *sa = &se->avg;
 	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
+	struct task_struct *p = task_of(se);
+	u64 now = cfs_rq_clock_task(cfs_rq);
 
 	if (cap > 0) {
 		if (cfs_rq->avg.util_avg != 0) {
@@ -737,8 +739,17 @@ void post_init_entity_util_avg(struct sched_entity *se)
 		sa->util_sum = sa->util_avg * LOAD_AVG_MAX;
 	}
 
-	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq, false);
-	attach_entity_load_avg(cfs_rq, se);
+	if (p->sched_class == &fair_sched_class) {
+		/* fair entity must be attached to cfs_rq */
+		attach_task_cfs_rq(p);
+	} else {
+		/*
+		 * For !fair tasks do attach_entity_load_avg()
+		 * followed by detach_entity_load_avg() as per
+		 * switched_from_fair().
+		 */
+		se->avg.last_update_time = now;
+	}
 }
 
 static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq);
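
For context, the complete attach does more than synchronize the
load_avg with the cfs_rq: it also resets the group-scheduling depth
and de-normalizes the vruntime. A sketch of attach_task_cfs_rq(),
paraphrased from memory of the fair.c of that era (not a verbatim
copy):

static void attach_task_cfs_rq(struct task_struct *p)
{
	struct sched_entity *se = &p->se;
	struct cfs_rq *cfs_rq = cfs_rq_of(se);

#ifdef CONFIG_FAIR_GROUP_SCHED
	/*
	 * The depth may have changed (only the fair class maintains
	 * it), so reset it from the parent entity.
	 */
	se->depth = se->parent ? se->parent->depth + 1 : 0;
#endif

	/* Synchronize the task's load_avg with its cfs_rq. */
	attach_entity_load_avg(cfs_rq, se);

	if (!vruntime_normalized(p))
		se->vruntime += cfs_rq->min_vruntime;
}

So attaching only the load_avg would leave se->depth unset for a new
task placed in a task group, which is the concern about the depth
above.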