Date: Tue, 7 Jun 2016 03:38:48 +0800
From: Yuyang Du
To: Peter Zijlstra
Cc: mingo@kernel.org, linux-kernel@vger.kernel.org, bsegall@google.com,
	pjt@google.com, morten.rasmussen@arm.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com
Subject: Re: [PATCH v4 2/5] sched/fair: Fix attaching task sched avgs twice
	when switching to fair or changing task group
Message-ID: <20160606193848.GF8105@intel.com>
References: <1465172441-27727-1-git-send-email-yuyang.du@intel.com>
	<1465172441-27727-3-git-send-email-yuyang.du@intel.com>
	<20160606095450.GA30909@twins.programming.kicks-ass.net>
In-Reply-To: <20160606095450.GA30909@twins.programming.kicks-ass.net>

On Mon, Jun 06, 2016 at 11:54:50AM +0200, Peter Zijlstra wrote:
> > The task groups are changed like this:
> >
> > 	if (queued)
> > 		dequeue_task()
> > 	task_move_group()
> > 	if (queued)
> > 		enqueue_task()
> >
> > Unlike the switch to fair class case, if the task is on_rq, it will be
> > enqueued after we move task groups, so the simplest solution is to reset
> > the task's last_update_time when we do task_move_group(), but not to
> > attach sched avgs in task_move_group(), and then let enqueue_task() do
> > the sched avgs attachment.
>
> So this patch completely removes the detach->attach aging you moved
> around in the previous patch -- leading me to wonder what the purpose of
> the previous patch was.

Basically, they address different issues and should not be conflated.

> Also, this Changelog completely fails to mention this fact, nor does it
> explain why this is 'right'.

I should have explained this in the changelog. It is "right" because,
when a task switches to fair, most likely "we could have just aged the
entire load away", as the XXX comment says. That said, we simply have a
gap with no record: whether or not we age the avgs by that time, it is
not obvious to me that one choice is definitely better than the other.
I will add a comment in the code explaining this. What do you think?

> > +/* Virtually synchronize task with its cfs_rq */
>
> I don't feel this comment actually enlightens the function much.

It synchronizes without updating, hence "virtually" :)

> > @@ -8372,9 +8363,6 @@ static void attach_task_cfs_rq(struct task_struct *p)
> >  	se->depth = se->parent ? se->parent->depth + 1 : 0;
> >  #endif
> >
> > -	/* Synchronize task with its cfs_rq */
> > -	attach_entity_load_avg(cfs_rq, se);
> > -
> >  	if (!vruntime_normalized(p))
> >  		se->vruntime += cfs_rq->min_vruntime;
> >  }
>
> You leave attach/detach asymmetric and not a comment in sight explaining
> why.

It is asymmetric because we now uniformly attach in enqueue. I will
explain this in a comment; it also relates to the code comments below
and your remarks on them.
> > @@ -8382,16 +8370,18 @@ static void attach_task_cfs_rq(struct task_struct *p)
> >  static void switched_from_fair(struct rq *rq, struct task_struct *p)
> >  {
> >  	detach_task_cfs_rq(p);
> > +	reset_task_last_update_time(p);
> > +	/*
> > +	 * If we change back to fair class, we will attach the sched
> > +	 * avgs when we are enqueued, which will be done only once. We
> > +	 * won't have the chance to consistently age the avgs before
> > +	 * attaching them, so we have to continue with the last updated
> > +	 * sched avgs when we were detached.
> > +	 */
>
> This comment needs improvement; it confuses.

> > @@ -8444,6 +8434,11 @@ static void task_move_group_fair(struct task_struct *p)
> >  	detach_task_cfs_rq(p);
> >  	set_task_rq(p, task_cpu(p));
> >  	attach_task_cfs_rq(p);
> > +	/*
> > +	 * This assures we will attach the sched avgs when we are enqueued,
>
> "ensures" ? Also, more confusion.
>
> > +	 * which will be done only once.
> > +	 */
> > +	reset_task_last_update_time(p);
> >  }
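
To make the intent concrete, the flow is roughly this (a simplified
sketch, not the literal patch; the enqueue side is heavily abbreviated
from the existing enqueue_entity_load_avg(), which already treats a
zero last_update_time as "newly arrived, not attached yet"):

/* Mark the entity so that the next enqueue attaches it, exactly once. */
static void reset_task_last_update_time(struct task_struct *p)
{
	/* last_update_time == 0 is the existing "needs attach" marker. */
	p->se.avg.last_update_time = 0;
}

/* Enqueue side, existing logic with the irrelevant parts elided: */
static inline void
enqueue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
	int migrated = !se->avg.last_update_time;

	/* ... if !migrated, age se->avg against the cfs_rq clock ... */

	if (migrated)
		attach_entity_load_avg(cfs_rq, se);
}

So both switched_from_fair() and task_move_group_fair() only reset the
timestamp; the one and only attach then happens at the next enqueue.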