Date: Thu, 15 Sep 2016 17:14:15 +0200
From: Peter Zijlstra
To: Dietmar Eggemann
Cc: Vincent Guittot, mingo@kernel.org, linux-kernel@vger.kernel.org,
 yuyang.du@intel.com, Morten.Rasmussen@arm.com,
 linaro-kernel@lists.linaro.org, pjt@google.com, bsegall@google.com
Subject: Re: [PATCH 4/7 v3] sched: propagate load during synchronous attach/detach
Message-ID: <20160915151415.GF5012@twins.programming.kicks-ass.net>
References: <1473666472-13749-1-git-send-email-vincent.guittot@linaro.org>
 <1473666472-13749-5-git-send-email-vincent.guittot@linaro.org>
 <896df1f8-c5ee-ae4c-46f0-4f4e76ad19b1@arm.com>
In-Reply-To: <896df1f8-c5ee-ae4c-46f0-4f4e76ad19b1@arm.com>

On Thu, Sep 15, 2016 at 02:11:49PM +0100, Dietmar Eggemann wrote:
> On 12/09/16 08:47, Vincent Guittot wrote:

> > +/* Take into account change of load of a child task group */
> > +static inline void
> > +update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se)
> > +{
> > +        struct cfs_rq *gcfs_rq = group_cfs_rq(se);
> > +        long delta, load = gcfs_rq->avg.load_avg;
> > +
> > +        /* If the load of group cfs_rq is null, the load of the
> > +         * sched_entity will also be null so we can skip the formula
> > +         */
> > +        if (load) {
> > +                long tg_load;
> > +
> > +                /* Get tg's load and ensure tg_load > 0 */
> > +                tg_load = atomic_long_read(&gcfs_rq->tg->load_avg) + 1;
> > +
> > +                /* Ensure tg_load >= load and updated with current load */
> > +                tg_load -= gcfs_rq->tg_load_avg_contrib;
> > +                tg_load += load;
> > +
> > +                /* scale gcfs_rq's load into tg's shares */
> > +                load *= scale_load_down(gcfs_rq->tg->shares);
> > +                load /= tg_load;
> > +
> > +                /*
> > +                 * we need to compute a correction term in the case that the
> > +                 * task group is consuming <1 cpu so that we would contribute
> > +                 * the same load as a task of equal weight.
> 
> Wasn't 'consuming <1' related to 'NICE_0_LOAD' and not
> scale_load_down(gcfs_rq->tg->shares) before the rewrite of PELT (v4.2,
> __update_group_entity_contrib())?

So the approximation was:

	min(1, runnable_avg) * shares;

And it just so happened that we tracked runnable_avg in 10-bit fixed
point, which then happened to be NICE_0_LOAD.

But here we have load_avg, which already includes a '* shares' factor.
So that then becomes min(shares, load_avg).

We did, however, lose a lot on why and how min(1, runnable_avg) is a
sensible thing to do...

> > +                 */
> > +                if (tg_load < scale_load_down(gcfs_rq->tg->shares)) {
> > +                        load *= tg_load;
> > +                        load /= scale_load_down(gcfs_rq->tg->shares);
> > +                }
> > +        }

[...]
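To make the arithmetic concrete, here is a minimal standalone sketch of
the scaling and correction term discussed above (plain C outside the
kernel, not the patch itself; 'load' stands in for gcfs_rq->avg.load_avg,
'tg_load' for the updated tg load which is assumed to already carry the
+1 and the tg_load_avg_contrib adjustment, and 'shares' for
scale_load_down(gcfs_rq->tg->shares)):

#include <stdio.h>

/* Sketch only: scale a group cfs_rq's load into its se's load. */
static long scale_group_load(long load, long tg_load, long shares)
{
        if (!load)
                return 0;

        /* Scale gcfs_rq's load into the tg's shares. */
        load = load * shares / tg_load;

        /*
         * Correction term: when the whole tg consumes less than one
         * CPU (tg_load < shares), this effectively clamps the result
         * to min(shares, load_avg), so the group contributes no more
         * load than a single task of equal weight would.
         */
        if (tg_load < shares)
                load = load * tg_load / shares;

        return load;
}

int main(void)
{
        /* Hypothetical values: the group is runnable ~25% of one CPU. */
        printf("%ld\n", scale_group_load(256, 256 + 1, 1024));
        /* Prints 255, i.e. ~ the raw load_avg (256) rather than the
         * full 1024 shares; without the correction it would be 1020. */
        return 0;
}

Note how the two branches multiply out: when tg_load < shares, the
shares factor cancels and the se is left carrying roughly the group's
raw load_avg, which is the min(shares, load_avg) behaviour described
above (up to integer truncation).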