Date: Mon, 1 May 2017 17:56:04 -0400
From: Tejun Heo
To: Peter Zijlstra
Cc: Vincent Guittot, Ingo Molnar, linux-kernel, Linus Torvalds,
 Mike Galbraith, Paul Turner, Chris Mason, kernel-team@fb.com
Subject: Re: [PATCH 1/2] sched/fair: Fix how load gets propagated from cfs_rq to its sched_entity
Message-ID: <20170501215604.GB19079@htj.duckdns.org>
References: <20170424201344.GA14169@wtj.duckdns.org>
 <20170424201415.GB14169@wtj.duckdns.org>
 <20170425181219.GA15593@wtj.duckdns.org>
 <20170426165123.GA17921@linaro.org>
 <20170501141733.shphf35psasefraj@hirez.programming.kicks-ass.net>
In-Reply-To: <20170501141733.shphf35psasefraj@hirez.programming.kicks-ass.net>

Hello, Peter.

On Mon, May 01, 2017 at 04:17:33PM +0200, Peter Zijlstra wrote:
> So this here does:
>
>   ( tg->load_avg = \Sum cfs_rq->load_avg )
>
>   load = cfs_rq->load.weight
>
>   tg_weight = tg->load_avg - cfs_rq->contrib + load
>
>            tg->shares * load
>   shares = -----------------
>                tg_weight
>
>                         cfs_rq->load_avg
>   avg_shares = shares * ----------------
>                               load
>
>                tg->shares * cfs_rq->load_avg
>              = -----------------------------
>                         tg_weight
>
>   ( se->load.weight = shares )
>
>   se->load_avg = min(shares, avg_shares);
>
> So where shares (and se->load.weight) are an upper bound (due to using
> cfs_rq->load.weight, see calc_cfs_shares), avg_shares is supposedly a
> more accurate representation based on our PELT averages.
>
> This looks OK; and I agree with Vincent that we should use
> cfs_rq->avg.load_avg, not cfs_rq->runnable_load_avg, since tg->load_avg
> is a sum of the former, not the latter.

With this, we end up using a different metric for picking the busiest
queue depending on whether there are nested cfs_rq's or not.  The
root's runnable_load_avg ends up including the blocked load avgs
queued behind nested cfs_rq's, because we lose per-thread resolution
across nesting.
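To make that concrete, here's a toy userspace model of the two cases
(names and numbers are made up for illustration; this is not the
actual kernel code):

  #include <stdio.h>

  int main(void)
  {
          long runnable = 512, blocked = 512;

          /* Flat: both tasks sit directly on the root cfs_rq; only
             the runnable one shows up in runnable_load_avg. */
          long root_flat = runnable;

          /* Nested: both tasks sit on a child cfs_rq.  The group se
             is enqueued on the root as one entity whose load tracks
             the child's load_avg, blocked load included. */
          long group_se_load = runnable + blocked;
          long root_nested = group_se_load;

          printf("flat:   root runnable_load_avg = %ld\n", root_flat);
          printf("nested: root runnable_load_avg = %ld\n", root_nested);
          return 0;
  }

The flat case reports 512 while the nested case reports 1024 for the
same workload; the blocked 512 never drops out of the root's number.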
> Also, arguably calculating the above avg_shares directly (using the
> second equation) might be more precise; but I doubt it makes much of
> a difference.  However, since we do min(), we should at least clamp
> against MIN_SHARES again.
>
> Furthermore, it appears to me we want a different tg_weight value for
> the avg_shares, something like:
>
>   tg_weight = tg->load_avg - cfs_rq->contrib + cfs_rq->avg.load_avg
>
> to better match the numerator's units; otherwise it will have a
> tendency to push avg_shares down further than it needs to be.
>
> (All assuming it actually works of course.. compile tested only)

So, if changing the gcfs_rq's se->avg.load_avg to match the gcfs_rq's
runnable_load_avg is icky, and I can see why that would be, we can
simply introduce a separate channel of propagation so that
runnable_load_avg gets propagated independently from se->avg.load_avg
propagation.  That way, for every cfs_rq, its runnable_load_avg is
the sum of all active load_avgs queued on itself and its descendants,
which is the number we want for load balancing anyway.  I'll try to
spin a patch which does that (rough sketch below).

I still wonder what the gcfs_rq's se->avg.load_avg is good for,
though.  It's nice to keep the value in line, but is it actually used
anywhere?  The parent cfs_rq's values are calculated independently
and, AFAICS, the only time the value is used is to propagate into the
parent's runnable_load_sum, which has to use a different value, as
explained above.
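To sketch the separate channel mentioned above (toy userspace code,
all names made up; the actual patch will obviously look nothing like
this):

  #include <stdio.h>

  struct toy_cfs_rq {
          long runnable_load_avg;    /* active load on this level
                                        plus all descendants */
          struct toy_cfs_rq *parent; /* NULL at the root */
  };

  /* Propagate active load changes straight up the hierarchy,
     independently of the shares-based se->avg.load_avg channel. */
  static void propagate_runnable(struct toy_cfs_rq *cfs_rq, long delta)
  {
          for (; cfs_rq; cfs_rq = cfs_rq->parent)
                  cfs_rq->runnable_load_avg += delta;
  }

  int main(void)
  {
          struct toy_cfs_rq root  = { 0, NULL };
          struct toy_cfs_rq child = { 0, &root };

          propagate_runnable(&child, 512);  /* task on child wakes */
          propagate_runnable(&child, -512); /* it blocks; its load
                                               drops out of every
                                               level, root included */
          printf("root = %ld, child = %ld\n",
                 root.runnable_load_avg, child.runnable_load_avg);
          return 0;
  }

The point is just that each level's active-load sum would be
maintained directly rather than derived from se->avg.load_avg.

Thanks.

-- 
tejun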