From: Juri Lelli
Date: Thu, 26 Mar 2015 10:21:24 +0000
To: Peter Zijlstra
CC: Morten Rasmussen, mingo@redhat.com, vincent.guittot@linaro.org,
 Dietmar Eggemann, yuyang.du@intel.com, preeti@linux.vnet.ibm.com,
 mturquette@linaro.org, nico@linaro.org, rjw@rjwysocki.net,
 linux-kernel@vger.kernel.org
Subject: Re: [RFCv3 PATCH 33/48] sched: Energy-aware wake-up task placement
Message-ID: <5513DDA4.10802@arm.com>
In-Reply-To: <20150325181413.GT21418@twins.programming.kicks-ass.net>
References: <1423074685-6336-1-git-send-email-morten.rasmussen@arm.com>
 <1423074685-6336-34-git-send-email-morten.rasmussen@arm.com>
 <20150324163503.GZ23123@twins.programming.kicks-ass.net>
 <5512F7F2.2010705@arm.com>
 <20150325181413.GT21418@twins.programming.kicks-ass.net>

On 25/03/15 18:14, Peter Zijlstra wrote:
> On Wed, Mar 25, 2015 at 06:01:22PM +0000, Juri Lelli wrote:
>
>> Yes and no, IMHO. It makes perfect sense to trigger cpufreq on the
>> target_cpu's freq domain, as we know that we are going to add p's
>> utilization there.
>
> Fair point; I mainly wanted to start this discussion, so that seems to
> have been a success :-)
>
>> Anyway, I was thinking that we could just rely on triggering points
>> in {en,de}queue_task_fair and task_tick_fair. We end up calling one
>> of them every time we wake up a task, perform a load-balancing
>> decision or just while running the task itself (we have to react to
>> tasks' phase changes). This way we should be able to reduce the
>> number of triggering points and be more general at the same time.
>
> The one worry I have with that is that it might need to re-compute
> which P state to request, where in the above (now trimmed quoted) code
> we already figured out which P state we needed to be in; any hook in
> enqueue would have forgotten that.
>

Right, and we already have some of this re-compute overhead today. The
reason why I thought we could still give this approach a try comes down
to a few points:

 - we can't be fully synchronous yet (cpufreq drivers might sleep), and
   if we rely on some asynchronous entity to actually do the freq
   change we might already defeat the purpose of passing any sort of
   guidance to it (as things can have changed a lot by the time it has
   to take a decision); of course, once we get there things will
   change :)

 - how do we cope with codepaths that don't rely on usage for taking
   decisions? I guess we'll have to modify those to be able to drive
   cpufreq, or we could just trade some re-compute burden for this need

 - what about other sched classes? I know that this is very premature,
   but I can't help but think that we'll need to do some sort of
   aggregation of requests, and if we put triggers in very specialized
   points we might lose some of the separation between sched classes
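To make this a bit more concrete, here's a very rough sketch of what I
have in mind. All the names below (update_cpu_capacity_request(),
cpufreq_sched_set_cap(), the per-class getters) are placeholders made
up for illustration, not existing interfaces:

	/*
	 * Single trigger point, shared by all callers. Aggregates the
	 * capacity requests of every sched class on this CPU and kicks
	 * the cpufreq machinery once, with the combined value.
	 */
	static void update_cpu_capacity_request(int cpu)
	{
		unsigned long req_cap;

		/* One request per class; each class maintains its own signal. */
		req_cap  = get_cfs_usage(cpu);		/* placeholder */
		req_cap += get_rt_usage(cpu);		/* placeholder */
		req_cap += get_dl_bw(cpu);		/* placeholder */

		/* Some headroom, so we don't always run at the limit. */
		req_cap += req_cap >> 2;

		/*
		 * Kick the (possibly sleeping) cpufreq side for the freq
		 * domain cpu belongs to; it picks a P state asynchronously.
		 */
		cpufreq_sched_set_cap(cpu, req_cap);	/* placeholder */
	}

and then the triggering points are just:

	enqueue_task_fair():  update_cpu_capacity_request(cpu_of(rq));
	dequeue_task_fair():  update_cpu_capacity_request(cpu_of(rq));
	task_tick_fair():     update_cpu_capacity_request(cpu_of(rq));

This way the aggregation lives in one place, each class only has to
keep its own request up to date, and we shouldn't lose the separation
between classes.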
Anyway, I'd say we try to look at what we have and then move forward
from there :). Thanks!

- Juri

>>> So does it make sense to at least put in the right hooks now? I
>>> realize we'll likely take cpufreq out back and feed it to the bears,
>>> but something managing P states will be there, whatever we'll call
>>> the new-fangled thing, and this would be the place to hook it still.
>>>
>>
>> We should be able to clean up and post something along this line
>> fairly soon.
>
> Grand!
>