From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932453AbbC0P6S (ORCPT ); Fri, 27 Mar 2015 11:58:18 -0400
Received: from foss.arm.com ([217.140.101.70]:56763 "EHLO foss.arm.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S932437AbbC0P6N (ORCPT ); Fri, 27 Mar 2015 11:58:13 -0400
Date: Fri, 27 Mar 2015 15:58:49 +0000
From: Morten Rasmussen
To: Sai Gurrappadi
Cc: "peterz@infradead.org", "mingo@redhat.com", "vincent.guittot@linaro.org",
	Dietmar Eggemann, "yuyang.du@intel.com", "preeti@linux.vnet.ibm.com",
	"mturquette@linaro.org", "nico@linaro.org", "rjw@rjwysocki.net",
	Juri Lelli, "linux-kernel@vger.kernel.org", Peter Boonstoppel
Subject: Re: [RFCv3 PATCH 30/48] sched: Calculate energy consumption of sched_group
Message-ID: <20150327155849.GP18994@e105550-lin.cambridge.arm.com>
References: <1423074685-6336-1-git-send-email-morten.rasmussen@arm.com>
	<1423074685-6336-31-git-send-email-morten.rasmussen@arm.com>
	<550C69A7.8040405@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <550C69A7.8040405@nvidia.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Mar 20, 2015 at 06:40:39PM +0000, Sai Gurrappadi wrote:
> On 02/04/2015 10:31 AM, Morten Rasmussen wrote:
> > +/*
> > + * sched_group_energy(): Returns absolute energy consumption of cpus belonging
> > + * to the sched_group including shared resources shared only by members of the
> > + * group. Iterates over all cpus in the hierarchy below the sched_group starting
> > + * from the bottom working its way up before going to the next cpu until all
> > + * cpus are covered at all levels. The current implementation is likely to
> > + * gather the same usage statistics multiple times. This can probably be done in
> > + * a faster but more complex way.
> > + */
> > +static unsigned int sched_group_energy(struct sched_group *sg_top)
> > +{
> > +	struct sched_domain *sd;
> > +	int cpu, total_energy = 0;
> > +	struct cpumask visit_cpus;
> > +	struct sched_group *sg;
> > +
> > +	WARN_ON(!sg_top->sge);
> > +
> > +	cpumask_copy(&visit_cpus, sched_group_cpus(sg_top));
> > +
> > +	while (!cpumask_empty(&visit_cpus)) {
> > +		struct sched_group *sg_shared_cap = NULL;
> > +
> > +		cpu = cpumask_first(&visit_cpus);
> > +
> > +		/*
> > +		 * Is the group utilization affected by cpus outside this
> > +		 * sched_group?
> > +		 */
> > +		sd = highest_flag_domain(cpu, SD_SHARE_CAP_STATES);
> > +		if (sd && sd->parent)
> > +			sg_shared_cap = sd->parent->groups;
> > +
> > +		for_each_domain(cpu, sd) {
> > +			sg = sd->groups;
> > +
> > +			/* Has this sched_domain already been visited? */
> > +			if (sd->child && cpumask_first(sched_group_cpus(sg)) != cpu)
> > +				break;
> > +
> > +			do {
> > +				struct sched_group *sg_cap_util;
> > +				unsigned group_util;
> > +				int sg_busy_energy, sg_idle_energy;
> > +				int cap_idx;
> > +
> > +				if (sg_shared_cap && sg_shared_cap->group_weight >= sg->group_weight)
> > +					sg_cap_util = sg_shared_cap;
> > +				else
> > +					sg_cap_util = sg;
> > +
> > +				cap_idx = find_new_capacity(sg_cap_util, sg->sge);
> > +				group_util = group_norm_usage(sg);
> > +				sg_busy_energy = (group_util * sg->sge->cap_states[cap_idx].power)
> > +						>> SCHED_CAPACITY_SHIFT;
> > +				sg_idle_energy = ((SCHED_LOAD_SCALE-group_util) * sg->sge->idle_states[0].power)
> > +						>> SCHED_CAPACITY_SHIFT;
> > +
> > +				total_energy += sg_busy_energy + sg_idle_energy;
>
> Should normalize group_util with the newly found capacity instead of
> capacity_curr.

You're right. In the next patch, when sched_group_energy() can be used
for energy predictions based on usage deltas, group_util should be
normalized to the new capacity. Thanks for spotting this mistake.

Morten