From: Viresh Kumar
To: Rafael Wysocki
Cc: linaro-kernel@lists.linaro.org, linux-pm@vger.kernel.org,
	Viresh Kumar, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 6/6] cpufreq: ondemand: update update_sampling_rate() to make it more efficient
Date: Thu, 29 Oct 2015 17:57:25 +0530
Message-Id:
X-Mailer: git-send-email 2.6.2.198.g614a2ac
In-Reply-To:
References:

Currently, update_sampling_rate() iterates over all online CPUs and, for
each of them, cancels/requeues the timers of all policy->cpus. That work
only needs to be done once per policy, not once per online CPU.

Create a copy of the online-CPU mask and clear all CPUs of a policy from
it as soon as that policy has been processed, so the remaining CPUs of
the same policy are not traversed again.

Signed-off-by: Viresh Kumar
---
 drivers/cpufreq/cpufreq_ondemand.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index 9e0293c23285..6eac39124894 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -251,6 +251,7 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 	struct cpufreq_policy *policy;
 	struct cpu_common_dbs_info *shared;
 	unsigned long next_sampling, appointed_at;
+	struct cpumask cpumask;
 	int cpu;
 
 	od_tuners->sampling_rate = new_rate = max(new_rate,
@@ -261,7 +262,9 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 	 */
 	mutex_lock(&od_dbs_cdata.mutex);
 
-	for_each_online_cpu(cpu) {
+	cpumask_copy(&cpumask, cpu_online_mask);
+
+	for_each_cpu(cpu, &cpumask) {
 		dbs_info = &per_cpu(od_cpu_dbs_info, cpu);
 		cdbs = &dbs_info->cdbs;
 		shared = cdbs->shared;
@@ -275,6 +278,9 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 
 		policy = shared->policy;
 
+		/* clear all CPUs of this policy */
+		cpumask_andnot(&cpumask, &cpumask, policy->cpus);
+
 		/*
 		 * Update sampling rate for CPUs whose policy is governed by
 		 * dbs_data. In case of governor_per_policy, only a single
@@ -284,6 +290,10 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 		if (dbs_data != policy->governor_data)
 			continue;
 
+		/*
+		 * Checking this for any CPU should be fine, timers for all of
+		 * them are scheduled together.
+		 */
 		next_sampling = jiffies + usecs_to_jiffies(new_rate);
 		appointed_at = dbs_info->cdbs.timer.expires;
 
-- 
2.6.2.198.g614a2ac
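
Editor's note: for readers outside the kernel tree, the loop change boils
down to a "copy the mask, then clear each policy's CPUs once that policy
is handled" pattern. The snippet below is a minimal, self-contained
userspace C sketch of that pattern only; struct policy, cpu_policy(), the
8-CPU topology and all names in it are made-up stand-ins, not the kernel's
data structures or APIs.

/*
 * Sketch of the copy-and-clear mask pattern: each policy is visited
 * exactly once, no matter how many of its CPUs are present in the mask.
 */
#include <stdio.h>

#define NR_CPUS 8

struct policy {
	const char *name;
	unsigned int cpus;		/* bitmask of CPUs sharing this policy */
};

/* Hypothetical topology: CPUs 0-3 share policy A, CPUs 4-7 share policy B */
static struct policy policy_a = { "policyA", 0x0f };
static struct policy policy_b = { "policyB", 0xf0 };

static struct policy *cpu_policy(int cpu)
{
	return (policy_a.cpus & (1u << cpu)) ? &policy_a : &policy_b;
}

int main(void)
{
	unsigned int mask = 0xff;	/* stand-in for a copy of the online mask */
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		struct policy *pol;

		if (!(mask & (1u << cpu)))
			continue;	/* already handled via its policy */

		pol = cpu_policy(cpu);

		/* clear all CPUs of this policy so the loop skips them later */
		mask &= ~pol->cpus;

		printf("update sampling rate once for %s (cpus 0x%02x)\n",
		       pol->name, pol->cpus);
	}

	return 0;
}

Run standalone, this prints one line per policy rather than one per CPU,
which is the same effect the cpumask_copy()/cpumask_andnot() pair gives
update_sampling_rate() in the patch above.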