From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932952AbbLQUDf (ORCPT );
	Thu, 17 Dec 2015 15:03:35 -0500
Received: from mga11.intel.com ([192.55.52.93]:8026 "EHLO mga11.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1754946AbbLQTrW (ORCPT );
	Thu, 17 Dec 2015 14:47:22 -0500
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.20,442,1444719600"; d="scan'208";a="873743686"
From: "Fenghua Yu"
To: "H. Peter Anvin", "Ingo Molnar", "Thomas Gleixner", "Tony Luck",
	"Ravi V Shankar", "Peter Zijlstra", "Tejun Heo", "Marcelo Tosatti"
Cc: "linux-kernel", "x86", Fenghua Yu, Vikas Shivappa
Subject: [PATCH V16 02/11] x86/intel_rapl: Modify hot cpu notification handling
Date: Thu, 17 Dec 2015 14:46:07 -0800
Message-Id: <1450392376-6397-3-git-send-email-fenghua.yu@intel.com>
X-Mailer: git-send-email 1.8.0.1
In-Reply-To: <1450392376-6397-1-git-send-email-fenghua.yu@intel.com>
References: <1450392376-6397-1-git-send-email-fenghua.yu@intel.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

From: Fenghua Yu

From: Vikas Shivappa

- In rapl_cpu_init, use the existing package<->core map instead of
  looping through all cpus in rapl_cpu_mask.

- In rapl_cpu_exit, use the same mapping instead of looping over all
  online cpus. On large systems with many cpus, the time taken by the
  loop can be expensive and also increases linearly with the cpu count.

Signed-off-by: Vikas Shivappa
Signed-off-by: Fenghua Yu
---
 arch/x86/kernel/cpu/perf_event_intel_rapl.c | 35 ++++++++++++++---------------
 1 file changed, 17 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_rapl.c b/arch/x86/kernel/cpu/perf_event_intel_rapl.c
index ed446bd..0e0fe70 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_rapl.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_rapl.c
@@ -130,6 +130,12 @@ static struct pmu rapl_pmu_class;
 static cpumask_t rapl_cpu_mask;
 static int rapl_cntr_mask;
 
+/*
+ * Temporary cpumask used during hot cpu notification handling. The usage
+ * is serialized by hot cpu locks.
+ */
+static cpumask_t tmp_cpumask;
+
 static DEFINE_PER_CPU(struct rapl_pmu *, rapl_pmu);
 static DEFINE_PER_CPU(struct rapl_pmu *, rapl_pmu_to_free);
 
@@ -533,18 +539,16 @@ static struct pmu rapl_pmu_class = {
 static void rapl_cpu_exit(int cpu)
 {
 	struct rapl_pmu *pmu = per_cpu(rapl_pmu, cpu);
-	int i, phys_id = topology_physical_package_id(cpu);
 	int target = -1;
+	int i;
 
 	/* find a new cpu on same package */
-	for_each_online_cpu(i) {
-		if (i == cpu)
-			continue;
-		if (phys_id == topology_physical_package_id(i)) {
-			target = i;
-			break;
-		}
-	}
+	cpumask_and(&tmp_cpumask, topology_core_cpumask(cpu), cpu_online_mask);
+	cpumask_clear_cpu(cpu, &tmp_cpumask);
+	i = cpumask_any(&tmp_cpumask);
+	if (i < nr_cpu_ids)
+		target = i;
+
 	/*
 	 * clear cpu from cpumask
 	 * if was set in cpumask and still some cpu on package,
@@ -566,15 +570,10 @@ static void rapl_cpu_exit(int cpu)
 
 static void rapl_cpu_init(int cpu)
 {
-	int i, phys_id = topology_physical_package_id(cpu);
-
-	/* check if phys_id is already covered */
-	for_each_cpu(i, &rapl_cpu_mask) {
-		if (phys_id == topology_physical_package_id(i))
-			return;
-	}
-	/* was not found, so add it */
-	cpumask_set_cpu(cpu, &rapl_cpu_mask);
+	/* check if cpu's package is already covered. If not, add it. */
+	cpumask_and(&tmp_cpumask, &rapl_cpu_mask, topology_core_cpumask(cpu));
+	if (cpumask_empty(&tmp_cpumask))
+		cpumask_set_cpu(cpu, &rapl_cpu_mask);
 }
 
 static __init void rapl_hsw_server_quirk(void)
-- 
2.5.0
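
For readers who want to see the mask arithmetic in isolation, below is a
minimal userspace sketch of the logic the patch introduces; it is not kernel
code. Plain uint64_t bit masks and the helper names (mask_any,
find_migration_target, cover_package) are inventions of this sketch only,
standing in for cpumask_t, cpumask_any(), topology_core_cpumask(),
cpu_online_mask and rapl_cpu_mask.

/*
 * Userspace model of the cpumask logic in the patch above; not kernel code.
 * All names and the 64-cpu limit are assumptions of this sketch.
 */
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 64

/* Return the lowest set bit, or NR_CPUS if the mask is empty
 * (rough stand-in for cpumask_any()). */
static int mask_any(uint64_t mask)
{
	for (int i = 0; i < NR_CPUS; i++)
		if (mask & (1ULL << i))
			return i;
	return NR_CPUS;
}

/* Mirrors the target selection in the new rapl_cpu_exit(): intersect the
 * package siblings with the online mask, drop the departing cpu, pick any
 * remaining cpu. */
static int find_migration_target(int cpu, uint64_t pkg_mask, uint64_t online_mask)
{
	uint64_t tmp = pkg_mask & online_mask;	/* cpumask_and() */
	int i;

	tmp &= ~(1ULL << cpu);			/* cpumask_clear_cpu() */
	i = mask_any(tmp);			/* cpumask_any() */
	return i < NR_CPUS ? i : -1;
}

/* Mirrors the new rapl_cpu_init(): add cpu to the monitoring mask only if
 * no cpu of its package is in it yet. */
static void cover_package(int cpu, uint64_t pkg_mask, uint64_t *rapl_mask)
{
	if (!(*rapl_mask & pkg_mask))		/* cpumask_empty() of the AND */
		*rapl_mask |= 1ULL << cpu;	/* cpumask_set_cpu() */
}

int main(void)
{
	uint64_t pkg0 = 0xfULL;		/* cpus 0-3 share a package */
	uint64_t online = 0xeULL;	/* cpu 0 is going offline */
	uint64_t rapl_mask = 1ULL;	/* cpu 0 currently monitors pkg0 */

	printf("migration target for cpu 0: %d\n",
	       find_migration_target(0, pkg0, online));

	cover_package(2, pkg0, &rapl_mask);	/* pkg0 already covered: no-op */
	printf("rapl_mask: %#llx\n", (unsigned long long)rapl_mask);
	return 0;
}

The point is the same as in the patch: the explicit loops over every online
cpu are replaced by a few cpumask operations on the precomputed
package<->core map, so hot cpu notification handling no longer walks the
whole cpu list.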