From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755778AbbFLSU0 (ORCPT );
	Fri, 12 Jun 2015 14:20:26 -0400
Received: from mga02.intel.com ([134.134.136.20]:47230 "EHLO mga02.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1755717AbbFLSUN (ORCPT );
	Fri, 12 Jun 2015 14:20:13 -0400
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.13,602,1427785200"; d="scan'208";a="742306771"
From: Vikas Shivappa 
To: linux-kernel@vger.kernel.org
Cc: vikas.shivappa@intel.com, x86@kernel.org, hpa@zytor.com,
	tglx@linutronix.de, mingo@kernel.org, peterz@infradead.org,
	matt.fleming@intel.com, will.auld@intel.com,
	linux-rdt@eclists.intel.com, vikas.shivappa@linux.intel.com
Subject: [PATCH 02/10] x86/intel_cqm: Modify hot cpu notification handling
Date: Fri, 12 Jun 2015 11:17:09 -0700
Message-Id: <1434133037-25189-3-git-send-email-vikas.shivappa@linux.intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1434133037-25189-1-git-send-email-vikas.shivappa@linux.intel.com>
References: <1434133037-25189-1-git-send-email-vikas.shivappa@linux.intel.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

This patch modifies hot cpu notification handling in Intel cache
monitoring:

- To add a new cpu to the cqm_cpumask (which has one cpu per package)
  during cpu start, it uses the existing package<->core map instead of
  looping through all cpus in cqm_cpumask.
- To search for the next online sibling during cpu exit, it uses
  cpumask_any_online_but instead of looping through all online cpus.

On large systems with a large number of cpus, the time taken by these
loops may be expensive, and it also increases linearly with the cpu
count.
Signed-off-by: Vikas Shivappa 
---
 arch/x86/kernel/cpu/perf_event_intel_cqm.c | 27 ++++++++++-----------------
 1 file changed, 10 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_cqm.c b/arch/x86/kernel/cpu/perf_event_intel_cqm.c
index 1880761..b224142 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_cqm.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_cqm.c
@@ -1236,15 +1236,15 @@ static struct pmu intel_cqm_pmu = {
 
 static inline void cqm_pick_event_reader(int cpu)
 {
-	int phys_id = topology_physical_package_id(cpu);
-	int i;
+	struct cpumask tmp;
 
-	for_each_cpu(i, &cqm_cpumask) {
-		if (phys_id == topology_physical_package_id(i))
-			return;	/* already got reader for this socket */
-	}
+	cpumask_and(&tmp, &cqm_cpumask, topology_core_cpumask(cpu));
 
-	cpumask_set_cpu(cpu, &cqm_cpumask);
+	/*
+	 * Pick a reader if there isn't one already.
+	 */
+	if (cpumask_empty(&tmp))
+		cpumask_set_cpu(cpu, &cqm_cpumask);
 }
 
 static void intel_cqm_cpu_prepare(unsigned int cpu)
@@ -1262,7 +1262,6 @@ static void intel_cqm_cpu_prepare(unsigned int cpu)
 
 static void intel_cqm_cpu_exit(unsigned int cpu)
 {
-	int phys_id = topology_physical_package_id(cpu);
 	int i;
 
 	/*
@@ -1271,15 +1270,9 @@ static void intel_cqm_cpu_exit(unsigned int cpu)
 	if (!cpumask_test_and_clear_cpu(cpu, &cqm_cpumask))
 		return;
 
-	for_each_online_cpu(i) {
-		if (i == cpu)
-			continue;
-
-		if (phys_id == topology_physical_package_id(i)) {
-			cpumask_set_cpu(i, &cqm_cpumask);
-			break;
-		}
-	}
+	i = cpumask_any_online_but(topology_core_cpumask(cpu), cpu);
+	if (i < nr_cpu_ids)
+		cpumask_set_cpu(i, &cqm_cpumask);
 }
 
 static int intel_cqm_cpu_notifier(struct notifier_block *nb,
-- 
1.9.1