From: Fenghua Yu <fenghua.yu@intel.com>
To: "H Peter Anvin", "Ingo Molnar", "Thomas Gleixner", "Peter Zijlstra"
Cc: "linux-kernel", "x86", "Fenghua Yu", "Vikas Shivappa"
Subject: [PATCH V15 08/11] x86/intel_rdt: Hot cpu support for Cache Allocation
Date: Thu, 1 Oct 2015 23:09:42 -0700
Message-Id: <1443766185-61618-9-git-send-email-fenghua.yu@intel.com>
X-Mailer: git-send-email 1.8.1.2
In-Reply-To: <1443766185-61618-1-git-send-email-fenghua.yu@intel.com>
References: <1443766185-61618-1-git-send-email-fenghua.yu@intel.com>

This patch adds CPU hotplug support for Intel Cache Allocation.
Support includes updating the cache bitmask MSRs IA32_L3_QOS_n when a
new CPU package comes online or goes offline. The IA32_L3_QOS_n MSRs
are one per Class of Service (CLOS) on each CPU package; when a new
package comes online, its MSRs are synchronized with the MSR values of
the packages already online. Also, the software cache for the
IA32_PQR_ASSOC MSR is reset during hot CPU notifications.

Signed-off-by: Vikas Shivappa
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
---
 arch/x86/kernel/cpu/intel_rdt.c | 76 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 76 insertions(+)

diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index 8379df8..31f8588 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -24,6 +24,7 @@
 
 #include 
 #include 
+#include <linux/cpu.h>
 #include 
 #include 
 #include 
@@ -234,6 +235,75 @@ static inline bool rdt_cpumask_update(int cpu)
 	return false;
 }
 
+/*
+ * cbm_update_msrs() - Updates all the existing IA32_L3_MASK_n MSRs
+ * which are one per CLOSid on the current package.
+ */
+static void cbm_update_msrs(void *dummy)
+{
+	int maxid = boot_cpu_data.x86_cache_max_closid;
+	struct rdt_remote_data info;
+	unsigned int i;
+
+	for (i = 0; i < maxid; i++) {
+		if (cctable[i].clos_refcnt) {
+			info.msr = CBM_FROM_INDEX(i);
+			info.val = cctable[i].l3_cbm;
+			msr_cpu_update(&info);
+		}
+	}
+}
+
+static inline void intel_rdt_cpu_start(int cpu)
+{
+	struct intel_pqr_state *state = &per_cpu(pqr_state, cpu);
+
+	state->closid = 0;
+	mutex_lock(&rdt_group_mutex);
+	if (rdt_cpumask_update(cpu))
+		smp_call_function_single(cpu, cbm_update_msrs, NULL, 1);
+	mutex_unlock(&rdt_group_mutex);
+}
+
+static void intel_rdt_cpu_exit(unsigned int cpu)
+{
+	int i;
+
+	mutex_lock(&rdt_group_mutex);
+	if (!cpumask_test_and_clear_cpu(cpu, &rdt_cpumask)) {
+		mutex_unlock(&rdt_group_mutex);
+		return;
+	}
+
+	cpumask_and(&tmp_cpumask, topology_core_cpumask(cpu), cpu_online_mask);
+	cpumask_clear_cpu(cpu, &tmp_cpumask);
+	i = cpumask_any(&tmp_cpumask);
+
+	if (i < nr_cpu_ids)
+		cpumask_set_cpu(i, &rdt_cpumask);
+	mutex_unlock(&rdt_group_mutex);
+}
+
+static int intel_rdt_cpu_notifier(struct notifier_block *nb,
+				  unsigned long action, void *hcpu)
+{
+	unsigned int cpu = (unsigned long)hcpu;
+
+	switch (action) {
+	case CPU_DOWN_FAILED:
+	case CPU_ONLINE:
+		intel_rdt_cpu_start(cpu);
+		break;
+	case CPU_DOWN_PREPARE:
+		intel_rdt_cpu_exit(cpu);
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+
 static int __init intel_rdt_late_init(void)
 {
 	struct cpuinfo_x86 *c = &boot_cpu_data;
@@ -261,9 +331,15 @@ static int __init intel_rdt_late_init(void)
 		goto out_err;
 	}
 
+	cpu_notifier_register_begin();
+
 	for_each_online_cpu(i)
 		rdt_cpumask_update(i);
 
+	__hotcpu_notifier(intel_rdt_cpu_notifier, 0);
+
+	cpu_notifier_register_done();
+
 	static_key_slow_inc(&rdt_enable_key);
 	pr_info("Intel cache allocation enabled\n");
 out_err:
-- 
1.8.1.2
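
For readers who want to experiment with the resync idea outside the
kernel tree, here is a stand-alone user-space C sketch of the same
logic under simplified assumptions: a software table of per-CLOS
bitmasks (sw_cctable, modeled on cctable above) is the source of
truth, and when a package comes online every in-use mask is replayed
into that package's (simulated) IA32_L3_MASK_n MSRs. All names here
(sw_cctable, wrmsr_sim, sync_package_msrs, MAX_CLOSID) are
hypothetical stand-ins for illustration, not kernel API.

/*
 * Toy user-space model of the per-package MSR resync done on CPU
 * hotplug. All names are hypothetical; the real logic is
 * cbm_update_msrs() above, run on the new package via
 * smp_call_function_single().
 */
#include <stdio.h>
#include <stdint.h>

#define MAX_CLOSID 16		/* assumed CLOS count for the demo */

struct clos_entry {
	uint32_t l3_cbm;	/* cached cache bitmask value */
	unsigned int refcnt;	/* how many users this CLOS has */
};

/* Software cache of mask values, modeled on the kernel's cctable. */
static struct clos_entry sw_cctable[MAX_CLOSID] = {
	[0] = { .l3_cbm = 0xfffff, .refcnt = 1 },	/* default CLOS */
	[1] = { .l3_cbm = 0x000ff, .refcnt = 2 },
};

/* Stand-in for wrmsr(CBM_FROM_INDEX(closid), val) on one package. */
static void wrmsr_sim(int package, unsigned int closid, uint32_t val)
{
	printf("pkg%d: IA32_L3_MASK_%u <- 0x%x\n",
	       package, closid, (unsigned int)val);
}

/*
 * On package online: replay every in-use CLOS mask into the new
 * package's MSRs so it matches the packages that are already up.
 */
static void sync_package_msrs(int package)
{
	unsigned int i;

	for (i = 0; i < MAX_CLOSID; i++)
		if (sw_cctable[i].refcnt)
			wrmsr_sim(package, i, sw_cctable[i].l3_cbm);
}

int main(void)
{
	sync_package_msrs(1);	/* pretend package 1 just came online */
	return 0;
}

The design point mirrors cbm_update_msrs(): masks with a zero
refcount are skipped, since no task references them and they will be
written anyway when the CLOS is next allocated.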