Date: Thu, 28 Jan 2016 15:54:36 +0100
From: Borislav Petkov
To: Huang Rui
Cc: Peter Zijlstra, Ingo Molnar, Andy Lutomirski, Thomas Gleixner,
 Robert Richter, Jacob Shin, John Stultz, Frédéric Weisbecker,
 linux-kernel@vger.kernel.org, spg_linux_kernel@amd.com, x86@kernel.org,
 Guenter Roeck, Andreas Herrmann, Suravee Suthikulpanit,
 Aravind Gopalakrishnan, Fengguang Wu, Aaron Lu
Subject: Re: [PATCH v4] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
Message-ID: <20160128145436.GE14274@pd.tnic>
References: <1453963131-2013-1-git-send-email-ray.huang@amd.com>
 <20160128090314.GB14274@pd.tnic>
 <20160128100141.GA28282@hr-amur2>
 <20160128124209.GC14274@pd.tnic>
In-Reply-To: <20160128124209.GC14274@pd.tnic>
User-Agent: Mutt/1.5.24 (2015-08-30)

On Thu, Jan 28, 2016 at 01:42:09PM +0100, Borislav Petkov wrote:
> http://git.kernel.org/cgit/linux/kernel/git/bp/bp.git/log/?h=tip-perf
>
> Please use that branch instead.

I'm pasting the next version here, hoping that the kbuild robot will be
able to parse this email and know *not* to build it directly but use
the branch above instead :-)

Ok, next version. I killed the MAX_CUS and cu_num thing:

> #define MAX_CUS	8
>
> ...
>
>	cores_per_cu = amd_get_cores_per_cu();
>	cu_num = boot_cpu_data.x86_max_cores / cores_per_cu;
>
>	if (WARN_ON_ONCE(cu_num > MAX_CUS))
>		return -EINVAL;

as it is completely arbitrary. What's wrong with a multi-socket machine
which has more than 8 CUs in the whole fabric? Nothing, AFAICT.

---
From: Huang Rui
Date: Thu, 28 Jan 2016 14:38:51 +0800
Subject: [PATCH] perf/x86/amd/power: Add AMD accumulated power reporting
 mechanism
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Introduce an AMD accumulated power reporting mechanism for the Carrizo
(Family 15h, Model 60h) processor that can be used to calculate the
average power consumed by a processor during a measurement interval.
Support for the feature is indicated by CPUID Fn8000_0007_EDX[12]. This
feature will be implemented both in hwmon and perf. The current design
provides one event to report the per-package/processor power
consumption, by accumulating each compute unit's power value.

Here are the gory details of how the computation is done:

---------------------------------------------------------------------
* Tsample: compute unit power accumulator sample period
* Tref: the PTSC counter period (PTSC: performance timestamp counter)
* N: the ratio of the compute unit power accumulator sample period to
  the PTSC period
* Jmax: max compute unit accumulated power, as indicated by
  MSR_C001007b[MaxCpuSwPwrAcc]
* Jx/Jy: compute unit accumulated power, as indicated by
  MSR_C001007a[CpuSwPwrAcc]
* Tx/Ty: the value of the performance timestamp counter, as indicated
  by CU_PTSC MSR_C0010280[PTSC]
* PwrCPUave: CPU average power

i. Determine the ratio of Tsample to Tref by executing CPUID
   Fn8000_0007. N = value of CPUID Fn8000_0007_ECX[CpuPwrSampleTimeRatio[15:0]].

ii. Read the full range of the cumulative energy value from the new
    MSR MaxCpuSwPwrAcc. Jmax = value returned.

iii. At time x, software reads CpuSwPwrAcc and samples the PTSC.
     Jx = value read from CpuSwPwrAcc, Tx = value read from PTSC.

iv. At time y, software reads CpuSwPwrAcc and samples the PTSC.
    Jy = value read from CpuSwPwrAcc, Ty = value read from PTSC.

v. Calculate the average power consumption for a compute unit over the
   time period (y-x). The unit of the result is uWatt:

	if (Jy < Jx)	/* Rollover has occurred. */
		Jdelta = (Jy + Jmax) - Jx
	else
		Jdelta = Jy - Jx
	PwrCPUave = N * Jdelta * 1000 / (Ty - Tx)
----------------------------------------------------------------------
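For illustration, step v can be sketched as a standalone user-space
program. The sample values below are made up; only the formula comes
from the steps above. perf later applies the power-pkg.scale attribute
(1.0e-3) to the accumulated uWatt count, which is why perf stat reports
mWatts:

	/* Standalone sketch of step v, not kernel code. */
	#include <stdio.h>
	#include <stdint.h>

	static uint64_t pwr_cpu_ave_uw(uint64_t jx, uint64_t jy, uint64_t jmax,
				       uint64_t tx, uint64_t ty, uint64_t n)
	{
		uint64_t jdelta;

		/* The accumulator wraps at jmax, hence the rollover check. */
		if (jy < jx)
			jdelta = (jy + jmax) - jx;
		else
			jdelta = jy - jx;

		return n * jdelta * 1000 / (ty - tx);
	}

	int main(void)
	{
		/* Hypothetical sample values, only to exercise the formula. */
		uint64_t jx = 1000, jy = 5000, jmax = 1ULL << 20;
		uint64_t tx = 0, ty = 200000, n = 16;

		printf("PwrCPUave = %llu uW\n",
		       (unsigned long long)pwr_cpu_ave_uw(jx, jy, jmax, tx, ty, n));
		return 0;
	}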
Simple example:

  root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' make -j4
    CHK     include/config/kernel.release
    CHK     include/generated/uapi/linux/version.h
    CHK     include/generated/utsrelease.h
    CHK     include/generated/timeconst.h
    CHK     include/generated/bounds.h
    CHK     include/generated/asm-offsets.h
    CALL    scripts/checksyscalls.sh
    CHK     include/generated/compile.h
    SKIPPED include/generated/compile.h
    Building modules, stage 2.
  Kernel: arch/x86/boot/bzImage is ready  (#40)
    MODPOST 4225 modules

   Performance counter stats for 'system wide':

               183.44 mWatts power/power-pkg/

         341.837270111 seconds time elapsed

  root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' sleep 10

   Performance counter stats for 'system wide':

                 0.18 mWatts power/power-pkg/

          10.012551815 seconds time elapsed

Suggested-by: Peter Zijlstra
Suggested-by: Ingo Molnar
Suggested-by: Borislav Petkov
Signed-off-by: Huang Rui
Cc: Aaron Lu
Cc: Alexander Shishkin
Cc: Andreas Herrmann
Cc: Andy Lutomirski
Cc: Aravind Gopalakrishnan
Cc: Arnaldo Carvalho de Melo
Cc: Fengguang Wu
Cc: Frédéric Weisbecker
Cc: Guenter Roeck
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: Jacob Shin
Cc: John Stultz
Cc: Kan Liang
Cc: Peter Zijlstra
Cc: Robert Richter
Cc: spg_linux_kernel@amd.com
Cc: Suravee Suthikulpanit
Cc: Thomas Gleixner
Cc: x86-ml
Link: http://lkml.kernel.org/r/20150831160622.GA29830@nazgul.tnic
Link: http://lkml.kernel.org/r/1453963131-2013-1-git-send-email-ray.huang@amd.com
Signed-off-by: Borislav Petkov
---
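A minimal user-space reader of this event could look like the sketch
below (illustration only, not part of the patch). It assumes the PMU
has been registered as "power" with event=0x01 as defined by the sysfs
attributes in the patch, that the chosen CPU is one listed in the PMU's
cpumask file, and that it runs with sufficient privileges;
perf_event_open() has no glibc wrapper, hence the raw syscall:

	/* Hypothetical standalone reader for the power/power-pkg/ event. */
	#include <stdio.h>
	#include <stdint.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/perf_event.h>

	static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
				   int cpu, int group_fd, unsigned long flags)
	{
		return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
	}

	int main(void)
	{
		struct perf_event_attr attr;
		uint64_t count;
		int type, fd;
		FILE *f;

		/* The dynamic PMU type is published in sysfs on registration. */
		f = fopen("/sys/bus/event_source/devices/power/type", "r");
		if (!f)
			return 1;
		if (fscanf(f, "%d", &type) != 1) {
			fclose(f);
			return 1;
		}
		fclose(f);

		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);
		attr.type = type;
		attr.config = 0x01;	/* power-pkg */

		/* Counting (non-sampling), system-wide: pid == -1, cpu == 0. */
		fd = perf_event_open(&attr, -1, 0, -1, 0);
		if (fd < 0)
			return 1;

		sleep(10);

		/* The raw count is in uW; power-pkg.scale (1.0e-3) yields mWatts. */
		if (read(fd, &count, sizeof(count)) == sizeof(count))
			printf("%.2f mWatts\n", (double)count * 1e-3);

		close(fd);
		return 0;
	}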
 arch/x86/kernel/cpu/Makefile               |   1 +
 arch/x86/kernel/cpu/perf_event_amd_power.c | 482 +++++++++++++++++++++++++++++
 2 files changed, 483 insertions(+)
 create mode 100644 arch/x86/kernel/cpu/perf_event_amd_power.c

diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index faa7b5204129..ffc96503d610 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -34,6 +34,7 @@ obj-$(CONFIG_PERF_EVENTS)		+= perf_event.o
 
 ifdef CONFIG_PERF_EVENTS
 obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd.o perf_event_amd_uncore.o
+obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd_power.o
 ifdef CONFIG_AMD_IOMMU
 obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd_iommu.o
 endif

diff --git a/arch/x86/kernel/cpu/perf_event_amd_power.c b/arch/x86/kernel/cpu/perf_event_amd_power.c
new file mode 100644
index 000000000000..b0df1594b095
--- /dev/null
+++ b/arch/x86/kernel/cpu/perf_event_amd_power.c
@@ -0,0 +1,482 @@
+/*
+ * Performance events - AMD Processor Power Reporting Mechanism
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Huang Rui
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/perf_event.h>
+#include <asm/cpu_device_id.h>
+#include "perf_event.h"
+
+#define MSR_F15H_CU_PWR_ACCUMULATOR     0xc001007a
+#define MSR_F15H_CU_MAX_PWR_ACCUMULATOR 0xc001007b
+#define MSR_F15H_PTSC                   0xc0010280
+
+/* Event code: LSB 8 bits, passed in attr->config; any other bits are reserved. */
+#define AMD_POWER_EVENT_MASK	0xFFULL
+
+/*
+ * Accumulated power status counters.
+ */
+#define AMD_POWER_PKG_ID		0
+#define AMD_POWER_EVENTSEL_PKG		1
+
+/*
+ * The ratio of the compute unit power accumulator sample period to the
+ * PTSC period.
+ */
+static unsigned int cpu_pwr_sample_ratio;
+static unsigned int cores_per_cu;
+
+/* Maximum accumulated power of a compute unit. */
+static u64 max_cu_acc_power;
+
+struct power_pmu {
+	raw_spinlock_t		lock;
+	struct pmu		*pmu;
+	local64_t		cpu_sw_pwr_ptsc;
+
+	/*
+	 * These two cpumasks are used to avoid allocations in the
+	 * CPU_STARTING phase, which runs with IRQs disabled; they are
+	 * allocated in power_cpu_prepare() instead.
+	 */
+	cpumask_var_t		mask;
+	cpumask_var_t		tmp_mask;
+};
+
+static struct pmu pmu_class;
+
+/*
+ * Accumulated power represents the sum of each compute unit's (CU) power
+ * consumption. On any core of each CU we read the total accumulated power
+ * from MSR_F15H_CU_PWR_ACCUMULATOR. cpu_mask is the bitmap of all cores
+ * which are picked to measure the power for the CUs they belong to.
+ */
+static cpumask_t cpu_mask;
+
+static DEFINE_PER_CPU(struct power_pmu *, amd_power_pmu);
+
+static u64 event_update(struct perf_event *event, struct power_pmu *pmu)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	u64 prev_raw_count, new_raw_count, prev_ptsc, new_ptsc;
+	u64 delta, tdelta;
+
+again:
+	prev_raw_count = local64_read(&hwc->prev_count);
+	prev_ptsc = local64_read(&pmu->cpu_sw_pwr_ptsc);
+	rdmsrl(event->hw.event_base, new_raw_count);
+	rdmsrl(MSR_F15H_PTSC, new_ptsc);
+
+	if (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
+			    new_raw_count) != prev_raw_count) {
+		cpu_relax();
+		goto again;
+	}
+
+	/*
+	 * Calculate the CU power consumption over a time period; the unit
+	 * of the final value (delta) is micro-Watts. Then add it to the
+	 * event count.
+	 */
+	if (new_raw_count < prev_raw_count) {
+		delta = max_cu_acc_power + new_raw_count;
+		delta -= prev_raw_count;
+	} else
+		delta = new_raw_count - prev_raw_count;
+
+	delta *= cpu_pwr_sample_ratio * 1000;
+	tdelta = new_ptsc - prev_ptsc;
+
+	do_div(delta, tdelta);
+	local64_add(delta, &event->count);
+
+	return new_raw_count;
+}
+
+static void __pmu_event_start(struct power_pmu *pmu, struct perf_event *event)
+{
+	u64 ptsc, counts;
+
+	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
+		return;
+
+	event->hw.state = 0;
+
+	rdmsrl(MSR_F15H_PTSC, ptsc);
+	local64_set(&pmu->cpu_sw_pwr_ptsc, ptsc);
+	rdmsrl(event->hw.event_base, counts);
+	local64_set(&event->hw.prev_count, counts);
+}
+
+static void pmu_event_start(struct perf_event *event, int mode)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+
+	raw_spin_lock(&pmu->lock);
+	__pmu_event_start(pmu, event);
+	raw_spin_unlock(&pmu->lock);
+}
+
+static void pmu_event_stop(struct perf_event *event, int mode)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+	struct hw_perf_event *hwc = &event->hw;
+
+	raw_spin_lock(&pmu->lock);
+
+	/* Mark event as deactivated and stopped. */
+	if (!(hwc->state & PERF_HES_STOPPED))
+		hwc->state |= PERF_HES_STOPPED;
+
+	/* Check if a software counter update is necessary. */
*/ + if ((mode & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) { + /* + * Drain the remaining delta count out of an event + * that we are disabling: + */ + event_update(event, pmu); + hwc->state |= PERF_HES_UPTODATE; + } + + raw_spin_unlock(&pmu->lock); +} + +static int pmu_event_add(struct perf_event *event, int mode) +{ + struct power_pmu *pmu = __this_cpu_read(amd_power_pmu); + struct hw_perf_event *hwc = &event->hw; + + raw_spin_lock(&pmu->lock); + + hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED; + + if (mode & PERF_EF_START) + __pmu_event_start(pmu, event); + + raw_spin_unlock(&pmu->lock); + + return 0; +} + +static void pmu_event_del(struct perf_event *event, int flags) +{ + pmu_event_stop(event, PERF_EF_UPDATE); +} + +static int pmu_event_init(struct perf_event *event) +{ + u64 cfg = event->attr.config & AMD_POWER_EVENT_MASK; + int ret = 0; + + /* Only look at AMD power events. */ + if (event->attr.type != pmu_class.type) + return -ENOENT; + + /* Unsupported modes and filters. */ + if (event->attr.exclude_user || + event->attr.exclude_kernel || + event->attr.exclude_hv || + event->attr.exclude_idle || + event->attr.exclude_host || + event->attr.exclude_guest || + /* no sampling */ + event->attr.sample_period) + return -EINVAL; + + if (cfg != AMD_POWER_EVENTSEL_PKG) + return -EINVAL; + + event->hw.event_base = MSR_F15H_CU_PWR_ACCUMULATOR; + event->hw.config = cfg; + event->hw.idx = AMD_POWER_PKG_ID; + + return ret; +} + +static void pmu_event_read(struct perf_event *event) +{ + struct power_pmu *pmu = __this_cpu_read(amd_power_pmu); + + event_update(event, pmu); +} + +static ssize_t +get_attr_cpumask(struct device *dev, struct device_attribute *attr, char *buf) +{ + return cpumap_print_to_pagebuf(true, buf, &cpu_mask); +} + +static DEVICE_ATTR(cpumask, S_IRUGO, get_attr_cpumask, NULL); + +static struct attribute *pmu_attrs[] = { + &dev_attr_cpumask.attr, + NULL, +}; + +static struct attribute_group pmu_attr_group = { + .attrs = pmu_attrs, +}; + +/* + * Currently it only supports to report the power of each + * processor/package. + */ +EVENT_ATTR_STR(power-pkg, power_pkg, "event=0x01"); + +EVENT_ATTR_STR(power-pkg.unit, power_pkg_unit, "mWatts"); + +/* Convert the count from micro-Watts to milli-Watts. 
*/ +EVENT_ATTR_STR(power-pkg.scale, power_pkg_scale, "1.000000e-3"); + + +static struct attribute *events_attr[] = { + EVENT_PTR(power_pkg), + EVENT_PTR(power_pkg_unit), + EVENT_PTR(power_pkg_scale), + NULL, +}; + +static struct attribute_group pmu_events_group = { + .name = "events", + .attrs = events_attr, +}; + +PMU_FORMAT_ATTR(event, "config:0-7"); + +static struct attribute *formats_attr[] = { + &format_attr_event.attr, + NULL, +}; + +static struct attribute_group pmu_format_group = { + .name = "format", + .attrs = formats_attr, +}; + +static const struct attribute_group *attr_groups[] = { + &pmu_attr_group, + &pmu_format_group, + &pmu_events_group, + NULL, +}; + +static struct pmu pmu_class = { + .attr_groups = attr_groups, + /* system-wide only */ + .task_ctx_nr = perf_invalid_context, + .event_init = pmu_event_init, + .add = pmu_event_add, + .del = pmu_event_del, + .start = pmu_event_start, + .stop = pmu_event_stop, + .read = pmu_event_read, +}; + +static int power_cpu_exit(int cpu) +{ + struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu); + int target = nr_cpumask_bits; + int ret = 0; + + cpumask_copy(pmu->mask, topology_sibling_cpumask(cpu)); + + cpumask_clear_cpu(cpu, &cpu_mask); + cpumask_clear_cpu(cpu, pmu->mask); + + if (!cpumask_and(pmu->tmp_mask, pmu->mask, cpu_online_mask)) + goto out; + + /* + * Find a new CPU on the same compute unit, if was set in cpumask + * and still some CPUs on compute unit. Then move on to the new CPU. + */ + target = cpumask_any(pmu->tmp_mask); + if (target < nr_cpumask_bits && target != cpu) + cpumask_set_cpu(target, &cpu_mask); + + WARN_ON(cpumask_empty(&cpu_mask)); + +out: + /* + * Migrate event and context to new CPU. + */ + if (target < nr_cpumask_bits) + perf_pmu_migrate_context(pmu->pmu, cpu, target); + + return ret; + +} + +static int power_cpu_init(int cpu) +{ + struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu); + + if (!pmu) + return 0; + + if (!cpumask_and(pmu->mask, topology_sibling_cpumask(cpu), &cpu_mask)) + cpumask_set_cpu(cpu, &cpu_mask); + + return 0; +} + +static int power_cpu_prepare(int cpu) +{ + struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu); + int phys_id = topology_physical_package_id(cpu); + int ret = 0; + + if (pmu) + return 0; + + if (phys_id < 0) + return -EINVAL; + + pmu = kzalloc_node(sizeof(*pmu), GFP_KERNEL, cpu_to_node(cpu)); + if (!pmu) + return -ENOMEM; + + if (!zalloc_cpumask_var(&pmu->mask, GFP_KERNEL)) { + ret = -ENOMEM; + goto out; + } + + if (!zalloc_cpumask_var(&pmu->tmp_mask, GFP_KERNEL)) { + ret = -ENOMEM; + goto out1; + } + + raw_spin_lock_init(&pmu->lock); + + pmu->pmu = &pmu_class; + + per_cpu(amd_power_pmu, cpu) = pmu; + + return 0; + +out1: + free_cpumask_var(pmu->mask); +out: + kfree(pmu); + + return ret; +} + +static void power_cpu_kfree(int cpu) +{ + struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu); + + if (!pmu) + return; + + free_cpumask_var(pmu->mask); + free_cpumask_var(pmu->tmp_mask); + kfree(pmu); + + per_cpu(amd_power_pmu, cpu) = NULL; +} + +static int +power_cpu_notifier(struct notifier_block *self, unsigned long action, void *hcpu) +{ + unsigned int cpu = (long)hcpu; + + switch (action & ~CPU_TASKS_FROZEN) { + case CPU_UP_PREPARE: + if (power_cpu_prepare(cpu)) + return NOTIFY_BAD; + break; + case CPU_STARTING: + if (power_cpu_init(cpu)) + return NOTIFY_BAD; + break; + case CPU_DEAD: + power_cpu_kfree(cpu); + break; + case CPU_DOWN_PREPARE: + if (power_cpu_exit(cpu)) + return NOTIFY_BAD; + break; + default: + break; + } + + return NOTIFY_OK; +} + +static const struct x86_cpu_id 
+	{ .vendor = X86_VENDOR_AMD, .family = 0x15 },
+	{},
+};
+
+static int __init amd_power_pmu_init(void)
+{
+	int i, ret;
+	u64 tmp;
+
+	if (!x86_match_cpu(cpu_match))
+		return 0;
+
+	if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
+		return -ENODEV;
+
+	cores_per_cu = amd_get_cores_per_cu();
+
+	cpu_pwr_sample_ratio = cpuid_ecx(0x80000007);
+
+	if (rdmsrl_safe(MSR_F15H_CU_MAX_PWR_ACCUMULATOR, &tmp)) {
+		pr_err("Failed to read max compute unit power accumulator MSR\n");
+		return -ENODEV;
+	}
+	max_cu_acc_power = tmp;
+
+	cpu_notifier_register_begin();
+
+	/* Choose one online core of each compute unit. */
+	for (i = 0; i < boot_cpu_data.x86_max_cores; i += cores_per_cu) {
+		WARN_ON(cpumask_empty(topology_sibling_cpumask(i)));
+		cpumask_set_cpu(cpumask_any(topology_sibling_cpumask(i)), &cpu_mask);
+	}
+
+	for_each_present_cpu(i) {
+		ret = power_cpu_prepare(i);
+		if (ret) {
+			/* Unwind on [0 ... i-1] CPUs. */
+			while (i--)
+				power_cpu_kfree(i);
+			goto out;
+		}
+		ret = power_cpu_init(i);
+		if (ret) {
+			/* Unwind on [0 ... i] CPUs. */
+			while (i >= 0)
+				power_cpu_kfree(i--);
+			goto out;
+		}
+	}
+
+	__perf_cpu_notifier(power_cpu_notifier);
+
+	ret = perf_pmu_register(&pmu_class, "power", -1);
+	if (WARN_ON(ret)) {
+		pr_warn("AMD Power PMU registration failed\n");
+		goto out;
+	}
+
+	pr_info("AMD Power PMU detected.\n");
+
+out:
+	cpu_notifier_register_done();
+
+	return ret;
+}
+device_initcall(amd_power_pmu_init);
-- 
2.3.5

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.