* [PATCH v4] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
@ 2016-01-28  6:38 Huang Rui
  2016-01-28  9:03 ` Borislav Petkov
  0 siblings, 1 reply; 10+ messages in thread
From: Huang Rui @ 2016-01-28  6:38 UTC (permalink / raw)
  To: Borislav Petkov, Peter Zijlstra, Ingo Molnar, Andy Lutomirski,
	Thomas Gleixner, Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker
  Cc: linux-kernel, spg_linux_kernel, x86, Guenter Roeck,
	Andreas Herrmann, Suravee Suthikulpanit, Aravind Gopalakrishnan,
	Borislav Petkov, Fengguang Wu, Aaron Lu, Huang Rui

Introduce an AMD accumulated power reporting mechanism for the Carrizo
(Family 15h, Model 60h) processor that can be used to calculate the
average power consumed by a processor during a measurement interval.
The accumulated power mechanism is indicated by CPUID
Fn8000_0007_EDX[12].

---------------------------------------------------------------------
* Tsample: compute unit power accumulator sample period
* Tref: the PTSC counter period
* PTSC: performance timestamp counter
* N: the ratio of compute unit power accumulator sample period to the
  PTSC period
* Jmax: max compute unit accumulated power which is indicated by
  MaxCpuSwPwrAcc MSR C001007b
* Jx/Jy: compute unit accumulated power which is indicated by
  CpuSwPwrAcc MSR C001007a
* Tx/Ty: the value of performance timestamp counter which is indicated
  by CU_PTSC MSR C0010280
* PwrCPUave: CPU average power

i. Determine the ratio of Tsample to Tref by executing CPUID Fn8000_0007.
	N = value of CPUID Fn8000_0007_ECX[CpuPwrSampleTimeRatio[15:0]].

ii. Read the full range of the cumulative energy value from the new
MSR MaxCpuSwPwrAcc.
	Jmax = value returned.
iii. At time x, SW reads CpuSwPwrAcc MSR and samples the PTSC.
	Jx = value read from CpuSwPwrAcc and Tx = value read from
PTSC.

iv. At time y, SW reads CpuSwPwrAcc MSR and samples the PTSC.
	Jy = value read from CpuSwPwrAcc and Ty = value read from
PTSC.

v. Calculate the average power consumption for a compute unit over
time period (y-x). Unit of result is uWatt.
	if (Jy < Jx) // Rollover has occurred
		Jdelta = (Jy + Jmax) - Jx
	else
		Jdelta = Jy - Jx
	PwrCPUave = N * Jdelta * 1000 / (Ty - Tx)
----------------------------------------------------------------------

This feature will be implemented in both hwmon and perf, as discussed
on the mailing list before. In the current design, it provides one
event to report the per-package/processor power consumption by summing
each compute unit's power value.

Simple example:

root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' make -j4
  CHK     include/config/kernel.release
  CHK     include/generated/uapi/linux/version.h
  CHK     include/generated/utsrelease.h
  CHK     include/generated/timeconst.h
  CHK     include/generated/bounds.h
  CHK     include/generated/asm-offsets.h
  CALL    scripts/checksyscalls.sh
  CHK     include/generated/compile.h
  SKIPPED include/generated/compile.h
  Building modules, stage 2.
Kernel: arch/x86/boot/bzImage is ready  (#40)
  MODPOST 4225 modules

 Performance counter stats for 'system wide':

            183.44 mWatts power/power-pkg/

     341.837270111 seconds time elapsed

root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' sleep 10

 Performance counter stats for 'system wide':

              0.18 mWatts power/power-pkg/

      10.012551815 seconds time elapsed

Reference:
http://lkml.kernel.org/r/20150831160622.GA29830@nazgul.tnic

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Ingo Molnar <mingo@kernel.org>
Suggested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Cc: Guenter Roeck <linux@roeck-us.net>
---

Hi,

This series of patches introduces the perf implementation of the
accumulated power reporting algorithm. It calculates the average
power consumption of the processor. The CPU feature flag is
CPUID.8000_0007H:EDX[12].


Changes from v1 -> v2:
- Add a patch to fix the build issue reported by the kbuild test
  robot.

Changes from v2 -> v3:
- Use raw_spinlock_t instead of spinlock_t, because it needs to work
  in the -rt mode use case.
- Use topology_sibling_cpumask to make the cpumask operation easier.

Changes from v3 -> v4:
- Remove active_list, because it is not iterated.
- Capitalize sentences consistently and fix some typos.
- Fix some code style issues.
- Initialize structures in a vertically aligned manner.
- Remove unnecessary comment.
- Fix the runtime bug, and do some testing on CPU-hotplug scenario.

Thanks,
Rui

---
 arch/x86/kernel/cpu/Makefile               |   1 +
 arch/x86/kernel/cpu/perf_event_amd_power.c | 498 +++++++++++++++++++++++++++++
 2 files changed, 499 insertions(+)
 create mode 100644 arch/x86/kernel/cpu/perf_event_amd_power.c

diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index 5803130..97f3413 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -34,6 +34,7 @@ obj-$(CONFIG_PERF_EVENTS)		+= perf_event.o
 
 ifdef CONFIG_PERF_EVENTS
 obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd.o perf_event_amd_uncore.o
+obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd_power.o
 ifdef CONFIG_AMD_IOMMU
 obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd_iommu.o
 endif
diff --git a/arch/x86/kernel/cpu/perf_event_amd_power.c b/arch/x86/kernel/cpu/perf_event_amd_power.c
new file mode 100644
index 0000000..01630ec
--- /dev/null
+++ b/arch/x86/kernel/cpu/perf_event_amd_power.c
@@ -0,0 +1,498 @@
+/*
+ * Performance events - AMD Processor Power Reporting Mechanism
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Huang Rui <ray.huang@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/perf_event.h>
+#include <asm/cpu_device_id.h>
+#include "perf_event.h"
+
+#define MSR_F15H_CU_PWR_ACCUMULATOR     0xc001007a
+#define MSR_F15H_CU_MAX_PWR_ACCUMULATOR 0xc001007b
+#define MSR_F15H_PTSC			0xc0010280
+
+/*
+ * Event code: LSB 8 bits, passed in attr->config
+ * any other bit is reserved.
+ */
+#define AMD_POWER_EVENT_MASK	0xFFULL
+
+#define MAX_CUS	8
+
+/*
+ * Accumulated power status counters.
+ */
+#define AMD_POWER_PKG_ID		0
+#define AMD_POWER_EVENTSEL_PKG		1
+
+/*
+ * The ratio of compute unit power accumulator sample period to the
+ * PTSC period.
+ */
+static unsigned int cpu_pwr_sample_ratio;
+static unsigned int cores_per_cu;
+static unsigned int cu_num;
+
+/* Maximum accumulated power of a compute unit. */
+static u64 max_cu_acc_power;
+
+struct power_pmu {
+	raw_spinlock_t		lock;
+	struct pmu		*pmu; /* pointer to power_pmu_class */
+	local64_t		cpu_sw_pwr_ptsc;
+	/*
+	 * These two cpumasks are used for avoiding the allocations on
+	 * the CPU_STARTING phase because power_cpu_prepare() will be
+	 * called with IRQs disabled.
+	 */
+	cpumask_var_t		mask;
+	cpumask_var_t		tmp_mask;
+};
+
+static struct pmu pmu_class;
+
+/*
+ * Accumulated power represents the sum of each compute unit's (CU)
+ * power consumption. On one core of each CU we read the accumulated
+ * power from MSR_F15H_CU_PWR_ACCUMULATOR. The cpu_mask represents the
+ * CPU bit map of all cores which are picked to measure the power for
+ * the compute units that they belong to.
+ */
+static cpumask_t cpu_mask;
+
+static DEFINE_PER_CPU(struct power_pmu *, amd_power_pmu);
+
+static u64 event_update(struct perf_event *event, struct power_pmu *pmu)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	u64 prev_raw_count, new_raw_count, prev_ptsc, new_ptsc;
+	u64 delta, tdelta;
+
+again:
+	prev_raw_count = local64_read(&hwc->prev_count);
+	prev_ptsc = local64_read(&pmu->cpu_sw_pwr_ptsc);
+	rdmsrl(event->hw.event_base, new_raw_count);
+	rdmsrl(MSR_F15H_PTSC, new_ptsc);
+
+	if (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
+			    new_raw_count) != prev_raw_count) {
+		cpu_relax();
+		goto again;
+	}
+
+	/*
+	 * Calculate the CU power consumption over a time period; the
+	 * unit of the final value (delta) is micro-Watts. Then add it
+	 * to the event count.
+	 */
+	if (new_raw_count < prev_raw_count) {
+		delta = max_cu_acc_power + new_raw_count;
+		delta -= prev_raw_count;
+	} else
+		delta = new_raw_count - prev_raw_count;
+
+	delta *= cpu_pwr_sample_ratio * 1000;
+	tdelta = new_ptsc - prev_ptsc;
+
+	do_div(delta, tdelta);
+	local64_add(delta, &event->count);
+
+	return new_raw_count;
+}
+
+static void
+__pmu_event_start(struct power_pmu *pmu, struct perf_event *event)
+{
+	u64 ptsc, counts;
+
+	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
+		return;
+
+	event->hw.state = 0;
+
+	rdmsrl(MSR_F15H_PTSC, ptsc);
+	local64_set(&pmu->cpu_sw_pwr_ptsc, ptsc);
+	rdmsrl(event->hw.event_base, counts);
+	local64_set(&event->hw.prev_count, counts);
+}
+
+static void pmu_event_start(struct perf_event *event, int mode)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+
+	raw_spin_lock(&pmu->lock);
+	__pmu_event_start(pmu, event);
+	raw_spin_unlock(&pmu->lock);
+}
+
+static void pmu_event_stop(struct perf_event *event, int mode)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+	struct hw_perf_event *hwc = &event->hw;
+
+	raw_spin_lock(&pmu->lock);
+
+	/* Mark event as deactivated and stopped. */
+	if (!(hwc->state & PERF_HES_STOPPED))
+		hwc->state |= PERF_HES_STOPPED;
+
+	/* Check if update of SW counter is necessary. */
+	if ((mode & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
+		/*
+		 * Drain the remaining delta count out of an event
+		 * that we are disabling:
+		 */
+		event_update(event, pmu);
+		hwc->state |= PERF_HES_UPTODATE;
+	}
+
+	raw_spin_unlock(&pmu->lock);
+}
+
+static int pmu_event_add(struct perf_event *event, int mode)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+	struct hw_perf_event *hwc = &event->hw;
+
+	raw_spin_lock(&pmu->lock);
+
+	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+
+	if (mode & PERF_EF_START)
+		__pmu_event_start(pmu, event);
+
+	raw_spin_unlock(&pmu->lock);
+
+	return 0;
+}
+
+static void pmu_event_del(struct perf_event *event, int flags)
+{
+	pmu_event_stop(event, PERF_EF_UPDATE);
+}
+
+static int pmu_event_init(struct perf_event *event)
+{
+	u64 cfg = event->attr.config & AMD_POWER_EVENT_MASK;
+	int ret = 0;
+
+	/* Only look at AMD power events. */
+	if (event->attr.type != pmu_class.type)
+		return -ENOENT;
+
+	/* Unsupported modes and filters. */
+	if (event->attr.exclude_user   ||
+	    event->attr.exclude_kernel ||
+	    event->attr.exclude_hv     ||
+	    event->attr.exclude_idle   ||
+	    event->attr.exclude_host   ||
+	    event->attr.exclude_guest  ||
+	    event->attr.sample_period) /* no sampling */
+		return -EINVAL;
+
+	if (cfg != AMD_POWER_EVENTSEL_PKG)
+		return -EINVAL;
+
+	event->hw.event_base = MSR_F15H_CU_PWR_ACCUMULATOR;
+	event->hw.config = cfg;
+	event->hw.idx = AMD_POWER_PKG_ID;
+
+	return ret;
+}
+
+static void pmu_event_read(struct perf_event *event)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+
+	event_update(event, pmu);
+}
+
+static ssize_t
+get_attr_cpumask(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpumap_print_to_pagebuf(true, buf, &cpu_mask);
+}
+
+static DEVICE_ATTR(cpumask, S_IRUGO, get_attr_cpumask, NULL);
+
+static struct attribute *pmu_attrs[] = {
+	&dev_attr_cpumask.attr,
+	NULL,
+};
+
+static struct attribute_group pmu_attr_group = {
+	.attrs = pmu_attrs,
+};
+
+
+/*
+ * Currently it only supports reporting the power of each
+ * processor/package.
+ */
+EVENT_ATTR_STR(power-pkg, power_pkg, "event=0x01");
+
+EVENT_ATTR_STR(power-pkg.unit, power_pkg_unit, "mWatts");
+
+/* Convert the count from micro-Watts to milli-Watts. */
+EVENT_ATTR_STR(power-pkg.scale, power_pkg_scale, "1.000000e-3");
+
+
+static struct attribute *events_attr[] = {
+	EVENT_PTR(power_pkg),
+	EVENT_PTR(power_pkg_unit),
+	EVENT_PTR(power_pkg_scale),
+	NULL,
+};
+
+static struct attribute_group pmu_events_group = {
+	.name	= "events",
+	.attrs	= events_attr,
+};
+
+PMU_FORMAT_ATTR(event, "config:0-7");
+
+static struct attribute *formats_attr[] = {
+	&format_attr_event.attr,
+	NULL,
+};
+
+static struct attribute_group pmu_format_group = {
+	.name	= "format",
+	.attrs	= formats_attr,
+};
+
+static const struct attribute_group *attr_groups[] = {
+	&pmu_attr_group,
+	&pmu_format_group,
+	&pmu_events_group,
+	NULL,
+};
+
+static struct pmu pmu_class = {
+	.attr_groups	= attr_groups,
+	.task_ctx_nr	= perf_invalid_context, /* system-wide only */
+	.event_init	= pmu_event_init,
+	.add		= pmu_event_add,
+	.del		= pmu_event_del,
+	.start		= pmu_event_start,
+	.stop		= pmu_event_stop,
+	.read		= pmu_event_read,
+};
+
+
+static int power_cpu_exit(int cpu)
+{
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+	int ret = 0;
+	int target = nr_cpumask_bits;
+
+	cpumask_copy(pmu->mask, topology_sibling_cpumask(cpu));
+
+	cpumask_clear_cpu(cpu, &cpu_mask);
+	cpumask_clear_cpu(cpu, pmu->mask);
+
+	if (!cpumask_and(pmu->tmp_mask, pmu->mask, cpu_online_mask))
+		goto out;
+
+	/*
+	 * Find a new CPU on same compute unit, if was set in cpumask
+	 * and still some CPUs on compute unit. Then move on to the
+	 * new CPU.
+	 */
+	target = cpumask_any(pmu->tmp_mask);
+	if (target < nr_cpumask_bits && target != cpu)
+		cpumask_set_cpu(target, &cpu_mask);
+
+	WARN_ON(cpumask_empty(&cpu_mask));
+
+out:
+	/*
+	 * Migrate event and context to new CPU.
+	 */
+	if (target < nr_cpumask_bits)
+		perf_pmu_migrate_context(pmu->pmu, cpu, target);
+
+	return ret;
+
+}
+
+static int power_cpu_init(int cpu)
+{
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+
+	if (!pmu)
+		return 0;
+
+	if (!cpumask_and(pmu->mask, topology_sibling_cpumask(cpu), &cpu_mask))
+		cpumask_set_cpu(cpu, &cpu_mask);
+
+	return 0;
+}
+
+static int power_cpu_prepare(int cpu)
+{
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+	int phys_id = topology_physical_package_id(cpu);
+	int ret = 0;
+
+	if (pmu)
+		return 0;
+
+	if (phys_id < 0)
+		return -EINVAL;
+
+	pmu = kzalloc_node(sizeof(*pmu), GFP_KERNEL, cpu_to_node(cpu));
+	if (!pmu)
+		return -ENOMEM;
+
+	if (!zalloc_cpumask_var(&pmu->mask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	if (!zalloc_cpumask_var(&pmu->tmp_mask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto out1;
+	}
+
+	raw_spin_lock_init(&pmu->lock);
+
+	pmu->pmu = &pmu_class;
+
+	per_cpu(amd_power_pmu, cpu) = pmu;
+
+	return 0;
+
+out1:
+	free_cpumask_var(pmu->mask);
+out:
+	kfree(pmu);
+
+	return ret;
+}
+
+static void power_cpu_kfree(int cpu)
+{
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+
+	if (!pmu)
+		return;
+
+	free_cpumask_var(pmu->mask);
+	free_cpumask_var(pmu->tmp_mask);
+	kfree(pmu);
+
+	per_cpu(amd_power_pmu, cpu) = NULL;
+}
+
+static int
+power_cpu_notifier(struct notifier_block *self, unsigned long action, void *hcpu)
+{
+	unsigned int cpu = (long)hcpu;
+
+	switch (action & ~CPU_TASKS_FROZEN) {
+	case CPU_UP_PREPARE:
+		if (power_cpu_prepare(cpu))
+			return NOTIFY_BAD;
+		break;
+	case CPU_STARTING:
+		if (power_cpu_init(cpu))
+			return NOTIFY_BAD;
+		break;
+	case CPU_DEAD:
+		power_cpu_kfree(cpu);
+		break;
+	case CPU_DOWN_PREPARE:
+		if (power_cpu_exit(cpu))
+			return NOTIFY_BAD;
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+
+static const struct x86_cpu_id cpu_match[] = {
+	{ .vendor = X86_VENDOR_AMD, .family = 0x15 },
+	{},
+};
+
+static int __init amd_power_pmu_init(void)
+{
+	int i, ret;
+	u64 tmp;
+
+	if (!x86_match_cpu(cpu_match))
+		return 0;
+
+	if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
+		return -ENODEV;
+
+	cores_per_cu = amd_get_cores_per_cu();
+	cu_num = boot_cpu_data.x86_max_cores / cores_per_cu;
+
+	if (WARN_ON_ONCE(cu_num > MAX_CUS))
+		return -EINVAL;
+
+	cpu_pwr_sample_ratio = cpuid_ecx(0x80000007);
+
+	if (rdmsrl_safe(MSR_F15H_CU_MAX_PWR_ACCUMULATOR, &tmp)) {
+		pr_err("Failed to read max compute unit power accumulator MSR\n");
+		return -ENODEV;
+	}
+	max_cu_acc_power = tmp;
+
+	cpu_notifier_register_begin();
+
+	/*
+	 * Choose the one online core of each compute unit.
+	 */
+	for (i = 0; i < boot_cpu_data.x86_max_cores; i += cores_per_cu) {
+		/* WARN_ON for empty CU masks */
+		WARN_ON(cpumask_empty(topology_sibling_cpumask(i)));
+		cpumask_set_cpu(cpumask_any(topology_sibling_cpumask(i)), &cpu_mask);
+	}
+
+	for_each_present_cpu(i) {
+		ret = power_cpu_prepare(i);
+		if (ret) {
+			/* Unwind on [0 ... i-1] CPUs. */
+			while (i--)
+				power_cpu_kfree(i);
+			goto out;
+		}
+		ret = power_cpu_init(i);
+		if (ret) {
+			/* Unwind on [0 ... i] CPUs. */
+			while (i >= 0)
+				power_cpu_kfree(i--);
+			goto out;
+		}
+	}
+
+	__perf_cpu_notifier(power_cpu_notifier);
+
+	ret = perf_pmu_register(&pmu_class, "power", -1);
+	if (WARN_ON(ret)) {
+		pr_warn("AMD Power PMU registration failed\n");
+		goto out;
+	}
+
+	pr_info("AMD Power PMU detected, %d compute units\n", cu_num);
+
+out:
+	cpu_notifier_register_done();
+
+	return ret;
+}
+device_initcall(amd_power_pmu_init);
-- 
1.9.1


* Re: [PATCH v4] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-01-28  6:38 [PATCH v4] perf/x86/amd/power: Add AMD accumulated power reporting mechanism Huang Rui
@ 2016-01-28  9:03 ` Borislav Petkov
  2016-01-28  9:39   ` [PATCH] perf/x86/amd/power: Add AMD accumulated power reporting kbuild test robot
                     ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Borislav Petkov @ 2016-01-28  9:03 UTC (permalink / raw)
  To: Huang Rui
  Cc: Borislav Petkov, Peter Zijlstra, Ingo Molnar, Andy Lutomirski,
	Thomas Gleixner, Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker, linux-kernel, spg_linux_kernel,
	x86, Guenter Roeck, Andreas Herrmann, Suravee Suthikulpanit,
	Aravind Gopalakrishnan, Fengguang Wu, Aaron Lu

On Thu, Jan 28, 2016 at 02:38:51PM +0800, Huang Rui wrote:
> Introduce an AMD accumlated power reporting mechanism for Carrizo
> (Family 15h, Model 60h) processor that should be used to calculate the
> average power consumed by a processor during a measurement interval.
> The feature of accumulated power mechanism is indicated by CPUID
> Fn8000_0007_EDX[12].

...

> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Suggested-by: Ingo Molnar <mingo@kernel.org>
> Suggested-by: Borislav Petkov <bp@suse.de>
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> Cc: Guenter Roeck <linux@roeck-us.net>

...

> +static int power_cpu_exit(int cpu)
> +{
> +	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
> +	int ret = 0;
> +	int target = nr_cpumask_bits;
> +
> +	cpumask_copy(pmu->mask, topology_sibling_cpumask(cpu));
> +
> +	cpumask_clear_cpu(cpu, &cpu_mask);
> +	cpumask_clear_cpu(cpu, pmu->mask);
> +
> +	if (!cpumask_and(pmu->tmp_mask, pmu->mask, cpu_online_mask))
> +		goto out;
> +
> +	/*
> +	 * Find a new CPU on same compute unit, if was set in cpumask
> +	 * and still some CPUs on compute unit. Then move on to the
> +	 * new CPU.
> +	 */

Uuh, boy, I *think* I know what you mean here but I'm not sure. Please
explain.

Anyway, I went through it and fixed a bunch of minor issues like
comments formatting and formulation, code formatting and text
streamlining. I couldn't spot anything strange except that MAX_CUS thing
which I think we should remove completely.

The meat of the code is for Peter/Ingo to review though.

Please use this version below for future changes:

---
From: Huang Rui <ray.huang@amd.com>
Date: Thu, 28 Jan 2016 14:38:51 +0800
Subject: [PATCH] perf/x86/amd/power: Add AMD accumulated power reporting
 mechanism
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Introduce an AMD accumulated power reporting mechanism for the Carrizo
(Family 15h, Model 60h) processor that can be used to calculate the
average power consumed by a processor during a measurement interval. The
feature support is indicated by CPUID Fn8000_0007_EDX[12].

This feature will be implemented in both hwmon and perf. The current
design provides one event to report the per-package/processor power
consumption by summing each compute unit's power value.

Here the gory details of how the computation is done:

---------------------------------------------------------------------
* Tsample: compute unit power accumulator sample period
* Tref: the PTSC counter period (PTSC: performance timestamp counter)
* N: the ratio of compute unit power accumulator sample period to the
  PTSC period

* Jmax: max compute unit accumulated power which is indicated by
  MSR_C001007b[MaxCpuSwPwrAcc]

* Jx/Jy: compute unit accumulated power which is indicated by
  MSR_C001007a[CpuSwPwrAcc]

* Tx/Ty: the value of performance timestamp counter which is indicated
  by CU_PTSC MSR_C0010280[PTSC]
* PwrCPUave: CPU average power

i. Determine the ratio of Tsample to Tref by executing CPUID Fn8000_0007.
	N = value of CPUID Fn8000_0007_ECX[CpuPwrSampleTimeRatio[15:0]].

ii. Read the full range of the cumulative energy value from the new
    MSR MaxCpuSwPwrAcc.
	Jmax = value returned.

iii. At time x, software reads CpuSwPwrAcc and samples the PTSC.
	Jx = value read from CpuSwPwrAcc and Tx = value read from PTSC.

iv. At time y, software reads CpuSwPwrAcc and samples the PTSC.
	Jy = value read from CpuSwPwrAcc and Ty = value read from PTSC.

v. Calculate the average power consumption for a compute unit over
time period (y-x). Unit of result is uWatt:

	if (Jy < Jx) // Rollover has occurred
		Jdelta = (Jy + Jmax) - Jx
	else
		Jdelta = Jy - Jx
	PwrCPUave = N * Jdelta * 1000 / (Ty - Tx)
----------------------------------------------------------------------

Simple example:

  root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' make -j4
    CHK     include/config/kernel.release
    CHK     include/generated/uapi/linux/version.h
    CHK     include/generated/utsrelease.h
    CHK     include/generated/timeconst.h
    CHK     include/generated/bounds.h
    CHK     include/generated/asm-offsets.h
    CALL    scripts/checksyscalls.sh
    CHK     include/generated/compile.h
    SKIPPED include/generated/compile.h
    Building modules, stage 2.
  Kernel: arch/x86/boot/bzImage is ready  (#40)
    MODPOST 4225 modules

   Performance counter stats for 'system wide':

              183.44 mWatts power/power-pkg/

       341.837270111 seconds time elapsed

  root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' sleep 10

   Performance counter stats for 'system wide':

                0.18 mWatts power/power-pkg/

        10.012551815 seconds time elapsed

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Ingo Molnar <mingo@kernel.org>
Suggested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Cc: Aaron Lu <aaron.lu@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andreas Herrmann <herrmann.der.user@googlemail.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jacob Shin <jacob.w.shin@gmail.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Richter <rric@kernel.org>
Cc: spg_linux_kernel@amd.com
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Link: http://lkml.kernel.org/r/20150831160622.GA29830@nazgul.tnic
Link: http://lkml.kernel.org/r/1453963131-2013-1-git-send-email-ray.huang@amd.com
Signed-off-by: Borislav Petkov <bp@suse.de>
---
 arch/x86/kernel/cpu/Makefile               |   1 +
 arch/x86/kernel/cpu/perf_event_amd_power.c | 489 +++++++++++++++++++++++++++++
 2 files changed, 490 insertions(+)
 create mode 100644 arch/x86/kernel/cpu/perf_event_amd_power.c

diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index faa7b5204129..ffc96503d610 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -34,6 +34,7 @@ obj-$(CONFIG_PERF_EVENTS)		+= perf_event.o
 
 ifdef CONFIG_PERF_EVENTS
 obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd.o perf_event_amd_uncore.o
+obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd_power.o
 ifdef CONFIG_AMD_IOMMU
 obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd_iommu.o
 endif
diff --git a/arch/x86/kernel/cpu/perf_event_amd_power.c b/arch/x86/kernel/cpu/perf_event_amd_power.c
new file mode 100644
index 000000000000..ff6893620828
--- /dev/null
+++ b/arch/x86/kernel/cpu/perf_event_amd_power.c
@@ -0,0 +1,489 @@
+/*
+ * Performance events - AMD Processor Power Reporting Mechanism
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Huang Rui <ray.huang@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/perf_event.h>
+#include <asm/cpu_device_id.h>
+#include "perf_event.h"
+
+#define MSR_F15H_CU_PWR_ACCUMULATOR     0xc001007a
+#define MSR_F15H_CU_MAX_PWR_ACCUMULATOR 0xc001007b
+#define MSR_F15H_PTSC			0xc0010280
+
+/* Event code: LSB 8 bits, passed in attr->config any other bit is reserved. */
+#define AMD_POWER_EVENT_MASK	0xFFULL
+
+#define MAX_CUS	8
+
+/*
+ * Accumulated power status counters.
+ */
+#define AMD_POWER_PKG_ID		0
+#define AMD_POWER_EVENTSEL_PKG		1
+
+/*
+ * The ratio of compute unit power accumulator sample period to the
+ * PTSC period.
+ */
+static unsigned int cpu_pwr_sample_ratio;
+static unsigned int cores_per_cu;
+static unsigned int cu_num;
+
+/* Maximum accumulated power of a compute unit. */
+static u64 max_cu_acc_power;
+
+struct power_pmu {
+	raw_spinlock_t		lock;
+	struct pmu		*pmu;
+	local64_t		cpu_sw_pwr_ptsc;
+
+	/*
+	 * These two cpumasks are used for avoiding the allocations on the
+	 * CPU_STARTING phase because power_cpu_prepare() will be called with
+	 * IRQs disabled.
+	 */
+	cpumask_var_t		mask;
+	cpumask_var_t		tmp_mask;
+};
+
+static struct pmu pmu_class;
+
+/*
+ * Accumulated power represents the sum of each compute unit's (CU) power
+ * consumption. On any core of each CU we read the total accumulated power from
+ * MSR_F15H_CU_PWR_ACCUMULATOR. cpu_mask represents CPU bit map of all cores
+ * which are picked to measure the power for the CUs they belong to.
+ */
+static cpumask_t cpu_mask;
+
+static DEFINE_PER_CPU(struct power_pmu *, amd_power_pmu);
+
+static u64 event_update(struct perf_event *event, struct power_pmu *pmu)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	u64 prev_raw_count, new_raw_count, prev_ptsc, new_ptsc;
+	u64 delta, tdelta;
+
+again:
+	prev_raw_count = local64_read(&hwc->prev_count);
+	prev_ptsc = local64_read(&pmu->cpu_sw_pwr_ptsc);
+	rdmsrl(event->hw.event_base, new_raw_count);
+	rdmsrl(MSR_F15H_PTSC, new_ptsc);
+
+	if (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
+			    new_raw_count) != prev_raw_count) {
+		cpu_relax();
+		goto again;
+	}
+
+	/*
+	 * Calculate the CU power consumption over a time period, the unit of
+	 * final value (delta) is micro-Watts. Then add it to the event count.
+	 */
+	if (new_raw_count < prev_raw_count) {
+		delta = max_cu_acc_power + new_raw_count;
+		delta -= prev_raw_count;
+	} else
+		delta = new_raw_count - prev_raw_count;
+
+	delta *= cpu_pwr_sample_ratio * 1000;
+	tdelta = new_ptsc - prev_ptsc;
+
+	do_div(delta, tdelta);
+	local64_add(delta, &event->count);
+
+	return new_raw_count;
+}
+
+static void __pmu_event_start(struct power_pmu *pmu, struct perf_event *event)
+{
+	u64 ptsc, counts;
+
+	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
+		return;
+
+	event->hw.state = 0;
+
+	rdmsrl(MSR_F15H_PTSC, ptsc);
+	local64_set(&pmu->cpu_sw_pwr_ptsc, ptsc);
+	rdmsrl(event->hw.event_base, counts);
+	local64_set(&event->hw.prev_count, counts);
+}
+
+static void pmu_event_start(struct perf_event *event, int mode)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+
+	raw_spin_lock(&pmu->lock);
+	__pmu_event_start(pmu, event);
+	raw_spin_unlock(&pmu->lock);
+}
+
+static void pmu_event_stop(struct perf_event *event, int mode)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+	struct hw_perf_event *hwc = &event->hw;
+
+	raw_spin_lock(&pmu->lock);
+
+	/* Mark event as deactivated and stopped. */
+	if (!(hwc->state & PERF_HES_STOPPED))
+		hwc->state |= PERF_HES_STOPPED;
+
+	/* Check if software counter update is necessary. */
+	if ((mode & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
+		/*
+		 * Drain the remaining delta count out of an event
+		 * that we are disabling:
+		 */
+		event_update(event, pmu);
+		hwc->state |= PERF_HES_UPTODATE;
+	}
+
+	raw_spin_unlock(&pmu->lock);
+}
+
+static int pmu_event_add(struct perf_event *event, int mode)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+	struct hw_perf_event *hwc = &event->hw;
+
+	raw_spin_lock(&pmu->lock);
+
+	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+
+	if (mode & PERF_EF_START)
+		__pmu_event_start(pmu, event);
+
+	raw_spin_unlock(&pmu->lock);
+
+	return 0;
+}
+
+static void pmu_event_del(struct perf_event *event, int flags)
+{
+	pmu_event_stop(event, PERF_EF_UPDATE);
+}
+
+static int pmu_event_init(struct perf_event *event)
+{
+	u64 cfg = event->attr.config & AMD_POWER_EVENT_MASK;
+	int ret = 0;
+
+	/* Only look at AMD power events. */
+	if (event->attr.type != pmu_class.type)
+		return -ENOENT;
+
+	/* Unsupported modes and filters. */
+	if (event->attr.exclude_user   ||
+	    event->attr.exclude_kernel ||
+	    event->attr.exclude_hv     ||
+	    event->attr.exclude_idle   ||
+	    event->attr.exclude_host   ||
+	    event->attr.exclude_guest  ||
+	    /* no sampling */
+	    event->attr.sample_period)
+		return -EINVAL;
+
+	if (cfg != AMD_POWER_EVENTSEL_PKG)
+		return -EINVAL;
+
+	event->hw.event_base = MSR_F15H_CU_PWR_ACCUMULATOR;
+	event->hw.config = cfg;
+	event->hw.idx = AMD_POWER_PKG_ID;
+
+	return ret;
+}
+
+static void pmu_event_read(struct perf_event *event)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+
+	event_update(event, pmu);
+}
+
+static ssize_t
+get_attr_cpumask(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpumap_print_to_pagebuf(true, buf, &cpu_mask);
+}
+
+static DEVICE_ATTR(cpumask, S_IRUGO, get_attr_cpumask, NULL);
+
+static struct attribute *pmu_attrs[] = {
+	&dev_attr_cpumask.attr,
+	NULL,
+};
+
+static struct attribute_group pmu_attr_group = {
+	.attrs = pmu_attrs,
+};
+
+/*
+ * Currently it only supports reporting the power of each
+ * processor/package.
+ */
+EVENT_ATTR_STR(power-pkg, power_pkg, "event=0x01");
+
+EVENT_ATTR_STR(power-pkg.unit, power_pkg_unit, "mWatts");
+
+/* Convert the count from micro-Watts to milli-Watts. */
+EVENT_ATTR_STR(power-pkg.scale, power_pkg_scale, "1.000000e-3");
+
+
+static struct attribute *events_attr[] = {
+	EVENT_PTR(power_pkg),
+	EVENT_PTR(power_pkg_unit),
+	EVENT_PTR(power_pkg_scale),
+	NULL,
+};
+
+static struct attribute_group pmu_events_group = {
+	.name	= "events",
+	.attrs	= events_attr,
+};
+
+PMU_FORMAT_ATTR(event, "config:0-7");
+
+static struct attribute *formats_attr[] = {
+	&format_attr_event.attr,
+	NULL,
+};
+
+static struct attribute_group pmu_format_group = {
+	.name	= "format",
+	.attrs	= formats_attr,
+};
+
+static const struct attribute_group *attr_groups[] = {
+	&pmu_attr_group,
+	&pmu_format_group,
+	&pmu_events_group,
+	NULL,
+};
+
+static struct pmu pmu_class = {
+	.attr_groups	= attr_groups,
+	/* system-wide only */
+	.task_ctx_nr	= perf_invalid_context,
+	.event_init	= pmu_event_init,
+	.add		= pmu_event_add,
+	.del		= pmu_event_del,
+	.start		= pmu_event_start,
+	.stop		= pmu_event_stop,
+	.read		= pmu_event_read,
+};
+
+static int power_cpu_exit(int cpu)
+{
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+	int target = nr_cpumask_bits;
+	int ret = 0;
+
+	cpumask_copy(pmu->mask, topology_sibling_cpumask(cpu));
+
+	cpumask_clear_cpu(cpu, &cpu_mask);
+	cpumask_clear_cpu(cpu, pmu->mask);
+
+	if (!cpumask_and(pmu->tmp_mask, pmu->mask, cpu_online_mask))
+		goto out;
+
+	/*
+	 * If this CPU was the designated reader for its compute unit and
+	 * other CPUs of that unit are still online, pick one of them to
+	 * take over the measurement.
+	 */
+	target = cpumask_any(pmu->tmp_mask);
+	if (target < nr_cpumask_bits && target != cpu)
+		cpumask_set_cpu(target, &cpu_mask);
+
+	WARN_ON(cpumask_empty(&cpu_mask));
+
+out:
+	/*
+	 * Migrate event and context to new CPU.
+	 */
+	if (target < nr_cpumask_bits)
+		perf_pmu_migrate_context(pmu->pmu, cpu, target);
+
+	return ret;
+}
+
+static int power_cpu_init(int cpu)
+{
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+
+	if (!pmu)
+		return 0;
+
+	if (!cpumask_and(pmu->mask, topology_sibling_cpumask(cpu), &cpu_mask))
+		cpumask_set_cpu(cpu, &cpu_mask);
+
+	return 0;
+}
+
+static int power_cpu_prepare(int cpu)
+{
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+	int phys_id = topology_physical_package_id(cpu);
+	int ret = 0;
+
+	if (pmu)
+		return 0;
+
+	if (phys_id < 0)
+		return -EINVAL;
+
+	pmu = kzalloc_node(sizeof(*pmu), GFP_KERNEL, cpu_to_node(cpu));
+	if (!pmu)
+		return -ENOMEM;
+
+	if (!zalloc_cpumask_var(&pmu->mask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	if (!zalloc_cpumask_var(&pmu->tmp_mask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto out1;
+	}
+
+	raw_spin_lock_init(&pmu->lock);
+
+	pmu->pmu = &pmu_class;
+
+	per_cpu(amd_power_pmu, cpu) = pmu;
+
+	return 0;
+
+out1:
+	free_cpumask_var(pmu->mask);
+out:
+	kfree(pmu);
+
+	return ret;
+}
+
+static void power_cpu_kfree(int cpu)
+{
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+
+	if (!pmu)
+		return;
+
+	free_cpumask_var(pmu->mask);
+	free_cpumask_var(pmu->tmp_mask);
+	kfree(pmu);
+
+	per_cpu(amd_power_pmu, cpu) = NULL;
+}
+
+static int
+power_cpu_notifier(struct notifier_block *self, unsigned long action, void *hcpu)
+{
+	unsigned int cpu = (long)hcpu;
+
+	switch (action & ~CPU_TASKS_FROZEN) {
+	case CPU_UP_PREPARE:
+		if (power_cpu_prepare(cpu))
+			return NOTIFY_BAD;
+		break;
+	case CPU_STARTING:
+		if (power_cpu_init(cpu))
+			return NOTIFY_BAD;
+		break;
+	case CPU_DEAD:
+		power_cpu_kfree(cpu);
+		break;
+	case CPU_DOWN_PREPARE:
+		if (power_cpu_exit(cpu))
+			return NOTIFY_BAD;
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+
+static const struct x86_cpu_id cpu_match[] = {
+	{ .vendor = X86_VENDOR_AMD, .family = 0x15 },
+	{},
+};
+
+static int __init amd_power_pmu_init(void)
+{
+	int i, ret;
+	u64 tmp;
+
+	if (!x86_match_cpu(cpu_match))
+		return 0;
+
+	if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
+		return -ENODEV;
+
+	cores_per_cu = amd_get_cores_per_cu();
+	cu_num = boot_cpu_data.x86_max_cores / cores_per_cu;
+
+	if (WARN_ON_ONCE(cu_num > MAX_CUS))
+		return -EINVAL;
+
+	cpu_pwr_sample_ratio = cpuid_ecx(0x80000007);
+
+	if (rdmsrl_safe(MSR_F15H_CU_MAX_PWR_ACCUMULATOR, &tmp)) {
+		pr_err("Failed to read max compute unit power accumulator MSR\n");
+		return -ENODEV;
+	}
+	max_cu_acc_power = tmp;
+
+	cpu_notifier_register_begin();
+
+	/* Choose one online core of each compute unit.  */
+	for (i = 0; i < boot_cpu_data.x86_max_cores; i += cores_per_cu) {
+		WARN_ON(cpumask_empty(topology_sibling_cpumask(i)));
+		cpumask_set_cpu(cpumask_any(topology_sibling_cpumask(i)), &cpu_mask);
+	}
+
+	for_each_present_cpu(i) {
+		ret = power_cpu_prepare(i);
+		if (ret) {
+			/* Unwind on [0 ... i-1] CPUs. */
+			while (i--)
+				power_cpu_kfree(i);
+			goto out;
+		}
+		ret = power_cpu_init(i);
+		if (ret) {
+			/* Unwind on [0 ... i] CPUs. */
+			while (i >= 0)
+				power_cpu_kfree(i--);
+			goto out;
+		}
+	}
+
+	__perf_cpu_notifier(power_cpu_notifier);
+
+	ret = perf_pmu_register(&pmu_class, "power", -1);
+	if (WARN_ON(ret)) {
+		pr_warn("AMD Power PMU registration failed\n");
+		goto out;
+	}
+
+	pr_info("AMD Power PMU detected, %d compute units\n", cu_num);
+
+out:
+	cpu_notifier_register_done();
+
+	return ret;
+}
+device_initcall(amd_power_pmu_init);
-- 
2.3.5


-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.


* Re: [PATCH] perf/x86/amd/power: Add AMD accumulated power reporting
  2016-01-28  9:03 ` Borislav Petkov
@ 2016-01-28  9:39   ` kbuild test robot
  2016-01-28  9:41   ` kbuild test robot
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: kbuild test robot @ 2016-01-28  9:39 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: kbuild-all, Huang Rui, Borislav Petkov, Peter Zijlstra,
	Ingo Molnar, Andy Lutomirski, Thomas Gleixner, Robert Richter,
	Jacob Shin, John Stultz, Frédéric Weisbecker,
	linux-kernel, spg_linux_kernel, x86, Guenter Roeck,
	Andreas Herrmann, Suravee Suthikulpanit, Aravind Gopalakrishnan,
	Fengguang Wu, Aaron Lu


Hi Borislav,

[auto build test ERROR on tip/x86/core]
[also build test ERROR on v4.5-rc1 next-20160128]
[if your patch is applied to the wrong git tree, please drop us a note to help improving the system]

url:    https://github.com/0day-ci/linux/commits/Borislav-Petkov/perf-x86-amd-power-Add-AMD-accumulated-power-reporting/20160128-170527
config: i386-randconfig-a0-01271607 (attached as .config)
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All errors (new ones prefixed by >>):

   In file included from include/linux/kobject.h:21:0,
                    from include/linux/module.h:17,
                    from arch/x86/kernel/cpu/perf_event_amd_power.c:13:
   arch/x86/kernel/cpu/perf_event.h:660:31: error: 'events_sysfs_show' undeclared here (not in a function)
     .attr  = __ATTR(_name, 0444, events_sysfs_show, NULL), \
                                  ^
   include/linux/sysfs.h:93:10: note: in definition of macro '__ATTR'
     .show = _show,      \
             ^
   arch/x86/kernel/cpu/perf_event_amd_power.c:236:1: note: in expansion of macro 'EVENT_ATTR_STR'
    EVENT_ATTR_STR(power-pkg, power_pkg, "event=0x01");
    ^
   In file included from arch/x86/include/asm/alternative.h:158:0,
                    from arch/x86/include/asm/bitops.h:16,
                    from include/linux/bitops.h:36,
                    from include/linux/kernel.h:10,
                    from include/linux/list.h:8,
                    from include/linux/module.h:9,
                    from arch/x86/kernel/cpu/perf_event_amd_power.c:13:
   arch/x86/kernel/cpu/perf_event_amd_power.c: In function 'amd_power_pmu_init':
>> arch/x86/kernel/cpu/perf_event_amd_power.c:432:20: error: 'X86_FEATURE_ACC_POWER' undeclared (first use in this function)
     if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
                       ^
   arch/x86/include/asm/cpufeature.h:319:24: note: in definition of macro 'cpu_has'
     (__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \
                           ^
   arch/x86/kernel/cpu/perf_event_amd_power.c:432:7: note: in expansion of macro 'boot_cpu_has'
     if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
          ^
   arch/x86/kernel/cpu/perf_event_amd_power.c:432:20: note: each undeclared identifier is reported only once for each function it appears in
     if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
                       ^
   arch/x86/include/asm/cpufeature.h:319:24: note: in definition of macro 'cpu_has'
     (__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \
                           ^
   arch/x86/kernel/cpu/perf_event_amd_power.c:432:7: note: in expansion of macro 'boot_cpu_has'
     if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
          ^
>> arch/x86/kernel/cpu/perf_event_amd_power.c:435:17: error: implicit declaration of function 'amd_get_cores_per_cu' [-Werror=implicit-function-declaration]
     cores_per_cu = amd_get_cores_per_cu();
                    ^
   cc1: some warnings being treated as errors

vim +/X86_FEATURE_ACC_POWER +432 arch/x86/kernel/cpu/perf_event_amd_power.c

   426		int i, ret;
   427		u64 tmp;
   428	
   429		if (!x86_match_cpu(cpu_match))
   430			return 0;
   431	
 > 432		if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
   433			return -ENODEV;
   434	
 > 435		cores_per_cu = amd_get_cores_per_cu();
   436		cu_num = boot_cpu_data.x86_max_cores / cores_per_cu;
   437	
   438		if (WARN_ON_ONCE(cu_num > MAX_CUS))

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation



* Re: [PATCH] perf/x86/amd/power: Add AMD accumulated power reporting
  2016-01-28  9:03 ` Borislav Petkov
  2016-01-28  9:39   ` [PATCH] perf/x86/amd/power: Add AMD accumulated power reporting kbuild test robot
@ 2016-01-28  9:41   ` kbuild test robot
  2016-01-28 10:01   ` [PATCH v4] perf/x86/amd/power: Add AMD accumulated power reporting mechanism Huang Rui
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: kbuild test robot @ 2016-01-28  9:41 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: kbuild-all, Huang Rui, Borislav Petkov, Peter Zijlstra,
	Ingo Molnar, Andy Lutomirski, Thomas Gleixner, Robert Richter,
	Jacob Shin, John Stultz, Frédéric Weisbecker,
	linux-kernel, spg_linux_kernel, x86, Guenter Roeck,
	Andreas Herrmann, Suravee Suthikulpanit, Aravind Gopalakrishnan,
	Fengguang Wu, Aaron Lu


Hi Borislav,

[auto build test WARNING on tip/x86/core]
[also build test WARNING on v4.5-rc1 next-20160128]
[if your patch is applied to the wrong git tree, please drop us a note to help improving the system]

url:    https://github.com/0day-ci/linux/commits/Borislav-Petkov/perf-x86-amd-power-Add-AMD-accumulated-power-reporting/20160128-170527
config: i386-randconfig-s0-201604 (attached as .config)
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All warnings (new ones prefixed by >>):

   In file included from arch/x86/include/asm/alternative.h:158:0,
                    from arch/x86/include/asm/bitops.h:16,
                    from include/linux/bitops.h:36,
                    from include/linux/kernel.h:10,
                    from include/linux/list.h:8,
                    from include/linux/module.h:9,
                    from arch/x86/kernel/cpu/perf_event_amd_power.c:13:
   arch/x86/kernel/cpu/perf_event_amd_power.c: In function 'amd_power_pmu_init':
   arch/x86/kernel/cpu/perf_event_amd_power.c:432:20: error: 'X86_FEATURE_ACC_POWER' undeclared (first use in this function)
     if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
                       ^
   arch/x86/include/asm/cpufeature.h:319:24: note: in definition of macro 'cpu_has'
     (__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \
                           ^
>> arch/x86/kernel/cpu/perf_event_amd_power.c:432:7: note: in expansion of macro 'boot_cpu_has'
     if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
          ^
   arch/x86/kernel/cpu/perf_event_amd_power.c:432:20: note: each undeclared identifier is reported only once for each function it appears in
     if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
                       ^
   arch/x86/include/asm/cpufeature.h:319:24: note: in definition of macro 'cpu_has'
     (__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \
                           ^
>> arch/x86/kernel/cpu/perf_event_amd_power.c:432:7: note: in expansion of macro 'boot_cpu_has'
     if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
          ^
   arch/x86/kernel/cpu/perf_event_amd_power.c:435:17: error: implicit declaration of function 'amd_get_cores_per_cu' [-Werror=implicit-function-declaration]
     cores_per_cu = amd_get_cores_per_cu();
                    ^
   cc1: some warnings being treated as errors

vim +/boot_cpu_has +432 arch/x86/kernel/cpu/perf_event_amd_power.c

   416		return NOTIFY_OK;
   417	}
   418	
   419	static const struct x86_cpu_id cpu_match[] = {
   420		{ .vendor = X86_VENDOR_AMD, .family = 0x15 },
   421		{},
   422	};
   423	
   424	static int __init amd_power_pmu_init(void)
   425	{
   426		int i, ret;
   427		u64 tmp;
   428	
   429		if (!x86_match_cpu(cpu_match))
   430			return 0;
   431	
 > 432		if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
   433			return -ENODEV;
   434	
   435		cores_per_cu = amd_get_cores_per_cu();
   436		cu_num = boot_cpu_data.x86_max_cores / cores_per_cu;
   437	
   438		if (WARN_ON_ONCE(cu_num > MAX_CUS))
   439			return -EINVAL;
   440	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation



* Re: [PATCH v4] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-01-28  9:03 ` Borislav Petkov
  2016-01-28  9:39   ` [PATCH] perf/x86/amd/power: Add AMD accumulated power reporting kbuild test robot
  2016-01-28  9:41   ` kbuild test robot
@ 2016-01-28 10:01   ` Huang Rui
  2016-01-28 12:42     ` Borislav Petkov
  2016-01-28 10:04   ` [PATCH] perf/x86/amd/power: Add AMD accumulated power reporting kbuild test robot
  2016-01-28 15:28   ` [PATCH v4] perf/x86/amd/power: Add AMD accumulated power reporting mechanism Peter Zijlstra
  4 siblings, 1 reply; 10+ messages in thread
From: Huang Rui @ 2016-01-28 10:01 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Borislav Petkov, Peter Zijlstra, Ingo Molnar, Andy Lutomirski,
	Thomas Gleixner, Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker, linux-kernel, spg_linux_kernel,
	x86, Guenter Roeck, Andreas Herrmann, Suravee Suthikulpanit,
	Aravind Gopalakrishnan, Fengguang Wu, Aaron Lu

On Thu, Jan 28, 2016 at 10:03:15AM +0100, Borislav Petkov wrote:
> On Thu, Jan 28, 2016 at 02:38:51PM +0800, Huang Rui wrote:
> > Introduce an AMD accumulated power reporting mechanism for Carrizo
> > (Family 15h, Model 60h) processor that should be used to calculate the
> > average power consumed by a processor during a measurement interval.
> > The feature of accumulated power mechanism is indicated by CPUID
> > Fn8000_0007_EDX[12].
> 
> ...
> 
> > Suggested-by: Peter Zijlstra <peterz@infradead.org>
> > Suggested-by: Ingo Molnar <mingo@kernel.org>
> > Suggested-by: Borislav Petkov <bp@suse.de>
> > Signed-off-by: Huang Rui <ray.huang@amd.com>
> > Cc: Guenter Roeck <linux@roeck-us.net>
> 
> ...
> 
> > +static int power_cpu_exit(int cpu)
> > +{
> > +	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
> > +	int ret = 0;
> > +	int target = nr_cpumask_bits;
> > +
> > +	cpumask_copy(pmu->mask, topology_sibling_cpumask(cpu));
> > +
> > +	cpumask_clear_cpu(cpu, &cpu_mask);
> > +	cpumask_clear_cpu(cpu, pmu->mask);
> > +
> > +	if (!cpumask_and(pmu->tmp_mask, pmu->mask, cpu_online_mask))
> > +		goto out;
> > +
> > +	/*
> > +	 * Find a new CPU on same compute unit, if was set in cpumask
> > +	 * and still some CPUs on compute unit. Then move on to the
> > +	 * new CPU.
> > +	 */
> 
> Uuh, boy, I *think* I know what you mean here but I'm not sure. Please
> explain.
> 

OK, ;-)

For example: Carrizo has four CPU cores and two compute units (CUs).
CPU0 and CPU1 belong to CU0, CPU2 and CPU3 belong to CU1.

At normal initialization, cpu_mask should be "0,2". That means the OS
chooses CPU0 in CU0 and CPU2 in CU1 to measure CU0's and CU1's power
consumption. If CPU2 is taken offline at runtime, the OS needs to find
another CPU in the same compute unit (here CU1, so only CPU3 can be
picked). The OS then moves on to CPU3 to measure CU1's power
consumption instead of CPU2.

Thanks,
Rui

> Anyway, I went through it and fixed a bunch of minor issues like
> comments formatting and formulation, code formatting and text
> streamlining. I couldn't spot anything strange except that MAX_CUS thing
> which I think we should remove completely.
> 
> The meat of the code is for Peter/Ingo to review though.
> 
> Please use this version below for future changes:
> 
> ---
> From: Huang Rui <ray.huang@amd.com>
> Date: Thu, 28 Jan 2016 14:38:51 +0800
> Subject: [PATCH] perf/x86/amd/power: Add AMD accumulated power reporting
>  mechanism
> MIME-Version: 1.0
> Content-Type: text/plain; charset=UTF-8
> Content-Transfer-Encoding: 8bit
> 
> Introduce an AMD accumulated power reporting mechanism for the Carrizo
> (Family 15h, Model 60h) processor that can be used to calculate the
> average power consumed by a processor during a measurement interval. The
> feature support is indicated by CPUID Fn8000_0007_EDX[12].
> 
> This feature will be implemented both in hwmon and perf. The current
> design provides one event to report per package/processor power
> consumption by counting each compute unit power value.
> 
> Here the gory details of how the computation is done:
> 
> ---------------------------------------------------------------------
> * Tsample: compute unit power accumulator sample period
> * Tref: the PTSC counter period (PTSC: performance timestamp counter)
> * N: the ratio of compute unit power accumulator sample period to the
>   PTSC period
> 
> * Jmax: max compute unit accumulated power which is indicated by
>   MSR_C001007b[MaxCpuSwPwrAcc]
> 
> * Jx/Jy: compute unit accumulated power which is indicated by
>   MSR_C001007a[CpuSwPwrAcc]
> 
> * Tx/Ty: the value of performance timestamp counter which is indicated
>   by CU_PTSC MSR_C0010280[PTSC]
> * PwrCPUave: CPU average power
> 
> i. Determine the ratio of Tsample to Tref by executing CPUID Fn8000_0007.
> 	N = value of CPUID Fn8000_0007_ECX[CpuPwrSampleTimeRatio[15:0]].
> 
> ii. Read the full range of the cumulative energy value from the new
>     MSR MaxCpuSwPwrAcc.
> 	Jmax = value returned.
> 
> iii. At time x, software reads CpuSwPwrAcc and samples the PTSC.
> 	Jx = value read from CpuSwPwrAcc and Tx = value read from PTSC.
> 
> iv. At time y, software reads CpuSwPwrAcc and samples the PTSC.
> 	Jy = value read from CpuSwPwrAcc and Ty = value read from PTSC.
> 
> v. Calculate the average power consumption for a compute unit over
> time period (y-x). Unit of result is uWatt:
> 
> 	if (Jy < Jx) // Rollover has occurred
> 		Jdelta = (Jy + Jmax) - Jx
> 	else
> 		Jdelta = Jy - Jx
> 	PwrCPUave = N * Jdelta * 1000 / (Ty - Tx)
> ----------------------------------------------------------------------
> 
> Simple example:
> 
>   root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' make -j4
>     CHK     include/config/kernel.release
>     CHK     include/generated/uapi/linux/version.h
>     CHK     include/generated/utsrelease.h
>     CHK     include/generated/timeconst.h
>     CHK     include/generated/bounds.h
>     CHK     include/generated/asm-offsets.h
>     CALL    scripts/checksyscalls.sh
>     CHK     include/generated/compile.h
>     SKIPPED include/generated/compile.h
>     Building modules, stage 2.
>   Kernel: arch/x86/boot/bzImage is ready  (#40)
>     MODPOST 4225 modules
> 
>    Performance counter stats for 'system wide':
> 
>               183.44 mWatts power/power-pkg/
> 
>        341.837270111 seconds time elapsed
> 
>   root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' sleep 10
> 
>    Performance counter stats for 'system wide':
> 
>                 0.18 mWatts power/power-pkg/
> 
>         10.012551815 seconds time elapsed
> 
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Suggested-by: Ingo Molnar <mingo@kernel.org>
> Suggested-by: Borislav Petkov <bp@suse.de>
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> Cc: Aaron Lu <aaron.lu@intel.com>
> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> Cc: Andreas Herrmann <herrmann.der.user@googlemail.com>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> Cc: Fengguang Wu <fengguang.wu@intel.com>
> Cc: Frédéric Weisbecker <fweisbec@gmail.com>
> Cc: Guenter Roeck <linux@roeck-us.net>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Jacob Shin <jacob.w.shin@gmail.com>
> Cc: John Stultz <john.stultz@linaro.org>
> Cc: Kan Liang <kan.liang@intel.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Robert Richter <rric@kernel.org>
> Cc: spg_linux_kernel@amd.com
> Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: x86-ml <x86@kernel.org>
> Link: http://lkml.kernel.org/r/20150831160622.GA29830@nazgul.tnic
> Link: http://lkml.kernel.org/r/1453963131-2013-1-git-send-email-ray.huang@amd.com
> Signed-off-by: Borislav Petkov <bp@suse.de>
> ---
>  arch/x86/kernel/cpu/Makefile               |   1 +
>  arch/x86/kernel/cpu/perf_event_amd_power.c | 489 +++++++++++++++++++++++++++++
>  2 files changed, 490 insertions(+)
>  create mode 100644 arch/x86/kernel/cpu/perf_event_amd_power.c
> 
> diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
> index faa7b5204129..ffc96503d610 100644
> --- a/arch/x86/kernel/cpu/Makefile
> +++ b/arch/x86/kernel/cpu/Makefile
> @@ -34,6 +34,7 @@ obj-$(CONFIG_PERF_EVENTS)		+= perf_event.o
>  
>  ifdef CONFIG_PERF_EVENTS
>  obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd.o perf_event_amd_uncore.o
> +obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd_power.o
>  ifdef CONFIG_AMD_IOMMU
>  obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd_iommu.o
>  endif
> diff --git a/arch/x86/kernel/cpu/perf_event_amd_power.c b/arch/x86/kernel/cpu/perf_event_amd_power.c
> new file mode 100644
> index 000000000000..ff6893620828
> --- /dev/null
> +++ b/arch/x86/kernel/cpu/perf_event_amd_power.c
> @@ -0,0 +1,489 @@
> +/*
> + * Performance events - AMD Processor Power Reporting Mechanism
> + *
> + * Copyright (C) 2016 Advanced Micro Devices, Inc.
> + *
> + * Author: Huang Rui <ray.huang@amd.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/perf_event.h>
> +#include <asm/cpu_device_id.h>
> +#include "perf_event.h"
> +
> +#define MSR_F15H_CU_PWR_ACCUMULATOR     0xc001007a
> +#define MSR_F15H_CU_MAX_PWR_ACCUMULATOR 0xc001007b
> +#define MSR_F15H_PTSC			0xc0010280
> +
> +/* Event code: LSB 8 bits, passed in attr->config any other bit is reserved. */
> +#define AMD_POWER_EVENT_MASK	0xFFULL
> +
> +#define MAX_CUS	8
> +
> +/*
> + * Accumulated power status counters.
> + */
> +#define AMD_POWER_PKG_ID		0
> +#define AMD_POWER_EVENTSEL_PKG		1
> +
> +/*
> + * The ratio of compute unit power accumulator sample period to the
> + * PTSC period.
> + */
> +static unsigned int cpu_pwr_sample_ratio;
> +static unsigned int cores_per_cu;
> +static unsigned int cu_num;
> +
> +/* Maximum accumulated power of a compute unit. */
> +static u64 max_cu_acc_power;
> +
> +struct power_pmu {
> +	raw_spinlock_t		lock;
> +	struct pmu		*pmu;
> +	local64_t		cpu_sw_pwr_ptsc;
> +
> +	/*
> +	 * These two cpumasks are used for avoiding the allocations on the
> +	 * CPU_STARTING phase because power_cpu_prepare() will be called with
> +	 * IRQs disabled.
> +	 */
> +	cpumask_var_t		mask;
> +	cpumask_var_t		tmp_mask;
> +};
> +
> +static struct pmu pmu_class;
> +
> +/*
> + * Accumulated power represents the sum of each compute unit's (CU) power
> + * consumption. On any core of each CU we read the total accumulated power from
> + * MSR_F15H_CU_PWR_ACCUMULATOR. cpu_mask represents CPU bit map of all cores
> + * which are picked to measure the power for the CUs they belong to.
> + */
> +static cpumask_t cpu_mask;
> +
> +static DEFINE_PER_CPU(struct power_pmu *, amd_power_pmu);
> +
> +static u64 event_update(struct perf_event *event, struct power_pmu *pmu)
> +{
> +	struct hw_perf_event *hwc = &event->hw;
> +	u64 prev_raw_count, new_raw_count, prev_ptsc, new_ptsc;
> +	u64 delta, tdelta;
> +
> +again:
> +	prev_raw_count = local64_read(&hwc->prev_count);
> +	prev_ptsc = local64_read(&pmu->cpu_sw_pwr_ptsc);
> +	rdmsrl(event->hw.event_base, new_raw_count);
> +	rdmsrl(MSR_F15H_PTSC, new_ptsc);
> +
> +	if (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
> +			    new_raw_count) != prev_raw_count) {
> +		cpu_relax();
> +		goto again;
> +	}
> +
> +	/*
> +	 * Calculate the CU power consumption over a time period, the unit of
> +	 * final value (delta) is micro-Watts. Then add it to the event count.
> +	 */
> +	if (new_raw_count < prev_raw_count) {
> +		delta = max_cu_acc_power + new_raw_count;
> +		delta -= prev_raw_count;
> +	} else
> +		delta = new_raw_count - prev_raw_count;
> +
> +	delta *= cpu_pwr_sample_ratio * 1000;
> +	tdelta = new_ptsc - prev_ptsc;
> +
> +	do_div(delta, tdelta);
> +	local64_add(delta, &event->count);
> +
> +	return new_raw_count;
> +}
> +
> +static void __pmu_event_start(struct power_pmu *pmu, struct perf_event *event)
> +{
> +	u64 ptsc, counts;
> +
> +	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
> +		return;
> +
> +	event->hw.state = 0;
> +
> +	rdmsrl(MSR_F15H_PTSC, ptsc);
> +	local64_set(&pmu->cpu_sw_pwr_ptsc, ptsc);
> +	rdmsrl(event->hw.event_base, counts);
> +	local64_set(&event->hw.prev_count, counts);
> +}
> +
> +static void pmu_event_start(struct perf_event *event, int mode)
> +{
> +	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
> +
> +	raw_spin_lock(&pmu->lock);
> +	__pmu_event_start(pmu, event);
> +	raw_spin_unlock(&pmu->lock);
> +}
> +
> +static void pmu_event_stop(struct perf_event *event, int mode)
> +{
> +	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
> +	struct hw_perf_event *hwc = &event->hw;
> +
> +	raw_spin_lock(&pmu->lock);
> +
> +	/* Mark event as deactivated and stopped. */
> +	if (!(hwc->state & PERF_HES_STOPPED))
> +		hwc->state |= PERF_HES_STOPPED;
> +
> +	/* Check if software counter update is necessary. */
> +	if ((mode & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
> +		/*
> +		 * Drain the remaining delta count out of an event
> +		 * that we are disabling:
> +		 */
> +		event_update(event, pmu);
> +		hwc->state |= PERF_HES_UPTODATE;
> +	}
> +
> +	raw_spin_unlock(&pmu->lock);
> +}
> +
> +static int pmu_event_add(struct perf_event *event, int mode)
> +{
> +	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
> +	struct hw_perf_event *hwc = &event->hw;
> +
> +	raw_spin_lock(&pmu->lock);
> +
> +	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
> +
> +	if (mode & PERF_EF_START)
> +		__pmu_event_start(pmu, event);
> +
> +	raw_spin_unlock(&pmu->lock);
> +
> +	return 0;
> +}
> +
> +static void pmu_event_del(struct perf_event *event, int flags)
> +{
> +	pmu_event_stop(event, PERF_EF_UPDATE);
> +}
> +
> +static int pmu_event_init(struct perf_event *event)
> +{
> +	u64 cfg = event->attr.config & AMD_POWER_EVENT_MASK;
> +	int ret = 0;
> +
> +	/* Only look at AMD power events. */
> +	if (event->attr.type != pmu_class.type)
> +		return -ENOENT;
> +
> +	/* Unsupported modes and filters. */
> +	if (event->attr.exclude_user   ||
> +	    event->attr.exclude_kernel ||
> +	    event->attr.exclude_hv     ||
> +	    event->attr.exclude_idle   ||
> +	    event->attr.exclude_host   ||
> +	    event->attr.exclude_guest  ||
> +	    /* no sampling */
> +	    event->attr.sample_period)
> +		return -EINVAL;
> +
> +	if (cfg != AMD_POWER_EVENTSEL_PKG)
> +		return -EINVAL;
> +
> +	event->hw.event_base = MSR_F15H_CU_PWR_ACCUMULATOR;
> +	event->hw.config = cfg;
> +	event->hw.idx = AMD_POWER_PKG_ID;
> +
> +	return ret;
> +}
> +
> +static void pmu_event_read(struct perf_event *event)
> +{
> +	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
> +
> +	event_update(event, pmu);
> +}
> +
> +static ssize_t
> +get_attr_cpumask(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> +	return cpumap_print_to_pagebuf(true, buf, &cpu_mask);
> +}
> +
> +static DEVICE_ATTR(cpumask, S_IRUGO, get_attr_cpumask, NULL);
> +
> +static struct attribute *pmu_attrs[] = {
> +	&dev_attr_cpumask.attr,
> +	NULL,
> +};
> +
> +static struct attribute_group pmu_attr_group = {
> +	.attrs = pmu_attrs,
> +};
> +
> +/*
> + * Currently it only supports reporting the power of each
> + * processor/package.
> + */
> +EVENT_ATTR_STR(power-pkg, power_pkg, "event=0x01");
> +
> +EVENT_ATTR_STR(power-pkg.unit, power_pkg_unit, "mWatts");
> +
> +/* Convert the count from micro-Watts to milli-Watts. */
> +EVENT_ATTR_STR(power-pkg.scale, power_pkg_scale, "1.000000e-3");
> +
> +
> +static struct attribute *events_attr[] = {
> +	EVENT_PTR(power_pkg),
> +	EVENT_PTR(power_pkg_unit),
> +	EVENT_PTR(power_pkg_scale),
> +	NULL,
> +};
> +
> +static struct attribute_group pmu_events_group = {
> +	.name	= "events",
> +	.attrs	= events_attr,
> +};
> +
> +PMU_FORMAT_ATTR(event, "config:0-7");
> +
> +static struct attribute *formats_attr[] = {
> +	&format_attr_event.attr,
> +	NULL,
> +};
> +
> +static struct attribute_group pmu_format_group = {
> +	.name	= "format",
> +	.attrs	= formats_attr,
> +};
> +
> +static const struct attribute_group *attr_groups[] = {
> +	&pmu_attr_group,
> +	&pmu_format_group,
> +	&pmu_events_group,
> +	NULL,
> +};
> +
> +static struct pmu pmu_class = {
> +	.attr_groups	= attr_groups,
> +	/* system-wide only */
> +	.task_ctx_nr	= perf_invalid_context,
> +	.event_init	= pmu_event_init,
> +	.add		= pmu_event_add,
> +	.del		= pmu_event_del,
> +	.start		= pmu_event_start,
> +	.stop		= pmu_event_stop,
> +	.read		= pmu_event_read,
> +};
> +
> +static int power_cpu_exit(int cpu)
> +{
> +	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
> +	int target = nr_cpumask_bits;
> +	int ret = 0;
> +
> +	cpumask_copy(pmu->mask, topology_sibling_cpumask(cpu));
> +
> +	cpumask_clear_cpu(cpu, &cpu_mask);
> +	cpumask_clear_cpu(cpu, pmu->mask);
> +
> +	if (!cpumask_and(pmu->tmp_mask, pmu->mask, cpu_online_mask))
> +		goto out;
> +
> +	/*
> +	 * If this CPU was the designated reader for its compute unit and
> +	 * other CPUs of that unit are still online, pick one of them to
> +	 * take over the measurement.
> +	 */
> +	target = cpumask_any(pmu->tmp_mask);
> +	if (target < nr_cpumask_bits && target != cpu)
> +		cpumask_set_cpu(target, &cpu_mask);
> +
> +	WARN_ON(cpumask_empty(&cpu_mask));
> +
> +out:
> +	/*
> +	 * Migrate event and context to new CPU.
> +	 */
> +	if (target < nr_cpumask_bits)
> +		perf_pmu_migrate_context(pmu->pmu, cpu, target);
> +
> +	return ret;
> +
> +}
> +
> +static int power_cpu_init(int cpu)
> +{
> +	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
> +
> +	if (!pmu)
> +		return 0;
> +
> +	if (!cpumask_and(pmu->mask, topology_sibling_cpumask(cpu), &cpu_mask))
> +		cpumask_set_cpu(cpu, &cpu_mask);
> +
> +	return 0;
> +}
> +
> +static int power_cpu_prepare(int cpu)
> +{
> +	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
> +	int phys_id = topology_physical_package_id(cpu);
> +	int ret = 0;
> +
> +	if (pmu)
> +		return 0;
> +
> +	if (phys_id < 0)
> +		return -EINVAL;
> +
> +	pmu = kzalloc_node(sizeof(*pmu), GFP_KERNEL, cpu_to_node(cpu));
> +	if (!pmu)
> +		return -ENOMEM;
> +
> +	if (!zalloc_cpumask_var(&pmu->mask, GFP_KERNEL)) {
> +		ret = -ENOMEM;
> +		goto out;
> +	}
> +
> +	if (!zalloc_cpumask_var(&pmu->tmp_mask, GFP_KERNEL)) {
> +		ret = -ENOMEM;
> +		goto out1;
> +	}
> +
> +	raw_spin_lock_init(&pmu->lock);
> +
> +	pmu->pmu = &pmu_class;
> +
> +	per_cpu(amd_power_pmu, cpu) = pmu;
> +
> +	return 0;
> +
> +out1:
> +	free_cpumask_var(pmu->mask);
> +out:
> +	kfree(pmu);
> +
> +	return ret;
> +}
> +
> +static void power_cpu_kfree(int cpu)
> +{
> +	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
> +
> +	if (!pmu)
> +		return;
> +
> +	free_cpumask_var(pmu->mask);
> +	free_cpumask_var(pmu->tmp_mask);
> +	kfree(pmu);
> +
> +	per_cpu(amd_power_pmu, cpu) = NULL;
> +}
> +
> +static int
> +power_cpu_notifier(struct notifier_block *self, unsigned long action, void *hcpu)
> +{
> +	unsigned int cpu = (long)hcpu;
> +
> +	switch (action & ~CPU_TASKS_FROZEN) {
> +	case CPU_UP_PREPARE:
> +		if (power_cpu_prepare(cpu))
> +			return NOTIFY_BAD;
> +		break;
> +	case CPU_STARTING:
> +		if (power_cpu_init(cpu))
> +			return NOTIFY_BAD;
> +		break;
> +	case CPU_DEAD:
> +		power_cpu_kfree(cpu);
> +		break;
> +	case CPU_DOWN_PREPARE:
> +		if (power_cpu_exit(cpu))
> +			return NOTIFY_BAD;
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return NOTIFY_OK;
> +}
> +
> +static const struct x86_cpu_id cpu_match[] = {
> +	{ .vendor = X86_VENDOR_AMD, .family = 0x15 },
> +	{},
> +};
> +
> +static int __init amd_power_pmu_init(void)
> +{
> +	int i, ret;
> +	u64 tmp;
> +
> +	if (!x86_match_cpu(cpu_match))
> +		return 0;
> +
> +	if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
> +		return -ENODEV;
> +
> +	cores_per_cu = amd_get_cores_per_cu();
> +	cu_num = boot_cpu_data.x86_max_cores / cores_per_cu;
> +
> +	if (WARN_ON_ONCE(cu_num > MAX_CUS))
> +		return -EINVAL;
> +
> +	cpu_pwr_sample_ratio = cpuid_ecx(0x80000007);
> +
> +	if (rdmsrl_safe(MSR_F15H_CU_MAX_PWR_ACCUMULATOR, &tmp)) {
> +		pr_err("Failed to read max compute unit power accumulator MSR\n");
> +		return -ENODEV;
> +	}
> +	max_cu_acc_power = tmp;
> +
> +	cpu_notifier_register_begin();
> +
> +	/* Choose one online core of each compute unit.  */
> +	for (i = 0; i < boot_cpu_data.x86_max_cores; i += cores_per_cu) {
> +		WARN_ON(cpumask_empty(topology_sibling_cpumask(i)));
> +		cpumask_set_cpu(cpumask_any(topology_sibling_cpumask(i)), &cpu_mask);
> +	}
> +
> +	for_each_present_cpu(i) {
> +		ret = power_cpu_prepare(i);
> +		if (ret) {
> +			/* Unwind on [0 ... i-1] CPUs. */
> +			while (i--)
> +				power_cpu_kfree(i);
> +			goto out;
> +		}
> +		ret = power_cpu_init(i);
> +		if (ret) {
> +			/* Unwind on [0 ... i] CPUs. */
> +			while (i >= 0)
> +				power_cpu_kfree(i--);
> +			goto out;
> +		}
> +	}
> +
> +	__perf_cpu_notifier(power_cpu_notifier);
> +
> +	ret = perf_pmu_register(&pmu_class, "power", -1);
> +	if (WARN_ON(ret)) {
> +		pr_warn("AMD Power PMU registration failed\n");
> +		goto out;
> +	}
> +
> +	pr_info("AMD Power PMU detected, %d compute units\n", cu_num);
> +
> +out:
> +	cpu_notifier_register_done();
> +
> +	return ret;
> +}
> +device_initcall(amd_power_pmu_init);
> -- 
> 2.3.5
> 
> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> ECO tip #101: Trim your mails when you reply.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] perf/x86/amd/power: Add AMD accumulated power reporting
  2016-01-28  9:03 ` Borislav Petkov
                     ` (2 preceding siblings ...)
  2016-01-28 10:01   ` [PATCH v4] perf/x86/amd/power: Add AMD accumulated power reporting mechanism Huang Rui
@ 2016-01-28 10:04   ` kbuild test robot
  2016-01-28 15:28   ` [PATCH v4] perf/x86/amd/power: Add AMD accumulated power reporting mechanism Peter Zijlstra
  4 siblings, 0 replies; 10+ messages in thread
From: kbuild test robot @ 2016-01-28 10:04 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: kbuild-all, Huang Rui, Borislav Petkov, Peter Zijlstra,
	Ingo Molnar, Andy Lutomirski, Thomas Gleixner, Robert Richter,
	Jacob Shin, John Stultz, Frédéric Weisbecker,
	linux-kernel, spg_linux_kernel, x86, Guenter Roeck,
	Andreas Herrmann, Suravee Suthikulpanit, Aravind Gopalakrishnan,
	Fengguang Wu, Aaron Lu

[-- Attachment #1: Type: text/plain, Size: 3866 bytes --]

Hi Borislav,

[auto build test WARNING on tip/x86/core]
[also build test WARNING on v4.5-rc1 next-20160128]
[if your patch is applied to the wrong git tree, please drop us a note to help improving the system]

url:    https://github.com/0day-ci/linux/commits/Borislav-Petkov/perf-x86-amd-power-Add-AMD-accumulated-power-reporting/20160128-170527
config: x86_64-randconfig-i0-01270829 (attached as .config)
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

All warnings (new ones prefixed by >>):

   In file included from include/uapi/linux/stddef.h:1:0,
                    from include/linux/stddef.h:4,
                    from include/uapi/linux/posix_types.h:4,
                    from include/uapi/linux/types.h:13,
                    from include/linux/types.h:5,
                    from include/linux/list.h:4,
                    from include/linux/module.h:9,
                    from arch/x86/kernel/cpu/perf_event_amd_power.c:13:
   arch/x86/kernel/cpu/perf_event_amd_power.c: In function 'amd_power_pmu_init':
   arch/x86/kernel/cpu/perf_event_amd_power.c:432:20: error: 'X86_FEATURE_ACC_POWER' undeclared (first use in this function)
     if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
                       ^
   include/linux/compiler.h:147:28: note: in definition of macro '__trace_if'
     if (__builtin_constant_p((cond)) ? !!(cond) :   \
                               ^
>> arch/x86/kernel/cpu/perf_event_amd_power.c:432:2: note: in expansion of macro 'if'
     if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
     ^
   arch/x86/include/asm/cpufeature.h:338:27: note: in expansion of macro 'cpu_has'
    #define boot_cpu_has(bit) cpu_has(&boot_cpu_data, bit)
                              ^
   arch/x86/kernel/cpu/perf_event_amd_power.c:432:7: note: in expansion of macro 'boot_cpu_has'
     if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
          ^
   arch/x86/kernel/cpu/perf_event_amd_power.c:432:20: note: each undeclared identifier is reported only once for each function it appears in
     if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
                       ^
   include/linux/compiler.h:147:28: note: in definition of macro '__trace_if'
     if (__builtin_constant_p((cond)) ? !!(cond) :   \
                               ^
>> arch/x86/kernel/cpu/perf_event_amd_power.c:432:2: note: in expansion of macro 'if'
     if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
     ^
   arch/x86/include/asm/cpufeature.h:338:27: note: in expansion of macro 'cpu_has'
    #define boot_cpu_has(bit) cpu_has(&boot_cpu_data, bit)
                              ^
   arch/x86/kernel/cpu/perf_event_amd_power.c:432:7: note: in expansion of macro 'boot_cpu_has'
     if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
          ^
   arch/x86/kernel/cpu/perf_event_amd_power.c:435:17: error: implicit declaration of function 'amd_get_cores_per_cu' [-Werror=implicit-function-declaration]
     cores_per_cu = amd_get_cores_per_cu();
                    ^
   cc1: some warnings being treated as errors

vim +/if +432 arch/x86/kernel/cpu/perf_event_amd_power.c

   416		return NOTIFY_OK;
   417	}
   418	
   419	static const struct x86_cpu_id cpu_match[] = {
   420		{ .vendor = X86_VENDOR_AMD, .family = 0x15 },
   421		{},
   422	};
   423	
   424	static int __init amd_power_pmu_init(void)
   425	{
   426		int i, ret;
   427		u64 tmp;
   428	
   429		if (!x86_match_cpu(cpu_match))
   430			return 0;
   431	
 > 432		if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
   433			return -ENODEV;
   434	
   435		cores_per_cu = amd_get_cores_per_cu();
   436		cu_num = boot_cpu_data.x86_max_cores / cores_per_cu;
   437	
   438		if (WARN_ON_ONCE(cu_num > MAX_CUS))
   439			return -EINVAL;
   440	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 20607 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v4] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-01-28 10:01   ` [PATCH v4] perf/x86/amd/power: Add AMD accumulated power reporting mechanism Huang Rui
@ 2016-01-28 12:42     ` Borislav Petkov
  2016-01-28 14:54       ` Borislav Petkov
  0 siblings, 1 reply; 10+ messages in thread
From: Borislav Petkov @ 2016-01-28 12:42 UTC (permalink / raw)
  To: Huang Rui
  Cc: Peter Zijlstra, Ingo Molnar, Andy Lutomirski, Thomas Gleixner,
	Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker, linux-kernel, spg_linux_kernel,
	x86, Guenter Roeck, Andreas Herrmann, Suravee Suthikulpanit,
	Aravind Gopalakrishnan, Fengguang Wu, Aaron Lu

On Thu, Jan 28, 2016 at 06:01:43PM +0800, Huang Rui wrote:
> For example: Carrizo has four CPU cores and two compute units (CUs).
> CPU0 and CPU1 belongs to CU0, CPU2 and CPU3 belongs to CU1.
> 
> At normal initialization, cpu_mask should be "0,2". That means OS
> choose CPU0 in CU0 and CPU2 in CU1 to measure the CU0 and CU1's power
> consumption. If we make the CPU2 offline at runtime, OS need try to
> find another CPU in same compute unit (Here is CU1, only CPU3 can be
> picked). Then OS will move on to the CPU3 to measure CU1's power
> consumption instead of CPU2.

So basically you want to simply say:

"Find another CPU on the same compute unit and set it in the mask of
CPUs on which we do the measurements."

Which reminds me: that cpu_mask thing is insufficiently named - it
should be called measuring_cpus_mask or so.

Btw, the kbuild robot errors come from the fact that there are changes
to cpufeature.h which I didn't mention when applying your patches. So
I've pushed the whole pile here:

http://git.kernel.org/cgit/linux/kernel/git/bp/bp.git/log/?h=tip-perf

Please use that branch instead.

Thanks.

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v4] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-01-28 12:42     ` Borislav Petkov
@ 2016-01-28 14:54       ` Borislav Petkov
  0 siblings, 0 replies; 10+ messages in thread
From: Borislav Petkov @ 2016-01-28 14:54 UTC (permalink / raw)
  To: Huang Rui
  Cc: Peter Zijlstra, Ingo Molnar, Andy Lutomirski, Thomas Gleixner,
	Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker, linux-kernel, spg_linux_kernel,
	x86, Guenter Roeck, Andreas Herrmann, Suravee Suthikulpanit,
	Aravind Gopalakrishnan, Fengguang Wu, Aaron Lu

On Thu, Jan 28, 2016 at 01:42:09PM +0100, Borislav Petkov wrote:
> http://git.kernel.org/cgit/linux/kernel/git/bp/bp.git/log/?h=tip-perf
> 
> Please use that branch instead.

I'm pasting the next version here, hoping that the kbuild robot will be
able to parse this email and know *not* to build it directly but to use
the branch above instead :-)

Ok, next version, I killed the MAX_CUS and cu_num thing:

> #define MAX_CUS 8
>
>	...
>
>        cores_per_cu = amd_get_cores_per_cu();
>        cu_num = boot_cpu_data.x86_max_cores / cores_per_cu;
>
>        if (WARN_ON_ONCE(cu_num > MAX_CUS))
>                return -EINVAL;

as it is completely arbitrary. What's wrong with having a multi-socket
machine with more than 8 CUs in the whole fabric? Nothing, AFAICT.

---
From: Huang Rui <ray.huang@amd.com>
Date: Thu, 28 Jan 2016 14:38:51 +0800
Subject: [PATCH] perf/x86/amd/power: Add AMD accumulated power reporting
 mechanism
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Introduce an AMD accumulated power reporting mechanism for the Carrizo
(Family 15h, Model 60h) processor that can be used to calculate the
average power consumed by a processor during a measurement interval. The
feature support is indicated by CPUID Fn8000_0007_EDX[12].

This feature will be implemented both in hwmon and perf. The current
design provides one event to report per package/processor power
consumption by counting each compute unit power value.

Here the gory details of how the computation is done:

---------------------------------------------------------------------
* Tsample: compute unit power accumulator sample period
* Tref: the PTSC counter period (PTSC: performance timestamp counter)
* N: the ratio of compute unit power accumulator sample period to the
  PTSC period

* Jmax: max compute unit accumulated power which is indicated by
  MSR_C001007b[MaxCpuSwPwrAcc]

* Jx/Jy: compute unit accumulated power which is indicated by
  MSR_C001007a[CpuSwPwrAcc]

* Tx/Ty: the value of performance timestamp counter which is indicated
  by CU_PTSC MSR_C0010280[PTSC]
* PwrCPUave: CPU average power

i. Determine the ratio of Tsample to Tref by executing CPUID Fn8000_0007.
	N = value of CPUID Fn8000_0007_ECX[CpuPwrSampleTimeRatio[15:0]].

ii. Read the full range of the cumulative energy value from the new
    MSR MaxCpuSwPwrAcc.
	Jmax = value returned.

iii. At time x, software reads CpuSwPwrAcc and samples the PTSC.
	Jx = value read from CpuSwPwrAcc and Tx = value read from PTSC.

iv. At time y, software reads CpuSwPwrAcc and samples the PTSC.
	Jy = value read from CpuSwPwrAcc and Ty = value read from PTSC.

v. Calculate the average power consumption for a compute unit over
time period (y-x). Unit of result is uWatt:

	if (Jy < Jx) // Rollover has occurred
		Jdelta = (Jy + Jmax) - Jx
	else
		Jdelta = Jy - Jx
	PwrCPUave = N * Jdelta * 1000 / (Ty - Tx)
----------------------------------------------------------------------

Simple example:

  root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' make -j4
    CHK     include/config/kernel.release
    CHK     include/generated/uapi/linux/version.h
    CHK     include/generated/utsrelease.h
    CHK     include/generated/timeconst.h
    CHK     include/generated/bounds.h
    CHK     include/generated/asm-offsets.h
    CALL    scripts/checksyscalls.sh
    CHK     include/generated/compile.h
    SKIPPED include/generated/compile.h
    Building modules, stage 2.
  Kernel: arch/x86/boot/bzImage is ready  (#40)
    MODPOST 4225 modules

   Performance counter stats for 'system wide':

              183.44 mWatts power/power-pkg/

       341.837270111 seconds time elapsed

  root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' sleep 10

   Performance counter stats for 'system wide':

                0.18 mWatts power/power-pkg/

        10.012551815 seconds time elapsed

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Ingo Molnar <mingo@kernel.org>
Suggested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Cc: Aaron Lu <aaron.lu@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andreas Herrmann <herrmann.der.user@googlemail.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jacob Shin <jacob.w.shin@gmail.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Richter <rric@kernel.org>
Cc: spg_linux_kernel@amd.com
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Link: http://lkml.kernel.org/r/20150831160622.GA29830@nazgul.tnic
Link: http://lkml.kernel.org/r/1453963131-2013-1-git-send-email-ray.huang@amd.com
Signed-off-by: Borislav Petkov <bp@suse.de>
---
 arch/x86/kernel/cpu/Makefile               |   1 +
 arch/x86/kernel/cpu/perf_event_amd_power.c | 482 +++++++++++++++++++++++++++++
 2 files changed, 483 insertions(+)
 create mode 100644 arch/x86/kernel/cpu/perf_event_amd_power.c

diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index faa7b5204129..ffc96503d610 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -34,6 +34,7 @@ obj-$(CONFIG_PERF_EVENTS)		+= perf_event.o
 
 ifdef CONFIG_PERF_EVENTS
 obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd.o perf_event_amd_uncore.o
+obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd_power.o
 ifdef CONFIG_AMD_IOMMU
 obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd_iommu.o
 endif
diff --git a/arch/x86/kernel/cpu/perf_event_amd_power.c b/arch/x86/kernel/cpu/perf_event_amd_power.c
new file mode 100644
index 000000000000..b0df1594b095
--- /dev/null
+++ b/arch/x86/kernel/cpu/perf_event_amd_power.c
@@ -0,0 +1,482 @@
+/*
+ * Performance events - AMD Processor Power Reporting Mechanism
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Huang Rui <ray.huang@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/perf_event.h>
+#include <asm/cpu_device_id.h>
+#include "perf_event.h"
+
+#define MSR_F15H_CU_PWR_ACCUMULATOR     0xc001007a
+#define MSR_F15H_CU_MAX_PWR_ACCUMULATOR 0xc001007b
+#define MSR_F15H_PTSC			0xc0010280
+
+/* Event code: LSB 8 bits, passed in attr->config any other bit is reserved. */
+#define AMD_POWER_EVENT_MASK	0xFFULL
+
+/*
+ * Accumulated power status counters.
+ */
+#define AMD_POWER_PKG_ID		0
+#define AMD_POWER_EVENTSEL_PKG		1
+
+/*
+ * The ratio of compute unit power accumulator sample period to the
+ * PTSC period.
+ */
+static unsigned int cpu_pwr_sample_ratio;
+static unsigned int cores_per_cu;
+
+/* Maximum accumulated power of a compute unit. */
+static u64 max_cu_acc_power;
+
+struct power_pmu {
+	raw_spinlock_t		lock;
+	struct pmu		*pmu;
+	local64_t		cpu_sw_pwr_ptsc;
+
+	/*
> +	 * These two cpumasks are allocated in power_cpu_prepare() so that no
> +	 * allocation happens during the CPU_STARTING phase, which runs with
> +	 * IRQs disabled.
+	 */
+	cpumask_var_t		mask;
+	cpumask_var_t		tmp_mask;
+};
+
+static struct pmu pmu_class;
+
+/*
+ * Accumulated power represents the sum of each compute unit's (CU) power
+ * consumption. On any core of each CU we read the total accumulated power from
+ * MSR_F15H_CU_PWR_ACCUMULATOR. cpu_mask represents CPU bit map of all cores
+ * which are picked to measure the power for the CUs they belong to.
+ */
+static cpumask_t cpu_mask;
+
+static DEFINE_PER_CPU(struct power_pmu *, amd_power_pmu);
+
+static u64 event_update(struct perf_event *event, struct power_pmu *pmu)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	u64 prev_raw_count, new_raw_count, prev_ptsc, new_ptsc;
+	u64 delta, tdelta;
+
+again:
+	prev_raw_count = local64_read(&hwc->prev_count);
+	prev_ptsc = local64_read(&pmu->cpu_sw_pwr_ptsc);
+	rdmsrl(event->hw.event_base, new_raw_count);
+	rdmsrl(MSR_F15H_PTSC, new_ptsc);
+
+	if (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
+			    new_raw_count) != prev_raw_count) {
+		cpu_relax();
+		goto again;
+	}
+
+	/*
+	 * Calculate the CU power consumption over a time period, the unit of
+	 * final value (delta) is micro-Watts. Then add it to the event count.
+	 */
+	if (new_raw_count < prev_raw_count) {
+		delta = max_cu_acc_power + new_raw_count;
+		delta -= prev_raw_count;
+	} else
+		delta = new_raw_count - prev_raw_count;
+
+	delta *= cpu_pwr_sample_ratio * 1000;
+	tdelta = new_ptsc - prev_ptsc;
+
+	do_div(delta, tdelta);
+	local64_add(delta, &event->count);
+
+	return new_raw_count;
+}
+
+static void __pmu_event_start(struct power_pmu *pmu, struct perf_event *event)
+{
+	u64 ptsc, counts;
+
+	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
+		return;
+
+	event->hw.state = 0;
+
+	rdmsrl(MSR_F15H_PTSC, ptsc);
+	local64_set(&pmu->cpu_sw_pwr_ptsc, ptsc);
+	rdmsrl(event->hw.event_base, counts);
+	local64_set(&event->hw.prev_count, counts);
+}
+
+static void pmu_event_start(struct perf_event *event, int mode)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+
+	raw_spin_lock(&pmu->lock);
+	__pmu_event_start(pmu, event);
+	raw_spin_unlock(&pmu->lock);
+}
+
+static void pmu_event_stop(struct perf_event *event, int mode)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+	struct hw_perf_event *hwc = &event->hw;
+
+	raw_spin_lock(&pmu->lock);
+
+	/* Mark event as deactivated and stopped. */
+	if (!(hwc->state & PERF_HES_STOPPED))
+		hwc->state |= PERF_HES_STOPPED;
+
+	/* Check if software counter update is necessary. */
+	if ((mode & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
+		/*
+		 * Drain the remaining delta count out of an event
+		 * that we are disabling:
+		 */
+		event_update(event, pmu);
+		hwc->state |= PERF_HES_UPTODATE;
+	}
+
+	raw_spin_unlock(&pmu->lock);
+}
+
+static int pmu_event_add(struct perf_event *event, int mode)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+	struct hw_perf_event *hwc = &event->hw;
+
+	raw_spin_lock(&pmu->lock);
+
+	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+
+	if (mode & PERF_EF_START)
+		__pmu_event_start(pmu, event);
+
+	raw_spin_unlock(&pmu->lock);
+
+	return 0;
+}
+
+static void pmu_event_del(struct perf_event *event, int flags)
+{
+	pmu_event_stop(event, PERF_EF_UPDATE);
+}
+
+static int pmu_event_init(struct perf_event *event)
+{
+	u64 cfg = event->attr.config & AMD_POWER_EVENT_MASK;
+	int ret = 0;
+
+	/* Only look at AMD power events. */
+	if (event->attr.type != pmu_class.type)
+		return -ENOENT;
+
+	/* Unsupported modes and filters. */
+	if (event->attr.exclude_user   ||
+	    event->attr.exclude_kernel ||
+	    event->attr.exclude_hv     ||
+	    event->attr.exclude_idle   ||
+	    event->attr.exclude_host   ||
+	    event->attr.exclude_guest  ||
+	    /* no sampling */
+	    event->attr.sample_period)
+		return -EINVAL;
+
+	if (cfg != AMD_POWER_EVENTSEL_PKG)
+		return -EINVAL;
+
+	event->hw.event_base = MSR_F15H_CU_PWR_ACCUMULATOR;
+	event->hw.config = cfg;
+	event->hw.idx = AMD_POWER_PKG_ID;
+
+	return ret;
+}
+
+static void pmu_event_read(struct perf_event *event)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+
+	event_update(event, pmu);
+}
+
+static ssize_t
+get_attr_cpumask(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpumap_print_to_pagebuf(true, buf, &cpu_mask);
+}
+
+static DEVICE_ATTR(cpumask, S_IRUGO, get_attr_cpumask, NULL);
+
+static struct attribute *pmu_attrs[] = {
+	&dev_attr_cpumask.attr,
+	NULL,
+};
+
+static struct attribute_group pmu_attr_group = {
+	.attrs = pmu_attrs,
+};
+
+/*
> + * Currently, only reporting the power of each processor/package is
> + * supported.
+ */
+EVENT_ATTR_STR(power-pkg, power_pkg, "event=0x01");
+
+EVENT_ATTR_STR(power-pkg.unit, power_pkg_unit, "mWatts");
+
+/* Convert the count from micro-Watts to milli-Watts. */
+EVENT_ATTR_STR(power-pkg.scale, power_pkg_scale, "1.000000e-3");
+
+
+static struct attribute *events_attr[] = {
+	EVENT_PTR(power_pkg),
+	EVENT_PTR(power_pkg_unit),
+	EVENT_PTR(power_pkg_scale),
+	NULL,
+};
+
+static struct attribute_group pmu_events_group = {
+	.name	= "events",
+	.attrs	= events_attr,
+};
+
+PMU_FORMAT_ATTR(event, "config:0-7");
+
+static struct attribute *formats_attr[] = {
+	&format_attr_event.attr,
+	NULL,
+};
+
+static struct attribute_group pmu_format_group = {
+	.name	= "format",
+	.attrs	= formats_attr,
+};
+
+static const struct attribute_group *attr_groups[] = {
+	&pmu_attr_group,
+	&pmu_format_group,
+	&pmu_events_group,
+	NULL,
+};
+
+static struct pmu pmu_class = {
+	.attr_groups	= attr_groups,
+	/* system-wide only */
+	.task_ctx_nr	= perf_invalid_context,
+	.event_init	= pmu_event_init,
+	.add		= pmu_event_add,
+	.del		= pmu_event_del,
+	.start		= pmu_event_start,
+	.stop		= pmu_event_stop,
+	.read		= pmu_event_read,
+};
+
+static int power_cpu_exit(int cpu)
+{
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+	int target = nr_cpumask_bits;
+	int ret = 0;
+
+	cpumask_copy(pmu->mask, topology_sibling_cpumask(cpu));
+
+	cpumask_clear_cpu(cpu, &cpu_mask);
+	cpumask_clear_cpu(cpu, pmu->mask);
+
+	if (!cpumask_and(pmu->tmp_mask, pmu->mask, cpu_online_mask))
+		goto out;
+
+	/*
> +	 * Find a new CPU on the same compute unit if this one was set in
> +	 * cpu_mask and the unit still has online CPUs, then move the
> +	 * measurement over to it.
+	 */
+	target = cpumask_any(pmu->tmp_mask);
+	if (target < nr_cpumask_bits && target != cpu)
+		cpumask_set_cpu(target, &cpu_mask);
+
+	WARN_ON(cpumask_empty(&cpu_mask));
+
+out:
+	/*
+	 * Migrate event and context to new CPU.
+	 */
+	if (target < nr_cpumask_bits)
+		perf_pmu_migrate_context(pmu->pmu, cpu, target);
+
+	return ret;
+
+}
+
+static int power_cpu_init(int cpu)
+{
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+
+	if (!pmu)
+		return 0;
+
+	if (!cpumask_and(pmu->mask, topology_sibling_cpumask(cpu), &cpu_mask))
+		cpumask_set_cpu(cpu, &cpu_mask);
+
+	return 0;
+}
+
+static int power_cpu_prepare(int cpu)
+{
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+	int phys_id = topology_physical_package_id(cpu);
+	int ret = 0;
+
+	if (pmu)
+		return 0;
+
+	if (phys_id < 0)
+		return -EINVAL;
+
+	pmu = kzalloc_node(sizeof(*pmu), GFP_KERNEL, cpu_to_node(cpu));
+	if (!pmu)
+		return -ENOMEM;
+
+	if (!zalloc_cpumask_var(&pmu->mask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	if (!zalloc_cpumask_var(&pmu->tmp_mask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto out1;
+	}
+
+	raw_spin_lock_init(&pmu->lock);
+
+	pmu->pmu = &pmu_class;
+
+	per_cpu(amd_power_pmu, cpu) = pmu;
+
+	return 0;
+
+out1:
+	free_cpumask_var(pmu->mask);
+out:
+	kfree(pmu);
+
+	return ret;
+}
+
+static void power_cpu_kfree(int cpu)
+{
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+
+	if (!pmu)
+		return;
+
+	free_cpumask_var(pmu->mask);
+	free_cpumask_var(pmu->tmp_mask);
+	kfree(pmu);
+
+	per_cpu(amd_power_pmu, cpu) = NULL;
+}
+
+static int
+power_cpu_notifier(struct notifier_block *self, unsigned long action, void *hcpu)
+{
+	unsigned int cpu = (long)hcpu;
+
+	switch (action & ~CPU_TASKS_FROZEN) {
+	case CPU_UP_PREPARE:
+		if (power_cpu_prepare(cpu))
+			return NOTIFY_BAD;
+		break;
+	case CPU_STARTING:
+		if (power_cpu_init(cpu))
+			return NOTIFY_BAD;
+		break;
+	case CPU_DEAD:
+		power_cpu_kfree(cpu);
+		break;
+	case CPU_DOWN_PREPARE:
+		if (power_cpu_exit(cpu))
+			return NOTIFY_BAD;
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+
+static const struct x86_cpu_id cpu_match[] = {
+	{ .vendor = X86_VENDOR_AMD, .family = 0x15 },
+	{},
+};
+
+static int __init amd_power_pmu_init(void)
+{
+	int i, ret;
+	u64 tmp;
+
+	if (!x86_match_cpu(cpu_match))
+		return 0;
+
+	if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
+		return -ENODEV;
+
+	cores_per_cu = amd_get_cores_per_cu();
+
+	cpu_pwr_sample_ratio = cpuid_ecx(0x80000007);
+
+	if (rdmsrl_safe(MSR_F15H_CU_MAX_PWR_ACCUMULATOR, &tmp)) {
+		pr_err("Failed to read max compute unit power accumulator MSR\n");
+		return -ENODEV;
+	}
+	max_cu_acc_power = tmp;
+
+	cpu_notifier_register_begin();
+
+	/* Choose one online core of each compute unit.  */
+	for (i = 0; i < boot_cpu_data.x86_max_cores; i += cores_per_cu) {
+		WARN_ON(cpumask_empty(topology_sibling_cpumask(i)));
+		cpumask_set_cpu(cpumask_any(topology_sibling_cpumask(i)), &cpu_mask);
+	}
+
+	for_each_present_cpu(i) {
+		ret = power_cpu_prepare(i);
+		if (ret) {
+			/* Unwind on [0 ... i-1] CPUs. */
+			while (i--)
+				power_cpu_kfree(i);
+			goto out;
+		}
+		ret = power_cpu_init(i);
+		if (ret) {
+			/* Unwind on [0 ... i] CPUs. */
+			while (i >= 0)
+				power_cpu_kfree(i--);
+			goto out;
+		}
+	}
+
+	__perf_cpu_notifier(power_cpu_notifier);
+
+	ret = perf_pmu_register(&pmu_class, "power", -1);
+	if (WARN_ON(ret)) {
+		pr_warn("AMD Power PMU registration failed\n");
+		goto out;
+	}
+
+	pr_info("AMD Power PMU detected.\n");
+
+out:
+	cpu_notifier_register_done();
+
+	return ret;
+}
+device_initcall(amd_power_pmu_init);
-- 
2.3.5


-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH v4] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-01-28  9:03 ` Borislav Petkov
                     ` (3 preceding siblings ...)
  2016-01-28 10:04   ` [PATCH] perf/x86/amd/power: Add AMD accumulated power reporting kbuild test robot
@ 2016-01-28 15:28   ` Peter Zijlstra
  2016-01-29  8:18     ` Huang Rui
  4 siblings, 1 reply; 10+ messages in thread
From: Peter Zijlstra @ 2016-01-28 15:28 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Huang Rui, Borislav Petkov, Ingo Molnar, Andy Lutomirski,
	Thomas Gleixner, Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker, linux-kernel, spg_linux_kernel,
	x86, Guenter Roeck, Andreas Herrmann, Suravee Suthikulpanit,
	Aravind Gopalakrishnan, Fengguang Wu, Aaron Lu

On Thu, Jan 28, 2016 at 10:03:15AM +0100, Borislav Petkov wrote:

> +
> +struct power_pmu {
> +	raw_spinlock_t		lock;

Now that the list is gone, what does this thing protect?

> +	struct pmu		*pmu;

This member seems superfluous, there's only the one possible value.

> +	local64_t		cpu_sw_pwr_ptsc;
> +
> +	/*
> +	 * These two cpumasks are used for avoiding the allocations on the
> +	 * CPU_STARTING phase because power_cpu_prepare() will be called with
> +	 * IRQs disabled.
> +	 */
> +	cpumask_var_t		mask;
> +	cpumask_var_t		tmp_mask;
> +};
> +
> +static struct pmu pmu_class;
> +
> +/*
> + * Accumulated power represents the sum of each compute unit's (CU) power
> + * consumption. On any core of each CU we read the total accumulated power from
> + * MSR_F15H_CU_PWR_ACCUMULATOR. cpu_mask represents CPU bit map of all cores
> + * which are picked to measure the power for the CUs they belong to.
> + */
> +static cpumask_t cpu_mask;
> +
> +static DEFINE_PER_CPU(struct power_pmu *, amd_power_pmu);
> +
> +static u64 event_update(struct perf_event *event, struct power_pmu *pmu)
> +{

Is there ever a case where @pmu != __this_cpu_read(power_pmu) ?

> +	struct hw_perf_event *hwc = &event->hw;
> +	u64 prev_raw_count, new_raw_count, prev_ptsc, new_ptsc;
> +	u64 delta, tdelta;
> +
> +again:
> +	prev_raw_count = local64_read(&hwc->prev_count);
> +	prev_ptsc = local64_read(&pmu->cpu_sw_pwr_ptsc);
> +	rdmsrl(event->hw.event_base, new_raw_count);

Is hw.event_base != MSR_F15H_CU_PWR_ACCUMULATOR possible?

> +	rdmsrl(MSR_F15H_PTSC, new_ptsc);


Also, I suspect this doesn't do what you expect it to do.

We measure per-event PWR_ACC deltas, but per CPU PTSC values. These do
not match when there's more than 1 event on the CPU.

I would suggest adding a new struct to the hw_perf_event union with the
two u64 deltas like:

	struct { /* amd_power */
		u64 pwr_acc;
		u64 ptsc;
	};

And track these values per-event.

> +
> +	if (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
> +			    new_raw_count) != prev_raw_count) {
> +		cpu_relax();
> +		goto again;
> +	}
> +
> +	/*
> +	 * Calculate the CU power consumption over a time period, the unit of
> +	 * final value (delta) is micro-Watts. Then add it to the event count.
> +	 */
> +	if (new_raw_count < prev_raw_count) {
> +		delta = max_cu_acc_power + new_raw_count;
> +		delta -= prev_raw_count;
> +	} else
> +		delta = new_raw_count - prev_raw_count;
> +
> +	delta *= cpu_pwr_sample_ratio * 1000;
> +	tdelta = new_ptsc - prev_ptsc;
> +
> +	do_div(delta, tdelta);
> +	local64_add(delta, &event->count);

Then this division can be done once on the total values, which loses less
precision overall.

> +
> +	return new_raw_count;
> +}

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v4] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-01-28 15:28   ` [PATCH v4] perf/x86/amd/power: Add AMD accumulated power reporting mechanism Peter Zijlstra
@ 2016-01-29  8:18     ` Huang Rui
  0 siblings, 0 replies; 10+ messages in thread
From: Huang Rui @ 2016-01-29  8:18 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Borislav Petkov, Borislav Petkov, Ingo Molnar, Andy Lutomirski,
	Thomas Gleixner, Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker, linux-kernel,
	spg_linux_kernel, x86, Guenter Roeck, Andreas Herrmann,
	Suravee Suthikulpanit, Aravind Gopalakrishnan, Fengguang Wu,
	Aaron Lu

On Thu, Jan 28, 2016 at 04:28:48PM +0100, Peter Zijlstra wrote:
> On Thu, Jan 28, 2016 at 10:03:15AM +0100, Borislav Petkov wrote:
> 
> > +
> > +struct power_pmu {
> > +	raw_spinlock_t		lock;
> 
> Now that the list is gone, what does this thing protect?
> 

It protects the event count value while we measure it.

> > +	struct pmu		*pmu;
> 
> This member seems superfluous, there's only the one possible value.
> 

Currently there is only one, but future processors will have more
power PMU types; accumulated power is just one of them.

> > +	local64_t		cpu_sw_pwr_ptsc;
> > +
> > +	/*
> > +	 * These two cpumasks are used for avoiding the allocations on the
> > +	 * CPU_STARTING phase because power_cpu_prepare() will be called with
> > +	 * IRQs disabled.
> > +	 */
> > +	cpumask_var_t		mask;
> > +	cpumask_var_t		tmp_mask;
> > +};
> > +
> > +static struct pmu pmu_class;
> > +
> > +/*
> > + * Accumulated power represents the sum of each compute unit's (CU) power
> > + * consumption. On any core of each CU we read the total accumulated power from
> > + * MSR_F15H_CU_PWR_ACCUMULATOR. cpu_mask represents CPU bit map of all cores
> > + * which are picked to measure the power for the CUs they belong to.
> > + */
> > +static cpumask_t cpu_mask;
> > +
> > +static DEFINE_PER_CPU(struct power_pmu *, amd_power_pmu);
> > +
> > +static u64 event_update(struct perf_event *event, struct power_pmu *pmu)
> > +{
> 
> Is there ever a case where @pmu != __this_cpu_read(power_pmu) ?
> 

It can only be called from pmu->{read, stop}, and those guarantee
@pmu == __this_cpu_read(amd_power_pmu). Is there any other case I
missed?

> > +	struct hw_perf_event *hwc = &event->hw;
> > +	u64 prev_raw_count, new_raw_count, prev_ptsc, new_ptsc;
> > +	u64 delta, tdelta;
> > +
> > +again:
> > +	prev_raw_count = local64_read(&hwc->prev_count);
> > +	prev_ptsc = local64_read(&pmu->cpu_sw_pwr_ptsc);
> > +	rdmsrl(event->hw.event_base, new_raw_count);
> 
> Is hw.event_base != MSR_F15H_CU_PWR_ACCUMULATOR possible?
> 

Is there a case I missed?

Could you explain more?

> > +	rdmsrl(MSR_F15H_PTSC, new_ptsc);
> 
> 
> Also, I suspect this doesn't do what you expect it to do.
> 
> We measure per-event PWR_ACC deltas, but per CPU PTSC values. These do
> not match when there's more than 1 event on the CPU.
> 

OK, I see. My intention was for the per-event count (event->count) to
hold the PWR_ACC value after dividing by PTSC. But then we cannot use
local64_read(&hwc->prev_count) as the previous PWR_ACC value from
before the division. Thanks for catching it.

> I would suggest adding a new struct to the hw_perf_event union with the
> two u64 deltas like:
> 
> 	struct { /* amd_power */
> 		u64 pwr_acc;
> 		u64 ptsc;
> 	};
> 
> And track these values per-event.
> 

Thanks for the reminder.

Thanks,
Rui


end of thread, other threads:[~2016-01-29  8:18 UTC | newest]

Thread overview: 10+ messages
2016-01-28  6:38 [PATCH v4] perf/x86/amd/power: Add AMD accumulated power reporting mechanism Huang Rui
2016-01-28  9:03 ` Borislav Petkov
2016-01-28  9:39   ` [PATCH] perf/x86/amd/power: Add AMD accumulated power reporting kbuild test robot
2016-01-28  9:41   ` kbuild test robot
2016-01-28 10:01   ` [PATCH v4] perf/x86/amd/power: Add AMD accumulated power reporting mechanism Huang Rui
2016-01-28 12:42     ` Borislav Petkov
2016-01-28 14:54       ` Borislav Petkov
2016-01-28 10:04   ` [PATCH] perf/x86/amd/power: Add AMD accumulated power reporting kbuild test robot
2016-01-28 15:28   ` [PATCH v4] perf/x86/amd/power: Add AMD accumulated power reporting mechanism Peter Zijlstra
2016-01-29  8:18     ` Huang Rui
