linux-kernel.vger.kernel.org archive mirror
* [PATCH v5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
@ 2016-02-26  9:40 Huang Rui
  2016-02-26 10:18 ` Thomas Gleixner
  0 siblings, 1 reply; 8+ messages in thread
From: Huang Rui @ 2016-02-26  9:40 UTC (permalink / raw)
  To: Borislav Petkov, Peter Zijlstra, Ingo Molnar, Andy Lutomirski,
	Thomas Gleixner, Robert Richter, Jacob Shin,
	Arnaldo Carvalho de Melo, Kan Liang
  Cc: linux-kernel, spg_linux_kernel, x86, Suravee Suthikulpanit,
	Aravind Gopalakrishnan, Borislav Petkov, Fengguang Wu, Huang Rui,
	Guenter Roeck

Introduce an AMD accumulated power reporting mechanism for the Family
15h, Model 60h processor, which can be used to calculate the average
power consumed by a processor during a measurement interval. Support
for the feature is indicated by CPUID Fn8000_0007_EDX[12].

This feature will be implemented in both hwmon and perf. The current
design provides a single event which reports the per-package/processor
power consumption by summing up each compute unit's power value.

Here are the gory details of how the computation is done:

---------------------------------------------------------------------
* Tsample: compute unit power accumulator sample period
* Tref: the PTSC counter period (PTSC: performance timestamp counter)
* N: the ratio of compute unit power accumulator sample period to the
  PTSC period

* Jmax: max compute unit accumulated power which is indicated by
  MSR_C001007b[MaxCpuSwPwrAcc]

* Jx/Jy: compute unit accumulated power which is indicated by
  MSR_C001007a[CpuSwPwrAcc]

* Tx/Ty: the value of performance timestamp counter which is indicated
  by CU_PTSC MSR_C0010280[PTSC]
* PwrCPUave: CPU average power

i. Determine the ratio of Tsample to Tref by executing CPUID Fn8000_0007.
	N = value of CPUID Fn8000_0007_ECX[CpuPwrSampleTimeRatio[15:0]].

ii. Read the full range of the cumulative energy value from the new
    MSR MaxCpuSwPwrAcc.
	Jmax = value returned.

iii. At time x, software reads CpuSwPwrAcc and samples the PTSC.
	Jx = value read from CpuSwPwrAcc and Tx = value read from PTSC.

iv. At time y, software reads CpuSwPwrAcc and samples the PTSC.
	Jy = value read from CpuSwPwrAcc and Ty = value read from PTSC.

v. Calculate the average power consumption for a compute unit over
the time period (y-x). The unit of the result is uWatt:

	if (Jy < Jx) // Rollover has occurred
		Jdelta = (Jy + Jmax) - Jx
	else
		Jdelta = Jy - Jx
	PwrCPUave = N * Jdelta * 1000 / (Ty - Tx)
----------------------------------------------------------------------
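
For illustration, here is the same computation with made-up numbers
(purely hypothetical, not taken from real hardware):

	N    = 100
	Jmax = 0x100000
	Jx   = 0xFFF00, Jy = 0x00200	(Jy < Jx, so a rollover occurred)
	Ty - Tx = 1000000 PTSC ticks

	Jdelta    = (0x00200 + 0x100000) - 0xFFF00 = 0x300 = 768
	PwrCPUave = 100 * 768 * 1000 / 1000000 = 76 uWatt
	            (76.8 truncated by integer division)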

Simple example:

  root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' make -j4
    CHK     include/config/kernel.release
    CHK     include/generated/uapi/linux/version.h
    CHK     include/generated/utsrelease.h
    CHK     include/generated/timeconst.h
    CHK     include/generated/bounds.h
    CHK     include/generated/asm-offsets.h
    CALL    scripts/checksyscalls.sh
    CHK     include/generated/compile.h
    SKIPPED include/generated/compile.h
    Building modules, stage 2.
  Kernel: arch/x86/boot/bzImage is ready  (#40)
    MODPOST 4225 modules

   Performance counter stats for 'system wide':

              183.44 mWatts power/power-pkg/

       341.837270111 seconds time elapsed

  root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' sleep 10

   Performance counter stats for 'system wide':

                0.18 mWatts power/power-pkg/

        10.012551815 seconds time elapsed
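
Note that the kernel-side count accumulates in micro-Watts; perf
converts it to the mWatts shown above via the event's sysfs scale
attribute. With the attributes added by this patch, these entries
should look like:

  # cat /sys/bus/event_source/devices/power/events/power-pkg
  event=0x01
  # cat /sys/bus/event_source/devices/power/events/power-pkg.unit
  mWatts
  # cat /sys/bus/event_source/devices/power/events/power-pkg.scale
  1.000000e-3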

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Ingo Molnar <mingo@kernel.org>
Suggested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Cc: Guenter Roeck <linux@roeck-us.net>
---

Hi,

This patch introduces the perf implementation of the accumulated
power reporting algorithm. It calculates the average power
consumption of the processor. The CPU feature flag is
CPUID.8000_0007H:EDX[12].

V5 is rebased on bp/tip-perf.

Changes from v1 -> v2:
- Add a patch to fix the build issue reported by the kbuild test
  robot.

Changes from v2 -> v3:
- Use raw_spinlock_t instead of spinlock_t, because it needs to work
  in the -rt case.
- Use topology_sibling_cpumask to make the cpumask operation easier.

Changes from v3 -> v4:
- Remove active_list, because it is not iterated.
- Capitalize sentences consistently and fix some typos.
- Fix some code style issues.
- Initialize structures in a vertically aligned manner.
- Remove unnecessary comment.
- Fix the runtime bug, and do some testing of the CPU-hotplug scenario.

Changes from v4 -> v5:
- Remove "struct pmu" and lock from power_pmu, and rename it to
  power_pmu_masks
- As Peter's suggestion, add a new struct to hw_perf_event, and track
  these values from per-event.

Thanks,
Rui

---
 arch/x86/kernel/cpu/Makefile               |   1 +
 arch/x86/kernel/cpu/perf_event_amd_power.c | 445 +++++++++++++++++++++++++++++
 include/linux/perf_event.h                 |   4 +
 3 files changed, 450 insertions(+)
 create mode 100644 arch/x86/kernel/cpu/perf_event_amd_power.c

diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index faa7b52..ffc9650 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -34,6 +34,7 @@ obj-$(CONFIG_PERF_EVENTS)		+= perf_event.o
 
 ifdef CONFIG_PERF_EVENTS
 obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd.o perf_event_amd_uncore.o
+obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd_power.o
 ifdef CONFIG_AMD_IOMMU
 obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd_iommu.o
 endif
diff --git a/arch/x86/kernel/cpu/perf_event_amd_power.c b/arch/x86/kernel/cpu/perf_event_amd_power.c
new file mode 100644
index 0000000..fe2e5e0
--- /dev/null
+++ b/arch/x86/kernel/cpu/perf_event_amd_power.c
@@ -0,0 +1,445 @@
+/*
+ * Performance events - AMD Processor Power Reporting Mechanism
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Huang Rui <ray.huang@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/perf_event.h>
+#include <asm/cpu_device_id.h>
+#include "perf_event.h"
+
+#define MSR_F15H_CU_PWR_ACCUMULATOR     0xc001007a
+#define MSR_F15H_CU_MAX_PWR_ACCUMULATOR 0xc001007b
+#define MSR_F15H_PTSC			0xc0010280
+
+/* Event code: LSB 8 bits, passed in attr->config; any other bit is reserved. */
+#define AMD_POWER_EVENT_MASK	0xFFULL
+
+#define MAX_CUS	8
+
+/*
+ * Accumulated power status counters.
+ */
+#define AMD_POWER_EVENTSEL_PKG		1
+
+/*
+ * The ratio of compute unit power accumulator sample period to the
+ * PTSC period.
+ */
+static unsigned int cpu_pwr_sample_ratio;
+static unsigned int cores_per_cu;
+static unsigned int cu_num;
+
+/* Maximum accumulated power of a compute unit. */
+static u64 max_cu_acc_power;
+
+struct power_pmu_masks {
+	/*
+	 * These two cpumasks are allocated in power_cpu_prepare() to avoid
+	 * allocating them in the CPU_STARTING phase, which runs with IRQs
+	 * disabled.
+	 */
+	cpumask_var_t		mask;
+	cpumask_var_t		tmp_mask;
+};
+
+static struct pmu pmu_class;
+
+/*
+ * Accumulated power represents the sum of each compute unit's (CU) power
+ * consumption. On any core of each CU we read the total accumulated power from
+ * MSR_F15H_CU_PWR_ACCUMULATOR. cpu_mask is the bitmap of all cores that
+ * are picked to measure the power for the CUs they belong to.
+ */
+static cpumask_t cpu_mask;
+
+static DEFINE_PER_CPU(struct power_pmu_masks *, amd_power_pmu);
+
+static void event_update(struct perf_event *event)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	u64 prev_pwr_acc, new_pwr_acc, prev_ptsc, new_ptsc;
+	u64 delta, tdelta;
+
+	prev_pwr_acc = hwc->pwr_acc;
+	prev_ptsc = hwc->ptsc;
+	rdmsrl(MSR_F15H_CU_PWR_ACCUMULATOR, new_pwr_acc);
+	rdmsrl(MSR_F15H_PTSC, new_ptsc);
+
+	/*
+	 * Calculate the CU power consumption over a time period; the final
+	 * value (delta) is in micro-Watts. Then add it to the event count.
+	 */
+	if (new_pwr_acc < prev_pwr_acc) {
+		delta = max_cu_acc_power + new_pwr_acc;
+		delta -= prev_pwr_acc;
+	} else
+		delta = new_pwr_acc - prev_pwr_acc;
+
+	delta *= cpu_pwr_sample_ratio * 1000;
+	tdelta = new_ptsc - prev_ptsc;
+
+	do_div(delta, tdelta);
+	local64_add(delta, &event->count);
+}
+
+static void __pmu_event_start(struct perf_event *event)
+{
+	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
+		return;
+
+	event->hw.state = 0;
+
+	rdmsrl(MSR_F15H_PTSC, event->hw.ptsc);
+	rdmsrl(MSR_F15H_CU_PWR_ACCUMULATOR, event->hw.pwr_acc);
+}
+
+static void pmu_event_start(struct perf_event *event, int mode)
+{
+	__pmu_event_start(event);
+}
+
+static void pmu_event_stop(struct perf_event *event, int mode)
+{
+	struct hw_perf_event *hwc = &event->hw;
+
+	/* Mark event as deactivated and stopped. */
+	if (!(hwc->state & PERF_HES_STOPPED))
+		hwc->state |= PERF_HES_STOPPED;
+
+	/* Check if software counter update is necessary. */
+	if ((mode & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
+		/*
+		 * Drain the remaining delta count out of an event
+		 * that we are disabling:
+		 */
+		event_update(event);
+		hwc->state |= PERF_HES_UPTODATE;
+	}
+}
+
+static int pmu_event_add(struct perf_event *event, int mode)
+{
+	struct hw_perf_event *hwc = &event->hw;
+
+	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+
+	if (mode & PERF_EF_START)
+		__pmu_event_start(event);
+
+	return 0;
+}
+
+static void pmu_event_del(struct perf_event *event, int flags)
+{
+	pmu_event_stop(event, PERF_EF_UPDATE);
+}
+
+static int pmu_event_init(struct perf_event *event)
+{
+	u64 cfg = event->attr.config & AMD_POWER_EVENT_MASK;
+
+	/* Only look at AMD power events. */
+	if (event->attr.type != pmu_class.type)
+		return -ENOENT;
+
+	/* Unsupported modes and filters. */
+	if (event->attr.exclude_user   ||
+	    event->attr.exclude_kernel ||
+	    event->attr.exclude_hv     ||
+	    event->attr.exclude_idle   ||
+	    event->attr.exclude_host   ||
+	    event->attr.exclude_guest  ||
+	    /* no sampling */
+	    event->attr.sample_period)
+		return -EINVAL;
+
+	if (cfg != AMD_POWER_EVENTSEL_PKG)
+		return -EINVAL;
+
+	return 0;
+}
+
+static void pmu_event_read(struct perf_event *event)
+{
+	event_update(event);
+}
+
+static ssize_t
+get_attr_cpumask(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpumap_print_to_pagebuf(true, buf, &cpu_mask);
+}
+
+static DEVICE_ATTR(cpumask, S_IRUGO, get_attr_cpumask, NULL);
+
+static struct attribute *pmu_attrs[] = {
+	&dev_attr_cpumask.attr,
+	NULL,
+};
+
+static struct attribute_group pmu_attr_group = {
+	.attrs = pmu_attrs,
+};
+
+/*
+ * Currently, only reporting the power of each processor/package is
+ * supported.
+ */
+EVENT_ATTR_STR(power-pkg, power_pkg, "event=0x01");
+
+EVENT_ATTR_STR(power-pkg.unit, power_pkg_unit, "mWatts");
+
+/* Convert the count from micro-Watts to milli-Watts. */
+EVENT_ATTR_STR(power-pkg.scale, power_pkg_scale, "1.000000e-3");
+
+static struct attribute *events_attr[] = {
+	EVENT_PTR(power_pkg),
+	EVENT_PTR(power_pkg_unit),
+	EVENT_PTR(power_pkg_scale),
+	NULL,
+};
+
+static struct attribute_group pmu_events_group = {
+	.name	= "events",
+	.attrs	= events_attr,
+};
+
+PMU_FORMAT_ATTR(event, "config:0-7");
+
+static struct attribute *formats_attr[] = {
+	&format_attr_event.attr,
+	NULL,
+};
+
+static struct attribute_group pmu_format_group = {
+	.name	= "format",
+	.attrs	= formats_attr,
+};
+
+static const struct attribute_group *attr_groups[] = {
+	&pmu_attr_group,
+	&pmu_format_group,
+	&pmu_events_group,
+	NULL,
+};
+
+static struct pmu pmu_class = {
+	.attr_groups	= attr_groups,
+	/* system-wide only */
+	.task_ctx_nr	= perf_invalid_context,
+	.event_init	= pmu_event_init,
+	.add		= pmu_event_add,
+	.del		= pmu_event_del,
+	.start		= pmu_event_start,
+	.stop		= pmu_event_stop,
+	.read		= pmu_event_read,
+};
+
+static int power_cpu_exit(int cpu)
+{
+	struct power_pmu_masks *pmu = per_cpu(amd_power_pmu, cpu);
+	int target = nr_cpumask_bits;
+	int ret = 0;
+
+	cpumask_copy(pmu->mask, topology_sibling_cpumask(cpu));
+
+	cpumask_clear_cpu(cpu, &cpu_mask);
+	cpumask_clear_cpu(cpu, pmu->mask);
+
+	if (!cpumask_and(pmu->tmp_mask, pmu->mask, cpu_online_mask))
+		goto out;
+
+	/*
+	 * If this CPU was tracked in cpu_mask and other CPUs of its compute
+	 * unit are still online, move the measurement over to one of them.
+	 */
+	target = cpumask_any(pmu->tmp_mask);
+	if (target < nr_cpumask_bits && target != cpu)
+		cpumask_set_cpu(target, &cpu_mask);
+
+	WARN_ON(cpumask_empty(&cpu_mask));
+
+out:
+	/*
+	 * Migrate event and context to new CPU.
+	 */
+	if (target < nr_cpumask_bits)
+		perf_pmu_migrate_context(&pmu_class, cpu, target);
+
+	return ret;
+
+}
+
+static int power_cpu_init(int cpu)
+{
+	struct power_pmu_masks *pmu = per_cpu(amd_power_pmu, cpu);
+
+	if (!pmu)
+		return 0;
+
+	if (!cpumask_and(pmu->mask, topology_sibling_cpumask(cpu), &cpu_mask))
+		cpumask_set_cpu(cpu, &cpu_mask);
+
+	return 0;
+}
+
+static int power_cpu_prepare(int cpu)
+{
+	struct power_pmu_masks *pmu = per_cpu(amd_power_pmu, cpu);
+	int phys_id = topology_physical_package_id(cpu);
+	int ret = 0;
+
+	if (pmu)
+		return 0;
+
+	if (phys_id < 0)
+		return -EINVAL;
+
+	pmu = kzalloc_node(sizeof(*pmu), GFP_KERNEL, cpu_to_node(cpu));
+	if (!pmu)
+		return -ENOMEM;
+
+	if (!zalloc_cpumask_var(&pmu->mask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	if (!zalloc_cpumask_var(&pmu->tmp_mask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto out1;
+	}
+
+	per_cpu(amd_power_pmu, cpu) = pmu;
+
+	return 0;
+
+out1:
+	free_cpumask_var(pmu->mask);
+out:
+	kfree(pmu);
+
+	return ret;
+}
+
+static void power_cpu_kfree(int cpu)
+{
+	struct power_pmu_masks *pmu = per_cpu(amd_power_pmu, cpu);
+
+	if (!pmu)
+		return;
+
+	free_cpumask_var(pmu->mask);
+	free_cpumask_var(pmu->tmp_mask);
+	kfree(pmu);
+
+	per_cpu(amd_power_pmu, cpu) = NULL;
+}
+
+static int
+power_cpu_notifier(struct notifier_block *self, unsigned long action, void *hcpu)
+{
+	unsigned int cpu = (long)hcpu;
+
+	switch (action & ~CPU_TASKS_FROZEN) {
+	case CPU_UP_PREPARE:
+		if (power_cpu_prepare(cpu))
+			return NOTIFY_BAD;
+		break;
+	case CPU_STARTING:
+		if (power_cpu_init(cpu))
+			return NOTIFY_BAD;
+		break;
+	case CPU_DEAD:
+		power_cpu_kfree(cpu);
+		break;
+	case CPU_DOWN_PREPARE:
+		if (power_cpu_exit(cpu))
+			return NOTIFY_BAD;
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+
+static const struct x86_cpu_id cpu_match[] = {
+	{ .vendor = X86_VENDOR_AMD, .family = 0x15 },
+	{},
+};
+
+static int __init amd_power_pmu_init(void)
+{
+	int i, ret;
+	u64 tmp;
+
+	if (!x86_match_cpu(cpu_match))
+		return 0;
+
+	if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
+		return -ENODEV;
+
+	cores_per_cu = amd_get_cores_per_cu();
+	cu_num = boot_cpu_data.x86_max_cores / cores_per_cu;
+
+	if (WARN_ON_ONCE(cu_num > MAX_CUS))
+		return -EINVAL;
+
+	cpu_pwr_sample_ratio = cpuid_ecx(0x80000007);
+
+	if (rdmsrl_safe(MSR_F15H_CU_MAX_PWR_ACCUMULATOR, &tmp)) {
+		pr_err("Failed to read max compute unit power accumulator MSR\n");
+		return -ENODEV;
+	}
+	max_cu_acc_power = tmp;
+
+	cpu_notifier_register_begin();
+
+	/* Choose one online core of each compute unit.  */
+	for (i = 0; i < boot_cpu_data.x86_max_cores; i += cores_per_cu) {
+		WARN_ON(cpumask_empty(topology_sibling_cpumask(i)));
+		cpumask_set_cpu(cpumask_any(topology_sibling_cpumask(i)), &cpu_mask);
+	}
+
+	for_each_present_cpu(i) {
+		ret = power_cpu_prepare(i);
+		if (ret) {
+			/* Unwind on [0 ... i-1] CPUs. */
+			while (i--)
+				power_cpu_kfree(i);
+			goto out;
+		}
+		ret = power_cpu_init(i);
+		if (ret) {
+			/* Unwind on [0 ... i] CPUs. */
+			while (i >= 0)
+				power_cpu_kfree(i--);
+			goto out;
+		}
+	}
+
+	__perf_cpu_notifier(power_cpu_notifier);
+
+	ret = perf_pmu_register(&pmu_class, "power", -1);
+	if (WARN_ON(ret)) {
+		pr_warn("AMD Power PMU registration failed\n");
+		goto out;
+	}
+
+	pr_info("AMD Power PMU detected, %d compute units\n", cu_num);
+
+out:
+	cpu_notifier_register_done();
+
+	return ret;
+}
+device_initcall(amd_power_pmu_init);
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index f9828a4..01ea21c 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -128,6 +128,10 @@ struct hw_perf_event {
 		struct { /* itrace */
 			int			itrace_started;
 		};
+		struct { /* amd_power */
+			u64	pwr_acc;
+			u64	ptsc;
+		};
 #ifdef CONFIG_HAVE_HW_BREAKPOINT
 		struct { /* breakpoint */
 			/*
-- 
1.9.1


* Re: [PATCH v5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-02-26  9:40 [PATCH v5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism Huang Rui
@ 2016-02-26 10:18 ` Thomas Gleixner
  2016-02-26 10:29   ` Borislav Petkov
  2016-02-29 18:00   ` Huang Rui
  0 siblings, 2 replies; 8+ messages in thread
From: Thomas Gleixner @ 2016-02-26 10:18 UTC (permalink / raw)
  To: Huang Rui
  Cc: Borislav Petkov, Peter Zijlstra, Ingo Molnar, Andy Lutomirski,
	Robert Richter, Jacob Shin, Arnaldo Carvalho de Melo, Kan Liang,
	linux-kernel, spg_linux_kernel, x86, Suravee Suthikulpanit,
	Aravind Gopalakrishnan, Borislav Petkov, Fengguang Wu,
	Guenter Roeck

On Fri, 26 Feb 2016, Huang Rui wrote:
> +/* Event code: LSB 8 bits, passed in attr->config any other bit is reserved. */
> +#define AMD_POWER_EVENT_MASK	0xFFULL
> +
> +#define MAX_CUS	8

What's that define for? Max compute units? So is that stuff eternally limited
to 8?

> +struct power_pmu_masks {
> +	/*
> +	 * These two cpumasks are used for avoiding the allocations on the
> +	 * CPU_STARTING phase because power_cpu_prepare() will be called with
> +	 * IRQs disabled.
> +	 */
> +	cpumask_var_t		mask;
> +	cpumask_var_t		tmp_mask;
> +};
> +
> +static struct pmu pmu_class;
> +
> +/*
> + * Accumulated power represents the sum of each compute unit's (CU) power
> + * consumption. On any core of each CU we read the total accumulated power from
> + * MSR_F15H_CU_PWR_ACCUMULATOR. cpu_mask represents CPU bit map of all cores
> + * which are picked to measure the power for the CUs they belong to.
> + */
> +static cpumask_t cpu_mask;
> +
> +static DEFINE_PER_CPU(struct power_pmu_masks *, amd_power_pmu);

This is complete overkill, really.

> +static int power_cpu_exit(int cpu)
> +{
> +	struct power_pmu_masks *pmu = per_cpu(amd_power_pmu, cpu);
> +	int target = nr_cpumask_bits;
> +	int ret = 0;
> +
> +	cpumask_copy(pmu->mask, topology_sibling_cpumask(cpu));
> +
> +	cpumask_clear_cpu(cpu, &cpu_mask);
> +	cpumask_clear_cpu(cpu, pmu->mask);
> +
> +	if (!cpumask_and(pmu->tmp_mask, pmu->mask, cpu_online_mask))
> +		goto out;
> +	/*
> +	 * Find a new CPU on the same compute unit, if was set in cpumask
> +	 * and still some CPUs on compute unit. Then move on to the new CPU.
> +	 */
> +	target = cpumask_any(pmu->tmp_mask);
> +	if (target < nr_cpumask_bits && target != cpu)
> +		cpumask_set_cpu(target, &cpu_mask);
> +
> +	WARN_ON(cpumask_empty(&cpu_mask));
> +
> +out:
> +	/*
> +	 * Migrate event and context to new CPU.
> +	 */
> +	if (target < nr_cpumask_bits)
> +		perf_pmu_migrate_context(&pmu_class, cpu, target);
> +
> +	return ret;

Sigh. That whole thing can be simply done with:

      	if (!cpumask_test_and_clear_cpu(cpu, &cpu_mask))
	   	return;

	target = cpumask_any_but(topology_sibling_cpumask(cpu), cpu);
	if (target < nr_cpumask_bits) {
	   	 cpumask_set_cpu(target, &uncore_cpu_mask);  
		 perf_pmu_migrate_context(&pmu_class, cpu, target);
	}


> +
> +}
> +
> +static int power_cpu_init(int cpu)
> +{
> +	struct power_pmu_masks *pmu = per_cpu(amd_power_pmu, cpu);
> +
> +	if (!pmu)
> +		return 0;
> +
> +	if (!cpumask_and(pmu->mask, topology_sibling_cpumask(cpu), &cpu_mask))
> +		cpumask_set_cpu(cpu, &cpu_mask);

And that's equally convoluted and can be done with:

    	target = cpumask_any_and(&cpu_mask, topology_sibling_cpumask(cpu));
    	if (target < nr_cpu_ids)
        	cpumask_set_cpu(cpu, &cpu_mask);

> +static int power_cpu_prepare(int cpu)
> +{
> +	struct power_pmu_masks *pmu = per_cpu(amd_power_pmu, cpu);
> +	int phys_id = topology_physical_package_id(cpu);
> +	int ret = 0;
> +
> +	if (pmu)
> +		return 0;
> +
> +	if (phys_id < 0)
> +		return -EINVAL;
> +
> +	pmu = kzalloc_node(sizeof(*pmu), GFP_KERNEL, cpu_to_node(cpu));
> +	if (!pmu)
> +		return -ENOMEM;
> +
> +	if (!zalloc_cpumask_var(&pmu->mask, GFP_KERNEL)) {
> +		ret = -ENOMEM;
> +		goto out;
> +	}
> +
> +	if (!zalloc_cpumask_var(&pmu->tmp_mask, GFP_KERNEL)) {
> +		ret = -ENOMEM;
> +		goto out1;
> +	}
> +
> +	per_cpu(amd_power_pmu, cpu) = pmu;
> +	return 0;
> +
> +out1:
> +	free_cpumask_var(pmu->mask);
> +out:
> +	kfree(pmu);
> +
> +	return ret;
> +}
> +
> +static void power_cpu_kfree(int cpu)
> +{
> +	struct power_pmu_masks *pmu = per_cpu(amd_power_pmu, cpu);
> +
> +	if (!pmu)
> +		return;
> +
> +	free_cpumask_var(pmu->mask);
> +	free_cpumask_var(pmu->tmp_mask);
> +	kfree(pmu);
> +
> +	per_cpu(amd_power_pmu, cpu) = NULL;
> +}

Both prepare and free can go away.

> +static int __init amd_power_pmu_init(void)
> +{
> +	int i, ret;
> +	u64 tmp;
> +
> +	if (!x86_match_cpu(cpu_match))
> +		return 0;
> +
> +	if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
> +		return -ENODEV;
> +
> +	cores_per_cu = amd_get_cores_per_cu();
> +	cu_num = boot_cpu_data.x86_max_cores / cores_per_cu;

Please use the new package management functions which are on the way to tip.

> +
> +	if (WARN_ON_ONCE(cu_num > MAX_CUS))
> +		return -EINVAL;
> +
> +	cpu_pwr_sample_ratio = cpuid_ecx(0x80000007);
> +
> +	if (rdmsrl_safe(MSR_F15H_CU_MAX_PWR_ACCUMULATOR, &tmp)) {
> +		pr_err("Failed to read max compute unit power accumulator MSR\n");
> +		return -ENODEV;
> +	}
> +	max_cu_acc_power = tmp;
> +
> +	cpu_notifier_register_begin();
> +
> +	/* Choose one online core of each compute unit.  */
> +	for (i = 0; i < boot_cpu_data.x86_max_cores; i += cores_per_cu) {
> +		WARN_ON(cpumask_empty(topology_sibling_cpumask(i)));
> +		cpumask_set_cpu(cpumask_any(topology_sibling_cpumask(i)), &cpu_mask);
> +	}



> +
> +	for_each_present_cpu(i) {
> +		ret = power_cpu_prepare(i);
> +		if (ret) {
> +			/* Unwind on [0 ... i-1] CPUs. */
> +			while (i--)
> +				power_cpu_kfree(i);
> +			goto out;
> +		}

  		Not needed.

> +		ret = power_cpu_init(i);
> +		if (ret) {
> +			/* Unwind on [0 ... i] CPUs. */
> +			while (i >= 0)
> +				power_cpu_kfree(i--);
> +			goto out;
> +		}

You cannot issue the CPU STARTING callback on present, but not online cpus.

> +	}
> +
> +	__perf_cpu_notifier(power_cpu_notifier);
> +
> +	ret = perf_pmu_register(&pmu_class, "power", -1);
> +	if (WARN_ON(ret)) {
> +		pr_warn("AMD Power PMU registration failed\n");


So that leaves the notifier installed and leaks all the allocated memory.

> +		goto out;
> +	}
> +
> +	pr_info("AMD Power PMU detected, %d compute units\n", cu_num);
> +
> +out:
> +	cpu_notifier_register_done();
> +
> +	return ret;
> +}
> +device_initcall(amd_power_pmu_init);

Please make this a module right away. There is no reason to compile all
this in.

Thanks,

	tglx


* Re: [PATCH v5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-02-26 10:18 ` Thomas Gleixner
@ 2016-02-26 10:29   ` Borislav Petkov
  2016-02-29 15:40     ` Huang Rui
  2016-02-29 18:00   ` Huang Rui
  1 sibling, 1 reply; 8+ messages in thread
From: Borislav Petkov @ 2016-02-26 10:29 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Huang Rui, Borislav Petkov, Peter Zijlstra, Ingo Molnar,
	Andy Lutomirski, Robert Richter, Jacob Shin,
	Arnaldo Carvalho de Melo, Kan Liang, linux-kernel,
	spg_linux_kernel, x86, Suravee Suthikulpanit,
	Aravind Gopalakrishnan, Fengguang Wu, Guenter Roeck

On Fri, Feb 26, 2016 at 11:18:28AM +0100, Thomas Gleixner wrote:
> On Fri, 26 Feb 2016, Huang Rui wrote:
> > +/* Event code: LSB 8 bits, passed in attr->config any other bit is reserved. */
> > +#define AMD_POWER_EVENT_MASK	0xFFULL
> > +
> > +#define MAX_CUS	8
> 
> What's that define for? Max compute units? So is that stuff eternaly limited
> to 8?

I already sent him a cleaned up version with that dumbness removed:

https://lkml.kernel.org/r/20160128145436.GE14274@pd.tnic

Rui, what's up?

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.


* Re: [PATCH v5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-02-29 18:00   ` Huang Rui
@ 2016-02-29 11:30     ` Thomas Gleixner
  2016-03-01 13:58       ` Borislav Petkov
  0 siblings, 1 reply; 8+ messages in thread
From: Thomas Gleixner @ 2016-02-29 11:30 UTC (permalink / raw)
  To: Huang Rui
  Cc: Borislav Petkov, Peter Zijlstra, Ingo Molnar, Andy Lutomirski,
	Robert Richter, Jacob Shin, Arnaldo Carvalho de Melo, Kan Liang,
	linux-kernel, spg_linux_kernel, x86, Suravee Suthikulpanit,
	Aravind Gopalakrishnan, Borislav Petkov, Fengguang Wu,
	Guenter Roeck

On Tue, 1 Mar 2016, Huang Rui wrote:
> On Fri, Feb 26, 2016 at 11:18:28AM +0100, Thomas Gleixner wrote:
> > > +static int __init amd_power_pmu_init(void)
> > > +{
> > > +	int i, ret;
> > > +	u64 tmp;
> > > +
> > > +	if (!x86_match_cpu(cpu_match))
> > > +		return 0;
> > > +
> > > +	if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
> > > +		return -ENODEV;
> > > +
> > > +	cores_per_cu = amd_get_cores_per_cu();
> > > +	cu_num = boot_cpu_data.x86_max_cores / cores_per_cu;
> > 
> > Please use the new package management functions which are on the way to tip.
> > 
> Can you give me some hints?

http://git.kernel.org/tip/1f12e32f4cd5243ae46d8b933181be0d022c6793

topology_max_packages() is what you want.
 
> > You cannot issue the CPU STARTING callback on present, but not online cpus.
> > 
> 
> Do you mean we should change for_each_present_cpu to
> for_each_online_cpu?
> 
> My orignal intent here, it's to allocate data structures of
> "power_pmu_masks" for each core.

Sure, you can allocate stuff for present cpus, but you cannot call the
CPU_STARTING callback for offline cpus.

> But now, I think we needn't below codes, right?

-ENOPARSE
 
> for_each_online_cpu(i)
>         power_cpu_init(i);

You still need that.

> > > +
> > > +	__perf_cpu_notifier(power_cpu_notifier);
> > > +
> > > +	ret = perf_pmu_register(&pmu_class, "power", -1);
> > > +	if (WARN_ON(ret)) {
> > > +		pr_warn("AMD Power PMU registration failed\n");
> > 
> > 
> > So that leaves the notifier installed and leaks all the allocated memory.
> > 
> 
> OK, do you mean "issue CPU_STARTING callback on present cpus" will
> cause the memory leak here? Could you please explain more?

Oh well, can you actually read your own code? You allocate a gazillion of
memory in power_cpu_prepare() and on error you just leak it.

Thanks,

	tglx


* Re: [PATCH v5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-02-26 10:29   ` Borislav Petkov
@ 2016-02-29 15:40     ` Huang Rui
  0 siblings, 0 replies; 8+ messages in thread
From: Huang Rui @ 2016-02-29 15:40 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Ingo Molnar,
	Andy Lutomirski, Robert Richter, Jacob Shin,
	Arnaldo Carvalho de Melo, Kan Liang, linux-kernel,
	spg_linux_kernel, x86, Suravee Suthikulpanit,
	Aravind Gopalakrishnan, Fengguang Wu, Guenter Roeck

On Fri, Feb 26, 2016 at 11:29:52AM +0100, Borislav Petkov wrote:
> On Fri, Feb 26, 2016 at 11:18:28AM +0100, Thomas Gleixner wrote:
> > On Fri, 26 Feb 2016, Huang Rui wrote:
> > > +/* Event code: LSB 8 bits, passed in attr->config any other bit is reserved. */
> > > +#define AMD_POWER_EVENT_MASK	0xFFULL
> > > +
> > > +#define MAX_CUS	8
> > 
> > What's that define for? Max compute units? So is that stuff eternaly limited
> > to 8?
> 
> I already sent him a cleaned up version with that dumbness removed:
> 
> https://lkml.kernel.org/r/20160128145436.GE14274@pd.tnic
> 
> Rui, what's up?
> 

Sorry, I will remove the superfluous MAX_CUS check in the next version.

Thanks,
Rui


* Re: [PATCH v5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-02-26 10:18 ` Thomas Gleixner
  2016-02-26 10:29   ` Borislav Petkov
@ 2016-02-29 18:00   ` Huang Rui
  2016-02-29 11:30     ` Thomas Gleixner
  1 sibling, 1 reply; 8+ messages in thread
From: Huang Rui @ 2016-02-29 18:00 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Borislav Petkov, Peter Zijlstra, Ingo Molnar, Andy Lutomirski,
	Robert Richter, Jacob Shin, Arnaldo Carvalho de Melo, Kan Liang,
	linux-kernel, spg_linux_kernel, x86, Suravee Suthikulpanit,
	Aravind Gopalakrishnan, Borislav Petkov, Fengguang Wu,
	Guenter Roeck

Hi Thomas,

Thank you for your comments. :-)

On Fri, Feb 26, 2016 at 11:18:28AM +0100, Thomas Gleixner wrote:
> On Fri, 26 Feb 2016, Huang Rui wrote:
> 
> > +
> > +static DEFINE_PER_CPU(struct power_pmu_masks *, amd_power_pmu);
> 
> This is complete overkill, really.
> 
> > +static int power_cpu_exit(int cpu)
> > +{
> > +	struct power_pmu_masks *pmu = per_cpu(amd_power_pmu, cpu);
> > +	int target = nr_cpumask_bits;
> > +	int ret = 0;
> > +
> > +	cpumask_copy(pmu->mask, topology_sibling_cpumask(cpu));
> > +
> > +	cpumask_clear_cpu(cpu, &cpu_mask);
> > +	cpumask_clear_cpu(cpu, pmu->mask);
> > +
> > +	if (!cpumask_and(pmu->tmp_mask, pmu->mask, cpu_online_mask))
> > +		goto out;
> > +	/*
> > +	 * Find a new CPU on the same compute unit, if was set in cpumask
> > +	 * and still some CPUs on compute unit. Then move on to the new CPU.
> > +	 */
> > +	target = cpumask_any(pmu->tmp_mask);
> > +	if (target < nr_cpumask_bits && target != cpu)
> > +		cpumask_set_cpu(target, &cpu_mask);
> > +
> > +	WARN_ON(cpumask_empty(&cpu_mask));
> > +
> > +out:
> > +	/*
> > +	 * Migrate event and context to new CPU.
> > +	 */
> > +	if (target < nr_cpumask_bits)
> > +		perf_pmu_migrate_context(&pmu_class, cpu, target);
> > +
> > +	return ret;
> 
> Sigh. That whole thing can be simply done with:
> 
>       	if (!cpumask_test_and_clear_cpu(cpu, &cpu_mask))
> 	   	return;
> 
> 	target = cpumask_any_but(topology_sibling_cpumask(cpu), cpu);
> 	if (target < nr_cpumask_bits) {
> 	   	 cpumask_set_cpu(target, &uncore_cpu_mask);  
> 		 perf_pmu_migrate_context(&pmu_class, cpu, target);
> 	}
> 
> 
> > +
> > +}
> > +
> > +static int power_cpu_init(int cpu)
> > +{
> > +	struct power_pmu_masks *pmu = per_cpu(amd_power_pmu, cpu);
> > +
> > +	if (!pmu)
> > +		return 0;
> > +
> > +	if (!cpumask_and(pmu->mask, topology_sibling_cpumask(cpu), &cpu_mask))
> > +		cpumask_set_cpu(cpu, &cpu_mask);
> 
> And that's equally convoluted and can be done with:
> 
>     	target = cpumask_any_and(&cpu_mask, topology_sibling_cpumask(cpu));
>     	if (target < nr_cpu_ids)
>         	cpumask_set_cpu(cpu, &cpu_mask);
> 
> > +static int power_cpu_prepare(int cpu)
> > +{
> > +	struct power_pmu_masks *pmu = per_cpu(amd_power_pmu, cpu);
> > +	int phys_id = topology_physical_package_id(cpu);
> > +	int ret = 0;
> > +
> > +	if (pmu)
> > +		return 0;
> > +
> > +	if (phys_id < 0)
> > +		return -EINVAL;
> > +
> > +	pmu = kzalloc_node(sizeof(*pmu), GFP_KERNEL, cpu_to_node(cpu));
> > +	if (!pmu)
> > +		return -ENOMEM;
> > +
> > +	if (!zalloc_cpumask_var(&pmu->mask, GFP_KERNEL)) {
> > +		ret = -ENOMEM;
> > +		goto out;
> > +	}
> > +
> > +	if (!zalloc_cpumask_var(&pmu->tmp_mask, GFP_KERNEL)) {
> > +		ret = -ENOMEM;
> > +		goto out1;
> > +	}
> > +
> > +	per_cpu(amd_power_pmu, cpu) = pmu;
> > +	return 0;
> > +
> > +out1:
> > +	free_cpumask_var(pmu->mask);
> > +out:
> > +	kfree(pmu);
> > +
> > +	return ret;
> > +}
> > +
> > +static void power_cpu_kfree(int cpu)
> > +{
> > +	struct power_pmu_masks *pmu = per_cpu(amd_power_pmu, cpu);
> > +
> > +	if (!pmu)
> > +		return;
> > +
> > +	free_cpumask_var(pmu->mask);
> > +	free_cpumask_var(pmu->tmp_mask);
> > +	kfree(pmu);
> > +
> > +	per_cpu(amd_power_pmu, cpu) = NULL;
> > +}
> 
> Both prepare and free can go away.
> 

OK, I got your point. After enhancing the implementation of the CPU
notifier part of this driver, we can avoid using the weird
"DEFINE_PER_CPU(struct power_pmu_masks *, amd_power_pmu)". Thanks.

> > +static int __init amd_power_pmu_init(void)
> > +{
> > +	int i, ret;
> > +	u64 tmp;
> > +
> > +	if (!x86_match_cpu(cpu_match))
> > +		return 0;
> > +
> > +	if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
> > +		return -ENODEV;
> > +
> > +	cores_per_cu = amd_get_cores_per_cu();
> > +	cu_num = boot_cpu_data.x86_max_cores / cores_per_cu;
> 
> Please use the new package management functions which are on the way to tip.
> 

Can you give me some hints?

> > +
> > +	if (WARN_ON_ONCE(cu_num > MAX_CUS))
> > +		return -EINVAL;
> > +
> > +	cpu_pwr_sample_ratio = cpuid_ecx(0x80000007);
> > +
> > +	if (rdmsrl_safe(MSR_F15H_CU_MAX_PWR_ACCUMULATOR, &tmp)) {
> > +		pr_err("Failed to read max compute unit power accumulator MSR\n");
> > +		return -ENODEV;
> > +	}
> > +	max_cu_acc_power = tmp;
> > +
> > +	cpu_notifier_register_begin();
> > +
> > +	/* Choose one online core of each compute unit.  */
> > +	for (i = 0; i < boot_cpu_data.x86_max_cores; i += cores_per_cu) {
> > +		WARN_ON(cpumask_empty(topology_sibling_cpumask(i)));
> > +		cpumask_set_cpu(cpumask_any(topology_sibling_cpumask(i)), &cpu_mask);
> > +	}
> 
> 
> 
> > +
> > +	for_each_present_cpu(i) {
> > +		ret = power_cpu_prepare(i);
> > +		if (ret) {
> > +			/* Unwind on [0 ... i-1] CPUs. */
> > +			while (i--)
> > +				power_cpu_kfree(i);
> > +			goto out;
> > +		}
> 
>   		Not needed.
> 
> > +		ret = power_cpu_init(i);
> > +		if (ret) {
> > +			/* Unwind on [0 ... i] CPUs. */
> > +			while (i >= 0)
> > +				power_cpu_kfree(i--);
> > +			goto out;
> > +		}
> 
> You cannot issue the CPU STARTING callback on present, but not online cpus.
> 

Do you mean we should change for_each_present_cpu to
for_each_online_cpu?

My original intent here is to allocate data structures of
"power_pmu_masks" for each core. But now, I think we needn't below
codes, right?

for_each_online_cpu(i)
        power_cpu_init(i);

> > +	}
> > +
> > +	__perf_cpu_notifier(power_cpu_notifier);
> > +
> > +	ret = perf_pmu_register(&pmu_class, "power", -1);
> > +	if (WARN_ON(ret)) {
> > +		pr_warn("AMD Power PMU registration failed\n");
> 
> 
> So that leaves the notifier installed and leaks all the allocated memory.
> 

OK, do you mean "issue CPU_STARTING callback on present cpus" will
cause the memory leak here? Could you please explain more?

> > +		goto out;
> > +	}
> > +
> > +	pr_info("AMD Power PMU detected, %d compute units\n", cu_num);
> > +
> > +out:
> > +	cpu_notifier_register_done();
> > +
> > +	return ret;
> > +}
> > +device_initcall(amd_power_pmu_init);
> 
> Please make this as a module right away. There is no reason to compile all
> this in.
> 

OK, will update.

Thanks,
Rui


* Re: [PATCH v5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-02-29 11:30     ` Thomas Gleixner
@ 2016-03-01 13:58       ` Borislav Petkov
  2016-03-02  2:26         ` Huang Rui
  0 siblings, 1 reply; 8+ messages in thread
From: Borislav Petkov @ 2016-03-01 13:58 UTC (permalink / raw)
  To: Thomas Gleixner, Huang Rui
  Cc: Peter Zijlstra, Ingo Molnar, Andy Lutomirski, Robert Richter,
	Jacob Shin, Arnaldo Carvalho de Melo, Kan Liang, linux-kernel,
	spg_linux_kernel, x86, Suravee Suthikulpanit,
	Aravind Gopalakrishnan, Fengguang Wu, Guenter Roeck

On Mon, Feb 29, 2016 at 12:30:47PM +0100, Thomas Gleixner wrote:
> On Tue, 1 Mar 2016, Huang Rui wrote:
> > On Fri, Feb 26, 2016 at 11:18:28AM +0100, Thomas Gleixner wrote:
> > > > +static int __init amd_power_pmu_init(void)
> > > > +{
> > > > +	int i, ret;
> > > > +	u64 tmp;
> > > > +
> > > > +	if (!x86_match_cpu(cpu_match))
> > > > +		return 0;
> > > > +
> > > > +	if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
> > > > +		return -ENODEV;
> > > > +
> > > > +	cores_per_cu = amd_get_cores_per_cu();
> > > > +	cu_num = boot_cpu_data.x86_max_cores / cores_per_cu;
> > > 
> > > Please use the new package management functions which are on the way to tip.
> > > 
> > Can you give me some hints?
> 
> http://git.kernel.org/tip/1f12e32f4cd5243ae46d8b933181be0d022c6793
> 
> topology_max_packages() is what you want.

Rui, I think you want to use smp_num_siblings here. And we did define
that amd_get_cores_per_cu() but that's redundant:

                /* get compute unit information */
                smp_num_siblings = ((ebx >> 8) & 3) + 1;
                c->compute_unit_id = ebx & 0xff;
                cores_per_cu += ((ebx >> 8) & 3);

So if I'm not missing anything else, we don't need that cores_per_cu
variable, nor do we need amd_get_cores_per_cu(). I'll zap this patch
from my queue. You can use smp_num_siblings in your patch instead which
is exported to everything.
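
IOW, something like this in amd_power_pmu_init() (an untested sketch,
just to illustrate the substitution):

	cu_num = boot_cpu_data.x86_max_cores / smp_num_siblings;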

Thanks.

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.


* Re: [PATCH v5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-03-01 13:58       ` Borislav Petkov
@ 2016-03-02  2:26         ` Huang Rui
  0 siblings, 0 replies; 8+ messages in thread
From: Huang Rui @ 2016-03-02  2:26 UTC (permalink / raw)
  To: Borislav Petkov, Thomas Gleixner
  Cc: Peter Zijlstra, Ingo Molnar, Andy Lutomirski, Robert Richter,
	Jacob Shin, Arnaldo Carvalho de Melo, Kan Liang, linux-kernel,
	spg_linux_kernel, x86, Suravee Suthikulpanit,
	Aravind Gopalakrishnan, Fengguang Wu, Guenter Roeck

On Tue, Mar 01, 2016 at 02:58:39PM +0100, Borislav Petkov wrote:
> On Mon, Feb 29, 2016 at 12:30:47PM +0100, Thomas Gleixner wrote:
> > On Tue, 1 Mar 2016, Huang Rui wrote:
> > > On Fri, Feb 26, 2016 at 11:18:28AM +0100, Thomas Gleixner wrote:
> > > > > +static int __init amd_power_pmu_init(void)
> > > > > +{
> > > > > +	int i, ret;
> > > > > +	u64 tmp;
> > > > > +
> > > > > +	if (!x86_match_cpu(cpu_match))
> > > > > +		return 0;
> > > > > +
> > > > > +	if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
> > > > > +		return -ENODEV;
> > > > > +
> > > > > +	cores_per_cu = amd_get_cores_per_cu();
> > > > > +	cu_num = boot_cpu_data.x86_max_cores / cores_per_cu;
> > > > 
> > > > Please use the new package management functions which are on the way to tip.
> > > > 
> > > Can you give me some hints?
> > 
> > http://git.kernel.org/tip/1f12e32f4cd5243ae46d8b933181be0d022c6793
> > 
> > topology_max_packages() is what you want.
> 
> Rui, I think you want to use smp_num_siblings here. And we did define
> that amd_get_cores_per_cu() but that's redundant:
> 
>                 /* get compute unit information */
>                 smp_num_siblings = ((ebx >> 8) & 3) + 1;
>                 c->compute_unit_id = ebx & 0xff;
>                 cores_per_cu += ((ebx >> 8) & 3);
> 
> So if I'm not missing anything else, we don't need that cores_per_cu
> variable, nor do we need amd_get_cores_per_cu(). I'll zap this patch
> from my queue. You can use smp_num_siblings in your patch instead which
> is exported to everything.
> 

That makes sense to me. Thanks, Boris, Thomas. We just want to know
the number of cores per compute unit here.

Thanks,
Rui

