linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/5] perf/x86/power: Introduce AMD accumulated power reporting mechanism
@ 2016-01-14  2:50 Huang Rui
  2016-01-14  2:50 ` [PATCH v2 1/5] x86/amd: move nodes_per_socket into bsp_init_amd Huang Rui
                   ` (5 more replies)
  0 siblings, 6 replies; 21+ messages in thread
From: Huang Rui @ 2016-01-14  2:50 UTC (permalink / raw)
  To: Borislav Petkov, Peter Zijlstra, Ingo Molnar, Andy Lutomirski,
	Thomas Gleixner, Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker
  Cc: linux-kernel, spg_linux_kernel, x86, Guenter Roeck,
	Andreas Herrmann, Suravee Suthikulpanit, Aravind Gopalakrishnan,
	Borislav Petkov, Fengguang Wu, Aaron Lu, Huang Rui

Hi,

This series of patches introduces the perf implementation of the
accumulated power reporting algorithm. It calculates the average
power consumption of the processor. The CPU feature flag is
CPUID.8000_0007H:EDX[12].
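
For reference, the feature bit can be checked from user space with a
quick sketch like the one below (illustrative only, not part of this
series):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID Fn8000_0007, EDX bit 12: accumulated power reporting */
	if (!__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx))
		return 1;

	printf("accumulated power reporting: %ssupported\n",
	       (edx & (1u << 12)) ? "" : "not ");
	return 0;
}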


Changes from v1 -> v2:
- Add a patch to fix the build issue reported by the kbuild test
  robot.


Simple example:

root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' make -j4
  CHK     include/config/kernel.release
  CHK     include/generated/uapi/linux/version.h
  CHK     include/generated/utsrelease.h
  CHK     include/generated/timeconst.h
  CHK     include/generated/bounds.h
  CHK     include/generated/asm-offsets.h
  CALL    scripts/checksyscalls.sh
  CHK     include/generated/compile.h
  SKIPPED include/generated/compile.h
  Building modules, stage 2.
Kernel: arch/x86/boot/bzImage is ready  (#40)
  MODPOST 4225 modules

 Performance counter stats for 'system wide':

            183.44 mWatts power/power-pkg/

     341.837270111 seconds time elapsed

root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' sleep 10

 Performance counter stats for 'system wide':

              0.18 mWatts power/power-pkg/

      10.012551815 seconds time elapsed

Reference:
http://lkml.kernel.org/r/20150831160622.GA29830@nazgul.tnic

Thanks,
Rui

Huang Rui (5):
  x86/amd: move nodes_per_socket into bsp_init_amd
  x86/amd: add accessor for number of cores per compute unit
  x86/cpufeature: add AMD Accumulated Power Mechanism feature flag
  perf/x86: Move events_sysfs_show outside CPU_SUP_INTEL
  perf/x86/amd/power: Add AMD accumulated power reporting mechanism

 arch/x86/include/asm/cpufeature.h          |   2 +-
 arch/x86/include/asm/processor.h           |   1 +
 arch/x86/kernel/cpu/Makefile               |   1 +
 arch/x86/kernel/cpu/amd.c                  |  31 +-
 arch/x86/kernel/cpu/perf_event.h           |   6 +-
 arch/x86/kernel/cpu/perf_event_amd_power.c | 531 +++++++++++++++++++++++++++++
 6 files changed, 564 insertions(+), 8 deletions(-)
 create mode 100644 arch/x86/kernel/cpu/perf_event_amd_power.c

-- 
1.9.1


* [PATCH v2 1/5] x86/amd: move nodes_per_socket into bsp_init_amd
  2016-01-14  2:50 [PATCH v2 0/5] perf/x86/power: Introduce AMD accumulated power reporting mechanism Huang Rui
@ 2016-01-14  2:50 ` Huang Rui
  2016-03-21  9:54   ` [tip:perf/urgent] perf/x86/amd: Move nodes_per_socket into bsp_init_amd() tip-bot for Huang Rui
  2016-01-14  2:50 ` [PATCH v2 2/5] x86/amd: add accessor for number of cores per compute unit Huang Rui
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 21+ messages in thread
From: Huang Rui @ 2016-01-14  2:50 UTC (permalink / raw)
  To: Borislav Petkov, Peter Zijlstra, Ingo Molnar, Andy Lutomirski,
	Thomas Gleixner, Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker
  Cc: linux-kernel, spg_linux_kernel, x86, Guenter Roeck,
	Andreas Herrmann, Suravee Suthikulpanit, Aravind Gopalakrishnan,
	Borislav Petkov, Fengguang Wu, Aaron Lu, Huang Rui

nodes_per_socket is static and doesn't need to be re-initialized on
every CPU core init. So move the nodes_per_socket initialization into
bsp_init_amd(), which runs only once, on the boot CPU.

Signed-off-by: Huang Rui <ray.huang@amd.com>
---
 arch/x86/kernel/cpu/amd.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index e229640..97db1b6 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -308,7 +308,6 @@ static void amd_get_topology(struct cpuinfo_x86 *c)
 		u32 eax, ebx, ecx, edx;
 
 		cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
-		nodes_per_socket = ((ecx >> 8) & 7) + 1;
 		node_id = ecx & 7;
 
 		/* get compute unit information */
@@ -319,7 +318,6 @@ static void amd_get_topology(struct cpuinfo_x86 *c)
 		u64 value;
 
 		rdmsrl(MSR_FAM10H_NODE_ID, value);
-		nodes_per_socket = ((value >> 3) & 7) + 1;
 		node_id = value & 7;
 	} else
 		return;
@@ -523,6 +521,18 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
 
 	if (cpu_has(c, X86_FEATURE_MWAITX))
 		use_mwaitx_delay();
+
+	if (boot_cpu_has(X86_FEATURE_TOPOEXT)) {
+		u32 ecx;
+
+		ecx = cpuid_ecx(0x8000001e);
+		nodes_per_socket = ((ecx >> 8) & 7) + 1;
+	} else if (boot_cpu_has(X86_FEATURE_NODEID_MSR)) {
+		u64 value;
+
+		rdmsrl(MSR_FAM10H_NODE_ID, value);
+		nodes_per_socket = ((value >> 3) & 7) + 1;
+	}
 }
 
 static void early_init_amd(struct cpuinfo_x86 *c)
-- 
1.9.1


* [PATCH v2 2/5] x86/amd: add accessor for number of cores per compute unit
  2016-01-14  2:50 [PATCH v2 0/5] perf/x86/power: Introduce AMD accumulated power reporting mechanism Huang Rui
  2016-01-14  2:50 ` [PATCH v2 1/5] x86/amd: move nodes_per_socket into bsp_init_amd Huang Rui
@ 2016-01-14  2:50 ` Huang Rui
  2016-01-14  2:50 ` [PATCH v2 3/5] x86/cpufeature: add AMD Accumulated Power Mechanism feature flag Huang Rui
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 21+ messages in thread
From: Huang Rui @ 2016-01-14  2:50 UTC (permalink / raw)
  To: Borislav Petkov, Peter Zijlstra, Ingo Molnar, Andy Lutomirski,
	Thomas Gleixner, Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker
  Cc: linux-kernel, spg_linux_kernel, x86, Guenter Roeck,
	Andreas Herrmann, Suravee Suthikulpanit, Aravind Gopalakrishnan,
	Borislav Petkov, Fengguang Wu, Aaron Lu, Huang Rui

Add an accessor function amd_get_cores_per_cu() which returns the
number of cores per compute unit.

In a subsequent patch, we will use this function in both the x86 perf
code and the fam15h driver.
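
For example, the power PMU in patch 5/5 of this series uses it to
derive the number of compute units, roughly:

	cores_per_cu = amd_get_cores_per_cu();
	cu_num = boot_cpu_data.x86_max_cores / cores_per_cu;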

Signed-off-by: Huang Rui <ray.huang@amd.com>
Cc: Guenter Roeck <linux@roeck-us.net>
---
 arch/x86/include/asm/processor.h |  1 +
 arch/x86/kernel/cpu/amd.c        | 17 +++++++++++++----
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 6752225..9bf48b2 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -810,6 +810,7 @@ static inline int mpx_disable_management(void)
 
 extern u16 amd_get_nb_id(int cpu);
 extern u32 amd_get_nodes_per_socket(void);
+extern u32 amd_get_cores_per_cu(void);
 
 static inline uint32_t hypervisor_cpuid_base(const char *sig, uint32_t leaves)
 {
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 97db1b6..d6e320f 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -27,6 +27,9 @@
  */
 static u32 nodes_per_socket = 1;
 
+/* cores_per_cu: stores the number of cores per compute unit */
+static u32 cores_per_cu = 1;
+
 static inline int rdmsrl_amd_safe(unsigned msr, unsigned long long *p)
 {
 	u32 gprs[8] = { 0 };
@@ -299,7 +302,6 @@ static int nearby_node(int apicid)
 #ifdef CONFIG_SMP
 static void amd_get_topology(struct cpuinfo_x86 *c)
 {
-	u32 cores_per_cu = 1;
 	u8 node_id;
 	int cpu = smp_processor_id();
 
@@ -313,7 +315,6 @@ static void amd_get_topology(struct cpuinfo_x86 *c)
 		/* get compute unit information */
 		smp_num_siblings = ((ebx >> 8) & 3) + 1;
 		c->compute_unit_id = ebx & 0xff;
-		cores_per_cu += ((ebx >> 8) & 3);
 	} else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) {
 		u64 value;
 
@@ -391,6 +392,13 @@ u32 amd_get_nodes_per_socket(void)
 }
 EXPORT_SYMBOL_GPL(amd_get_nodes_per_socket);
 
+/* this function returns the number of cores per compute unit */
+u32 amd_get_cores_per_cu(void)
+{
+	return cores_per_cu;
+}
+EXPORT_SYMBOL_GPL(amd_get_cores_per_cu);
+
 static void srat_detect_node(struct cpuinfo_x86 *c)
 {
 #ifdef CONFIG_NUMA
@@ -523,10 +531,11 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
 		use_mwaitx_delay();
 
 	if (boot_cpu_has(X86_FEATURE_TOPOEXT)) {
-		u32 ecx;
+		u32 eax, ebx, ecx, edx;
 
-		ecx = cpuid_ecx(0x8000001e);
+		cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
 		nodes_per_socket = ((ecx >> 8) & 7) + 1;
+		cores_per_cu = ((ebx >> 8) & 3) + 1;
 	} else if (boot_cpu_has(X86_FEATURE_NODEID_MSR)) {
 		u64 value;
 
-- 
1.9.1


* [PATCH v2 3/5] x86/cpufeature: add AMD Accumulated Power Mechanism feature flag
  2016-01-14  2:50 [PATCH v2 0/5] perf/x86/power: Introduce AMD accumulated power reporting mechanism Huang Rui
  2016-01-14  2:50 ` [PATCH v2 1/5] x86/amd: move nodes_per_socket into bsp_init_amd Huang Rui
  2016-01-14  2:50 ` [PATCH v2 2/5] x86/amd: add accessor for number of cores per compute unit Huang Rui
@ 2016-01-14  2:50 ` Huang Rui
  2016-03-21  9:55   ` [tip:perf/urgent] x86/cpufeature, perf/x86: Add " tip-bot for Huang Rui
  2016-01-14  2:50 ` [PATCH v2 4/5] perf/x86: Move events_sysfs_show outside CPU_SUP_INTEL Huang Rui
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 21+ messages in thread
From: Huang Rui @ 2016-01-14  2:50 UTC (permalink / raw)
  To: Borislav Petkov, Peter Zijlstra, Ingo Molnar, Andy Lutomirski,
	Thomas Gleixner, Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker
  Cc: linux-kernel, spg_linux_kernel, x86, Guenter Roeck,
	Andreas Herrmann, Suravee Suthikulpanit, Aravind Gopalakrishnan,
	Borislav Petkov, Fengguang Wu, Aaron Lu, Huang Rui

AMD CPU family 15h, model 0x60 introduces an accumulated power
mechanism. It is used to report processor power consumption and is
indicated by CPUID Fn8000_0007_EDX[12].
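
Once set, the flag can be tested in the usual way; the power PMU in
patch 5/5, for instance, bails out early without it:

	if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
		return -ENODEV;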

Signed-off-by: Huang Rui <ray.huang@amd.com>
---
 arch/x86/include/asm/cpufeature.h | 2 +-
 arch/x86/kernel/cpu/amd.c         | 4 ++++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index f7ba9fb..0adce8f 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -85,7 +85,7 @@
 #define X86_FEATURE_P4		( 3*32+ 7) /* "" P4 */
 #define X86_FEATURE_CONSTANT_TSC ( 3*32+ 8) /* TSC ticks at a constant rate */
 #define X86_FEATURE_UP		( 3*32+ 9) /* smp kernel running on up */
-/* free, was #define X86_FEATURE_FXSAVE_LEAK ( 3*32+10) * "" FXSAVE leaks FOP/FIP/FOP */
+#define X86_FEATURE_ACC_POWER	( 3*32+10) /* AMD Accumulated Power Mechanism */
 #define X86_FEATURE_ARCH_PERFMON ( 3*32+11) /* Intel Architectural PerfMon */
 #define X86_FEATURE_PEBS	( 3*32+12) /* Precise-Event Based Sampling */
 #define X86_FEATURE_BTS		( 3*32+13) /* Branch Trace Store */
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index d6e320f..97d4ce2 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -559,6 +559,10 @@ static void early_init_amd(struct cpuinfo_x86 *c)
 			set_sched_clock_stable();
 	}
 
+	/* Bit 12 of 8000_0007 edx is accumulated power mechanism. */
+	if (c->x86_power & BIT(12))
+		set_cpu_cap(c, X86_FEATURE_ACC_POWER);
+
 #ifdef CONFIG_X86_64
 	set_cpu_cap(c, X86_FEATURE_SYSCALL32);
 #else
-- 
1.9.1


* [PATCH v2 4/5] perf/x86: Move events_sysfs_show outside CPU_SUP_INTEL
  2016-01-14  2:50 [PATCH v2 0/5] perf/x86/power: Introduce AMD accumulated power reporting mechanism Huang Rui
                   ` (2 preceding siblings ...)
  2016-01-14  2:50 ` [PATCH v2 3/5] x86/cpufeature: add AMD Accumulated Power Mechanism feature flag Huang Rui
@ 2016-01-14  2:50 ` Huang Rui
  2016-01-14  2:50 ` [PATCH v2 5/5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism Huang Rui
  2016-01-14  6:01 ` [PATCH v2 0/5] perf/x86/power: Introduce AMD accumulated " Borislav Petkov
  5 siblings, 0 replies; 21+ messages in thread
From: Huang Rui @ 2016-01-14  2:50 UTC (permalink / raw)
  To: Borislav Petkov, Peter Zijlstra, Ingo Molnar, Andy Lutomirski,
	Thomas Gleixner, Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker
  Cc: linux-kernel, spg_linux_kernel, x86, Guenter Roeck,
	Andreas Herrmann, Suravee Suthikulpanit, Aravind Gopalakrishnan,
	Borislav Petkov, Fengguang Wu, Aaron Lu, Huang Rui

Move the events_sysfs_show() declaration outside of
CONFIG_CPU_SUP_INTEL, because this interface will also be used by the
AMD power reporting performance event.

Otherwise, the build error below is encountered:

All error/warnings (new ones prefixed by >>):

   In file included from include/linux/kobject.h:21:0,
                    from include/linux/module.h:17,
                    from arch/x86/kernel/cpu/perf_event_amd_power.c:13:
>> arch/x86/kernel/cpu/perf_event.h:663:31: error: 'events_sysfs_show' undeclared here (not in a function)
     .attr  = __ATTR(_name, 0444, events_sysfs_show, NULL), \
                                  ^
   include/linux/sysfs.h:103:10: note: in definition of macro '__ATTR'
     .show = _show,      \
             ^
>> arch/x86/kernel/cpu/perf_event_amd_power.c:244:1: note: in expansion of macro 'EVENT_ATTR_STR'
    EVENT_ATTR_STR(power-pkg, power_pkg, "event=0x01");
    ^

Reported-by: build test robot <lkp@intel.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
---
 arch/x86/kernel/cpu/perf_event.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
index 799e6bd..986f6ba 100644
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -793,6 +793,9 @@ ssize_t intel_event_sysfs_show(char *page, u64 config);
 
 struct attribute **merge_attr(struct attribute **a, struct attribute **b);
 
+ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr,
+			  char *page);
+
 #ifdef CONFIG_CPU_SUP_AMD
 
 int amd_pmu_init(void);
@@ -917,9 +920,6 @@ int p6_pmu_init(void);
 
 int knc_pmu_init(void);
 
-ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr,
-			  char *page);
-
 static inline int is_ht_workaround_enabled(void)
 {
 	return !!(x86_pmu.flags & PMU_FL_EXCL_ENABLED);
-- 
1.9.1


* [PATCH v2 5/5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-01-14  2:50 [PATCH v2 0/5] perf/x86/power: Introduce AMD accumulated power reporting mechanism Huang Rui
                   ` (3 preceding siblings ...)
  2016-01-14  2:50 ` [PATCH v2 4/5] perf/x86: Move events_sysfs_show outside CPU_SUP_INTEL Huang Rui
@ 2016-01-14  2:50 ` Huang Rui
  2016-01-19 12:12   ` Peter Zijlstra
  2016-01-14  6:01 ` [PATCH v2 0/5] perf/x86/power: Introduce AMD accumulated " Borislav Petkov
  5 siblings, 1 reply; 21+ messages in thread
From: Huang Rui @ 2016-01-14  2:50 UTC (permalink / raw)
  To: Borislav Petkov, Peter Zijlstra, Ingo Molnar, Andy Lutomirski,
	Thomas Gleixner, Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker
  Cc: linux-kernel, spg_linux_kernel, x86, Guenter Roeck,
	Andreas Herrmann, Suravee Suthikulpanit, Aravind Gopalakrishnan,
	Borislav Petkov, Fengguang Wu, Aaron Lu, Huang Rui

Introduce an AMD accumulated power reporting mechanism for the Carrizo
(Family 15h, Model 60h) processor that can be used to calculate the
average power consumed by a processor during a measurement interval.
The accumulated power mechanism feature is indicated by CPUID
Fn8000_0007_EDX[12].

---------------------------------------------------------------------
* Tsample: compute unit power accumulator sample period
* Tref: the PTSC counter period
* PTSC: performance timestamp counter
* N: the ratio of compute unit power accumulator sample period to the
  PTSC period
* Jmax: max compute unit accumulated power which is indicated by
  MaxCpuSwPwrAcc MSR C001007b
* Jx/Jy: compute unit accumulated power which is indicated by
  CpuSwPwrAcc MSR C001007a
* Tx/Ty: the value of performance timestamp counter which is indicated
  by CU_PTSC MSR C0010280
* PwrCPUave: CPU average power

i. Determine the ratio of Tsample to Tref by executing CPUID Fn8000_0007.
	N = value of CPUID Fn8000_0007_ECX[CpuPwrSampleTimeRatio[15:0]].

ii. Read the full range of the cumulative energy value from the new
MSR MaxCpuSwPwrAcc.
	Jmax = value returned.
iii. At time x, SW reads CpuSwPwrAcc MSR and samples the PTSC.
	Jx = value read from CpuSwPwrAcc and Tx = value read from
PTSC.

iv. At time y, SW reads CpuSwPwrAcc MSR and samples the PTSC.
	Jy = value read from CpuSwPwrAcc and Ty = value read from
PTSC.

v. Calculate the average power consumption for a compute unit over
time period (y-x). Unit of result is uWatt.
	if (Jy < Jx) // Rollover has occurred
		Jdelta = (Jy + Jmax) - Jx
	else
		Jdelta = Jy - Jx
	PwrCPUave = N * Jdelta * 1000 / (Ty - Tx)
----------------------------------------------------------------------
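
For illustration, a minimal user-space sketch of the computation in
steps i-v above (the inputs in main() are made up; in the driver the
raw values come from the CpuSwPwrAcc, MaxCpuSwPwrAcc and CU_PTSC MSRs
and from CPUID Fn8000_0007_ECX):

#include <stdint.h>
#include <stdio.h>

/* PwrCPUave = N * Jdelta * 1000 / (Ty - Tx), result in micro-Watts */
static uint64_t avg_power_uw(uint64_t jx, uint64_t jy, uint64_t tx,
			     uint64_t ty, uint64_t jmax, uint64_t n)
{
	uint64_t jdelta;

	if (jy < jx)			/* the accumulator rolled over */
		jdelta = (jy + jmax) - jx;
	else
		jdelta = jy - jx;

	return n * jdelta * 1000 / (ty - tx);
}

int main(void)
{
	/* made-up sample values, for illustration only */
	uint64_t uw = avg_power_uw(1000, 5000, 0, 100000, 1ULL << 32, 100);

	printf("PwrCPUave = %llu uW\n", (unsigned long long)uw);
	return 0;
}

With the sample values above this prints 4000 uW, i.e. 4 mW.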

As discussed on the mailing list before, this feature will be
implemented in both hwmon and perf. In the current design, it provides
one event that reports the per-package/processor power consumption by
summing up the power values of each compute unit.

Simple example:

root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' make -j4
  CHK     include/config/kernel.release
  CHK     include/generated/uapi/linux/version.h
  CHK     include/generated/utsrelease.h
  CHK     include/generated/timeconst.h
  CHK     include/generated/bounds.h
  CHK     include/generated/asm-offsets.h
  CALL    scripts/checksyscalls.sh
  CHK     include/generated/compile.h
  SKIPPED include/generated/compile.h
  Building modules, stage 2.
Kernel: arch/x86/boot/bzImage is ready  (#40)
  MODPOST 4225 modules

 Performance counter stats for 'system wide':

            183.44 mWatts power/power-pkg/

     341.837270111 seconds time elapsed

root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' sleep 10

 Performance counter stats for 'system wide':

              0.18 mWatts power/power-pkg/

      10.012551815 seconds time elapsed

Reference:
http://lkml.kernel.org/r/20150831160622.GA29830@nazgul.tnic

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Ingo Molnar <mingo@kernel.org>
Suggested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Cc: Guenter Roeck <linux@roeck-us.net>
---
 arch/x86/kernel/cpu/Makefile               |   1 +
 arch/x86/kernel/cpu/perf_event_amd_power.c | 531 +++++++++++++++++++++++++++++
 2 files changed, 532 insertions(+)
 create mode 100644 arch/x86/kernel/cpu/perf_event_amd_power.c

diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index 5803130..97f3413 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -34,6 +34,7 @@ obj-$(CONFIG_PERF_EVENTS)		+= perf_event.o
 
 ifdef CONFIG_PERF_EVENTS
 obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd.o perf_event_amd_uncore.o
+obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd_power.o
 ifdef CONFIG_AMD_IOMMU
 obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd_iommu.o
 endif
diff --git a/arch/x86/kernel/cpu/perf_event_amd_power.c b/arch/x86/kernel/cpu/perf_event_amd_power.c
new file mode 100644
index 0000000..69ef234
--- /dev/null
+++ b/arch/x86/kernel/cpu/perf_event_amd_power.c
@@ -0,0 +1,531 @@
+/*
+ * Performance events - AMD Processor Power Reporting Mechanism
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Huang Rui <ray.huang@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/perf_event.h>
+#include <asm/cpu_device_id.h>
+#include "perf_event.h"
+
+#define MSR_F15H_CU_PWR_ACCUMULATOR     0xc001007a
+#define MSR_F15H_CU_MAX_PWR_ACCUMULATOR 0xc001007b
+#define MSR_F15H_PTSC			0xc0010280
+
+/*
+ * event code: LSB 8 bits, passed in attr->config
+ * any other bit is reserved
+ */
+#define AMD_POWER_EVENT_MASK	0xFFULL
+
+#define MAX_CUS	8
+
+/*
+ * Acc power status counters
+ */
+#define AMD_POWER_PKG_ID		0
+#define AMD_POWER_EVENTSEL_PKG		1
+
+/*
+ * the ratio of compute unit power accumulator sample period to the
+ * PTSC period
+ */
+static unsigned int cpu_pwr_sample_ratio;
+static unsigned int cores_per_cu;
+static unsigned int cu_num;
+
+/* maximum accumulated power of a compute unit */
+static u64 max_cu_acc_power;
+
+struct power_pmu {
+	spinlock_t		lock;
+	struct list_head	active_list;
+	struct pmu		*pmu; /* pointer to power_pmu_class */
+	local64_t		cpu_sw_pwr_ptsc;
+};
+
+static struct pmu pmu_class;
+
+/*
+ * Accumulated power reports the sum of each compute unit's power
+ * consumption, so only one core of each compute unit is picked to
+ * read the power via MSR_F15H_CU_PWR_ACCUMULATOR. cpu_mask is the
+ * bitmap of all cores that have been picked to measure the power
+ * of the compute units they belong to.
+ */
+static cpumask_t cpu_mask;
+
+static DEFINE_PER_CPU(struct power_pmu *, amd_power_pmu);
+
+static u64 event_update(struct perf_event *event, struct power_pmu *pmu)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	u64 prev_raw_count, new_raw_count, prev_ptsc, new_ptsc;
+	u64 delta, tdelta;
+
+again:
+	prev_raw_count = local64_read(&hwc->prev_count);
+	prev_ptsc = local64_read(&pmu->cpu_sw_pwr_ptsc);
+	rdmsrl(event->hw.event_base, new_raw_count);
+	rdmsrl(MSR_F15H_PTSC, new_ptsc);
+
+	if (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
+			    new_raw_count) != prev_raw_count) {
+		cpu_relax();
+		goto again;
+	}
+
+	/*
+	 * Calculate the power consumption for each compute unit over
+	 * the time period; the unit of the final value (delta) is
+	 * micro-Watts. Then add it to the event count.
+	 */
+	if (new_raw_count < prev_raw_count) {
+		delta = max_cu_acc_power + new_raw_count;
+		delta -= prev_raw_count;
+	} else
+		delta = new_raw_count - prev_raw_count;
+
+	delta *= cpu_pwr_sample_ratio * 1000;
+	tdelta = new_ptsc - prev_ptsc;
+
+	do_div(delta, tdelta);
+	local64_add(delta, &event->count);
+
+	return new_raw_count;
+}
+
+static void __pmu_event_start(struct power_pmu *pmu,
+			      struct perf_event *event)
+{
+	u64 ptsc, counts;
+
+	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
+		return;
+
+	event->hw.state = 0;
+
+	list_add_tail(&event->active_entry, &pmu->active_list);
+
+	rdmsrl(MSR_F15H_PTSC, ptsc);
+	local64_set(&pmu->cpu_sw_pwr_ptsc, ptsc);
+	rdmsrl(event->hw.event_base, counts);
+	local64_set(&event->hw.prev_count, counts);
+}
+
+static void pmu_event_start(struct perf_event *event, int mode)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+	unsigned long flags;
+
+	spin_lock_irqsave(&pmu->lock, flags);
+	__pmu_event_start(pmu, event);
+	spin_unlock_irqrestore(&pmu->lock, flags);
+}
+
+static void pmu_event_stop(struct perf_event *event, int mode)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+	struct hw_perf_event *hwc = &event->hw;
+	unsigned long flags;
+
+	spin_lock_irqsave(&pmu->lock, flags);
+
+	/* mark event as deactivated and stopped */
+	if (!(hwc->state & PERF_HES_STOPPED)) {
+		list_del(&event->active_entry);
+		hwc->state |= PERF_HES_STOPPED;
+	}
+
+	/* check if update of sw counter is necessary */
+	if ((mode & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
+		/*
+		 * Drain the remaining delta count out of an event
+		 * that we are disabling:
+		 */
+		event_update(event, pmu);
+		hwc->state |= PERF_HES_UPTODATE;
+	}
+
+	spin_unlock_irqrestore(&pmu->lock, flags);
+}
+
+static int pmu_event_add(struct perf_event *event, int mode)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+	struct hw_perf_event *hwc = &event->hw;
+	unsigned long flags;
+
+	spin_lock_irqsave(&pmu->lock, flags);
+
+	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+
+	if (mode & PERF_EF_START)
+		__pmu_event_start(pmu, event);
+
+	spin_unlock_irqrestore(&pmu->lock, flags);
+
+	return 0;
+}
+
+static void pmu_event_del(struct perf_event *event, int flags)
+{
+	pmu_event_stop(event, PERF_EF_UPDATE);
+}
+
+static int pmu_event_init(struct perf_event *event)
+{
+	u64 cfg = event->attr.config & AMD_POWER_EVENT_MASK;
+	int bit, ret = 0;
+
+	/* only look at AMD power events */
+	if (event->attr.type != pmu_class.type)
+		return -ENOENT;
+
+	/* unsupported modes and filters */
+	if (event->attr.exclude_user   ||
+	    event->attr.exclude_kernel ||
+	    event->attr.exclude_hv     ||
+	    event->attr.exclude_idle   ||
+	    event->attr.exclude_host   ||
+	    event->attr.exclude_guest  ||
+	    event->attr.sample_period) /* no sampling */
+		return -EINVAL;
+
+	if (cfg == AMD_POWER_EVENTSEL_PKG)
+		bit = AMD_POWER_PKG_ID;
+	else
+		return -EINVAL;
+
+	event->hw.event_base = MSR_F15H_CU_PWR_ACCUMULATOR;
+	event->hw.config = cfg;
+	event->hw.idx = bit;
+
+	return ret;
+}
+
+static void pmu_event_read(struct perf_event *event)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+
+	event_update(event, pmu);
+}
+
+static ssize_t get_attr_cpumask(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	return cpumap_print_to_pagebuf(true, buf, &cpu_mask);
+}
+
+static DEVICE_ATTR(cpumask, S_IRUGO, get_attr_cpumask, NULL);
+
+static struct attribute *pmu_attrs[] = {
+	&dev_attr_cpumask.attr,
+	NULL,
+};
+
+static struct attribute_group pmu_attr_group = {
+	.attrs = pmu_attrs,
+};
+
+
+/*
+ * Currently it only supports reporting the power of each
+ * processor/package.
+ */
+EVENT_ATTR_STR(power-pkg, power_pkg, "event=0x01");
+
+EVENT_ATTR_STR(power-pkg.unit, power_pkg_unit, "mWatts");
+
+/* convert the count from micro-Watts to milli-Watts */
+EVENT_ATTR_STR(power-pkg.scale, power_pkg_scale, "1.000000e-3");
+
+
+static struct attribute *events_attr[] = {
+	EVENT_PTR(power_pkg),
+	EVENT_PTR(power_pkg_unit),
+	EVENT_PTR(power_pkg_scale),
+	NULL,
+};
+
+static struct attribute_group pmu_events_group = {
+	.name = "events",
+	.attrs = events_attr,
+};
+
+PMU_FORMAT_ATTR(event, "config:0-7");
+
+static struct attribute *formats_attr[] = {
+	&format_attr_event.attr,
+	NULL,
+};
+
+static struct attribute_group pmu_format_group = {
+	.name = "format",
+	.attrs = formats_attr,
+};
+
+static const struct attribute_group *attr_groups[] = {
+	&pmu_attr_group,
+	&pmu_format_group,
+	&pmu_events_group,
+	NULL,
+};
+
+static struct pmu pmu_class = {
+	.attr_groups	= attr_groups,
+	.task_ctx_nr	= perf_invalid_context, /* system-wide only */
+	.event_init	= pmu_event_init,
+	.add		= pmu_event_add, /* must have */
+	.del		= pmu_event_del, /* must have */
+	.start		= pmu_event_start,
+	.stop		= pmu_event_stop,
+	.read		= pmu_event_read,
+};
+
+
+static int power_cpu_exit(int cpu)
+{
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+	int i, cu, ret = 0;
+	int target = nr_cpumask_bits;
+	cpumask_var_t mask, tmp_mask;
+
+	cu = cpu / cores_per_cu;
+
+	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
+		return -ENOMEM;
+
+	if (!zalloc_cpumask_var(&tmp_mask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	for (i = 0; i < cores_per_cu; i++)
+		cpumask_set_cpu(i, mask);
+
+	cpumask_shift_left(mask, mask, cu * cores_per_cu);
+
+	cpumask_clear_cpu(cpu, &cpu_mask);
+	cpumask_clear_cpu(cpu, mask);
+
+	if (!cpumask_and(tmp_mask, mask, cpu_online_mask))
+		goto out1;
+
+	/*
+	 * Find a new CPU on the same compute unit: if there are still
+	 * online CPUs on this compute unit, move the measurement to
+	 * one of them.
+	 */
+	target = cpumask_any(tmp_mask);
+	if (target < nr_cpumask_bits && target != cpu)
+		cpumask_set_cpu(target, &cpu_mask);
+
+	WARN_ON(cpumask_empty(&cpu_mask));
+
+out1:
+	/*
+	 * migrate events and context to new CPU
+	 */
+	if (target < nr_cpumask_bits)
+		perf_pmu_migrate_context(pmu->pmu, cpu, target);
+
+	free_cpumask_var(tmp_mask);
+out:
+	free_cpumask_var(mask);
+
+	return ret;
+
+}
+
+static int power_cpu_init(int cpu)
+{
+	int i, cu, ret = 0;
+	cpumask_var_t mask, dummy_mask;
+
+	cu = cpu / cores_per_cu;
+
+	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
+		return -ENOMEM;
+
+	if (!zalloc_cpumask_var(&dummy_mask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	for (i = 0; i < cores_per_cu; i++)
+		cpumask_set_cpu(i, mask);
+
+	cpumask_shift_left(mask, mask, cu * cores_per_cu);
+
+	if (!cpumask_and(dummy_mask, mask, &cpu_mask))
+		cpumask_set_cpu(cpu, &cpu_mask);
+
+	free_cpumask_var(dummy_mask);
+out:
+	free_cpumask_var(mask);
+
+	return ret;
+}
+
+static int power_cpu_prepare(int cpu)
+{
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+	int phys_id = topology_physical_package_id(cpu);
+
+	if (pmu)
+		return 0;
+
+	if (phys_id < 0)
+		return -EINVAL;
+
+	pmu = kzalloc_node(sizeof(*pmu), GFP_KERNEL, cpu_to_node(cpu));
+	if (!pmu)
+		return -ENOMEM;
+
+	spin_lock_init(&pmu->lock);
+
+	INIT_LIST_HEAD(&pmu->active_list);
+
+	pmu->pmu = &pmu_class;
+
+	per_cpu(amd_power_pmu, cpu) = pmu;
+
+	return 0;
+}
+
+static void power_cpu_kfree(int cpu)
+{
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+
+	kfree(pmu);
+}
+
+static int power_cpu_notifier(struct notifier_block *self,
+			      unsigned long action, void *hcpu)
+{
+	unsigned int cpu = (long)hcpu;
+
+	switch (action & ~CPU_TASKS_FROZEN) {
+	case CPU_UP_PREPARE:
+		if (power_cpu_prepare(cpu))
+			return NOTIFY_BAD;
+		break;
+	case CPU_STARTING:
+		if (power_cpu_init(cpu))
+			return NOTIFY_BAD;
+		break;
+	case CPU_ONLINE:
+	case CPU_DEAD:
+		power_cpu_kfree(cpu);
+		break;
+	case CPU_DOWN_PREPARE:
+		if (power_cpu_exit(cpu))
+			return NOTIFY_BAD;
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+
+static const struct x86_cpu_id cpu_match[] = {
+	{ .vendor = X86_VENDOR_AMD, .family = 0x15 },
+	{},
+};
+
+static int __init amd_power_pmu_init(void)
+{
+	int i, ret;
+	u64 tmp;
+	cpumask_var_t tmp_mask, res_mask;
+
+	if (!x86_match_cpu(cpu_match))
+		return 0;
+
+	if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
+		return -ENODEV;
+
+	cores_per_cu = amd_get_cores_per_cu();
+	cu_num = boot_cpu_data.x86_max_cores / cores_per_cu;
+
+	if (WARN_ON_ONCE(cu_num > MAX_CUS))
+		return -EINVAL;
+
+	cpu_pwr_sample_ratio = cpuid_ecx(0x80000007);
+
+	if (rdmsrl_safe(MSR_F15H_CU_MAX_PWR_ACCUMULATOR, &tmp)) {
+		pr_err("Failed to read max compute unit power accumulator MSR\n");
+		return -ENODEV;
+	}
+	max_cu_acc_power = tmp;
+
+	if (!zalloc_cpumask_var(&tmp_mask, GFP_KERNEL))
+		return -ENOMEM;
+
+	if (!zalloc_cpumask_var(&res_mask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	for (i = 0; i < cores_per_cu; i++)
+		cpumask_set_cpu(i, tmp_mask);
+
+	cpu_notifier_register_begin();
+
+	/*
+	 * Choose one online core from each compute unit
+	 */
+	for (i = 0; i < cu_num; i++) {
+		/* WARN_ON for empty CU masks */
+		WARN_ON(!cpumask_and(res_mask, tmp_mask, cpu_online_mask));
+		cpumask_set_cpu(cpumask_any(res_mask), &cpu_mask);
+		cpumask_shift_left(tmp_mask, tmp_mask, cores_per_cu);
+	}
+
+	for_each_present_cpu(i) {
+		ret = power_cpu_prepare(i);
+		if (ret) {
+			/* unwind on [0 ... i-1] CPUs */
+			while (i--)
+				power_cpu_kfree(i);
+			goto out1;
+		}
+		ret = power_cpu_init(i);
+		if (ret) {
+			/* unwind on [0 ... i] CPUs */
+			while (i >= 0)
+				power_cpu_kfree(i--);
+			goto out1;
+		}
+	}
+
+	__perf_cpu_notifier(power_cpu_notifier);
+
+	ret = perf_pmu_register(&pmu_class, "power", -1);
+	if (WARN_ON(ret)) {
+		pr_warn("AMD Power PMU registration failed\n");
+		goto out1;
+	}
+
+	pr_info("AMD Power PMU detected, %d compute units\n", cu_num);
+
+out1:
+	cpu_notifier_register_done();
+
+	free_cpumask_var(res_mask);
+out:
+	free_cpumask_var(tmp_mask);
+
+	return ret;
+}
+device_initcall(amd_power_pmu_init);
-- 
1.9.1


* Re: [PATCH v2 0/5] perf/x86/power: Introduce AMD accumulated power reporting mechanism
  2016-01-14  2:50 [PATCH v2 0/5] perf/x86/power: Introduce AMD accumulated power reporting mechanism Huang Rui
                   ` (4 preceding siblings ...)
  2016-01-14  2:50 ` [PATCH v2 5/5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism Huang Rui
@ 2016-01-14  6:01 ` Borislav Petkov
  5 siblings, 0 replies; 21+ messages in thread
From: Borislav Petkov @ 2016-01-14  6:01 UTC (permalink / raw)
  To: Huang Rui
  Cc: Borislav Petkov, Peter Zijlstra, Ingo Molnar, Andy Lutomirski,
	Thomas Gleixner, Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker, linux-kernel, spg_linux_kernel,
	x86, Guenter Roeck, Andreas Herrmann, Suravee Suthikulpanit,
	Aravind Gopalakrishnan, Fengguang Wu, Aaron Lu

On Thu, Jan 14, 2016 at 10:50:03AM +0800, Huang Rui wrote:
> Hi,
> 
> This series of patches introduces the perf implementation of the
> accumulated power reporting algorithm. It calculates the average
> power consumption of the processor. The CPU feature flag is
> CPUID.8000_0007H:EDX[12].

Ok, this is shaping up nicely, I've applied the first 4. 5 needs to be
looked at by Peter.

Thanks!

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.


* Re: [PATCH v2 5/5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-01-14  2:50 ` [PATCH v2 5/5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism Huang Rui
@ 2016-01-19 12:12   ` Peter Zijlstra
  2016-01-20  4:48     ` Huang Rui
  0 siblings, 1 reply; 21+ messages in thread
From: Peter Zijlstra @ 2016-01-19 12:12 UTC (permalink / raw)
  To: Huang Rui
  Cc: Borislav Petkov, Ingo Molnar, Andy Lutomirski, Thomas Gleixner,
	Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker, linux-kernel, spg_linux_kernel,
	x86, Guenter Roeck, Andreas Herrmann, Suravee Suthikulpanit,
	Aravind Gopalakrishnan, Borislav Petkov, Fengguang Wu, Aaron Lu

On Thu, Jan 14, 2016 at 10:50:08AM +0800, Huang Rui wrote:
> +struct power_pmu {
> +	spinlock_t		lock;

This should be a raw_spinlock_t, as it'll be nested under other
raw_spinlock_t's.

> +	struct list_head	active_list;
> +	struct pmu		*pmu; /* pointer to power_pmu_class */
> +	local64_t		cpu_sw_pwr_ptsc;
> +};

> +static void pmu_event_start(struct perf_event *event, int mode)
> +{
> +	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&pmu->lock, flags);

IRQs will be disabled here.

> +	__pmu_event_start(pmu, event);
> +	spin_unlock_irqrestore(&pmu->lock, flags);
> +}
> +
> +static void pmu_event_stop(struct perf_event *event, int mode)
> +{
> +	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
> +	struct hw_perf_event *hwc = &event->hw;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&pmu->lock, flags);

idem

> +
> +	/* mark event as deactivated and stopped */
> +	if (!(hwc->state & PERF_HES_STOPPED)) {
> +		list_del(&event->active_entry);
> +		hwc->state |= PERF_HES_STOPPED;
> +	}
> +
> +	/* check if update of sw counter is necessary */
> +	if ((mode & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
> +		/*
> +		 * Drain the remaining delta count out of an event
> +		 * that we are disabling:
> +		 */
> +		event_update(event, pmu);
> +		hwc->state |= PERF_HES_UPTODATE;
> +	}
> +
> +	spin_unlock_irqrestore(&pmu->lock, flags);
> +}
> +
> +static int pmu_event_add(struct perf_event *event, int mode)
> +{
> +	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
> +	struct hw_perf_event *hwc = &event->hw;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&pmu->lock, flags);
> +

idem

> +	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
> +
> +	if (mode & PERF_EF_START)
> +		__pmu_event_start(pmu, event);
> +
> +	spin_unlock_irqrestore(&pmu->lock, flags);
> +
> +	return 0;
> +}


> +static int power_cpu_init(int cpu)
> +{
> +	int i, cu, ret = 0;
> +	cpumask_var_t mask, dummy_mask;
> +
> +	cu = cpu / cores_per_cu;
> +
> +	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
> +		return -ENOMEM;
> +
> +	if (!zalloc_cpumask_var(&dummy_mask, GFP_KERNEL)) {
> +		ret = -ENOMEM;
> +		goto out;
> +	}
> +
> +	for (i = 0; i < cores_per_cu; i++)
> +		cpumask_set_cpu(i, mask);
> +
> +	cpumask_shift_left(mask, mask, cu * cores_per_cu);
> +
> +	if (!cpumask_and(dummy_mask, mask, &cpu_mask))
> +		cpumask_set_cpu(cpu, &cpu_mask);
> +
> +	free_cpumask_var(dummy_mask);
> +out:
> +	free_cpumask_var(mask);
> +
> +	return ret;
> +}

> +static int power_cpu_notifier(struct notifier_block *self,
> +			      unsigned long action, void *hcpu)
> +{
> +	unsigned int cpu = (long)hcpu;
> +
> +	switch (action & ~CPU_TASKS_FROZEN) {
> +	case CPU_UP_PREPARE:
> +		if (power_cpu_prepare(cpu))
> +			return NOTIFY_BAD;
> +		break;
> +	case CPU_STARTING:
> +		if (power_cpu_init(cpu))
> +			return NOTIFY_BAD;

this is called with IRQs disabled, which makes those GFP_KERNEL allocs
above a pretty bad idea.

Also, note that -rt cannot actually do _any_ allocations/frees from
STARTING.

Please move the allocs/frees to PREPARE/ONLINE.

> +		break;
> +	case CPU_ONLINE:
> +	case CPU_DEAD:
> +		power_cpu_kfree(cpu);
> +		break;
> +	case CPU_DOWN_PREPARE:
> +		if (power_cpu_exit(cpu))
> +			return NOTIFY_BAD;
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return NOTIFY_OK;
> +}


* Re: [PATCH v2 5/5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-01-19 12:12   ` Peter Zijlstra
@ 2016-01-20  4:48     ` Huang Rui
  2016-01-20  9:22       ` Peter Zijlstra
  0 siblings, 1 reply; 21+ messages in thread
From: Huang Rui @ 2016-01-20  4:48 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Borislav Petkov, Ingo Molnar, Andy Lutomirski, Thomas Gleixner,
	Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker, linux-kernel,
	spg_linux_kernel, x86, Guenter Roeck, Andreas Herrmann,
	Suravee Suthikulpanit, Aravind Gopalakrishnan, Borislav Petkov,
	Fengguang Wu, Aaron Lu

Hi Peter,

Thanks so much for your comments.

On Tue, Jan 19, 2016 at 01:12:50PM +0100, Peter Zijlstra wrote:
> On Thu, Jan 14, 2016 at 10:50:08AM +0800, Huang Rui wrote:
> > +struct power_pmu {
> > +	spinlock_t		lock;
> 
> This should be a raw_spinlock_t, as it'll be nested under other
> raw_spinlock_t's.
> 

Do you mean that the following spinlock operations run with hardware
interrupts disabled, so I need to use raw_spinlock_t instead, right?

Use raw_spin_lock_irqsave/raw_spin_unlock_irqrestore?

> > +	struct list_head	active_list;
> > +	struct pmu		*pmu; /* pointer to power_pmu_class */
> > +	local64_t		cpu_sw_pwr_ptsc;
> > +};
> 
> > +static void pmu_event_start(struct perf_event *event, int mode)
> > +{
> > +	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&pmu->lock, flags);
> 
> IRQs will be disabled here.
> 
> > +	__pmu_event_start(pmu, event);
> > +	spin_unlock_irqrestore(&pmu->lock, flags);
> > +}
> > +
> > +static void pmu_event_stop(struct perf_event *event, int mode)
> > +{
> > +	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
> > +	struct hw_perf_event *hwc = &event->hw;
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&pmu->lock, flags);
> 
> idem
> 
> > +
> > +	/* mark event as deactivated and stopped */
> > +	if (!(hwc->state & PERF_HES_STOPPED)) {
> > +		list_del(&event->active_entry);
> > +		hwc->state |= PERF_HES_STOPPED;
> > +	}
> > +
> > +	/* check if update of sw counter is necessary */
> > +	if ((mode & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
> > +		/*
> > +		 * Drain the remaining delta count out of an event
> > +		 * that we are disabling:
> > +		 */
> > +		event_update(event, pmu);
> > +		hwc->state |= PERF_HES_UPTODATE;
> > +	}
> > +
> > +	spin_unlock_irqrestore(&pmu->lock, flags);
> > +}
> > +
> > +static int pmu_event_add(struct perf_event *event, int mode)
> > +{
> > +	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
> > +	struct hw_perf_event *hwc = &event->hw;
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&pmu->lock, flags);
> > +
> 
> idem
> 
> > +	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
> > +
> > +	if (mode & PERF_EF_START)
> > +		__pmu_event_start(pmu, event);
> > +
> > +	spin_unlock_irqrestore(&pmu->lock, flags);
> > +
> > +	return 0;
> > +}
> 
> 
> > +static int power_cpu_init(int cpu)
> > +{
> > +	int i, cu, ret = 0;
> > +	cpumask_var_t mask, dummy_mask;
> > +
> > +	cu = cpu / cores_per_cu;
> > +
> > +	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
> > +		return -ENOMEM;
> > +
> > +	if (!zalloc_cpumask_var(&dummy_mask, GFP_KERNEL)) {
> > +		ret = -ENOMEM;
> > +		goto out;
> > +	}
> > +
> > +	for (i = 0; i < cores_per_cu; i++)
> > +		cpumask_set_cpu(i, mask);
> > +
> > +	cpumask_shift_left(mask, mask, cu * cores_per_cu);
> > +
> > +	if (!cpumask_and(dummy_mask, mask, &cpu_mask))
> > +		cpumask_set_cpu(cpu, &cpu_mask);
> > +
> > +	free_cpumask_var(dummy_mask);
> > +out:
> > +	free_cpumask_var(mask);
> > +
> > +	return ret;
> > +}
> 
> > +static int power_cpu_notifier(struct notifier_block *self,
> > +			      unsigned long action, void *hcpu)
> > +{
> > +	unsigned int cpu = (long)hcpu;
> > +
> > +	switch (action & ~CPU_TASKS_FROZEN) {
> > +	case CPU_UP_PREPARE:
> > +		if (power_cpu_prepare(cpu))
> > +			return NOTIFY_BAD;
> > +		break;
> > +	case CPU_STARTING:
> > +		if (power_cpu_init(cpu))
> > +			return NOTIFY_BAD;
> 
> this is called with IRQs disabled, which makes those GFP_KERNEL allocs
> above a pretty bad idea.
> 

Right, so should I use GFP_ATOMIC to allocate cpumask here?

> Also, note that -rt cannot actually do _any_ allocations/frees from
> STARTING.
> 
> Please move the allocs/frees to PREPARE/ONLINE.
> 

How about adding two cpumask_var_t to the power_pmu structure? Then
allocate the two cpumask_var_t (pmu->mask, pmu->dummy_mask), and they
can also be used in power_cpu_init.

Thanks,
Rui


* Re: [PATCH v2 5/5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-01-20  4:48     ` Huang Rui
@ 2016-01-20  9:22       ` Peter Zijlstra
  2016-01-21  7:04         ` Huang Rui
  0 siblings, 1 reply; 21+ messages in thread
From: Peter Zijlstra @ 2016-01-20  9:22 UTC (permalink / raw)
  To: Huang Rui
  Cc: Borislav Petkov, Ingo Molnar, Andy Lutomirski, Thomas Gleixner,
	Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker, linux-kernel,
	spg_linux_kernel, x86, Guenter Roeck, Andreas Herrmann,
	Suravee Suthikulpanit, Aravind Gopalakrishnan, Borislav Petkov,
	Fengguang Wu, Aaron Lu

On Wed, Jan 20, 2016 at 12:48:24PM +0800, Huang Rui wrote:
> Hi Peter,
> 
> Thanks so much for your comments.
> 
> On Tue, Jan 19, 2016 at 01:12:50PM +0100, Peter Zijlstra wrote:
> > On Thu, Jan 14, 2016 at 10:50:08AM +0800, Huang Rui wrote:
> > > +struct power_pmu {
> > > +	spinlock_t		lock;
> > 
> > This should be a raw_spinlock_t, as it'll be nested under other
> > raw_spinlock_t's.
> > 
> 
> Do you mean that the following spinlock operations run with hardware
> interrupts disabled, so I need to use raw_spinlock_t instead, right?


			mainline		-rt

raw_spinlock_t		spin-waits		spin-waits
spinlock_t		spin-waits		blocks (rt-mutex)
struct mutex		blocks			blocks (rt-mutex)


since these functions are themselves called with raw_spinlock_t held
(perf_event_context::lock for example, but also rq::lock), any lock
nested inside them must also be raw_spinlock_t.

I have a lockdep patch somewhere that checks these ordering things; I
should rebase and post that again.

> Use raw_spin_lock_irqsave/raw_spin_unlock_irqrestore?

pmu::{start,stop,add,del} will be called with IRQs already disabled.

> > > +static int power_cpu_init(int cpu)
> > > +{
> > > +	int i, cu, ret = 0;
> > > +	cpumask_var_t mask, dummy_mask;
> > > +
> > > +	cu = cpu / cores_per_cu;
> > > +
> > > +	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
> > > +		return -ENOMEM;
> > > +
> > > +	if (!zalloc_cpumask_var(&dummy_mask, GFP_KERNEL)) {
> > > +		ret = -ENOMEM;
> > > +		goto out;
> > > +	}
> > > +
> > > +	for (i = 0; i < cores_per_cu; i++)
> > > +		cpumask_set_cpu(i, mask);
> > > +
> > > +	cpumask_shift_left(mask, mask, cu * cores_per_cu);
> > > +
> > > +	if (!cpumask_and(dummy_mask, mask, &cpu_mask))
> > > +		cpumask_set_cpu(cpu, &cpu_mask);
> > > +
> > > +	free_cpumask_var(dummy_mask);
> > > +out:
> > > +	free_cpumask_var(mask);
> > > +
> > > +	return ret;
> > > +}
> > 
> > > +static int power_cpu_notifier(struct notifier_block *self,
> > > +			      unsigned long action, void *hcpu)
> > > +{
> > > +	unsigned int cpu = (long)hcpu;
> > > +
> > > +	switch (action & ~CPU_TASKS_FROZEN) {
> > > +	case CPU_UP_PREPARE:
> > > +		if (power_cpu_prepare(cpu))
> > > +			return NOTIFY_BAD;
> > > +		break;
> > > +	case CPU_STARTING:
> > > +		if (power_cpu_init(cpu))
> > > +			return NOTIFY_BAD;
> > 
> > this is called with IRQs disabled, which makes those GFP_KERNEL allocs
> > above a pretty bad idea.
> > 
> 
> Right, so should I use GFP_ATOMIC to allocate cpumask here?

One should not use GFP_ATOMIC if at all possible, also no, -rt cannot do
_any_ allocations from this site.

> > Also, note that -rt cannot actually do _any_ allocations/frees from
> > STARTING.
> > 
> > Please move the allocs/frees to PREPARE/ONLINE.
> > 
> 
> How about adding two cpumask_var_t to the power_pmu structure? Then
> allocate the two cpumask_var_t (pmu->mask, pmu->dummy_mask), and they
> can also be used in power_cpu_init.

That would work.


* Re: [PATCH v2 5/5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-01-20  9:22       ` Peter Zijlstra
@ 2016-01-21  7:04         ` Huang Rui
  2016-01-21  9:02           ` Peter Zijlstra
  0 siblings, 1 reply; 21+ messages in thread
From: Huang Rui @ 2016-01-21  7:04 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Borislav Petkov, Ingo Molnar, Andy Lutomirski, Thomas Gleixner,
	Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker, linux-kernel,
	spg_linux_kernel, x86, Guenter Roeck, Andreas Herrmann,
	Suravee Suthikulpanit, Aravind Gopalakrishnan, Borislav Petkov,
	Fengguang Wu, Aaron Lu

On Wed, Jan 20, 2016 at 10:22:44AM +0100, Peter Zijlstra wrote:
> On Wed, Jan 20, 2016 at 12:48:24PM +0800, Huang Rui wrote:
> > Hi Peter,
> > 
> > Thanks so much for your comments.
> > 
> > On Tue, Jan 19, 2016 at 01:12:50PM +0100, Peter Zijlstra wrote:
> > > On Thu, Jan 14, 2016 at 10:50:08AM +0800, Huang Rui wrote:
> > > > +struct power_pmu {
> > > > +	spinlock_t		lock;
> > > 
> > > This should be a raw_spinlock_t, as it'll be nested under other
> > > raw_spinlock_t's.
> > > 
> > 
> > Do you mean that the following spinlock operations run with hardware
> > interrupts disabled, so I need to use raw_spinlock_t instead, right?
> 
> 
> 			mainline		-rt
> 
> raw_spinlock_t		spin-waits		spin-waits
> spinlock_t		spin-waits		blocks (rt-mutex)
> struct mutex		blocks			blocks (rt-mutex)
> 
> 
> since these functions are themselves called with raw_spinlock_t held
> (perf_event_context::lock for example, but also rq::lock), any lock
> nested inside them must also be raw_spinlock_t.
> 

I see, thank you. :-)

I just took a quick look at how spinlocks work in -rt mode. The
realtime Linux kernel provides two kinds of spinlocks: the original
spinlock_t is replaced with one that is able to sleep, much like a
mutex, while the other one (the raw_spinlock_t you mentioned here)
keeps the non-sleeping behavior, i.e. it is the real spinlock.

And my lock here will also be nested under perf_event_context::lock,
right?

> I have a lockdep patch somewhere that checks these ordering things; I
> should rebase and post that again.
> 

Can you CC me when you post that patch next time?

> > Use raw_spin_lock_irqsave/raw_spin_unlock_irqrestore?
> 
> pmu::{start,stop,add,del} will be called with IRQs already disabled.
> 
> > > > +static int power_cpu_init(int cpu)
> > > > +{
> > > > +	int i, cu, ret = 0;
> > > > +	cpumask_var_t mask, dummy_mask;
> > > > +
> > > > +	cu = cpu / cores_per_cu;
> > > > +
> > > > +	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
> > > > +		return -ENOMEM;
> > > > +
> > > > +	if (!zalloc_cpumask_var(&dummy_mask, GFP_KERNEL)) {
> > > > +		ret = -ENOMEM;
> > > > +		goto out;
> > > > +	}
> > > > +
> > > > +	for (i = 0; i < cores_per_cu; i++)
> > > > +		cpumask_set_cpu(i, mask);
> > > > +
> > > > +	cpumask_shift_left(mask, mask, cu * cores_per_cu);
> > > > +
> > > > +	if (!cpumask_and(dummy_mask, mask, &cpu_mask))
> > > > +		cpumask_set_cpu(cpu, &cpu_mask);
> > > > +
> > > > +	free_cpumask_var(dummy_mask);
> > > > +out:
> > > > +	free_cpumask_var(mask);
> > > > +
> > > > +	return ret;
> > > > +}
> > > 
> > > > +static int power_cpu_notifier(struct notifier_block *self,
> > > > +			      unsigned long action, void *hcpu)
> > > > +{
> > > > +	unsigned int cpu = (long)hcpu;
> > > > +
> > > > +	switch (action & ~CPU_TASKS_FROZEN) {
> > > > +	case CPU_UP_PREPARE:
> > > > +		if (power_cpu_prepare(cpu))
> > > > +			return NOTIFY_BAD;
> > > > +		break;
> > > > +	case CPU_STARTING:
> > > > +		if (power_cpu_init(cpu))
> > > > +			return NOTIFY_BAD;
> > > 
> > > this is called with IRQs disabled, which makes those GFP_KERNEL allocs
> > > above a pretty bad idea.
> > > 
> > 
> > Right, so should I use GFP_ATOMIC to allocate cpumask here?
> 
> One should not use GFP_ATOMIC if at all possible, also no, -rt cannot do
> _any_ allocations from this site.
> 

OK, that's because an allocation might sleep while IRQs are disabled,
which would be incorrect.

> > > Also, note that -rt cannot actually do _any_ allocations/frees from
> > > STARTING.
> > > 
> > > Please move the allocs/frees to PREPARE/ONLINE.
> > > 
> > 
> > How about adding two cpumask_var_t to the power_pmu structure? Then
> > allocate the two cpumask_var_t (pmu->mask, pmu->dummy_mask), and they
> > can also be used in power_cpu_init.
> 
> That would work.

I drafted an updated diff based on the original patch, please take a
look.

8<--------------------------------------------------------------------------

diff --git a/arch/x86/kernel/cpu/perf_event_amd_power.c b/arch/x86/kernel/cpu/perf_event_amd_power.c
index 69ef234..e71d993 100644
--- a/arch/x86/kernel/cpu/perf_event_amd_power.c
+++ b/arch/x86/kernel/cpu/perf_event_amd_power.c
@@ -46,10 +46,17 @@ static unsigned int cu_num;
 static u64 max_cu_acc_power;
 
 struct power_pmu {
-	spinlock_t		lock;
+	raw_spinlock_t		lock;
 	struct list_head	active_list;
 	struct pmu		*pmu; /* pointer to power_pmu_class */
 	local64_t		cpu_sw_pwr_ptsc;
+	/*
+	 * These two cpumasks are used to avoid allocations in the
+	 * CPU_STARTING phase (power_cpu_init()), which is called
+	 * with IRQs disabled.
+	 */
+	cpumask_var_t		mask;
+	cpumask_var_t		tmp_mask;
 };
 
 static struct pmu pmu_class;
@@ -126,9 +133,9 @@ static void pmu_event_start(struct perf_event *event, int mode)
 	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
 	unsigned long flags;
 
-	spin_lock_irqsave(&pmu->lock, flags);
+	raw_spin_lock_irqsave(&pmu->lock, flags);
 	__pmu_event_start(pmu, event);
-	spin_unlock_irqrestore(&pmu->lock, flags);
+	raw_spin_unlock_irqrestore(&pmu->lock, flags);
 }
 
 static void pmu_event_stop(struct perf_event *event, int mode)
@@ -137,7 +144,7 @@ static void pmu_event_stop(struct perf_event *event, int mode)
 	struct hw_perf_event *hwc = &event->hw;
 	unsigned long flags;
 
-	spin_lock_irqsave(&pmu->lock, flags);
+	raw_spin_lock_irqsave(&pmu->lock, flags);
 
 	/* mark event as deactivated and stopped */
 	if (!(hwc->state & PERF_HES_STOPPED)) {
@@ -155,7 +162,7 @@ static void pmu_event_stop(struct perf_event *event, int mode)
 		hwc->state |= PERF_HES_UPTODATE;
 	}
 
-	spin_unlock_irqrestore(&pmu->lock, flags);
+	raw_spin_unlock_irqrestore(&pmu->lock, flags);
 }
 
 static int pmu_event_add(struct perf_event *event, int mode)
@@ -164,14 +171,14 @@ static int pmu_event_add(struct perf_event *event, int mode)
 	struct hw_perf_event *hwc = &event->hw;
 	unsigned long flags;
 
-	spin_lock_irqsave(&pmu->lock, flags);
+	raw_spin_lock_irqsave(&pmu->lock, flags);
 
 	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
 
 	if (mode & PERF_EF_START)
 		__pmu_event_start(pmu, event);
 
-	spin_unlock_irqrestore(&pmu->lock, flags);
+	raw_spin_unlock_irqrestore(&pmu->lock, flags);
 
 	return 0;
 }
@@ -297,89 +304,71 @@ static int power_cpu_exit(int cpu)
 	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
 	int i, cu, ret = 0;
 	int target = nr_cpumask_bits;
-	cpumask_var_t mask, tmp_mask;
 
 	cu = cpu / cores_per_cu;
 
-	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
-		return -ENOMEM;
-
-	if (!zalloc_cpumask_var(&tmp_mask, GFP_KERNEL)) {
-		ret = -ENOMEM;
-		goto out;
-	}
+	cpumask_clear(pmu->mask);
+	cpumask_clear(pmu->tmp_mask);
 
 	for (i = 0; i < cores_per_cu; i++)
-		cpumask_set_cpu(i, mask);
+		cpumask_set_cpu(i, pmu->mask);
 
-	cpumask_shift_left(mask, mask, cu * cores_per_cu);
+	cpumask_shift_left(pmu->mask, pmu->mask, cu * cores_per_cu);
 
 	cpumask_clear_cpu(cpu, &cpu_mask);
-	cpumask_clear_cpu(cpu, mask);
+	cpumask_clear_cpu(cpu, pmu->mask);
 
-	if (!cpumask_and(tmp_mask, mask, cpu_online_mask))
-		goto out1;
+	if (!cpumask_and(pmu->tmp_mask, pmu->mask, cpu_online_mask))
+		goto out;
 
 	/*
 	 * find a new CPU on same compute unit, if was set in cpumask
 	 * and still some CPUs on compute unit, then move to the new
 	 * CPU
 	 */
-	target = cpumask_any(tmp_mask);
+	target = cpumask_any(pmu->tmp_mask);
 	if (target < nr_cpumask_bits && target != cpu)
 		cpumask_set_cpu(target, &cpu_mask);
 
 	WARN_ON(cpumask_empty(&cpu_mask));
 
-out1:
+out:
 	/*
 	 * migrate events and context to new CPU
 	 */
 	if (target < nr_cpumask_bits)
 		perf_pmu_migrate_context(pmu->pmu, cpu, target);
 
-	free_cpumask_var(tmp_mask);
-out:
-	free_cpumask_var(mask);
-
 	return ret;
 
 }
 
 static int power_cpu_init(int cpu)
 {
-	int i, cu, ret = 0;
-	cpumask_var_t mask, dummy_mask;
-
-	cu = cpu / cores_per_cu;
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+	int i, cu;
 
-	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
-		return -ENOMEM;
+	if (pmu)
+		return 0;
 
-	if (!zalloc_cpumask_var(&dummy_mask, GFP_KERNEL)) {
-		ret = -ENOMEM;
-		goto out;
-	}
+	cu = cpu / cores_per_cu;
 
 	for (i = 0; i < cores_per_cu; i++)
-		cpumask_set_cpu(i, mask);
+		cpumask_set_cpu(i, pmu->mask);
 
-	cpumask_shift_left(mask, mask, cu * cores_per_cu);
+	cpumask_shift_left(pmu->mask, pmu->mask, cu * cores_per_cu);
 
-	if (!cpumask_and(dummy_mask, mask, &cpu_mask))
+	if (!cpumask_and(pmu->tmp_mask, pmu->mask, &cpu_mask))
 		cpumask_set_cpu(cpu, &cpu_mask);
 
-	free_cpumask_var(dummy_mask);
-out:
-	free_cpumask_var(mask);
-
-	return ret;
+	return 0;
 }
 
 static int power_cpu_prepare(int cpu)
 {
 	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
 	int phys_id = topology_physical_package_id(cpu);
+	int ret = 0;
 
 	if (pmu)
 		return 0;
@@ -391,7 +380,17 @@ static int power_cpu_prepare(int cpu)
 	if (!pmu)
 		return -ENOMEM;
 
-	spin_lock_init(&pmu->lock);
+	if (!zalloc_cpumask_var(&pmu->mask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	if (!zalloc_cpumask_var(&pmu->tmp_mask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto out1;
+	}
+
+	raw_spin_lock_init(&pmu->lock);
 
 	INIT_LIST_HEAD(&pmu->active_list);
 
@@ -400,12 +399,21 @@ static int power_cpu_prepare(int cpu)
 	per_cpu(amd_power_pmu, cpu) = pmu;
 
 	return 0;
+
+out1:
+	free_cpumask_var(pmu->mask);
+out:
+	kfree(pmu);
+
+	return ret;
 }
 
 static void power_cpu_kfree(int cpu)
 {
 	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
 
+	free_cpumask_var(pmu->mask);
+	free_cpumask_var(pmu->tmp_mask);
 	kfree(pmu);
 }


* Re: [PATCH v2 5/5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-01-21  7:04         ` Huang Rui
@ 2016-01-21  9:02           ` Peter Zijlstra
  2016-01-21 14:42             ` Huang Rui
  0 siblings, 1 reply; 21+ messages in thread
From: Peter Zijlstra @ 2016-01-21  9:02 UTC (permalink / raw)
  To: Huang Rui
  Cc: Borislav Petkov, Ingo Molnar, Andy Lutomirski, Thomas Gleixner,
	Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker, linux-kernel,
	spg_linux_kernel, x86, Guenter Roeck, Andreas Herrmann,
	Suravee Suthikulpanit, Aravind Gopalakrishnan, Borislav Petkov,
	Fengguang Wu, Aaron Lu

On Thu, Jan 21, 2016 at 03:04:38PM +0800, Huang Rui wrote:
> I just took a quick look at how spinlocks work in -rt mode. The
> realtime Linux kernel provides two kinds of spinlocks: the original
> spinlock_t is replaced with one that is able to sleep, much like a
> mutex, while the other one (the raw_spinlock_t you mentioned here)
> keeps the non-sleeping behavior, i.e. it is the real spinlock.
> 
> And my lock here will also be nested under perf_event_context::lock,
> right?

Yep.

> > I have a lockdep patch somewhere that checks these ordering things; I
> > should rebase and post that again.
> > 
> 
> Can you CC me when you post that patch next time?

Sure.

> > One should not use GFP_ATOMIC if at all possible, also no, -rt cannot do
> > _any_ allocations from this site.
> > 
> 
> OK, that's because allocation might sleep when IRQ disabled. That's
> incorrect.

Right.

It's related to the above: the allocator locks are spinlock_t and, as a
consequence of them becoming blocking locks, spin_lock_irq() will also
no longer disable IRQs.

The CPU_STARTING notifier however will still be called with IRQs
disabled because it is CPU bringup.

So on -rt even GFP_ATOMIC will no longer work here.
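
A minimal sketch of the distinction being described, with purely illustrative
names: on PREEMPT_RT a spinlock_t becomes a sleeping lock (as do the
allocator's internal locks), while a raw_spinlock_t keeps spinning, so only
the latter, plus memory set aside beforehand, is usable in an IRQs-off
context such as CPU_STARTING.

#include <linux/spinlock.h>
#include <linux/cpumask.h>
#include <linux/percpu.h>

static DEFINE_SPINLOCK(may_sleep_on_rt);	/* becomes a sleeping lock on -rt */
static DEFINE_RAW_SPINLOCK(always_spins);	/* real spinlock on every config */
static DEFINE_PER_CPU(struct cpumask, example_mask);	/* pre-allocated, no kmalloc() needed */

static void example_prepare(int cpu)		/* CPU_UP_PREPARE: may sleep */
{
	spin_lock(&may_sleep_on_rt);
	/* GFP_KERNEL allocations would be fine in this phase */
	spin_unlock(&may_sleep_on_rt);
}

static void example_starting(int cpu)		/* CPU_STARTING: IRQs already off */
{
	raw_spin_lock(&always_spins);		/* no irqsave needed, no sleeping allowed */
	cpumask_set_cpu(cpu, per_cpu_ptr(&example_mask, cpu));
	raw_spin_unlock(&always_spins);
}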

> I draft an update diff that based on original patch, please take a
> look.
> 
> 8<--------------------------------------------------------------------------
> 
> diff --git a/arch/x86/kernel/cpu/perf_event_amd_power.c b/arch/x86/kernel/cpu/perf_event_amd_power.c
> index 69ef234..e71d993 100644
> --- a/arch/x86/kernel/cpu/perf_event_amd_power.c
> +++ b/arch/x86/kernel/cpu/perf_event_amd_power.c
> @@ -46,10 +46,17 @@ static unsigned int cu_num;
>  static u64 max_cu_acc_power;
>  
>  struct power_pmu {
> -	spinlock_t		lock;
> +	raw_spinlock_t		lock;
>  	struct list_head	active_list;
>  	struct pmu		*pmu; /* pointer to power_pmu_class */
>  	local64_t		cpu_sw_pwr_ptsc;
> +	/*
> +	 * These two cpumasks is used for avoiding the allocations on
> +	 * CPU_STARTING phase. Because power_cpu_prepare will be
> +	 * called on IRQs disabled status.
> +	 */
> +	cpumask_var_t		mask;
> +	cpumask_var_t		tmp_mask;
>  };
>  
>  static struct pmu pmu_class;
> @@ -126,9 +133,9 @@ static void pmu_event_start(struct perf_event *event, int mode)
>  	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
>  	unsigned long flags;
>  
> -	spin_lock_irqsave(&pmu->lock, flags);
> +	raw_spin_lock_irqsave(&pmu->lock, flags);
>  	__pmu_event_start(pmu, event);
> -	spin_unlock_irqrestore(&pmu->lock, flags);
> +	raw_spin_unlock_irqrestore(&pmu->lock, flags);
>  }
>  
>  static void pmu_event_stop(struct perf_event *event, int mode)
> @@ -137,7 +144,7 @@ static void pmu_event_stop(struct perf_event *event, int mode)
>  	struct hw_perf_event *hwc = &event->hw;
>  	unsigned long flags;
>  
> -	spin_lock_irqsave(&pmu->lock, flags);
> +	raw_spin_lock_irqsave(&pmu->lock, flags);
>  
>  	/* mark event as deactivated and stopped */
>  	if (!(hwc->state & PERF_HES_STOPPED)) {
> @@ -155,7 +162,7 @@ static void pmu_event_stop(struct perf_event *event, int mode)
>  		hwc->state |= PERF_HES_UPTODATE;
>  	}
>  
> -	spin_unlock_irqrestore(&pmu->lock, flags);
> +	raw_spin_unlock_irqrestore(&pmu->lock, flags);
>  }
>  
>  static int pmu_event_add(struct perf_event *event, int mode)
> @@ -164,14 +171,14 @@ static int pmu_event_add(struct perf_event *event, int mode)
>  	struct hw_perf_event *hwc = &event->hw;
>  	unsigned long flags;
>  
> -	spin_lock_irqsave(&pmu->lock, flags);
> +	raw_spin_lock_irqsave(&pmu->lock, flags);
>  
>  	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
>  
>  	if (mode & PERF_EF_START)
>  		__pmu_event_start(pmu, event);
>  
> -	spin_unlock_irqrestore(&pmu->lock, flags);
> +	raw_spin_unlock_irqrestore(&pmu->lock, flags);
>  
>  	return 0;
>  }

So for these 4 {start,stop,add,del} you can drop the irqsave/irqrestore
thing as it's guaranteed that IRQs will be disabled.
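
Roughly what that simplification looks like for one of the four callbacks,
reusing the names from the patch above (illustrative, not the final code; it
assumes the declarations in perf_event_amd_power.c):

static void pmu_event_start(struct perf_event *event, int mode)
{
	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);

	/* perf core invokes ->start()/->stop()/->add()/->del() with IRQs disabled */
	raw_spin_lock(&pmu->lock);
	__pmu_event_start(pmu, event);
	raw_spin_unlock(&pmu->lock);
}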

> +	cpumask_clear(pmu->mask);
> +	cpumask_clear(pmu->tmp_mask);
>  
>  	for (i = 0; i < cores_per_cu; i++)
> +		cpumask_set_cpu(i, pmu->mask);
>  
> +	cpumask_shift_left(pmu->mask, pmu->mask, cu * cores_per_cu);

Couldn't you simply use topology_sibling_cpumask(cpu) instead?
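
In other words (and as Rui's follow-up diff further down the thread ends up
doing), the hand-built mask could presumably collapse to a copy of the
sibling mask; struct power_pmu and pmu->mask below are the ones from the
patch above:

#include <linux/cpumask.h>
#include <linux/topology.h>

/* instead of setting cores_per_cu bits and shifting by cu * cores_per_cu */
static void fill_cu_mask(struct power_pmu *pmu, int cpu)
{
	cpumask_copy(pmu->mask, topology_sibling_cpumask(cpu));
}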

>  
>  static int power_cpu_init(int cpu)
>  {
> +	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
> +	int i, cu;
>  
> +	if (pmu)
> +		return 0;
>  
> +	cu = cpu / cores_per_cu;
>  
>  	for (i = 0; i < cores_per_cu; i++)
> +		cpumask_set_cpu(i, pmu->mask);
>  
> +	cpumask_shift_left(pmu->mask, pmu->mask, cu * cores_per_cu);

topology_sibling_cpumask(cpu) again?

>  
> +	if (!cpumask_and(pmu->tmp_mask, pmu->mask, &cpu_mask))
>  		cpumask_set_cpu(cpu, &cpu_mask);
>  
> +	return 0;
>  }
>  
>  static int power_cpu_prepare(int cpu)
>  {
>  	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
>  	int phys_id = topology_physical_package_id(cpu);
> +	int ret = 0;
>  
>  	if (pmu)
>  		return 0;
> @@ -391,7 +380,17 @@ static int power_cpu_prepare(int cpu)
>  	if (!pmu)
>  		return -ENOMEM;
>  
> +	if (!zalloc_cpumask_var(&pmu->mask, GFP_KERNEL)) {
> +		ret = -ENOMEM;
> +		goto out;
> +	}
> +
> +	if (!zalloc_cpumask_var(&pmu->tmp_mask, GFP_KERNEL)) {
> +		ret = -ENOMEM;
> +		goto out1;
> +	}
> +
> +	raw_spin_lock_init(&pmu->lock);
>  
>  	INIT_LIST_HEAD(&pmu->active_list);
>  
> @@ -400,12 +399,21 @@ static int power_cpu_prepare(int cpu)
>  	per_cpu(amd_power_pmu, cpu) = pmu;
>  
>  	return 0;
> +
> +out1:
> +	free_cpumask_var(pmu->mask);
> +out:
> +	kfree(pmu);
> +
> +	return ret;
>  }
>  
>  static void power_cpu_kfree(int cpu)
>  {
>  	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
>  
> +	free_cpumask_var(pmu->mask);
> +	free_cpumask_var(pmu->tmp_mask);
>  	kfree(pmu);
>  }

Yes this should work I think.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 5/5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-01-21  9:02           ` Peter Zijlstra
@ 2016-01-21 14:42             ` Huang Rui
  2016-01-21 15:10               ` Peter Zijlstra
  0 siblings, 1 reply; 21+ messages in thread
From: Huang Rui @ 2016-01-21 14:42 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Borislav Petkov, Ingo Molnar, Andy Lutomirski, Thomas Gleixner,
	Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker, linux-kernel,
	spg_linux_kernel, x86, Guenter Roeck, Andreas Herrmann,
	Suravee Suthikulpanit, Aravind Gopalakrishnan, Borislav Petkov,
	Fengguang Wu, Aaron Lu

On Thu, Jan 21, 2016 at 10:02:57AM +0100, Peter Zijlstra wrote:
> On Thu, Jan 21, 2016 at 03:04:38PM +0800, Huang Rui wrote:
> > I just quickly looked at about the spinlock on -rt mode. Because
> > realtime linux kernel provides two kinds of spinlock, the original
> > spinlock_t will be replaced the one which is able to sleep, actually,
> > like mutex. And another one (you mentioned here, raw_spinlock_t) can
> > keep on non-sleep behavior, that is the real spinlock.
> > 
> > And my lock here also will be nested under perf_event_context::lock,
> > right?
> 
> Yep.
> 
> > > I have a lockdep patch somewhere that checks these ordering things; I
> > > should rebase and post that again.
> > > 
> > 
> > Can you CC me when you post that patch next time?
> 
> Sure.
> 
> > > One should not use GFP_ATOMIC if at all possible, also no, -rt cannot do
> > > _any_ allocations from this site.
> > > 
> > 
> > OK, that's because allocation might sleep when IRQ disabled. That's
> > incorrect.
> 
> Right.
> 
> Its related to the above, the allocator locks are spinlock_t and as a
> consequence of them becoming a blocking lock, spin_lock_irq() will also
> no longer disable IRQs.
> 
> The CPU_STARTING notifier however will still be called with IRQs
> disabled because it is CPU bringup.
> 
> So on -rt even GFP_ATOMIC will no longer work here.
> 

OK, thanks for clarifying it.

> > I draft an update diff that based on original patch, please take a
> > look.
> > 
> > 8<--------------------------------------------------------------------------
> > 
> > diff --git a/arch/x86/kernel/cpu/perf_event_amd_power.c b/arch/x86/kernel/cpu/perf_event_amd_power.c
> > index 69ef234..e71d993 100644
> > --- a/arch/x86/kernel/cpu/perf_event_amd_power.c
> > +++ b/arch/x86/kernel/cpu/perf_event_amd_power.c
> > @@ -46,10 +46,17 @@ static unsigned int cu_num;
> >  static u64 max_cu_acc_power;
> >  
> >  struct power_pmu {
> > -	spinlock_t		lock;
> > +	raw_spinlock_t		lock;
> >  	struct list_head	active_list;
> >  	struct pmu		*pmu; /* pointer to power_pmu_class */
> >  	local64_t		cpu_sw_pwr_ptsc;
> > +	/*
> > +	 * These two cpumasks is used for avoiding the allocations on
> > +	 * CPU_STARTING phase. Because power_cpu_prepare will be
> > +	 * called on IRQs disabled status.
> > +	 */
> > +	cpumask_var_t		mask;
> > +	cpumask_var_t		tmp_mask;
> >  };
> >  
> >  static struct pmu pmu_class;
> > @@ -126,9 +133,9 @@ static void pmu_event_start(struct perf_event *event, int mode)
> >  	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
> >  	unsigned long flags;
> >  
> > -	spin_lock_irqsave(&pmu->lock, flags);
> > +	raw_spin_lock_irqsave(&pmu->lock, flags);
> >  	__pmu_event_start(pmu, event);
> > -	spin_unlock_irqrestore(&pmu->lock, flags);
> > +	raw_spin_unlock_irqrestore(&pmu->lock, flags);
> >  }
> >  
> >  static void pmu_event_stop(struct perf_event *event, int mode)
> > @@ -137,7 +144,7 @@ static void pmu_event_stop(struct perf_event *event, int mode)
> >  	struct hw_perf_event *hwc = &event->hw;
> >  	unsigned long flags;
> >  
> > -	spin_lock_irqsave(&pmu->lock, flags);
> > +	raw_spin_lock_irqsave(&pmu->lock, flags);
> >  
> >  	/* mark event as deactivated and stopped */
> >  	if (!(hwc->state & PERF_HES_STOPPED)) {
> > @@ -155,7 +162,7 @@ static void pmu_event_stop(struct perf_event *event, int mode)
> >  		hwc->state |= PERF_HES_UPTODATE;
> >  	}
> >  
> > -	spin_unlock_irqrestore(&pmu->lock, flags);
> > +	raw_spin_unlock_irqrestore(&pmu->lock, flags);
> >  }
> >  
> >  static int pmu_event_add(struct perf_event *event, int mode)
> > @@ -164,14 +171,14 @@ static int pmu_event_add(struct perf_event *event, int mode)
> >  	struct hw_perf_event *hwc = &event->hw;
> >  	unsigned long flags;
> >  
> > -	spin_lock_irqsave(&pmu->lock, flags);
> > +	raw_spin_lock_irqsave(&pmu->lock, flags);
> >  
> >  	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
> >  
> >  	if (mode & PERF_EF_START)
> >  		__pmu_event_start(pmu, event);
> >  
> > -	spin_unlock_irqrestore(&pmu->lock, flags);
> > +	raw_spin_unlock_irqrestore(&pmu->lock, flags);
> >  
> >  	return 0;
> >  }
> 
> So for these 4 {start,stop,add,del} you can drop the irqsave/irqrestore
> thing as its guaranteed that IRQs will be disabled.
> 

OK, I will remove the lock.

> > +	cpumask_clear(pmu->mask);
> > +	cpumask_clear(pmu->tmp_mask);
> >  
> >  	for (i = 0; i < cores_per_cu; i++)
> > +		cpumask_set_cpu(i, pmu->mask);
> >  
> > +	cpumask_shift_left(pmu->mask, pmu->mask, cu * cores_per_cu);
> 
> Couldn't you simply use topology_sibling_cpumask(cpu) instead?
> 

It looks like we can't. That's because the number of cores per CU (compute
unit) is read from CPUID 0x8000001e EBX, which depends on the CPU hardware.

> >  
> >  static int power_cpu_init(int cpu)
> >  {
> > +	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
> > +	int i, cu;
> >  
> > +	if (pmu)
> > +		return 0;
> >  
> > +	cu = cpu / cores_per_cu;
> >  
> >  	for (i = 0; i < cores_per_cu; i++)
> > +		cpumask_set_cpu(i, pmu->mask);
> >  
> > +	cpumask_shift_left(pmu->mask, pmu->mask, cu * cores_per_cu);
> 
> topology_sibling_cpumask(cpu) again?
> 

Ditto.

Thanks,
Rui

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 5/5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-01-21 14:42             ` Huang Rui
@ 2016-01-21 15:10               ` Peter Zijlstra
  2016-01-21 15:24                 ` Huang Rui
  2016-01-21 16:59                 ` Borislav Petkov
  0 siblings, 2 replies; 21+ messages in thread
From: Peter Zijlstra @ 2016-01-21 15:10 UTC (permalink / raw)
  To: Huang Rui
  Cc: Borislav Petkov, Ingo Molnar, Andy Lutomirski, Thomas Gleixner,
	Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker, linux-kernel,
	spg_linux_kernel, x86, Guenter Roeck, Andreas Herrmann,
	Suravee Suthikulpanit, Aravind Gopalakrishnan, Borislav Petkov,
	Fengguang Wu, Aaron Lu

On Thu, Jan 21, 2016 at 10:42:35PM +0800, Huang Rui wrote:
> > > @@ -164,14 +171,14 @@ static int pmu_event_add(struct perf_event *event, int mode)
> > >  	struct hw_perf_event *hwc = &event->hw;
> > >  	unsigned long flags;
> > >  
> > > -	spin_lock_irqsave(&pmu->lock, flags);
> > > +	raw_spin_lock_irqsave(&pmu->lock, flags);
> > >  
> > >  	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
> > >  
> > >  	if (mode & PERF_EF_START)
> > >  		__pmu_event_start(pmu, event);
> > >  
> > > -	spin_unlock_irqrestore(&pmu->lock, flags);
> > > +	raw_spin_unlock_irqrestore(&pmu->lock, flags);
> > >  
> > >  	return 0;
> > >  }
> > 
> > So for these 4 {start,stop,add,del} you can drop the irqsave/irqrestore
> > thing as its guaranteed that IRQs will be disabled.
> > 
> 
> OK, I will remove the lock.

No, the lock seems needed, as the list is global. Just the
irqsave/irqrestore part is superfluous.

> > > +	cpumask_clear(pmu->mask);
> > > +	cpumask_clear(pmu->tmp_mask);
> > >  
> > >  	for (i = 0; i < cores_per_cu; i++)
> > > +		cpumask_set_cpu(i, pmu->mask);
> > >  
> > > +	cpumask_shift_left(pmu->mask, pmu->mask, cu * cores_per_cu);
> > 
> > Couldn't you simply use topology_sibling_cpumask(cpu) instead?
> > 
> 
> Looks like we couldn't. That's because cores number per cu (compute
> unit) is got by CPUID 0x8000001e EBX. That relies on the CPU hardware.

Borislav? I thought the AMD compute unit stuff was modeled as the SMT
topology.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 5/5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-01-21 15:10               ` Peter Zijlstra
@ 2016-01-21 15:24                 ` Huang Rui
  2016-01-21 15:51                   ` Peter Zijlstra
  2016-01-21 16:59                 ` Borislav Petkov
  1 sibling, 1 reply; 21+ messages in thread
From: Huang Rui @ 2016-01-21 15:24 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Borislav Petkov, Ingo Molnar, Andy Lutomirski, Thomas Gleixner,
	Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker, linux-kernel,
	spg_linux_kernel, x86, Guenter Roeck, Andreas Herrmann,
	Suravee Suthikulpanit, Aravind Gopalakrishnan, Borislav Petkov,
	Fengguang Wu, Aaron Lu

On Thu, Jan 21, 2016 at 04:10:40PM +0100, Peter Zijlstra wrote:
> On Thu, Jan 21, 2016 at 10:42:35PM +0800, Huang Rui wrote:
> > > > @@ -164,14 +171,14 @@ static int pmu_event_add(struct perf_event *event, int mode)
> > > >  	struct hw_perf_event *hwc = &event->hw;
> > > >  	unsigned long flags;
> > > >  
> > > > -	spin_lock_irqsave(&pmu->lock, flags);
> > > > +	raw_spin_lock_irqsave(&pmu->lock, flags);
> > > >  
> > > >  	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
> > > >  
> > > >  	if (mode & PERF_EF_START)
> > > >  		__pmu_event_start(pmu, event);
> > > >  
> > > > -	spin_unlock_irqrestore(&pmu->lock, flags);
> > > > +	raw_spin_unlock_irqrestore(&pmu->lock, flags);
> > > >  
> > > >  	return 0;
> > > >  }
> > > 
> > > So for these 4 {start,stop,add,del} you can drop the irqsave/irqrestore
> > > thing as its guaranteed that IRQs will be disabled.
> > > 
> > 
> > OK, I will remove the lock.
> 
> No, the lock seems needed, as the list is global. Just the
> irqsave/irqrestore part is superfluous.
> 

But actually, the lock is only used in {start,stop,add,del}. If we
drop irqsave/irqrestore in these 4 callbacks, there won't be any use
case left for it.

Thanks,
Rui

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 5/5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-01-21 15:24                 ` Huang Rui
@ 2016-01-21 15:51                   ` Peter Zijlstra
  0 siblings, 0 replies; 21+ messages in thread
From: Peter Zijlstra @ 2016-01-21 15:51 UTC (permalink / raw)
  To: Huang Rui
  Cc: Borislav Petkov, Ingo Molnar, Andy Lutomirski, Thomas Gleixner,
	Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker, linux-kernel,
	spg_linux_kernel, x86, Guenter Roeck, Andreas Herrmann,
	Suravee Suthikulpanit, Aravind Gopalakrishnan, Borislav Petkov,
	Fengguang Wu, Aaron Lu

On Thu, Jan 21, 2016 at 11:24:14PM +0800, Huang Rui wrote:
> On Thu, Jan 21, 2016 at 04:10:40PM +0100, Peter Zijlstra wrote:
> > On Thu, Jan 21, 2016 at 10:42:35PM +0800, Huang Rui wrote:
> > > > > @@ -164,14 +171,14 @@ static int pmu_event_add(struct perf_event *event, int mode)
> > > > >  	struct hw_perf_event *hwc = &event->hw;
> > > > >  	unsigned long flags;
> > > > >  
> > > > > -	spin_lock_irqsave(&pmu->lock, flags);
> > > > > +	raw_spin_lock_irqsave(&pmu->lock, flags);
> > > > >  
> > > > >  	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
> > > > >  
> > > > >  	if (mode & PERF_EF_START)
> > > > >  		__pmu_event_start(pmu, event);
> > > > >  
> > > > > -	spin_unlock_irqrestore(&pmu->lock, flags);
> > > > > +	raw_spin_unlock_irqrestore(&pmu->lock, flags);
> > > > >  
> > > > >  	return 0;
> > > > >  }
> > > > 
> > > > So for these 4 {start,stop,add,del} you can drop the irqsave/irqrestore
> > > > thing as its guaranteed that IRQs will be disabled.
> > > > 
> > > 
> > > OK, I will remove the lock.
> > 
> > No, the lock seems needed, as the list is global. Just the
> > irqsave/irqrestore part is superfluous.
> > 
> 
> But actually, the lock is only used at {start,stop,add,del}. If we
> drop irqsave/irqrestore on these 4 things, there won't be any use
> cases.

But, but, the list is global !? something needs to serialize the access
to it, right?

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 5/5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-01-21 15:10               ` Peter Zijlstra
  2016-01-21 15:24                 ` Huang Rui
@ 2016-01-21 16:59                 ` Borislav Petkov
  2016-01-22  8:04                   ` Huang Rui
  1 sibling, 1 reply; 21+ messages in thread
From: Borislav Petkov @ 2016-01-21 16:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Huang Rui, Ingo Molnar, Andy Lutomirski, Thomas Gleixner,
	Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker, linux-kernel,
	spg_linux_kernel, x86, Guenter Roeck, Andreas Herrmann,
	Suravee Suthikulpanit, Aravind Gopalakrishnan, Fengguang Wu,
	Aaron Lu

On Thu, Jan 21, 2016 at 04:10:40PM +0100, Peter Zijlstra wrote:
> > > > +	cpumask_clear(pmu->mask);
> > > > +	cpumask_clear(pmu->tmp_mask);
> > > >  
> > > >  	for (i = 0; i < cores_per_cu; i++)
> > > > +		cpumask_set_cpu(i, pmu->mask);
> > > >  
> > > > +	cpumask_shift_left(pmu->mask, pmu->mask, cu * cores_per_cu);
> > > 
> > > Couldn't you simply use topology_sibling_cpumask(cpu) instead?
> > > 
> > 
> > Looks like we couldn't. That's because cores number per cu (compute
> > unit) is got by CPUID 0x8000001e EBX. That relies on the CPU hardware.
> 
> Borislav? I thought the AMD compute unit stuff was modeled as the SMT
> topology.

I would think so too:

	smp_num_siblings = ((ebx >> 8) & 3) + 1;

gets set based on that CPUID leaf above. And that value is
CoresPerComputeUnit which needs to be incremented by 1 to get the actual
count of cores in a compute unit.

And that participates in the setting of topology_sibling_cpumask() in
set_cpu_sibling_map().

And that looks correct on my system here:

$ grep -EriIn . /sys/devices/system/cpu/cpu?/topology/* | grep thread_siblings
/sys/devices/system/cpu/cpu0/topology/thread_siblings:1:03
/sys/devices/system/cpu/cpu0/topology/thread_siblings_list:1:0-1
/sys/devices/system/cpu/cpu1/topology/thread_siblings:1:03
/sys/devices/system/cpu/cpu1/topology/thread_siblings_list:1:0-1
/sys/devices/system/cpu/cpu2/topology/thread_siblings:1:0c
/sys/devices/system/cpu/cpu2/topology/thread_siblings_list:1:2-3
/sys/devices/system/cpu/cpu3/topology/thread_siblings:1:0c
/sys/devices/system/cpu/cpu3/topology/thread_siblings_list:1:2-3
/sys/devices/system/cpu/cpu4/topology/thread_siblings:1:30
/sys/devices/system/cpu/cpu4/topology/thread_siblings_list:1:4-5
/sys/devices/system/cpu/cpu5/topology/thread_siblings:1:30
/sys/devices/system/cpu/cpu5/topology/thread_siblings_list:1:4-5
/sys/devices/system/cpu/cpu6/topology/thread_siblings:1:c0
/sys/devices/system/cpu/cpu6/topology/thread_siblings_list:1:6-7
/sys/devices/system/cpu/cpu7/topology/thread_siblings:1:c0
/sys/devices/system/cpu/cpu7/topology/thread_siblings_list:1:6-7

and when we look at what CPUID reports:

$ cpuid -r | grep -E "^\s+0x8000001e" | awk '{ print $4 }'
ebx=0x00000100
ebx=0x00000100
ebx=0x00000101
ebx=0x00000101
ebx=0x00000102
ebx=0x00000102
ebx=0x00000103
ebx=0x00000103

We see that [15:8] is CoresPerComputeUnit, which is 1 here, and adding 1
gives 2 cores per compute unit.

And slice [7:0] gives the compute unit (CU) id of each core, so cores 0
and 1 are CU0, 2 and 3 are CU1 and so on...
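
For reference, a small user-space sketch that decodes the two EBX fields
described above; it assumes GCC's <cpuid.h> and an AMD CPU exposing leaf
0x8000001e, and it only reports the core it happens to run on:

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(0x8000001e, &eax, &ebx, &ecx, &edx)) {
		fprintf(stderr, "CPUID leaf 0x8000001e not available\n");
		return 1;
	}

	/* EBX[15:8] = CoresPerComputeUnit (add 1), EBX[7:0] = compute unit id */
	printf("cores per compute unit: %u\n", ((ebx >> 8) & 0xff) + 1);
	printf("compute unit id of this core: %u\n", ebx & 0xff);
	return 0;
}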

So Rui, why do you say you can't use topology_sibling_cpumask()?

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 5/5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-01-21 16:59                 ` Borislav Petkov
@ 2016-01-22  8:04                   ` Huang Rui
  2016-01-22 17:51                     ` Borislav Petkov
  0 siblings, 1 reply; 21+ messages in thread
From: Huang Rui @ 2016-01-22  8:04 UTC (permalink / raw)
  To: Borislav Petkov, Peter Zijlstra
  Cc: Ingo Molnar, Andy Lutomirski, Thomas Gleixner, Robert Richter,
	Jacob Shin, John Stultz, Frédéric Weisbecker,
	linux-kernel, spg_linux_kernel, x86, Guenter Roeck,
	Andreas Herrmann, Suravee Suthikulpanit, Aravind Gopalakrishnan,
	Fengguang Wu, Aaron Lu

On Thu, Jan 21, 2016 at 05:59:58PM +0100, Borislav Petkov wrote:
> On Thu, Jan 21, 2016 at 04:10:40PM +0100, Peter Zijlstra wrote:
> > > > > +	cpumask_clear(pmu->mask);
> > > > > +	cpumask_clear(pmu->tmp_mask);
> > > > >  
> > > > >  	for (i = 0; i < cores_per_cu; i++)
> > > > > +		cpumask_set_cpu(i, pmu->mask);
> > > > >  
> > > > > +	cpumask_shift_left(pmu->mask, pmu->mask, cu * cores_per_cu);
> > > > 
> > > > Couldn't you simply use topology_sibling_cpumask(cpu) instead?
> > > > 
> > > 
> > > Looks like we couldn't. That's because cores number per cu (compute
> > > unit) is got by CPUID 0x8000001e EBX. That relies on the CPU hardware.
> > 
> > Borislav? I thought the AMD compute unit stuff was modeled as the SMT
> > topology.
> 
> I would think so too:
> 
> 	smp_num_siblings = ((ebx >> 8) & 3) + 1;
> 
> gets set based on that CPUID leaf above. And that value is
> CoresPerComputeUnit which needs to be incremented by 1 to get the actual
> count of cores in a compute unit.
> 
> And that participates in the setting of topology_sibling_cpumask() in
> set_cpu_sibling_map().
> 
> And that looks correct on my system here:
> 
> $ grep -EriIn . /sys/devices/system/cpu/cpu?/topology/* | grep thread_siblings
> /sys/devices/system/cpu/cpu0/topology/thread_siblings:1:03
> /sys/devices/system/cpu/cpu0/topology/thread_siblings_list:1:0-1
> /sys/devices/system/cpu/cpu1/topology/thread_siblings:1:03
> /sys/devices/system/cpu/cpu1/topology/thread_siblings_list:1:0-1
> /sys/devices/system/cpu/cpu2/topology/thread_siblings:1:0c
> /sys/devices/system/cpu/cpu2/topology/thread_siblings_list:1:2-3
> /sys/devices/system/cpu/cpu3/topology/thread_siblings:1:0c
> /sys/devices/system/cpu/cpu3/topology/thread_siblings_list:1:2-3
> /sys/devices/system/cpu/cpu4/topology/thread_siblings:1:30
> /sys/devices/system/cpu/cpu4/topology/thread_siblings_list:1:4-5
> /sys/devices/system/cpu/cpu5/topology/thread_siblings:1:30
> /sys/devices/system/cpu/cpu5/topology/thread_siblings_list:1:4-5
> /sys/devices/system/cpu/cpu6/topology/thread_siblings:1:c0
> /sys/devices/system/cpu/cpu6/topology/thread_siblings_list:1:6-7
> /sys/devices/system/cpu/cpu7/topology/thread_siblings:1:c0
> /sys/devices/system/cpu/cpu7/topology/thread_siblings_list:1:6-7
> 
> and when we look at what CPUID reports:
> 
> $ cpuid -r | grep -E "^\s+0x8000001e" | awk '{ print $4 }'
> ebx=0x00000100
> ebx=0x00000100
> ebx=0x00000101
> ebx=0x00000101
> ebx=0x00000102
> ebx=0x00000102
> ebx=0x00000103
> ebx=0x00000103
> 
> We see that [15:8] is CoresPerComputeUnit which is + 1, so 2 cores per
> compute unit.
> 
> And slice [7:0] gives the compute unit (CU) id of each core, so cores 0
> and 1 are CU0, 2 and 3 are CU1 and so on...
> 
> So Rui, why do you say you can't use topology_sibling_cpumask()?
> 

OK, you're right. Peter, Boris, thanks for the information.
I might need to look at the topology code more deeply. :-)

So how about the update below:

8<--------------------------------------------------------------------------

diff --git a/arch/x86/kernel/cpu/perf_event_amd_power.c b/arch/x86/kernel/cpu/perf_event_amd_power.c
index 1f31157..d387fe7 100644
--- a/arch/x86/kernel/cpu/perf_event_amd_power.c
+++ b/arch/x86/kernel/cpu/perf_event_amd_power.c
@@ -301,18 +301,12 @@ static struct pmu pmu_class = {
 static int power_cpu_exit(int cpu)
 {
 	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
-	int i, cu, ret = 0;
+	int ret = 0;
 	int target = nr_cpumask_bits;
 
-	cu = cpu / cores_per_cu;
-
 	cpumask_clear(pmu->mask);
-	cpumask_clear(pmu->tmp_mask);
-
-	for (i = 0; i < cores_per_cu; i++)
-		cpumask_set_cpu(i, pmu->mask);
 
-	cpumask_shift_left(pmu->mask, pmu->mask, cu * cores_per_cu);
+	cpumask_copy(pmu->mask, topology_sibling_cpumask(cpu));
 
 	cpumask_clear_cpu(cpu, &cpu_mask);
 	cpumask_clear_cpu(cpu, pmu->mask);
@@ -345,19 +339,12 @@ out:
 static int power_cpu_init(int cpu)
 {
 	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
-	int i, cu;
 
 	if (pmu)
 		return 0;
 
-	cu = cpu / cores_per_cu;
-
-	for (i = 0; i < cores_per_cu; i++)
-		cpumask_set_cpu(i, pmu->mask);
-
-	cpumask_shift_left(pmu->mask, pmu->mask, cu * cores_per_cu);
-
-	if (!cpumask_and(pmu->tmp_mask, pmu->mask, &cpu_mask))
+	if (!cpumask_and(pmu->mask, topology_sibling_cpumask(cpu),
+			 &cpu_mask))
 		cpumask_set_cpu(cpu, &cpu_mask);
 
 	return 0;
@@ -454,7 +441,6 @@ static int __init amd_power_pmu_init(void)
 {
 	int i, ret;
 	u64 tmp;
-	cpumask_var_t tmp_mask, res_mask;
 
 	if (!x86_match_cpu(cpu_match))
 		return 0;
@@ -476,27 +462,16 @@ static int __init amd_power_pmu_init(void)
 	}
 	max_cu_acc_power = tmp;
 
-	if (!zalloc_cpumask_var(&tmp_mask, GFP_KERNEL))
-		return -ENOMEM;
-
-	if (!zalloc_cpumask_var(&res_mask, GFP_KERNEL)) {
-		ret = -ENOMEM;
-		goto out;
-	}
-
-	for (i = 0; i < cores_per_cu; i++)
-		cpumask_set_cpu(i, tmp_mask);
-
 	cpu_notifier_register_begin();
 
 	/*
 	 * Choose the one online core of each compute unit
 	 */
-	for (i = 0; i < cu_num; i++) {
+	for (i = 0; i < boot_cpu_data.x86_max_cores; i += cores_per_cu) {
 		/* WARN_ON for empty CU masks */
-		WARN_ON(!cpumask_and(res_mask, tmp_mask, cpu_online_mask));
-		cpumask_set_cpu(cpumask_any(res_mask), &cpu_mask);
-		cpumask_shift_left(tmp_mask, tmp_mask, cores_per_cu);
+		WARN_ON(cpumask_empty(topology_sibling_cpumask(i)));
+		cpumask_set_cpu(cpumask_any(topology_sibling_cpumask(i)),
+				&cpu_mask);
 	}
 
 	for_each_present_cpu(i) {
@@ -505,14 +480,14 @@ static int __init amd_power_pmu_init(void)
 			/* unwind on [0 ... i-1] CPUs */
 			while (i--)
 				power_cpu_kfree(i);
-			goto out1;
+			goto out;
 		}
 		ret = power_cpu_init(i);
 		if (ret) {
 			/* unwind on [0 ... i] CPUs */
 			while (i >= 0)
 				power_cpu_kfree(i--);
-			goto out1;
+			goto out;
 		}
 	}
 
@@ -521,17 +496,13 @@ static int __init amd_power_pmu_init(void)
 	ret = perf_pmu_register(&pmu_class, "power", -1);
 	if (WARN_ON(ret)) {
 		pr_warn("AMD Power PMU registration failed\n");
-		goto out1;
+		goto out;
 	}
 
 	pr_info("AMD Power PMU detected, %d compute units\n", cu_num);
 
-out1:
-	cpu_notifier_register_done();
-
-	free_cpumask_var(res_mask);
 out:
-	free_cpumask_var(tmp_mask);
+	cpu_notifier_register_done();
 
 	return ret;
 }

8<--------------------------------------------------------------------------

BTW, "smp_num_siblings = ((ebx >> 8) & 3) + 1" should not be placed in
init_amd(); we had better move it to bsp_init_amd(), because the AMD
"smp_num_siblings" value must be constant.

Thanks,
Rui

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 5/5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
  2016-01-22  8:04                   ` Huang Rui
@ 2016-01-22 17:51                     ` Borislav Petkov
  0 siblings, 0 replies; 21+ messages in thread
From: Borislav Petkov @ 2016-01-22 17:51 UTC (permalink / raw)
  To: Huang Rui
  Cc: Peter Zijlstra, Ingo Molnar, Andy Lutomirski, Thomas Gleixner,
	Robert Richter, Jacob Shin, John Stultz,
	Frédéric Weisbecker, linux-kernel,
	spg_linux_kernel, x86, Guenter Roeck, Andreas Herrmann,
	Suravee Suthikulpanit, Aravind Gopalakrishnan, Fengguang Wu,
	Aaron Lu

On Fri, Jan 22, 2016 at 04:04:40PM +0800, Huang Rui wrote:
> OK, you're right. Peter, Boris, thanks for your information.
> I might need look at topology deeper. :-)
> 
> So how about below update:

Please send a full patch, not those diffs on top - I don't know about
Peter but I absolutely can't grok it this way.

> BTW, "smp_num_siblings = ((ebx >> 8) & 3) + 1" should not put under
> init_amd(), we would better move it to bsp_init_amd(). Because the AMD
> "smp_num_siblings" number must be constant.

You can send me a fix once your other stuff hits tip and you can base it
on top.

Thanks.

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [tip:perf/urgent] perf/x86/amd: Move nodes_per_socket into bsp_init_amd()
  2016-01-14  2:50 ` [PATCH v2 1/5] x86/amd: move nodes_per_socket into bsp_init_amd Huang Rui
@ 2016-03-21  9:54   ` tip-bot for Huang Rui
  0 siblings, 0 replies; 21+ messages in thread
From: tip-bot for Huang Rui @ 2016-03-21  9:54 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: alexander.shishkin, suravee.suthikulpanit, dvlasenk, rric,
	vincent.weaver, peterz, bp, eranian, ray.huang, dsahern, bp,
	herrmann.der.user, namhyung, luto, linux-kernel, fengguang.wu,
	john.stultz, hecmargi, mingo, aaron.lu, linux, brgerst, jolsa,
	luto, hpa, jacob.w.shin, fweisbec, tglx, acme, torvalds,
	Aravind.Gopalakrishnan

Commit-ID:  8dfeae0d73bf803be1a533e147b3b0ea69375596
Gitweb:     http://git.kernel.org/tip/8dfeae0d73bf803be1a533e147b3b0ea69375596
Author:     Huang Rui <ray.huang@amd.com>
AuthorDate: Thu, 14 Jan 2016 10:50:04 +0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 21 Mar 2016 09:08:22 +0100

perf/x86/amd: Move nodes_per_socket into bsp_init_amd()

nodes_per_socket is static and it needn't be initialized many
times during every CPU core init. So move its initialization into
bsp_init_amd().

Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Aaron Lu <aaron.lu@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andreas Herrmann <herrmann.der.user@googlemail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Hector Marco-Gisbert <hecmargi@upv.es>
Cc: Jacob Shin <jacob.w.shin@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Richter <rric@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: spg_linux_kernel@amd.com
Link: http://lkml.kernel.org/r/1452739808-11871-2-git-send-email-ray.huang@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/kernel/cpu/amd.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index b39338c..d4b06e8 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -306,7 +306,6 @@ static void amd_get_topology(struct cpuinfo_x86 *c)
 		u32 eax, ebx, ecx, edx;
 
 		cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
-		nodes_per_socket = ((ecx >> 8) & 7) + 1;
 		node_id = ecx & 7;
 
 		/* get compute unit information */
@@ -317,7 +316,6 @@ static void amd_get_topology(struct cpuinfo_x86 *c)
 		u64 value;
 
 		rdmsrl(MSR_FAM10H_NODE_ID, value);
-		nodes_per_socket = ((value >> 3) & 7) + 1;
 		node_id = value & 7;
 	} else
 		return;
@@ -519,6 +517,18 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
 
 	if (cpu_has(c, X86_FEATURE_MWAITX))
 		use_mwaitx_delay();
+
+	if (boot_cpu_has(X86_FEATURE_TOPOEXT)) {
+		u32 ecx;
+
+		ecx = cpuid_ecx(0x8000001e);
+		nodes_per_socket = ((ecx >> 8) & 7) + 1;
+	} else if (boot_cpu_has(X86_FEATURE_NODEID_MSR)) {
+		u64 value;
+
+		rdmsrl(MSR_FAM10H_NODE_ID, value);
+		nodes_per_socket = ((value >> 3) & 7) + 1;
+	}
 }
 
 static void early_init_amd(struct cpuinfo_x86 *c)

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [tip:perf/urgent] x86/cpufeature, perf/x86: Add AMD Accumulated Power Mechanism feature flag
  2016-01-14  2:50 ` [PATCH v2 3/5] x86/cpufeature: add AMD Accumulated Power Mechanism feature flag Huang Rui
@ 2016-03-21  9:55   ` tip-bot for Huang Rui
  0 siblings, 0 replies; 21+ messages in thread
From: tip-bot for Huang Rui @ 2016-03-21  9:55 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: fengguang.wu, alexander.shishkin, fweisbec, luto, Vincent.Wan,
	hecmargi, kristen, suravee.suthikulpanit, bp, dvlasenk, peterz,
	ross.zwisler, ray.huang, brgerst, john.stultz, acme, aaron.lu,
	tglx, Aravind.Gopalakrishnan, dsahern, rric, vincent.weaver,
	jolsa, luto, linux, jacob.w.shin, eranian, hpa, namhyung, mingo,
	bp, herrmann.der.user, torvalds, linux-kernel

Commit-ID:  01fe03ff1c7ce6b6e2212cb6171a49c2858fbc7c
Gitweb:     http://git.kernel.org/tip/01fe03ff1c7ce6b6e2212cb6171a49c2858fbc7c
Author:     Huang Rui <ray.huang@amd.com>
AuthorDate: Thu, 14 Jan 2016 10:50:06 +0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 21 Mar 2016 09:35:29 +0100

x86/cpufeature, perf/x86: Add AMD Accumulated Power Mechanism feature flag

AMD CPU family 15h model 0x60 introduces a mechanism for measuring
accumulated power. It is used to report the processor power consumption
and support for it is indicated by CPUID Fn8000_0007_EDX[12].
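
As a hedged sketch (not part of this patch), a later consumer such as the
power PMU driver in this series can gate on the new flag once it is set:

#include <asm/cpufeature.h>
#include <linux/errno.h>
#include <linux/init.h>

static int __init example_acc_power_check(void)
{
	/* set by early_init_amd() when CPUID Fn8000_0007_EDX[12] is present */
	if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
		return -ENODEV;

	/* ... safe to use the accumulated power mechanism / register the PMU ... */
	return 0;
}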

Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Aaron Lu <aaron.lu@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andreas Herrmann <herrmann.der.user@googlemail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Hector Marco-Gisbert <hecmargi@upv.es>
Cc: Jacob Shin <jacob.w.shin@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Kristen Carlson Accardi <kristen@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Richter <rric@kernel.org>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: Wan Zongshun <Vincent.Wan@amd.com>
Cc: spg_linux_kernel@amd.com
Link: http://lkml.kernel.org/r/1452739808-11871-4-git-send-email-ray.huang@amd.com
[ Resolved conflict and moved the synthetic CPUID slot to 19. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/cpufeatures.h | 2 +-
 arch/x86/kernel/cpu/amd.c          | 4 ++++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 746dd6a..44ebd04 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -94,7 +94,7 @@
 #define X86_FEATURE_REP_GOOD	( 3*32+16) /* rep microcode works well */
 #define X86_FEATURE_MFENCE_RDTSC ( 3*32+17) /* "" Mfence synchronizes RDTSC */
 #define X86_FEATURE_LFENCE_RDTSC ( 3*32+18) /* "" Lfence synchronizes RDTSC */
-/* free, was #define X86_FEATURE_11AP	( 3*32+19) * "" Bad local APIC aka 11AP */
+#define X86_FEATURE_ACC_POWER	( 3*32+19) /* AMD Accumulated Power Mechanism */
 #define X86_FEATURE_NOPL	( 3*32+20) /* The NOPL (0F 1F) instructions */
 #define X86_FEATURE_ALWAYS	( 3*32+21) /* "" Always-present feature */
 #define X86_FEATURE_XTOPOLOGY	( 3*32+22) /* cpu topology enum extensions */
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index d4b06e8..68fe8d3 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -546,6 +546,10 @@ static void early_init_amd(struct cpuinfo_x86 *c)
 			set_sched_clock_stable();
 	}
 
+	/* Bit 12 of 8000_0007 edx is accumulated power mechanism. */
+	if (c->x86_power & BIT(12))
+		set_cpu_cap(c, X86_FEATURE_ACC_POWER);
+
 #ifdef CONFIG_X86_64
 	set_cpu_cap(c, X86_FEATURE_SYSCALL32);
 #else

^ permalink raw reply related	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2016-03-21 10:01 UTC | newest]

Thread overview: 21+ messages
2016-01-14  2:50 [PATCH v2 0/5] perf/x86/power: Introduce AMD accumlated power reporting mechanism Huang Rui
2016-01-14  2:50 ` [PATCH v2 1/5] x86/amd: move nodes_per_socket into bsp_init_amd Huang Rui
2016-03-21  9:54   ` [tip:perf/urgent] perf/x86/amd: Move nodes_per_socket into bsp_init_amd() tip-bot for Huang Rui
2016-01-14  2:50 ` [PATCH v2 2/5] x86/amd: add accessor for number of cores per compute unit Huang Rui
2016-01-14  2:50 ` [PATCH v2 3/5] x86/cpufeature: add AMD Accumulated Power Mechanism feature flag Huang Rui
2016-03-21  9:55   ` [tip:perf/urgent] x86/cpufeature, perf/x86: Add " tip-bot for Huang Rui
2016-01-14  2:50 ` [PATCH v2 4/5] perf/x86: Move events_sysfs_show outside CPU_SUP_INTEL Huang Rui
2016-01-14  2:50 ` [PATCH v2 5/5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism Huang Rui
2016-01-19 12:12   ` Peter Zijlstra
2016-01-20  4:48     ` Huang Rui
2016-01-20  9:22       ` Peter Zijlstra
2016-01-21  7:04         ` Huang Rui
2016-01-21  9:02           ` Peter Zijlstra
2016-01-21 14:42             ` Huang Rui
2016-01-21 15:10               ` Peter Zijlstra
2016-01-21 15:24                 ` Huang Rui
2016-01-21 15:51                   ` Peter Zijlstra
2016-01-21 16:59                 ` Borislav Petkov
2016-01-22  8:04                   ` Huang Rui
2016-01-22 17:51                     ` Borislav Petkov
2016-01-14  6:01 ` [PATCH v2 0/5] perf/x86/power: Introduce AMD accumlated " Borislav Petkov
