* [PATCH v4 0/3] arm64: cppc: add FFH support using AMUs
@ 2020-11-06 12:53 Ionela Voinescu
  2020-11-06 12:53 ` [PATCH v4 1/3] arm64: wrap and generalise counter read functions Ionela Voinescu
                   ` (4 more replies)
  0 siblings, 5 replies; 18+ messages in thread
From: Ionela Voinescu @ 2020-11-06 12:53 UTC (permalink / raw)
  To: catalin.marinas, mark.rutland, sudeep.holla, will
  Cc: morten.rasmussen, linux-arm-kernel, linux-kernel, ionela.voinescu

Hi guys,

Many thanks for everyone's review.

This series adds support for CPPC's delivered and reference performance
counters through the FFH methods by using the AMU equivalent core and
constant cycle counters.

This support is added in patch 3/3, while the first 2 patches generalise
the existing AMU counter read and validation functionality to be reused
for this use case.

The specification that drove this implementation can be found at [1],
chapter 3.2.
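
For background, a rough sketch (illustrative only, not part of this
series; names made up) of how a consumer such as the cppc_cpufreq driver
could turn two reads of the delivered/reference feedback counters into an
average frequency over the sampling interval:

	/*
	 * Illustrative only: the constant counter ticks at a fixed,
	 * known rate, so the delivered/reference delta ratio scales
	 * that rate into an average delivered frequency
	 * (overflow handling omitted).
	 */
	static u64 avg_freq_hz(u64 const_rate_hz, u64 core0, u64 const0,
			       u64 core1, u64 const1)
	{
		u64 dcore = core1 - core0;
		u64 dconst = const1 - const0;

		if (!dconst)
			return 0;

		return div64_u64(const_rate_hz * dcore, dconst);
	}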

v3 -> v4:
 - v3 can be found at [4]
 - 1/3, 3/3: Modified counter read functions as per Mark's suggestions.
 - This version is based on v5.10-rc2

v2 -> v3:
 - v2 can be found at [3]
 - Sorted out part of the issues flagged by 0day testing in patches 1/3
   and 3/3.
 - This version is based on v5.10-rc2.

RESEND v2:
 - Rebased and retested on v5.10-rc1.

v1 -> v2:
 - v1 can be found at [2]
 - The previous patch 1/4 was removed and a get_cpu_with_amu_feat()
   function was introduced instead, in 3/3, as suggested by Catalin.
   Given that most checks for the presence of AMUs are done at CPU
   level, followed by other validation, this implementation works
   better than the one initially introduced in patch 1/4 of v1.
 - Fixed warning reported by 0-day kernel test robot.
 - All build tests and FVP tests at [2] were re-run for this version.
 - This version is based on linux-next/20201001.

[1] https://documentation-service.arm.com/static/5f106ad60daa596235e80081
[2] https://lore.kernel.org/lkml/20200826130309.28027-1-ionela.voinescu@arm.com/
[3] https://lore.kernel.org/linux-arm-kernel/20201027163624.20747-1-ionela.voinescu@arm.com/
[4] https://lore.kernel.org/linux-arm-kernel/20201105122702.13916-1-ionela.voinescu@arm.com/

Thank you,
Ionela.

Ionela Voinescu (3):
  arm64: wrap and generalise counter read functions
  arm64: split counter validation function
  arm64: implement CPPC FFH support using AMUs

 arch/arm64/include/asm/cpufeature.h |   8 ++
 arch/arm64/include/asm/topology.h   |   4 +-
 arch/arm64/kernel/cpufeature.c      |  13 ++-
 arch/arm64/kernel/topology.c        | 129 ++++++++++++++++++++++------
 4 files changed, 124 insertions(+), 30 deletions(-)

-- 
2.17.1



* [PATCH v4 1/3] arm64: wrap and generalise counter read functions
  2020-11-06 12:53 [PATCH v4 0/3] arm64: cppc: add FFH support using AMUs Ionela Voinescu
@ 2020-11-06 12:53 ` Ionela Voinescu
  2020-11-13 14:11   ` Sudeep Holla
  2020-11-06 12:53 ` [PATCH v4 2/3] arm64: split counter validation function Ionela Voinescu
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 18+ messages in thread
From: Ionela Voinescu @ 2020-11-06 12:53 UTC (permalink / raw)
  To: catalin.marinas, mark.rutland, sudeep.holla, will
  Cc: morten.rasmussen, linux-arm-kernel, linux-kernel, ionela.voinescu

In preparation for other uses of Activity Monitors (AMU) cycle counters,
place counter read functionality in generic functions that can be reused:
read_corecnt() and read_constcnt().

As a result, implement update_freq_counters_refs() to replace
init_cpu_freq_invariance_counters() and both initialise and update
the per-cpu reference variables.
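
For illustration only (not part of the patch), code elsewhere in
topology.c can then sample the counters on the local CPU through the
wrappers, which simply read as 0 when CONFIG_ARM64_AMU_EXTN is not
enabled:

	u64 core = read_corecnt();	/* SYS_AMEVCNTR0_CORE_EL0, or 0 */
	u64 cnst = read_constcnt();	/* SYS_AMEVCNTR0_CONST_EL0, or 0 */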

Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/cpufeature.h |  5 +++++
 arch/arm64/include/asm/topology.h   |  4 +++-
 arch/arm64/kernel/cpufeature.c      |  5 +----
 arch/arm64/kernel/topology.c        | 23 ++++++++++++++---------
 4 files changed, 23 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 97244d4feca9..751bd9d3376b 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -765,6 +765,11 @@ static inline bool cpu_has_hw_af(void)
 #ifdef CONFIG_ARM64_AMU_EXTN
 /* Check whether the cpu supports the Activity Monitors Unit (AMU) */
 extern bool cpu_has_amu_feat(int cpu);
+#else
+static inline bool cpu_has_amu_feat(int cpu)
+{
+	return false;
+}
 #endif
 
 static inline unsigned int get_vmid_bits(u64 mmfr1)
diff --git a/arch/arm64/include/asm/topology.h b/arch/arm64/include/asm/topology.h
index 11a465243f66..3b8dca4eb08d 100644
--- a/arch/arm64/include/asm/topology.h
+++ b/arch/arm64/include/asm/topology.h
@@ -16,12 +16,14 @@ int pcibus_to_node(struct pci_bus *bus);
 
 #include <linux/arch_topology.h>
 
+void update_freq_counters_refs(void);
+void topology_scale_freq_tick(void);
+
 #ifdef CONFIG_ARM64_AMU_EXTN
 /*
  * Replace task scheduler's default counter-based
  * frequency-invariance scale factor setting.
  */
-void topology_scale_freq_tick(void);
 #define arch_scale_freq_tick topology_scale_freq_tick
 #endif /* CONFIG_ARM64_AMU_EXTN */
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index dcc165b3fc04..1142970e985b 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1526,16 +1526,13 @@ bool cpu_has_amu_feat(int cpu)
 	return cpumask_test_cpu(cpu, &amu_cpus);
 }
 
-/* Initialize the use of AMU counters for frequency invariance */
-extern void init_cpu_freq_invariance_counters(void);
-
 static void cpu_amu_enable(struct arm64_cpu_capabilities const *cap)
 {
 	if (has_cpuid_feature(cap, SCOPE_LOCAL_CPU)) {
 		pr_info("detected CPU%d: Activity Monitors Unit (AMU)\n",
 			smp_processor_id());
 		cpumask_set_cpu(smp_processor_id(), &amu_cpus);
-		init_cpu_freq_invariance_counters();
+		update_freq_counters_refs();
 	}
 }
 
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index 543c67cae02f..03f4882362ce 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -124,6 +124,12 @@ int __init parse_acpi_topology(void)
 #endif
 
 #ifdef CONFIG_ARM64_AMU_EXTN
+#define read_corecnt()	read_sysreg_s(SYS_AMEVCNTR0_CORE_EL0)
+#define read_constcnt()	read_sysreg_s(SYS_AMEVCNTR0_CONST_EL0)
+#else
+#define read_corecnt()	(0UL)
+#define read_constcnt()	(0UL)
+#endif
 
 #undef pr_fmt
 #define pr_fmt(fmt) "AMU: " fmt
@@ -133,13 +139,10 @@ static DEFINE_PER_CPU(u64, arch_const_cycles_prev);
 static DEFINE_PER_CPU(u64, arch_core_cycles_prev);
 static cpumask_var_t amu_fie_cpus;
 
-/* Initialize counter reference per-cpu variables for the current CPU */
-void init_cpu_freq_invariance_counters(void)
+void update_freq_counters_refs(void)
 {
-	this_cpu_write(arch_core_cycles_prev,
-		       read_sysreg_s(SYS_AMEVCNTR0_CORE_EL0));
-	this_cpu_write(arch_const_cycles_prev,
-		       read_sysreg_s(SYS_AMEVCNTR0_CONST_EL0));
+	this_cpu_write(arch_core_cycles_prev, read_corecnt());
+	this_cpu_write(arch_const_cycles_prev, read_constcnt());
 }
 
 static int validate_cpu_freq_invariance_counters(int cpu)
@@ -280,11 +283,14 @@ void topology_scale_freq_tick(void)
 	if (!cpumask_test_cpu(cpu, amu_fie_cpus))
 		return;
 
-	const_cnt = read_sysreg_s(SYS_AMEVCNTR0_CONST_EL0);
-	core_cnt = read_sysreg_s(SYS_AMEVCNTR0_CORE_EL0);
 	prev_const_cnt = this_cpu_read(arch_const_cycles_prev);
 	prev_core_cnt = this_cpu_read(arch_core_cycles_prev);
 
+	update_freq_counters_refs();
+
+	const_cnt = this_cpu_read(arch_const_cycles_prev);
+	core_cnt = this_cpu_read(arch_core_cycles_prev);
+
 	if (unlikely(core_cnt <= prev_core_cnt ||
 		     const_cnt <= prev_const_cnt))
 		goto store_and_exit;
@@ -309,4 +315,3 @@ void topology_scale_freq_tick(void)
 	this_cpu_write(arch_core_cycles_prev, core_cnt);
 	this_cpu_write(arch_const_cycles_prev, const_cnt);
 }
-#endif /* CONFIG_ARM64_AMU_EXTN */
-- 
2.17.1



* [PATCH v4 2/3] arm64: split counter validation function
  2020-11-06 12:53 [PATCH v4 0/3] arm64: cppc: add FFH support using AMUs Ionela Voinescu
  2020-11-06 12:53 ` [PATCH v4 1/3] arm64: wrap and generalise counter read functions Ionela Voinescu
@ 2020-11-06 12:53 ` Ionela Voinescu
  2020-11-13 14:12   ` Sudeep Holla
  2020-11-06 12:53 ` [PATCH v4 3/3] arm64: implement CPPC FFH support using AMUs Ionela Voinescu
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 18+ messages in thread
From: Ionela Voinescu @ 2020-11-06 12:53 UTC (permalink / raw)
  To: catalin.marinas, mark.rutland, sudeep.holla, will
  Cc: morten.rasmussen, linux-arm-kernel, linux-kernel, ionela.voinescu

In order for the counter validation function to be reused, split
validate_cpu_freq_invariance_counters() into:
 - freq_counters_valid(cpu) - check cpu for valid cycle counters
 - freq_inv_set_max_ratio(int cpu, u64 max_rate, u64 ref_rate) -
   generic function that sets the normalization ratio used by
   topology_scale_freq_tick()
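
As a rough worked example (numbers purely illustrative): with a 25 MHz
reference (constant) counter and a 2 GHz maximum CPU frequency, the
pre-computed ratio becomes

	ratio = (25000000 << (2 * SCHED_CAPACITY_SHIFT)) / 2000000000
	      = (25000000 << 20) / 2000000000
	      = 13107

whereas a single SCHED_CAPACITY_SHIFT would give
(25000000 << 10) / 2000000000 = 12 (truncated from 12.8, a ~6% error),
which is why the ratio keeps 2 * SCHED_CAPACITY_SHIFT bits of resolution.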

Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/topology.c | 44 +++++++++++++++++++++---------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index 03f4882362ce..b8cb16e3a2cc 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -145,45 +145,49 @@ void update_freq_counters_refs(void)
 	this_cpu_write(arch_const_cycles_prev, read_constcnt());
 }
 
-static int validate_cpu_freq_invariance_counters(int cpu)
+static inline bool freq_counters_valid(int cpu)
 {
-	u64 max_freq_hz, ratio;
-
 	if (!cpu_has_amu_feat(cpu)) {
 		pr_debug("CPU%d: counters are not supported.\n", cpu);
-		return -EINVAL;
+		return false;
 	}
 
 	if (unlikely(!per_cpu(arch_const_cycles_prev, cpu) ||
 		     !per_cpu(arch_core_cycles_prev, cpu))) {
 		pr_debug("CPU%d: cycle counters are not enabled.\n", cpu);
-		return -EINVAL;
+		return false;
 	}
 
-	/* Convert maximum frequency from KHz to Hz and validate */
-	max_freq_hz = cpufreq_get_hw_max_freq(cpu) * 1000;
-	if (unlikely(!max_freq_hz)) {
-		pr_debug("CPU%d: invalid maximum frequency.\n", cpu);
+	return true;
+}
+
+static int freq_inv_set_max_ratio(int cpu, u64 max_rate, u64 ref_rate)
+{
+	u64 ratio;
+
+	if (unlikely(!max_rate || !ref_rate)) {
+		pr_debug("CPU%d: invalid maximum or reference frequency.\n",
+			 cpu);
 		return -EINVAL;
 	}
 
 	/*
 	 * Pre-compute the fixed ratio between the frequency of the constant
-	 * counter and the maximum frequency of the CPU.
+	 * reference counter and the maximum frequency of the CPU.
 	 *
-	 *			      const_freq
-	 * arch_max_freq_scale =   ---------------- * SCHED_CAPACITY_SCALE²
-	 *			   cpuinfo_max_freq
+	 *			    ref_rate
+	 * arch_max_freq_scale =   ---------- * SCHED_CAPACITY_SCALE²
+	 *			    max_rate
 	 *
 	 * We use a factor of 2 * SCHED_CAPACITY_SHIFT -> SCHED_CAPACITY_SCALE²
 	 * in order to ensure a good resolution for arch_max_freq_scale for
-	 * very low arch timer frequencies (down to the KHz range which should
+	 * very low reference frequencies (down to the KHz range which should
 	 * be unlikely).
 	 */
-	ratio = (u64)arch_timer_get_rate() << (2 * SCHED_CAPACITY_SHIFT);
-	ratio = div64_u64(ratio, max_freq_hz);
+	ratio = ref_rate << (2 * SCHED_CAPACITY_SHIFT);
+	ratio = div64_u64(ratio, max_rate);
 	if (!ratio) {
-		WARN_ONCE(1, "System timer frequency too low.\n");
+		WARN_ONCE(1, "Reference frequency too low.\n");
 		return -EINVAL;
 	}
 
@@ -230,8 +234,12 @@ static int __init init_amu_fie(void)
 	}
 
 	for_each_present_cpu(cpu) {
-		if (validate_cpu_freq_invariance_counters(cpu))
+		if (!freq_counters_valid(cpu) ||
+		    freq_inv_set_max_ratio(cpu,
+					   cpufreq_get_hw_max_freq(cpu) * 1000,
+					   arch_timer_get_rate()))
 			continue;
+
 		cpumask_set_cpu(cpu, valid_cpus);
 		have_policy |= enable_policy_freq_counters(cpu, valid_cpus);
 	}
-- 
2.17.1



* [PATCH v4 3/3] arm64: implement CPPC FFH support using AMUs
  2020-11-06 12:53 [PATCH v4 0/3] arm64: cppc: add FFH support using AMUs Ionela Voinescu
  2020-11-06 12:53 ` [PATCH v4 1/3] arm64: wrap and generalise counter read functions Ionela Voinescu
  2020-11-06 12:53 ` [PATCH v4 2/3] arm64: split counter validation function Ionela Voinescu
@ 2020-11-06 12:53 ` Ionela Voinescu
  2020-11-12 18:00   ` Catalin Marinas
  2020-11-13 14:16   ` Sudeep Holla
  2020-11-13 15:53 ` [PATCH] arm64: abort counter_read_on_cpu() when irqs_disabled() Ionela Voinescu
  2020-11-13 20:26 ` [PATCH v4 0/3] arm64: cppc: add FFH support using AMUs Catalin Marinas
  4 siblings, 2 replies; 18+ messages in thread
From: Ionela Voinescu @ 2020-11-06 12:53 UTC (permalink / raw)
  To: catalin.marinas, mark.rutland, sudeep.holla, will
  Cc: morten.rasmussen, linux-arm-kernel, linux-kernel, ionela.voinescu

If Activity Monitors (AMUs) are present, two of the counters can be used
to implement support for CPPC's (Collaborative Processor Performance
Control) delivered and reference performance monitoring functionality
using FFH (Functional Fixed Hardware).

Given that counters for a certain CPU can only be read from that CPU,
while FFH operations can be called from any CPU for any of the CPUs, use
smp_call_function_single() to provide the requested values.

Therefore, depending on the register addresses, the following values
are returned:
 - 0x0 (DeliveredPerformanceCounterRegister): AMU core counter
 - 0x1 (ReferencePerformanceCounterRegister): AMU constant counter

The use of Activity Monitors is hidden behind the generic
cpu_read_{corecnt,constcnt}() functions.

Read functionality for these two registers represents the only current
FFH support for CPPC. Read operations for other register values or write
operations for all registers are unsupported. Therefore, keep CPPC's FFH
unsupported if no CPUs have valid AMU frequency counters. For this
purpose, get_cpu_with_amu_feat() is introduced.
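
To illustrate the masking in cpc_read_ffh() (values hypothetical): a
register described with bit_offset 8 and bit_width 16 returns bits
[23:8] of the counter, normalised to bit 0:

	*val &= GENMASK_ULL(8 + 16 - 1, 8);	/* keep bits [23:8] */
	*val >>= 8;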

Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/cpufeature.h |  3 ++
 arch/arm64/kernel/cpufeature.c      | 10 +++++
 arch/arm64/kernel/topology.c        | 64 +++++++++++++++++++++++++++++
 3 files changed, 77 insertions(+)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 751bd9d3376b..f5b44ac354dc 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -772,6 +772,9 @@ static inline bool cpu_has_amu_feat(int cpu)
 }
 #endif
 
+/* Get a cpu that supports the Activity Monitors Unit (AMU) */
+extern int get_cpu_with_amu_feat(void);
+
 static inline unsigned int get_vmid_bits(u64 mmfr1)
 {
 	int vmid_bits;
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 1142970e985b..6b08ae74ad0a 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1526,6 +1526,11 @@ bool cpu_has_amu_feat(int cpu)
 	return cpumask_test_cpu(cpu, &amu_cpus);
 }
 
+int get_cpu_with_amu_feat(void)
+{
+	return cpumask_any(&amu_cpus);
+}
+
 static void cpu_amu_enable(struct arm64_cpu_capabilities const *cap)
 {
 	if (has_cpuid_feature(cap, SCOPE_LOCAL_CPU)) {
@@ -1554,6 +1559,11 @@ static bool has_amu(const struct arm64_cpu_capabilities *cap,
 
 	return true;
 }
+#else
+int get_cpu_with_amu_feat(void)
+{
+	return nr_cpu_ids;
+}
 #endif
 
 #ifdef CONFIG_ARM64_VHE
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index b8cb16e3a2cc..7c9b6a0ecd6a 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -147,6 +147,9 @@ void update_freq_counters_refs(void)
 
 static inline bool freq_counters_valid(int cpu)
 {
+	if ((cpu >= nr_cpu_ids) || !cpumask_test_cpu(cpu, cpu_present_mask))
+		return false;
+
 	if (!cpu_has_amu_feat(cpu)) {
 		pr_debug("CPU%d: counters are not supported.\n", cpu);
 		return false;
@@ -323,3 +326,64 @@ void topology_scale_freq_tick(void)
 	this_cpu_write(arch_core_cycles_prev, core_cnt);
 	this_cpu_write(arch_const_cycles_prev, const_cnt);
 }
+
+#ifdef CONFIG_ACPI_CPPC_LIB
+#include <acpi/cppc_acpi.h>
+
+static void cpu_read_corecnt(void *val)
+{
+	*(u64 *)val = read_corecnt();
+}
+
+static void cpu_read_constcnt(void *val)
+{
+	*(u64 *)val = read_constcnt();
+}
+
+static inline
+int counters_read_on_cpu(int cpu, smp_call_func_t func, u64 *val)
+{
+	if (!cpu_has_amu_feat(cpu))
+		return -EOPNOTSUPP;
+
+	smp_call_function_single(cpu, func, val, 1);
+
+	return 0;
+}
+
+/*
+ * Refer to drivers/acpi/cppc_acpi.c for the description of the functions
+ * below.
+ */
+bool cpc_ffh_supported(void)
+{
+	return freq_counters_valid(get_cpu_with_amu_feat());
+}
+
+int cpc_read_ffh(int cpu, struct cpc_reg *reg, u64 *val)
+{
+	int ret = -EOPNOTSUPP;
+
+	switch ((u64)reg->address) {
+	case 0x0:
+		ret = counters_read_on_cpu(cpu, cpu_read_corecnt, val);
+		break;
+	case 0x1:
+		ret = counters_read_on_cpu(cpu, cpu_read_constcnt, val);
+		break;
+	}
+
+	if (!ret) {
+		*val &= GENMASK_ULL(reg->bit_offset + reg->bit_width - 1,
+				    reg->bit_offset);
+		*val >>= reg->bit_offset;
+	}
+
+	return ret;
+}
+
+int cpc_write_ffh(int cpunum, struct cpc_reg *reg, u64 val)
+{
+	return -EOPNOTSUPP;
+}
+#endif /* CONFIG_ACPI_CPPC_LIB */
-- 
2.17.1



* Re: [PATCH v4 3/3] arm64: implement CPPC FFH support using AMUs
  2020-11-06 12:53 ` [PATCH v4 3/3] arm64: implement CPPC FFH support using AMUs Ionela Voinescu
@ 2020-11-12 18:00   ` Catalin Marinas
  2020-11-13 12:28     ` Ionela Voinescu
  2020-11-13 14:16   ` Sudeep Holla
  1 sibling, 1 reply; 18+ messages in thread
From: Catalin Marinas @ 2020-11-12 18:00 UTC (permalink / raw)
  To: Ionela Voinescu
  Cc: mark.rutland, sudeep.holla, will, morten.rasmussen,
	linux-arm-kernel, linux-kernel

On Fri, Nov 06, 2020 at 12:53:34PM +0000, Ionela Voinescu wrote:
> +static inline
> +int counters_read_on_cpu(int cpu, smp_call_func_t func, u64 *val)
> +{
> +	if (!cpu_has_amu_feat(cpu))
> +		return -EOPNOTSUPP;
> +
> +	smp_call_function_single(cpu, func, val, 1);
> +
> +	return 0;
> +}

I got lost in the cpufreq call chains. Can this function ever be called
with interrupts disabled?

-- 
Catalin


* Re: [PATCH v4 3/3] arm64: implement CPPC FFH support using AMUs
  2020-11-12 18:00   ` Catalin Marinas
@ 2020-11-13 12:28     ` Ionela Voinescu
  2020-11-13 12:54       ` Catalin Marinas
  0 siblings, 1 reply; 18+ messages in thread
From: Ionela Voinescu @ 2020-11-13 12:28 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: mark.rutland, sudeep.holla, will, morten.rasmussen,
	linux-arm-kernel, linux-kernel

Hi,

On Thursday 12 Nov 2020 at 18:00:46 (+0000), Catalin Marinas wrote:
> On Fri, Nov 06, 2020 at 12:53:34PM +0000, Ionela Voinescu wrote:
> > +static inline
> > +int counters_read_on_cpu(int cpu, smp_call_func_t func, u64 *val)
> > +{
> > +	if (!cpu_has_amu_feat(cpu))
> > +		return -EOPNOTSUPP;
> > +
> > +	smp_call_function_single(cpu, func, val, 1);
> > +
> > +	return 0;
> > +}
> 
> I got lost in the cpufreq call chains. Can this function ever be called
> with interrupts disabled?
> 

The short answer is: not with the current implementation of its only
user, the cppc_cpufreq driver (given the current cpufreq implementation).

The long answer is: there is a case where the cpufreq .get function is
called with local interrupts disabled - cpufreq_quick_get(), but there
are a few "if"s in between this becoming a problem:

 1. If cppc_cpufreq ever implements .setpolicy or,
    1.1 Current implementation of cpufreq_quick_get() changes.
 2. If one of the few users of cpufreq_quick_get() is used: cppc_cpufreq
    ends up being used as CPU cooling device(+IPA governor) or
    devfreq/tegra30 is used.

 In this potential case, smp_call_function_single() will warn us of a
 call with irqs_disabled(), and if the stars align there could be a
 deadlock if two CPUs try to IPI (get counter reads of) each other.
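
 Roughly, the problematic sequence would be (illustrative only):

	CPU0                                    CPU1
	local_irq_disable();                    local_irq_disable();
	smp_call_function_single(1, f, val, 1); smp_call_function_single(0, f, val, 1);
	/* spins waiting for CPU1 to run f,     /* spins waiting for CPU0 to run f,
	   but CPU1 cannot service the IPI */      but CPU0 cannot service the IPI */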

So it could be called with irqs disabled, but a few code changes are
needed before that happens.

I can bail out of counters_read_on_cpu() if irqs_disabled() to be on the
safe side.

Thanks for the review,
Ionela.


> -- 
> Catalin


* Re: [PATCH v4 3/3] arm64: implement CPPC FFH support using AMUs
  2020-11-13 12:28     ` Ionela Voinescu
@ 2020-11-13 12:54       ` Catalin Marinas
  0 siblings, 0 replies; 18+ messages in thread
From: Catalin Marinas @ 2020-11-13 12:54 UTC (permalink / raw)
  To: Ionela Voinescu
  Cc: mark.rutland, sudeep.holla, will, morten.rasmussen,
	linux-arm-kernel, linux-kernel

On Fri, Nov 13, 2020 at 12:28:46PM +0000, Ionela Voinescu wrote:
> On Thursday 12 Nov 2020 at 18:00:46 (+0000), Catalin Marinas wrote:
> > On Fri, Nov 06, 2020 at 12:53:34PM +0000, Ionela Voinescu wrote:
> > > +static inline
> > > +int counters_read_on_cpu(int cpu, smp_call_func_t func, u64 *val)
> > > +{
> > > +	if (!cpu_has_amu_feat(cpu))
> > > +		return -EOPNOTSUPP;
> > > +
> > > +	smp_call_function_single(cpu, func, val, 1);
> > > +
> > > +	return 0;
> > > +}
> > 
> > I got lost in the cpufreq call chains. Can this function ever be called
> > with interrupts disabled?
> 
> The short answer is: not with the current implementation of its only
> user, the cppc_cpufreq driver (given the current cpufreq implementation).
> 
> The long answer is: there is a case where the cpufreq .get function is
> called with local interrupts disabled - cpufreq_quick_get(), but there
> are a few "if"s in between this becoming a problem:
> 
>  1. If cppc_cpufreq ever implements .setpolicy or,
>     1.1 Current implementation of cpufreq_quick_get() changes.
>  2. If one of the few users of cpufreq_quick_get() is used: cppc_cpufreq
>     ends up being used as CPU cooling device(+IPA governor) or
>     devfreq/tegra30 is used.
> 
>  In this potential case, smp_call_function_single() will warn us of a
>  call with irqs_disabled(), and if the stars align there could be a
>  deadlock if two CPUs try to IPI (get counter reads of) each other.
> 
> So it could be called with irqs disabled, but a few code changes are
> needed before that happens.
> 
> I can bail out of counters_read_on_cpu() if irqs_disabled() to be on the
> safe side.

Thanks for the explanation. In case we forget two years from now and the
core code changes, I think it's safe to bail out early with an error.
You can add a patch on top of this series, no need to resend the whole series.

-- 
Catalin


* Re: [PATCH v4 1/3] arm64: wrap and generalise counter read functions
  2020-11-06 12:53 ` [PATCH v4 1/3] arm64: wrap and generalise counter read functions Ionela Voinescu
@ 2020-11-13 14:11   ` Sudeep Holla
  0 siblings, 0 replies; 18+ messages in thread
From: Sudeep Holla @ 2020-11-13 14:11 UTC (permalink / raw)
  To: Ionela Voinescu
  Cc: catalin.marinas, mark.rutland, will, morten.rasmussen,
	linux-arm-kernel, linux-kernel

On Fri, Nov 06, 2020 at 12:53:32PM +0000, Ionela Voinescu wrote:
> In preparation for other uses of Activity Monitors (AMU) cycle counters,
> place counter read functionality in generic functions that can be reused:
> read_corecnt() and read_constcnt().
> 
> As a result, implement update_freq_counters_refs() to replace
> init_cpu_freq_invariance_counters() and both initialise and update
> the per-cpu reference variables.
>

Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>

-- 
Regards,
Sudeep


* Re: [PATCH v4 2/3] arm64: split counter validation function
  2020-11-06 12:53 ` [PATCH v4 2/3] arm64: split counter validation function Ionela Voinescu
@ 2020-11-13 14:12   ` Sudeep Holla
  0 siblings, 0 replies; 18+ messages in thread
From: Sudeep Holla @ 2020-11-13 14:12 UTC (permalink / raw)
  To: Ionela Voinescu
  Cc: catalin.marinas, mark.rutland, will, morten.rasmussen,
	linux-arm-kernel, linux-kernel

On Fri, Nov 06, 2020 at 12:53:33PM +0000, Ionela Voinescu wrote:
> In order for the counter validation function to be reused, split
> validate_cpu_freq_invariance_counters() into:
>  - freq_counters_valid(cpu) - check cpu for valid cycle counters
>  - freq_inv_set_max_ratio(int cpu, u64 max_rate, u64 ref_rate) -
>    generic function that sets the normalization ratio used by
>    topology_scale_freq_tick()
> 

Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>

-- 
Regards,
Sudeep


* Re: [PATCH v4 3/3] arm64: implement CPPC FFH support using AMUs
  2020-11-06 12:53 ` [PATCH v4 3/3] arm64: implement CPPC FFH support using AMUs Ionela Voinescu
  2020-11-12 18:00   ` Catalin Marinas
@ 2020-11-13 14:16   ` Sudeep Holla
  2020-11-13 16:37     ` Ionela Voinescu
  1 sibling, 1 reply; 18+ messages in thread
From: Sudeep Holla @ 2020-11-13 14:16 UTC (permalink / raw)
  To: Ionela Voinescu
  Cc: catalin.marinas, mark.rutland, will, morten.rasmussen,
	linux-arm-kernel, linux-kernel

On Fri, Nov 06, 2020 at 12:53:34PM +0000, Ionela Voinescu wrote:
> If Activity Monitors (AMUs) are present, two of the counters can be used
> to implement support for CPPC's (Collaborative Processor Performance
> Control) delivered and reference performance monitoring functionality
> using FFH (Functional Fixed Hardware).
>
> Given that counters for a certain CPU can only be read from that CPU,
> while FFH operations can be called from any CPU for any of the CPUs, use
> smp_call_function_single() to provide the requested values.
>
> Therefore, depending on the register addresses, the following values
> are returned:
>  - 0x0 (DeliveredPerformanceCounterRegister): AMU core counter
>  - 0x1 (ReferencePerformanceCounterRegister): AMU constant counter
>
> The use of Activity Monitors is hidden behind the generic
> cpu_read_{corecnt,constcnt}() functions.
>
> Read functionality for these two registers represents the only current
> FFH support for CPPC. Read operations for other register values or write
> operations for all registers are unsupported. Therefore, keep CPPC's FFH
> unsupported if no CPUs have valid AMU frequency counters. For this
> purpose, get_cpu_with_amu_feat() is introduced.
>
> Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/include/asm/cpufeature.h |  3 ++
>  arch/arm64/kernel/cpufeature.c      | 10 +++++
>  arch/arm64/kernel/topology.c        | 64 +++++++++++++++++++++++++++++
>  3 files changed, 77 insertions(+)
>
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 751bd9d3376b..f5b44ac354dc 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -772,6 +772,9 @@ static inline bool cpu_has_amu_feat(int cpu)
>  }
>  #endif
>
> +/* Get a cpu that supports the Activity Monitors Unit (AMU) */
> +extern int get_cpu_with_amu_feat(void);
> +
>  static inline unsigned int get_vmid_bits(u64 mmfr1)
>  {
>  	int vmid_bits;
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 1142970e985b..6b08ae74ad0a 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1526,6 +1526,11 @@ bool cpu_has_amu_feat(int cpu)
>  	return cpumask_test_cpu(cpu, &amu_cpus);
>  }
>
> +int get_cpu_with_amu_feat(void)
> +{
> +	return cpumask_any(&amu_cpus);
> +}
> +
>  static void cpu_amu_enable(struct arm64_cpu_capabilities const *cap)
>  {
>  	if (has_cpuid_feature(cap, SCOPE_LOCAL_CPU)) {
> @@ -1554,6 +1559,11 @@ static bool has_amu(const struct arm64_cpu_capabilities *cap,
>
>  	return true;
>  }
> +#else
> +int get_cpu_with_amu_feat(void)
> +{
> +	return nr_cpu_ids;
> +}
>  #endif
>
>  #ifdef CONFIG_ARM64_VHE
> diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
> index b8cb16e3a2cc..7c9b6a0ecd6a 100644
> --- a/arch/arm64/kernel/topology.c
> +++ b/arch/arm64/kernel/topology.c
> @@ -147,6 +147,9 @@ void update_freq_counters_refs(void)
>
>  static inline bool freq_counters_valid(int cpu)
>  {
> +	if ((cpu >= nr_cpu_ids) || !cpumask_test_cpu(cpu, cpu_present_mask))
> +		return false;
> +
>  	if (!cpu_has_amu_feat(cpu)) {
>  		pr_debug("CPU%d: counters are not supported.\n", cpu);
>  		return false;
> @@ -323,3 +326,64 @@ void topology_scale_freq_tick(void)
>  	this_cpu_write(arch_core_cycles_prev, core_cnt);
>  	this_cpu_write(arch_const_cycles_prev, const_cnt);
>  }
> +
> +#ifdef CONFIG_ACPI_CPPC_LIB
> +#include <acpi/cppc_acpi.h>

Not sure what arm64 maintainers prefer, but this code has nothing to do
with topology strictly speaking. I wonder if we can put it in a separate
file, conditionally compiled if CONFIG_ACPI_CPPC_LIB is enabled, thereby
eliminating the #ifdef (the main reason for raising this point).
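
Something like this (file name purely illustrative):

	# arch/arm64/kernel/Makefile
	obj-$(CONFIG_ACPI_CPPC_LIB)	+= cppc.o

with the cpc_*_ffh() callbacks and their helpers moved into the new
file, so topology.c would not need the #ifdef at all.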

Either way:

Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>

--
Regards,
Sudeep


* [PATCH] arm64: abort counter_read_on_cpu() when irqs_disabled()
  2020-11-06 12:53 [PATCH v4 0/3] arm64: cppc: add FFH support using AMUs Ionela Voinescu
                   ` (2 preceding siblings ...)
  2020-11-06 12:53 ` [PATCH v4 3/3] arm64: implement CPPC FFH support using AMUs Ionela Voinescu
@ 2020-11-13 15:53 ` Ionela Voinescu
  2020-11-13 16:02   ` Mark Rutland
  2020-11-13 20:26 ` [PATCH v4 0/3] arm64: cppc: add FFH support using AMUs Catalin Marinas
  4 siblings, 1 reply; 18+ messages in thread
From: Ionela Voinescu @ 2020-11-13 15:53 UTC (permalink / raw)
  To: catalin.marinas, mark.rutland, sudeep.holla, will
  Cc: morten.rasmussen, linux-arm-kernel, linux-kernel, ionela.voinescu

Given that smp_call_function_single() can deadlock when interrupts are
disabled, abort the SMP call if irqs_disabled(). This scenario is
currently not possible given the function's uses, but safeguard this for
potential future uses.

Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/topology.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index 3a083a9a8ef2..e387188741f2 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -343,7 +343,11 @@ static void cpu_read_constcnt(void *val)
 static inline
 int counters_read_on_cpu(int cpu, smp_call_func_t func, u64 *val)
 {
-	if (!cpu_has_amu_feat(cpu))
+	/*
+	 * Abort call on counterless CPU or when interrupts are
+	 * disabled - can lead to deadlock in smp sync call.
+	 */
+	if (!cpu_has_amu_feat(cpu) || unlikely(irqs_disabled()))
 		return -EOPNOTSUPP;
 
 	smp_call_function_single(cpu, func, val, 1);
-- 
2.17.1



* Re: [PATCH] arm64: abort counter_read_on_cpu() when irqs_disabled()
  2020-11-13 15:53 ` [PATCH] arm64: abort counter_read_on_cpu() when irqs_disabled() Ionela Voinescu
@ 2020-11-13 16:02   ` Mark Rutland
  2020-11-13 16:58     ` Ionela Voinescu
  0 siblings, 1 reply; 18+ messages in thread
From: Mark Rutland @ 2020-11-13 16:02 UTC (permalink / raw)
  To: Ionela Voinescu
  Cc: catalin.marinas, sudeep.holla, will, morten.rasmussen,
	linux-arm-kernel, linux-kernel

On Fri, Nov 13, 2020 at 03:53:28PM +0000, Ionela Voinescu wrote:
> Given that smp_call_function_single() can deadlock when interrupts are
> disabled, abort the SMP call if irqs_disabled(). This scenario is
> currently not possible given the function's uses, but safeguard this for
> potential future uses.

Sorry to contradict earlier feedback, but I think this is preferable
as-is, since smp_call_function_single() will
WARN_ON_ONCE(irqs_disabled()), but this will silently mask any dodgy
usage.

If we want a separate check here, I reckon we should wrap it with a
WARN_ON_ONCE(), and only relax that if/when we have a legitimate case
for calling this with IRQs disabled.

Thanks,
Mark.

> 
> Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/kernel/topology.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
> index 3a083a9a8ef2..e387188741f2 100644
> --- a/arch/arm64/kernel/topology.c
> +++ b/arch/arm64/kernel/topology.c
> @@ -343,7 +343,11 @@ static void cpu_read_constcnt(void *val)
>  static inline
>  int counters_read_on_cpu(int cpu, smp_call_func_t func, u64 *val)
>  {
> -	if (!cpu_has_amu_feat(cpu))
> +	/*
> +	 * Abort call on counterless CPU or when interrupts are
> +	 * disabled - can lead to deadlock in smp sync call.
> +	 */
> +	if (!cpu_has_amu_feat(cpu) || unlikely(irqs_disabled()))
>  		return -EOPNOTSUPP;
>  
>  	smp_call_function_single(cpu, func, val, 1);
> -- 
> 2.17.1
> 


* Re: [PATCH v4 3/3] arm64: implement CPPC FFH support using AMUs
  2020-11-13 14:16   ` Sudeep Holla
@ 2020-11-13 16:37     ` Ionela Voinescu
  2020-11-13 20:03       ` Catalin Marinas
  0 siblings, 1 reply; 18+ messages in thread
From: Ionela Voinescu @ 2020-11-13 16:37 UTC (permalink / raw)
  To: Sudeep Holla
  Cc: catalin.marinas, mark.rutland, will, morten.rasmussen,
	linux-arm-kernel, linux-kernel

Hi Sudeep,

On Friday 13 Nov 2020 at 14:16:58 (+0000), Sudeep Holla wrote:
[..]
> > diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
> > index b8cb16e3a2cc..7c9b6a0ecd6a 100644
> > --- a/arch/arm64/kernel/topology.c
> > +++ b/arch/arm64/kernel/topology.c
> > @@ -147,6 +147,9 @@ void update_freq_counters_refs(void)
> >
> >  static inline bool freq_counters_valid(int cpu)
> >  {
> > +	if ((cpu >= nr_cpu_ids) || !cpumask_test_cpu(cpu, cpu_present_mask))
> > +		return false;
> > +
> >  	if (!cpu_has_amu_feat(cpu)) {
> >  		pr_debug("CPU%d: counters are not supported.\n", cpu);
> >  		return false;
> > @@ -323,3 +326,64 @@ void topology_scale_freq_tick(void)
> >  	this_cpu_write(arch_core_cycles_prev, core_cnt);
> >  	this_cpu_write(arch_const_cycles_prev, const_cnt);
> >  }
> > +
> > +#ifdef CONFIG_ACPI_CPPC_LIB
> > +#include <acpi/cppc_acpi.h>
> 
> Not sure what arm64 maintainers prefer, but this code has nothing to do
> with topology strictly speaking. I wonder if we can put it in a separate

Yes, you are correct. I am/was wondering the same for all the
counters/AMU related functions, but given they were only used for
topology_scale_freq_tick() *until now*, it was okay to keep them in
topology.c.

But I might soon have at least one additional (to FIE and FFH) small
use case for them in the implementation of arch_freq_get_on_cpu(), so all
these functions might be better off in a separate file as well.

Side note: I don't think frequency invariance is strictly speaking
related to topology either. Neither are other functions in the
arch_topology driver. It's likely we got used to placing all
arch function implementations in either the arch_topology driver or the
<arch>/kernel/topology.c.

> file, conditionally compiled if CONFIG_ACPI_CPPC_LIB is enabled, thereby
> eliminating the #ifdef (the main reason for raising this point).
> 

I'm happy to split either one (FFH) or both (FFH and counters) into separate
files. Given the above, let me know if/how you guys prefer this done.

> Either way:
> 
> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
> 

Thank you for the reviews,
Ionela.

> --
> Regards,
> Sudeep


* Re: [PATCH] arm64: abort counter_read_on_cpu() when irqs_disabled()
  2020-11-13 16:02   ` Mark Rutland
@ 2020-11-13 16:58     ` Ionela Voinescu
  2020-11-13 17:30       ` Mark Rutland
  2020-11-13 19:55       ` Catalin Marinas
  0 siblings, 2 replies; 18+ messages in thread
From: Ionela Voinescu @ 2020-11-13 16:58 UTC (permalink / raw)
  To: Mark Rutland
  Cc: catalin.marinas, sudeep.holla, will, morten.rasmussen,
	linux-arm-kernel, linux-kernel

On Friday 13 Nov 2020 at 16:02:34 (+0000), Mark Rutland wrote:
> On Fri, Nov 13, 2020 at 03:53:28PM +0000, Ionela Voinescu wrote:
> > Given that smp_call_function_single() can deadlock when interrupts are
> > disabled, abort the SMP call if irqs_disabled(). This scenario is
> > currently not possible given the function's uses, but safeguard this for
> > potential future uses.
> 
> Sorry to contradict earlier feedback, but I think this is preferable
> as-is, since smp_call_function_single() will
> WARN_ON_ONCE(irqs_disabled()), but this will silently mask any dodgy
> usage.

Probably it only contradicts the chosen implementation.

> 
> If we want a separate check here, I reckon we should wrap it with a
> WARN_ON_ONCE(), and only relax that if/when we have a legitimate case
> for calling this with IRQs disabled.
> 

That's fair. I'll replace the condition below with:

	if (!cpu_has_amu_feat(cpu))
		return -EOPNOTSUPP;

	if (WARN_ON_ONCE(irqs_disabled()))
		return -EPERM;

Thanks for your time,
Ionela.

> Thanks,
> Mark.
> 
> > 
> > Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Will Deacon <will@kernel.org>
> > ---
> >  arch/arm64/kernel/topology.c | 6 +++++-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
> > index 3a083a9a8ef2..e387188741f2 100644
> > --- a/arch/arm64/kernel/topology.c
> > +++ b/arch/arm64/kernel/topology.c
> > @@ -343,7 +343,11 @@ static void cpu_read_constcnt(void *val)
> >  static inline
> >  int counters_read_on_cpu(int cpu, smp_call_func_t func, u64 *val)
> >  {
> > -	if (!cpu_has_amu_feat(cpu))
> > +	/*
> > +	 * Abort call on counterless CPU or when interrupts are
> > +	 * disabled - can lead to deadlock in smp sync call.
> > +	 */
> > +	if (!cpu_has_amu_feat(cpu) || unlikely(irqs_disabled()))
> >  		return -EOPNOTSUPP;
> >  
> >  	smp_call_function_single(cpu, func, val, 1);
> > -- 
> > 2.17.1
> > 


* Re: [PATCH] arm64: abort counter_read_on_cpu() when irqs_disabled()
  2020-11-13 16:58     ` Ionela Voinescu
@ 2020-11-13 17:30       ` Mark Rutland
  2020-11-13 19:55       ` Catalin Marinas
  1 sibling, 0 replies; 18+ messages in thread
From: Mark Rutland @ 2020-11-13 17:30 UTC (permalink / raw)
  To: Ionela Voinescu
  Cc: catalin.marinas, sudeep.holla, will, morten.rasmussen,
	linux-arm-kernel, linux-kernel

On Fri, Nov 13, 2020 at 04:58:43PM +0000, Ionela Voinescu wrote:
> On Friday 13 Nov 2020 at 16:02:34 (+0000), Mark Rutland wrote:
> > On Fri, Nov 13, 2020 at 03:53:28PM +0000, Ionela Voinescu wrote:
> > > Given that smp_call_function_single() can deadlock when interrupts are
> > > disabled, abort the SMP call if irqs_disabled(). This scenario is
> > > currently not possible given the function's uses, but safeguard this for
> > > potential future uses.
> > 
> > Sorry to contradict earlier feedback, but I think this is preferable
> > as-is, since smp_call_function_single() will
> > WARN_ON_ONCE(irqs_disabled()), but this will silently mask any dodgy
> > usage.
> 
> Probably it only contradicts the chosen implementation.
> 
> > 
> > If we want a separate check here, I reckon we should wrap it with a
> > WARN_ON_ONCE(), and only relax that if/when we have a legitimate case
> > for calling this with IRQs disabled.
> > 
> 
> That's fair. I'll replace the condition below with:
> 
> 	if (!cpu_has_amu_feat(cpu))
> 		return -EOPNOTSUPP;
> 
> 	if (WARN_ON_ONCE(irqs_disabled()))
> 		return -EPERM;

That'd be great, thanks!

With that, feel free to add:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.


* Re: [PATCH] arm64: abort counter_read_on_cpu() when irqs_disabled()
  2020-11-13 16:58     ` Ionela Voinescu
  2020-11-13 17:30       ` Mark Rutland
@ 2020-11-13 19:55       ` Catalin Marinas
  1 sibling, 0 replies; 18+ messages in thread
From: Catalin Marinas @ 2020-11-13 19:55 UTC (permalink / raw)
  To: Ionela Voinescu
  Cc: Mark Rutland, sudeep.holla, will, morten.rasmussen,
	linux-arm-kernel, linux-kernel

On Fri, Nov 13, 2020 at 04:58:43PM +0000, Ionela Voinescu wrote:
> On Friday 13 Nov 2020 at 16:02:34 (+0000), Mark Rutland wrote:
> > On Fri, Nov 13, 2020 at 03:53:28PM +0000, Ionela Voinescu wrote:
> > > Given that smp_call_function_single() can deadlock when interrupts are
> > > disabled, abort the SMP call if irqs_disabled(). This scenario is
> > > currently not possible given the function's uses, but safeguard this for
> > > potential future uses.
> > 
> > Sorry to contradict earlier feedback, but I think this is preferable
> > as-is, since smp_call_function_single() will
> > WARN_ON_ONCE(irqs_disabled()), but this will silently mask any dodgy
> > usage.
> 
> Probably it only contradicts the chosen implementation.
> 
> > If we want a separate check here, I reckon we should wrap it with a
> > WARN_ON_ONCE(), and only relax that if/when we have a legitimate case
> > for calling this with IRQs disabled.
> > 
> 
> That's fair. I'll replace the condition below with:
> 
> 	if (!cpu_has_amu_feat(cpu))
> 		return -EOPNOTSUPP;
> 
> 	if (WARN_ON_ONCE(irqs_disabled()))
> 		return -EPERM;

Works for me. Thanks.

-- 
Catalin


* Re: [PATCH v4 3/3] arm64: implement CPPC FFH support using AMUs
  2020-11-13 16:37     ` Ionela Voinescu
@ 2020-11-13 20:03       ` Catalin Marinas
  0 siblings, 0 replies; 18+ messages in thread
From: Catalin Marinas @ 2020-11-13 20:03 UTC (permalink / raw)
  To: Ionela Voinescu
  Cc: Sudeep Holla, mark.rutland, will, morten.rasmussen,
	linux-arm-kernel, linux-kernel

On Fri, Nov 13, 2020 at 04:37:12PM +0000, Ionela Voinescu wrote:
> On Friday 13 Nov 2020 at 14:16:58 (+0000), Sudeep Holla wrote:
> [..]
> > > diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
> > > index b8cb16e3a2cc..7c9b6a0ecd6a 100644
> > > --- a/arch/arm64/kernel/topology.c
> > > +++ b/arch/arm64/kernel/topology.c
> > > @@ -147,6 +147,9 @@ void update_freq_counters_refs(void)
> > >
> > >  static inline bool freq_counters_valid(int cpu)
> > >  {
> > > +	if ((cpu >= nr_cpu_ids) || !cpumask_test_cpu(cpu, cpu_present_mask))
> > > +		return false;
> > > +
> > >  	if (!cpu_has_amu_feat(cpu)) {
> > >  		pr_debug("CPU%d: counters are not supported.\n", cpu);
> > >  		return false;
> > > @@ -323,3 +326,64 @@ void topology_scale_freq_tick(void)
> > >  	this_cpu_write(arch_core_cycles_prev, core_cnt);
> > >  	this_cpu_write(arch_const_cycles_prev, const_cnt);
> > >  }
> > > +
> > > +#ifdef CONFIG_ACPI_CPPC_LIB
> > > +#include <acpi/cppc_acpi.h>
> > 
> > Not sure what arm64 maintainers prefer, but this code has nothing to do
> > with topology strictly speaking. I wonder if we can put it in a separate
> 
> Yes, you are correct. I am/was wondering the same for all the
> counters/AMU related functions, but given they were only used for
> topology_scale_freq_tick() *until now*, it was okay to keep them in
> topology.c.
> 
> But I might soon have at least one additional (to FIE and FFH) small
> use case for them in the implementation of arch_freq_get_on_cpu(), so all
> these functions might be better off in a separate file as well.
> 
> Side note: I don't think frequency invariance is strictly speaking
> related to topology either. Neither are other functions in the
> arch_topology driver. It's likely we got used to placing all
> arch function implementations in either the arch_topology driver or the
> <arch>/kernel/topology.c.

Yeah, it looks like these topology files became a dumping ground for
whatever is power related ;).

I'm ok with these patches as they are for now but it would be good to do
some refactoring on top and maybe move them to an amu.c file (it's not
urgent, it can be for 5.12 or when you plan to add more stuff next). I
don't have an opinion on arch_topology.c; so far it doesn't seem to
have any AMU stuff in it.

-- 
Catalin


* Re: [PATCH v4 0/3] arm64: cppc: add FFH support using AMUs
  2020-11-06 12:53 [PATCH v4 0/3] arm64: cppc: add FFH support using AMUs Ionela Voinescu
                   ` (3 preceding siblings ...)
  2020-11-13 15:53 ` [PATCH] arm64: abort counter_read_on_cpu() when irqs_disabled() Ionela Voinescu
@ 2020-11-13 20:26 ` Catalin Marinas
  4 siblings, 0 replies; 18+ messages in thread
From: Catalin Marinas @ 2020-11-13 20:26 UTC (permalink / raw)
  To: will, sudeep.holla, mark.rutland, Ionela Voinescu
  Cc: morten.rasmussen, linux-kernel, linux-arm-kernel

On Fri, 6 Nov 2020 12:53:31 +0000, Ionela Voinescu wrote:
> Many thanks for everyone's review.
> 
> This series adds support for CPPC's delivered and reference performance
> counters through the FFH methods by using the AMU equivalent core and
> constant cycle counters.
> 
> This support is added in patch 3/3, while the first 2 patches generalise
> the existing AMU counter read and validation functionality to be reused
> for this use case.
> 
> [...]

Applied to arm64 (for-next/cppc-ffh), thanks!

[1/3] arm64: wrap and generalise counter read functions
      https://git.kernel.org/arm64/c/4b9cf23c179a
[2/3] arm64: split counter validation function
      https://git.kernel.org/arm64/c/bc3b6562a1ac
[3/3] arm64: implement CPPC FFH support using AMUs
      https://git.kernel.org/arm64/c/68c5debcc06d

I also applied the irqs_disabled() abort as per Mark's comments:

      https://git.kernel.org/arm64/c/74490422522d

-- 
Catalin


