linux-arm-kernel.lists.infradead.org archive mirror
* [RFC] arm64: Enable perf events based hard lockup detector
@ 2020-05-15  8:49 Sumit Garg
  2020-05-18 14:34 ` Mark Rutland
  0 siblings, 1 reply; 4+ messages in thread
From: Sumit Garg @ 2020-05-15  8:49 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, Sumit Garg, daniel.thompson, peterz,
	catalin.marinas, jolsa, dianders, acme, alexander.shishkin,
	mingo, namhyung, tglx, will, julien.thierry.kdev

With the recent feature that allows perf events to use pseudo NMIs
as interrupts on platforms that support GICv3 or later, it is now
possible to enable the hard lockup detector (or NMI watchdog) on
arm64 platforms. So enable the corresponding support.

One thing to note here is that normally the lockup detector is
initialized just after the early initcalls, but the PMU on arm64
comes up much later, as a device_initcall(). So we need to
re-initialize the lockup detector once the PMU has been initialized.

Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
---

This patch is dependent on perf NMI patch-set [1].

[1] https://patchwork.kernel.org/cover/11047407/

 arch/arm64/Kconfig             |  2 ++
 arch/arm64/kernel/perf_event.c | 32 ++++++++++++++++++++++++++++++--
 drivers/perf/arm_pmu.c         | 11 +++++++++++
 include/linux/perf/arm_pmu.h   |  2 ++
 4 files changed, 45 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 40fb05d..36f75c2 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -160,6 +160,8 @@ config ARM64
 	select HAVE_NMI
 	select HAVE_PATA_PLATFORM
 	select HAVE_PERF_EVENTS
+	select HAVE_PERF_EVENTS_NMI if ARM64_PSEUDO_NMI
+	select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_REGS_AND_STACK_ACCESS_API
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 3ad5c8f..df57360 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -20,6 +20,8 @@
 #include <linux/perf/arm_pmu.h>
 #include <linux/platform_device.h>
 #include <linux/smp.h>
+#include <linux/nmi.h>
+#include <linux/cpufreq.h>
 
 /* ARMv8 Cortex-A53 specific event types. */
 #define ARMV8_A53_PERFCTR_PREF_LINEFILL				0xC2
@@ -1190,10 +1192,21 @@ static struct platform_driver armv8_pmu_driver = {
 
 static int __init armv8_pmu_driver_init(void)
 {
+	int ret;
+
 	if (acpi_disabled)
-		return platform_driver_register(&armv8_pmu_driver);
+		ret = platform_driver_register(&armv8_pmu_driver);
 	else
-		return arm_pmu_acpi_probe(armv8_pmuv3_init);
+		ret = arm_pmu_acpi_probe(armv8_pmuv3_init);
+
+	/*
+	 * Try to re-initialize lockup detector after PMU init in
+	 * case PMU events are triggered via NMIs.
+	 */
+	if (arm_pmu_irq_is_nmi())
+		lockup_detector_init();
+
+	return ret;
 }
 device_initcall(armv8_pmu_driver_init)
 
@@ -1225,3 +1238,18 @@ void arch_perf_update_userpage(struct perf_event *event,
 	userpg->time_shift = (u16)shift;
 	userpg->time_offset = -now;
 }
+
+#ifdef CONFIG_HARDLOCKUP_DETECTOR_PERF
+#define SAFE_MAX_CPU_FREQ	4000000000UL // 4 GHz
+u64 hw_nmi_get_sample_period(int watchdog_thresh)
+{
+	unsigned int cpu = smp_processor_id();
+	unsigned int max_cpu_freq;
+
+	max_cpu_freq = cpufreq_get_hw_max_freq(cpu);
+	if (max_cpu_freq)
+		return (u64)max_cpu_freq * 1000 * watchdog_thresh;
+	else
+		return (u64)SAFE_MAX_CPU_FREQ * watchdog_thresh;
+}
+#endif
diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index f96cfc4..691dfc9 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -718,6 +718,17 @@ static int armpmu_get_cpu_irq(struct arm_pmu *pmu, int cpu)
 	return per_cpu(hw_events->irq, cpu);
 }
 
+bool arm_pmu_irq_is_nmi(void)
+{
+	const struct pmu_irq_ops *irq_ops;
+
+	irq_ops = per_cpu(cpu_irq_ops, smp_processor_id());
+	if (irq_ops == &pmunmi_ops || irq_ops == &percpu_pmunmi_ops)
+		return true;
+	else
+		return false;
+}
+
 /*
  * PMU hardware loses all context when a CPU goes offline.
  * When a CPU is hotplugged back in, since some hardware registers are
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index d9b8b76..a71f029 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -155,6 +155,8 @@ int arm_pmu_acpi_probe(armpmu_init_fn init_fn);
 static inline int arm_pmu_acpi_probe(armpmu_init_fn init_fn) { return 0; }
 #endif
 
+bool arm_pmu_irq_is_nmi(void);
+
 /* Internal functions only for core arm_pmu code */
 struct arm_pmu *armpmu_alloc(void);
 struct arm_pmu *armpmu_alloc_atomic(void);
-- 
2.7.4


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


* Re: [RFC] arm64: Enable perf events based hard lockup detector
  2020-05-15  8:49 [RFC] arm64: Enable perf events based hard lockup detector Sumit Garg
@ 2020-05-18 14:34 ` Mark Rutland
  2020-05-18 14:52   ` Daniel Thompson
  0 siblings, 1 reply; 4+ messages in thread
From: Mark Rutland @ 2020-05-18 14:34 UTC (permalink / raw)
  To: Sumit Garg
  Cc: daniel.thompson, peterz, catalin.marinas, jolsa, dianders, acme,
	alexander.shishkin, mingo, julien.thierry.kdev, namhyung, tglx,
	will, linux-arm-kernel

Hi Sumit,

On Fri, May 15, 2020 at 02:19:53PM +0530, Sumit Garg wrote:
> With the recent feature that allows perf events to use pseudo NMIs
> as interrupts on platforms that support GICv3 or later, it is now
> possible to enable the hard lockup detector (or NMI watchdog) on
> arm64 platforms. So enable the corresponding support.

Where/when do we expect to see this used?

I thought for server systems we'd expect to have the SBSA watchdog, so
why would we need this?

> One thing to note here is that normally the lockup detector is
> initialized just after the early initcalls, but the PMU on arm64
> comes up much later, as a device_initcall(). So we need to
> re-initialize the lockup detector once the PMU has been initialized.
> 
> Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
> ---
> 
> This patch is dependent on perf NMI patch-set [1].
> 
> [1] https://patchwork.kernel.org/cover/11047407/
> 
>  arch/arm64/Kconfig             |  2 ++
>  arch/arm64/kernel/perf_event.c | 32 ++++++++++++++++++++++++++++++--
>  drivers/perf/arm_pmu.c         | 11 +++++++++++
>  include/linux/perf/arm_pmu.h   |  2 ++
>  4 files changed, 45 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 40fb05d..36f75c2 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -160,6 +160,8 @@ config ARM64
>  	select HAVE_NMI
>  	select HAVE_PATA_PLATFORM
>  	select HAVE_PERF_EVENTS
> +	select HAVE_PERF_EVENTS_NMI if ARM64_PSEUDO_NMI
> +	select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI
>  	select HAVE_PERF_REGS
>  	select HAVE_PERF_USER_STACK_DUMP
>  	select HAVE_REGS_AND_STACK_ACCESS_API
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index 3ad5c8f..df57360 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -20,6 +20,8 @@
>  #include <linux/perf/arm_pmu.h>
>  #include <linux/platform_device.h>
>  #include <linux/smp.h>
> +#include <linux/nmi.h>
> +#include <linux/cpufreq.h>
>  
>  /* ARMv8 Cortex-A53 specific event types. */
>  #define ARMV8_A53_PERFCTR_PREF_LINEFILL				0xC2
> @@ -1190,10 +1192,21 @@ static struct platform_driver armv8_pmu_driver = {
>  
>  static int __init armv8_pmu_driver_init(void)
>  {
> +	int ret;
> +
>  	if (acpi_disabled)
> -		return platform_driver_register(&armv8_pmu_driver);
> +		ret = platform_driver_register(&armv8_pmu_driver);
>  	else
> -		return arm_pmu_acpi_probe(armv8_pmuv3_init);
> +		ret = arm_pmu_acpi_probe(armv8_pmuv3_init);
> +
> +	/*
> +	 * Try to re-initialize lockup detector after PMU init in
> +	 * case PMU events are triggered via NMIs.
> +	 */
> +	if (arm_pmu_irq_is_nmi())
> +		lockup_detector_init();
> +
> +	return ret;
>  }
>  device_initcall(armv8_pmu_driver_init)
>  
> @@ -1225,3 +1238,18 @@ void arch_perf_update_userpage(struct perf_event *event,
>  	userpg->time_shift = (u16)shift;
>  	userpg->time_offset = -now;
>  }
> +
> +#ifdef CONFIG_HARDLOCKUP_DETECTOR_PERF
> +#define SAFE_MAX_CPU_FREQ	4000000000UL // 4 GHz

Why is 4GHz "safe"?

There's no architectural requirement on the maximum frequency, and it's
conceivable that there could be parts faster than this.

If the frequency is critical, then we should bail out when it is
unknown rather than guessing. If it is not critical, then we should
explain what the requirements are and why using a hard-coded value is
sane.

> +u64 hw_nmi_get_sample_period(int watchdog_thresh)
> +{
> +	unsigned int cpu = smp_processor_id();
> +	unsigned int max_cpu_freq;
> +
> +	max_cpu_freq = cpufreq_get_hw_max_freq(cpu);
> +	if (max_cpu_freq)
> +		return (u64)max_cpu_freq * 1000 * watchdog_thresh;
> +	else
> +		return (u64)SAFE_MAX_CPU_FREQ * watchdog_thresh;
> +}

I take it this uses CPU cycles?

AFAIK those can be gated in idle/retention states (e.g. for WFI/WFE or
any other instruction that could block). So if the CPU were blocked on
one of those, the counter would never overflow and trigger the
interrupt.

i.e. this isn't going to detect a hard lockup of that sort.

> +#endif
> diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
> index f96cfc4..691dfc9 100644
> --- a/drivers/perf/arm_pmu.c
> +++ b/drivers/perf/arm_pmu.c
> @@ -718,6 +718,17 @@ static int armpmu_get_cpu_irq(struct arm_pmu *pmu, int cpu)
>  	return per_cpu(hw_events->irq, cpu);
>  }
>  
> +bool arm_pmu_irq_is_nmi(void)
> +{
> +	const struct pmu_irq_ops *irq_ops;
> +
> +	irq_ops = per_cpu(cpu_irq_ops, smp_processor_id());
> +	if (irq_ops == &pmunmi_ops || irq_ops == &percpu_pmunmi_ops)
> +		return true;
> +	else
> +		return false;

You can simplify:

| if (x)
|	return true;
| else
|	return false;

... to:

| return x;
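
For this particular helper, a minimal sketch of the simplified body
(keeping the same two NMI irq_ops checks as in the patch) would be:

| bool arm_pmu_irq_is_nmi(void)
| {
| 	const struct pmu_irq_ops *irq_ops;
| 
| 	irq_ops = per_cpu(cpu_irq_ops, smp_processor_id());
| 	return irq_ops == &pmunmi_ops || irq_ops == &percpu_pmunmi_ops;
| }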

Thanks,
Mark.

> +}
> +
>  /*
>   * PMU hardware loses all context when a CPU goes offline.
>   * When a CPU is hotplugged back in, since some hardware registers are
> diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
> index d9b8b76..a71f029 100644
> --- a/include/linux/perf/arm_pmu.h
> +++ b/include/linux/perf/arm_pmu.h
> @@ -155,6 +155,8 @@ int arm_pmu_acpi_probe(armpmu_init_fn init_fn);
>  static inline int arm_pmu_acpi_probe(armpmu_init_fn init_fn) { return 0; }
>  #endif
>  
> +bool arm_pmu_irq_is_nmi(void);
> +
>  /* Internal functions only for core arm_pmu code */
>  struct arm_pmu *armpmu_alloc(void);
>  struct arm_pmu *armpmu_alloc_atomic(void);
> -- 
> 2.7.4
> 



* Re: [RFC] arm64: Enable perf events based hard lockup detector
  2020-05-18 14:34 ` Mark Rutland
@ 2020-05-18 14:52   ` Daniel Thompson
  2020-05-19  6:36     ` Sumit Garg
  0 siblings, 1 reply; 4+ messages in thread
From: Daniel Thompson @ 2020-05-18 14:52 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Sumit Garg, peterz, catalin.marinas, jolsa, dianders, acme,
	alexander.shishkin, mingo, julien.thierry.kdev, namhyung, tglx,
	will, linux-arm-kernel

On Mon, May 18, 2020 at 03:34:55PM +0100, Mark Rutland wrote:
> Hi Sumit,
> 
> On Fri, May 15, 2020 at 02:19:53PM +0530, Sumit Garg wrote:
> > With the recent feature that allows perf events to use pseudo NMIs
> > as interrupts on platforms that support GICv3 or later, it is now
> > possible to enable the hard lockup detector (or NMI watchdog) on
> > arm64 platforms. So enable the corresponding support.
> 
> Where/when do we expect to see this used?
> 
> I thought for server systems we'd expect to have the SBSA watchdog, so
> why would we need this?

I view the lockup detector as a debug tool rather than a traditional
watchdog.

Certainly kernel machinery that prints the stack trace of a CPU that
has got wedged in a manner where it cannot service interrupts should
have fairly obvious applications for debugging embedded systems.


Daniel.


> 
> > One thing to note here is that normally the lockup detector is
> > initialized just after the early initcalls, but the PMU on arm64
> > comes up much later, as a device_initcall(). So we need to
> > re-initialize the lockup detector once the PMU has been initialized.
> > 
> > Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
> > ---
> > 
> > This patch is dependent on perf NMI patch-set [1].
> > 
> > [1] https://patchwork.kernel.org/cover/11047407/
> > 
> >  arch/arm64/Kconfig             |  2 ++
> >  arch/arm64/kernel/perf_event.c | 32 ++++++++++++++++++++++++++++++--
> >  drivers/perf/arm_pmu.c         | 11 +++++++++++
> >  include/linux/perf/arm_pmu.h   |  2 ++
> >  4 files changed, 45 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > index 40fb05d..36f75c2 100644
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -160,6 +160,8 @@ config ARM64
> >  	select HAVE_NMI
> >  	select HAVE_PATA_PLATFORM
> >  	select HAVE_PERF_EVENTS
> > +	select HAVE_PERF_EVENTS_NMI if ARM64_PSEUDO_NMI
> > +	select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI
> >  	select HAVE_PERF_REGS
> >  	select HAVE_PERF_USER_STACK_DUMP
> >  	select HAVE_REGS_AND_STACK_ACCESS_API
> > diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> > index 3ad5c8f..df57360 100644
> > --- a/arch/arm64/kernel/perf_event.c
> > +++ b/arch/arm64/kernel/perf_event.c
> > @@ -20,6 +20,8 @@
> >  #include <linux/perf/arm_pmu.h>
> >  #include <linux/platform_device.h>
> >  #include <linux/smp.h>
> > +#include <linux/nmi.h>
> > +#include <linux/cpufreq.h>
> >  
> >  /* ARMv8 Cortex-A53 specific event types. */
> >  #define ARMV8_A53_PERFCTR_PREF_LINEFILL				0xC2
> > @@ -1190,10 +1192,21 @@ static struct platform_driver armv8_pmu_driver = {
> >  
> >  static int __init armv8_pmu_driver_init(void)
> >  {
> > +	int ret;
> > +
> >  	if (acpi_disabled)
> > -		return platform_driver_register(&armv8_pmu_driver);
> > +		ret = platform_driver_register(&armv8_pmu_driver);
> >  	else
> > -		return arm_pmu_acpi_probe(armv8_pmuv3_init);
> > +		ret = arm_pmu_acpi_probe(armv8_pmuv3_init);
> > +
> > +	/*
> > +	 * Try to re-initialize lockup detector after PMU init in
> > +	 * case PMU events are triggered via NMIs.
> > +	 */
> > +	if (arm_pmu_irq_is_nmi())
> > +		lockup_detector_init();
> > +
> > +	return ret;
> >  }
> >  device_initcall(armv8_pmu_driver_init)
> >  
> > @@ -1225,3 +1238,18 @@ void arch_perf_update_userpage(struct perf_event *event,
> >  	userpg->time_shift = (u16)shift;
> >  	userpg->time_offset = -now;
> >  }
> > +
> > +#ifdef CONFIG_HARDLOCKUP_DETECTOR_PERF
> > +#define SAFE_MAX_CPU_FREQ	4000000000UL // 4 GHz
> 
> Why is 4GHz "safe"?
> 
> There's no architectural requirement on the maximum frequency, and it's
> conceivable that there could be parts faster than this.
> 
> If the frequency is critical, then we should bail out when it is
> unknown rather than guessing. If it is not critical, then we should
> explain what the requirements are and why using a hard-coded value is
> sane.
> 
> > +u64 hw_nmi_get_sample_period(int watchdog_thresh)
> > +{
> > +	unsigned int cpu = smp_processor_id();
> > +	unsigned int max_cpu_freq;
> > +
> > +	max_cpu_freq = cpufreq_get_hw_max_freq(cpu);
> > +	if (max_cpu_freq)
> > +		return (u64)max_cpu_freq * 1000 * watchdog_thresh;
> > +	else
> > +		return (u64)SAFE_MAX_CPU_FREQ * watchdog_thresh;
> > +}
> 
> I take it this uses CPU cycles?
> 
> AFAIK those can be gated in idle/retention states (e.g. for WFI/WFE or
> any other instruction that could block). So if the CPU were blocked on
> one of those, the counter would never overflow and trigger the
> interrupt.
> 
> i.e. this isn't going to detect a hard lockup of that sort.
> 
> > +#endif
> > diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
> > index f96cfc4..691dfc9 100644
> > --- a/drivers/perf/arm_pmu.c
> > +++ b/drivers/perf/arm_pmu.c
> > @@ -718,6 +718,17 @@ static int armpmu_get_cpu_irq(struct arm_pmu *pmu, int cpu)
> >  	return per_cpu(hw_events->irq, cpu);
> >  }
> >  
> > +bool arm_pmu_irq_is_nmi(void)
> > +{
> > +	const struct pmu_irq_ops *irq_ops;
> > +
> > +	irq_ops = per_cpu(cpu_irq_ops, smp_processor_id());
> > +	if (irq_ops == &pmunmi_ops || irq_ops == &percpu_pmunmi_ops)
> > +		return true;
> > +	else
> > +		return false;
> 
> You can simplify:
> 
> | if (x)
> |	return true;
> | else
> |	return false;
> 
> ... to:
> 
> | return x;
> 
> Thanks,
> Mark.
> 
> > +}
> > +
> >  /*
> >   * PMU hardware loses all context when a CPU goes offline.
> >   * When a CPU is hotplugged back in, since some hardware registers are
> > diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
> > index d9b8b76..a71f029 100644
> > --- a/include/linux/perf/arm_pmu.h
> > +++ b/include/linux/perf/arm_pmu.h
> > @@ -155,6 +155,8 @@ int arm_pmu_acpi_probe(armpmu_init_fn init_fn);
> >  static inline int arm_pmu_acpi_probe(armpmu_init_fn init_fn) { return 0; }
> >  #endif
> >  
> > +bool arm_pmu_irq_is_nmi(void);
> > +
> >  /* Internal functions only for core arm_pmu code */
> >  struct arm_pmu *armpmu_alloc(void);
> >  struct arm_pmu *armpmu_alloc_atomic(void);
> > -- 
> > 2.7.4
> > 



* Re: [RFC] arm64: Enable perf events based hard lockup detector
  2020-05-18 14:52   ` Daniel Thompson
@ 2020-05-19  6:36     ` Sumit Garg
  0 siblings, 0 replies; 4+ messages in thread
From: Sumit Garg @ 2020-05-19  6:36 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Daniel Thompson, Peter Zijlstra, Catalin Marinas, jolsa,
	Douglas Anderson, acme, alexander.shishkin, mingo,
	julien.thierry.kdev, namhyung, Thomas Gleixner, Will Deacon,
	linux-arm-kernel

Hi Mark,

On Mon, 18 May 2020 at 20:22, Daniel Thompson
<daniel.thompson@linaro.org> wrote:
>
> On Mon, May 18, 2020 at 03:34:55PM +0100, Mark Rutland wrote:
> > Hi Sumit,
> >
> > On Fri, May 15, 2020 at 02:19:53PM +0530, Sumit Garg wrote:
> > > With the recent feature that allows perf events to use pseudo NMIs
> > > as interrupts on platforms that support GICv3 or later, it is now
> > > possible to enable the hard lockup detector (or NMI watchdog) on
> > > arm64 platforms. So enable the corresponding support.
> >
> > Where/when do we expect to see this used?
> >
> > I thought for server systems we'd expect to have the SBSA watchdog, so
> > why would we need this?
>
> I view the lockup detector as a debug tool rather than a traditional
> watchdog.
>
> Certainly kernel machinery that prints the stack trace of a CPU that
> has got wedged in a manner where it cannot service interrupts should
> have fairly obvious applications for debugging embedded systems.
>

+1

>
>
> >
> > > One thing to note here is that normally the lockup detector is
> > > initialized just after the early initcalls, but the PMU on arm64
> > > comes up much later, as a device_initcall(). So we need to
> > > re-initialize the lockup detector once the PMU has been initialized.
> > >
> > > Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
> > > ---
> > >
> > > This patch is dependent on perf NMI patch-set [1].
> > >
> > > [1] https://patchwork.kernel.org/cover/11047407/
> > >
> > >  arch/arm64/Kconfig             |  2 ++
> > >  arch/arm64/kernel/perf_event.c | 32 ++++++++++++++++++++++++++++++--
> > >  drivers/perf/arm_pmu.c         | 11 +++++++++++
> > >  include/linux/perf/arm_pmu.h   |  2 ++
> > >  4 files changed, 45 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > > index 40fb05d..36f75c2 100644
> > > --- a/arch/arm64/Kconfig
> > > +++ b/arch/arm64/Kconfig
> > > @@ -160,6 +160,8 @@ config ARM64
> > >     select HAVE_NMI
> > >     select HAVE_PATA_PLATFORM
> > >     select HAVE_PERF_EVENTS
> > > +   select HAVE_PERF_EVENTS_NMI if ARM64_PSEUDO_NMI
> > > +   select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI
> > >     select HAVE_PERF_REGS
> > >     select HAVE_PERF_USER_STACK_DUMP
> > >     select HAVE_REGS_AND_STACK_ACCESS_API
> > > diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> > > index 3ad5c8f..df57360 100644
> > > --- a/arch/arm64/kernel/perf_event.c
> > > +++ b/arch/arm64/kernel/perf_event.c
> > > @@ -20,6 +20,8 @@
> > >  #include <linux/perf/arm_pmu.h>
> > >  #include <linux/platform_device.h>
> > >  #include <linux/smp.h>
> > > +#include <linux/nmi.h>
> > > +#include <linux/cpufreq.h>
> > >
> > >  /* ARMv8 Cortex-A53 specific event types. */
> > >  #define ARMV8_A53_PERFCTR_PREF_LINEFILL                            0xC2
> > > @@ -1190,10 +1192,21 @@ static struct platform_driver armv8_pmu_driver = {
> > >
> > >  static int __init armv8_pmu_driver_init(void)
> > >  {
> > > +   int ret;
> > > +
> > >     if (acpi_disabled)
> > > -           return platform_driver_register(&armv8_pmu_driver);
> > > +           ret = platform_driver_register(&armv8_pmu_driver);
> > >     else
> > > -           return arm_pmu_acpi_probe(armv8_pmuv3_init);
> > > +           ret = arm_pmu_acpi_probe(armv8_pmuv3_init);
> > > +
> > > +   /*
> > > +    * Try to re-initialize lockup detector after PMU init in
> > > +    * case PMU events are triggered via NMIs.
> > > +    */
> > > +   if (arm_pmu_irq_is_nmi())
> > > +           lockup_detector_init();
> > > +
> > > +   return ret;
> > >  }
> > >  device_initcall(armv8_pmu_driver_init)
> > >
> > > @@ -1225,3 +1238,18 @@ void arch_perf_update_userpage(struct perf_event *event,
> > >     userpg->time_shift = (u16)shift;
> > >     userpg->time_offset = -now;
> > >  }
> > > +
> > > +#ifdef CONFIG_HARDLOCKUP_DETECTOR_PERF
> > > +#define SAFE_MAX_CPU_FREQ  4000000000UL // 4 GHz
> >
> > Why is 4GHz "safe"?
> >
> > There's no architectural requirement on the maximum frequency, and it's
> > conceivable that there could be parts faster than this.
> >
> > If the frequency is critical, then we should bail out when it is
> > unknown rather than guessing. If it is not critical, then we should
> > explain what the requirements are and why using a hard-coded value is
> > sane.

The frequency is critical in the sense that it shouldn't lead to a
timeout shorter than the watchdog threshold (10 sec.) for the
hard-lockup detector. And we can't simply bail out here, since there
could be platforms which don't implement a cpufreq driver (e.g.
Developerbox).

I chose 4GHz as a safe maximum here as I couldn't find any real parts
currently running faster than 4GHz. But I agree with you that the
architecture doesn't put any restrictions on the maximum frequency. So
we could certainly put a higher hardcoded value here; the only side
effect would be a bigger hard-lockup detector timeout (which I think
should be acceptable) on parts which run slower (e.g. 1GHz on
Developerbox) and don't have a cpufreq driver.
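
As a rough worked example (hypothetical numbers, and assuming
cpufreq_get_hw_max_freq() reports kHz as other cpufreq interfaces do),
with the default 10 sec. threshold:

  1 GHz part with cpufreq:  1000000 kHz * 1000 * 10 = 1e10 cycles  (~10 s at 1 GHz)
  1 GHz part, fallback:     4000000000 Hz * 10      = 4e10 cycles  (~40 s at 1 GHz)
  4 GHz part, fallback:     4000000000 Hz * 10      = 4e10 cycles  (~10 s at 4 GHz)

So the fallback only stretches the effective timeout on slower parts;
it would only shorten it below the threshold on a part running faster
than 4GHz.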

> >
> > > +u64 hw_nmi_get_sample_period(int watchdog_thresh)
> > > +{
> > > +   unsigned int cpu = smp_processor_id();
> > > +   unsigned int max_cpu_freq;
> > > +
> > > +   max_cpu_freq = cpufreq_get_hw_max_freq(cpu);
> > > +   if (max_cpu_freq)
> > > +           return (u64)max_cpu_freq * 1000 * watchdog_thresh;
> > > +   else
> > > +           return (u64)SAFE_MAX_CPU_FREQ * watchdog_thresh;
> > > +}
> >
> > I take it this uses CPU cycles?

Yes, it's based on a perf event with the config attribute set to
PERF_COUNT_HW_CPU_CYCLES.
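
For reference, the generic perf-based detector (kernel/watchdog_hld.c)
programs that event roughly like this (a sketch from memory, not part
of this patch):

	static struct perf_event_attr wd_hw_attr = {
		.type		= PERF_TYPE_HARDWARE,
		.config		= PERF_COUNT_HW_CPU_CYCLES,
		.size		= sizeof(struct perf_event_attr),
		.pinned		= 1,
		.disabled	= 1,
	};

	/* the arch hook above supplies the period in CPU cycles */
	wd_hw_attr.sample_period = hw_nmi_get_sample_period(watchdog_thresh);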

> >
> > AFAIK those can be gated in idle/retention states (e.g. for WFI/WFE or
> > any other instruction that could block). So if the CPU were blocked on
> > one of those, the counter would never overflow and trigger the
> > interrupt.
> >
> > i.e. this isn't going to detect a hard lockup of that sort.

Isn't this the correct behaviour, as we shouldn't raise a false
hard-lockup detection while the CPU is actually in an idle/retention
state? IMO, this feature is useful for debugging purposes when a
particular CPU is stuck in a deadlock loop with interrupts disabled.

> >
> > > +#endif
> > > diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
> > > index f96cfc4..691dfc9 100644
> > > --- a/drivers/perf/arm_pmu.c
> > > +++ b/drivers/perf/arm_pmu.c
> > > @@ -718,6 +718,17 @@ static int armpmu_get_cpu_irq(struct arm_pmu *pmu, int cpu)
> > >     return per_cpu(hw_events->irq, cpu);
> > >  }
> > >
> > > +bool arm_pmu_irq_is_nmi(void)
> > > +{
> > > +   const struct pmu_irq_ops *irq_ops;
> > > +
> > > +   irq_ops = per_cpu(cpu_irq_ops, smp_processor_id());
> > > +   if (irq_ops == &pmunmi_ops || irq_ops == &percpu_pmunmi_ops)
> > > +           return true;
> > > +   else
> > > +           return false;
> >
> > You can simplify:
> >
> > | if (x)
> > |     return true;
> > | else
> > |     return false;
> >
> > ... to:
> >
> > | return x;
> >

Looks clean, will use it instead.

-Sumit

> > Thanks,
> > Mark.
> >
> > > +}
> > > +
> > >  /*
> > >   * PMU hardware loses all context when a CPU goes offline.
> > >   * When a CPU is hotplugged back in, since some hardware registers are
> > > diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
> > > index d9b8b76..a71f029 100644
> > > --- a/include/linux/perf/arm_pmu.h
> > > +++ b/include/linux/perf/arm_pmu.h
> > > @@ -155,6 +155,8 @@ int arm_pmu_acpi_probe(armpmu_init_fn init_fn);
> > >  static inline int arm_pmu_acpi_probe(armpmu_init_fn init_fn) { return 0; }
> > >  #endif
> > >
> > > +bool arm_pmu_irq_is_nmi(void);
> > > +
> > >  /* Internal functions only for core arm_pmu code */
> > >  struct arm_pmu *armpmu_alloc(void);
> > >  struct arm_pmu *armpmu_alloc_atomic(void);
> > > --
> > > 2.7.4
> > >



end of thread

Thread overview: 4 messages
2020-05-15  8:49 [RFC] arm64: Enable perf events based hard lockup detector Sumit Garg
2020-05-18 14:34 ` Mark Rutland
2020-05-18 14:52   ` Daniel Thompson
2020-05-19  6:36     ` Sumit Garg
