sparclinux.vger.kernel.org archive mirror
* [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for
@ 2022-09-03  9:34 Lecopzer Chen
  2022-09-03  9:34 ` [PATCH v7 1/6] kernel/watchdog: remove WATCHDOG_DEFAULT Lecopzer Chen
                   ` (8 more replies)
  0 siblings, 9 replies; 11+ messages in thread
From: Lecopzer Chen @ 2022-09-03  9:34 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, linux-perf-users, mark.rutland, will
  Cc: lecopzer.chen, acme, akpm, alexander.shishkin, catalin.marinas,
	davem, jolsa, jthierry, keescook, kernelfans, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	pmladek, sparclinux, sumit.garg, wangqing, yj.chiang

We have been using the HLD internally for arm64 since 2020, but there
is still no proper commit upstream, and we badly need one.

This series reworks [1] on top of 5.17; the original authors are
Pingfan Liu <kernelfans@gmail.com>
Sumit Garg <sumit.garg@linaro.org>

Quote from [1]:

> The hard lockup detector is helpful to diagnose unpaired irq
> enable/disable. But the current watchdog framework cannot cope with
> arm64 hw perf events easily.

> On arm64, at lockup_detector_init()->watchdog_nmi_probe() time, the
> PMU is not ready until device_initcall(armv8_pmu_driver_init). And it
> is deeply integrated with the driver model and cpuhp. Hence it is hard
> to push the initialization of armv8_pmu_driver_init() before
> smp_init().

> But it is easy to take the opposite approach by enabling watchdog_hld
> to acquire the PMU capability asynchronously.
> The async model is achieved by expanding watchdog_nmi_probe() with
> -EBUSY, and a re-initializing work_struct which waits on a
> wait_queue_head.

Provide an API, retry_lockup_detector_init(), for anyone who needs
to delay the lockup detector init.

The original assumption is: nobody should use the delayed probe after
lockup_detector_check() (which has the __init attribute).
That is, anyone using this API must call it between lockup_detector_init()
and lockup_detector_check(), and the caller must have the __init attribute.

The delayed init flow is:
1. lockup_detector_init() -> watchdog_nmi_probe() returns non-zero, so
   set allow_lockup_detector_init_retry to true, which means a delayed
   probe can be done later.

2. The PMU arch code finishes its init and calls
   retry_lockup_detector_init().

3. retry_lockup_detector_init() queues the work only when
   allow_lockup_detector_init_retry is true, which means nobody should
   call it before lockup_detector_init().

4. The work, lockup_detector_delay_init(), runs without a wait event.
   If the probe succeeds, set allow_lockup_detector_init_retry to false.

5. At late_initcall_sync(), lockup_detector_check() first sets
   allow_lockup_detector_init_retry to false to prevent any later retry,
   and then calls flush_work() to make sure the __init section won't be
   freed before the work is done.

[1]
https://lore.kernel.org/lkml/20211014024155.15253-1-kernelfans@gmail.com/

v7:
  rebase on v6.0-rc3

v6:
  fix build failure reported by kernel test robot <lkp@intel.com>
https://lore.kernel.org/lkml/20220614062835.7196-1-lecopzer.chen@mediatek.com/

v5:
  1. rebase on v5.19-rc2
  2. change to proper schedule api
  3. return value checking before retry_lockup_detector_init()
https://lore.kernel.org/lkml/20220613135956.15711-1-lecopzer.chen@mediatek.com/

v4:
  1. remove the -EBUSY protocol; let any non-zero value from
     watchdog_nmi_probe() allow a retry.
  2. separate the arm64 part into hw_nmi_get_sample_period and retry
     delayed init patches
  3. tweak the commit message: retries don't have to be limited to -EBUSY
  4. rebase on v5.18-rc4
https://lore.kernel.org/lkml/20220427161340.8518-1-lecopzer.chen@mediatek.com/

v3:
  1. Tweak commit message in patch 04
  2. Remove wait event
  3. s/lockup_detector_pending_init/allow_lockup_detector_init_retry/
  4. provide api retry_lockup_detector_init()
https://lore.kernel.org/lkml/20220324141405.10835-1-lecopzer.chen@mediatek.com/ 

v2:
  1. Tweak commit message in patch 01/02/04/05
  2. Remove verbose WARN in patch 04 within the watchdog core.
  3. Change from a three-state variable, detector_delay_init_state, to
     a two-state variable, allow_lockup_detector_init_retry.

     Thanks Petr Mladek <pmladek@suse.com> for the idea.
     > 1. lockup_detector_work() called before lockup_detector_check().
     >    In this case, wait_event() will wait until
     >    lockup_detector_check() clears detector_delay_pending_init
     >    and calls wake_up().

     > 2. lockup_detector_check() called before lockup_detector_work().
     >    In this case, wait_event() will immediately continue because
     >    it will see the cleared detector_delay_pending_init.
  4. Add comments in the code in patch 04/05 for the two-state variable
     changes.
https://lore.kernel.org/lkml/20220307154729.13477-1-lecopzer.chen@mediatek.com/


Lecopzer Chen (5):
  kernel/watchdog: remove WATCHDOG_DEFAULT
  kernel/watchdog: change watchdog_nmi_enable() to void
  kernel/watchdog: Adapt the watchdog_hld interface for async model
  arm64: add hw_nmi_get_sample_period for preparation of lockup detector
  arm64: Enable perf events based hard lockup detector

Pingfan Liu (1):
  kernel/watchdog_hld: Ensure CPU-bound context when creating hardlockup
    detector event

 arch/arm64/Kconfig               |  2 +
 arch/arm64/kernel/Makefile       |  1 +
 arch/arm64/kernel/perf_event.c   | 12 +++++-
 arch/arm64/kernel/watchdog_hld.c | 39 +++++++++++++++++
 arch/sparc/kernel/nmi.c          |  8 ++--
 drivers/perf/arm_pmu.c           |  5 +++
 include/linux/nmi.h              |  4 +-
 include/linux/perf/arm_pmu.h     |  2 +
 kernel/watchdog.c                | 72 +++++++++++++++++++++++++++++---
 kernel/watchdog_hld.c            |  8 +++-
 10 files changed, 139 insertions(+), 14 deletions(-)
 create mode 100644 arch/arm64/kernel/watchdog_hld.c

-- 
2.25.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v7 1/6] kernel/watchdog: remove WATCHDOG_DEFAULT
  2022-09-03  9:34 [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for Lecopzer Chen
@ 2022-09-03  9:34 ` Lecopzer Chen
  2022-09-03  9:34 ` [PATCH v7 2/6] kernel/watchdog: change watchdog_nmi_enable() to void Lecopzer Chen
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Lecopzer Chen @ 2022-09-03  9:34 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, linux-perf-users, mark.rutland, will
  Cc: lecopzer.chen, acme, akpm, alexander.shishkin, catalin.marinas,
	davem, jolsa, jthierry, keescook, kernelfans, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	pmladek, sparclinux, sumit.garg, wangqing, yj.chiang

There is no reference to WATCHDOG_DEFAULT; remove it.

Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
---
 kernel/watchdog.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index 8e61f21e7e33..582d572e1379 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -30,10 +30,8 @@
 static DEFINE_MUTEX(watchdog_mutex);
 
 #if defined(CONFIG_HARDLOCKUP_DETECTOR) || defined(CONFIG_HAVE_NMI_WATCHDOG)
-# define WATCHDOG_DEFAULT	(SOFT_WATCHDOG_ENABLED | NMI_WATCHDOG_ENABLED)
 # define NMI_WATCHDOG_DEFAULT	1
 #else
-# define WATCHDOG_DEFAULT	(SOFT_WATCHDOG_ENABLED)
 # define NMI_WATCHDOG_DEFAULT	0
 #endif
 
-- 
2.34.1



* [PATCH v7 2/6] kernel/watchdog: change watchdog_nmi_enable() to void
  2022-09-03  9:34 [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for Lecopzer Chen
  2022-09-03  9:34 ` [PATCH v7 1/6] kernel/watchdog: remove WATCHDOG_DEFAULT Lecopzer Chen
@ 2022-09-03  9:34 ` Lecopzer Chen
  2022-09-03  9:34 ` [PATCH v7 3/6] kernel/watchdog_hld: Ensure CPU-bound context when creating hardlockup detector event Lecopzer Chen
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Lecopzer Chen @ 2022-09-03  9:34 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, linux-perf-users, mark.rutland, will
  Cc: lecopzer.chen, acme, akpm, alexander.shishkin, catalin.marinas,
	davem, jolsa, jthierry, keescook, kernelfans, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	pmladek, sparclinux, sumit.garg, wangqing, yj.chiang

Nobody cares about the return value of watchdog_nmi_enable(),
so change its prototype to void.

Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
---
 arch/sparc/kernel/nmi.c | 8 +++-----
 include/linux/nmi.h     | 2 +-
 kernel/watchdog.c       | 3 +--
 3 files changed, 5 insertions(+), 8 deletions(-)

diff --git a/arch/sparc/kernel/nmi.c b/arch/sparc/kernel/nmi.c
index 060fff95a305..5dcf31f7e81f 100644
--- a/arch/sparc/kernel/nmi.c
+++ b/arch/sparc/kernel/nmi.c
@@ -282,11 +282,11 @@ __setup("nmi_watchdog=", setup_nmi_watchdog);
  * sparc specific NMI watchdog enable function.
  * Enables watchdog if it is not enabled already.
  */
-int watchdog_nmi_enable(unsigned int cpu)
+void watchdog_nmi_enable(unsigned int cpu)
 {
 	if (atomic_read(&nmi_active) == -1) {
 		pr_warn("NMI watchdog cannot be enabled or disabled\n");
-		return -1;
+		return;
 	}
 
 	/*
@@ -295,11 +295,9 @@ int watchdog_nmi_enable(unsigned int cpu)
 	 * process first.
 	 */
 	if (!nmi_init_done)
-		return 0;
+		return;
 
 	smp_call_function_single(cpu, start_nmi_watchdog, NULL, 1);
-
-	return 0;
 }
 /*
  * sparc specific NMI watchdog disable function.
diff --git a/include/linux/nmi.h b/include/linux/nmi.h
index f700ff2df074..81217ebbc4bd 100644
--- a/include/linux/nmi.h
+++ b/include/linux/nmi.h
@@ -119,7 +119,7 @@ static inline int hardlockup_detector_perf_init(void) { return 0; }
 void watchdog_nmi_stop(void);
 void watchdog_nmi_start(void);
 int watchdog_nmi_probe(void);
-int watchdog_nmi_enable(unsigned int cpu);
+void watchdog_nmi_enable(unsigned int cpu);
 void watchdog_nmi_disable(unsigned int cpu);
 
 void lockup_detector_reconfigure(void);
diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index 582d572e1379..c705a18b26bf 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -93,10 +93,9 @@ __setup("nmi_watchdog=", hardlockup_panic_setup);
  * softlockup watchdog start and stop. The arch must select the
  * SOFTLOCKUP_DETECTOR Kconfig.
  */
-int __weak watchdog_nmi_enable(unsigned int cpu)
+void __weak watchdog_nmi_enable(unsigned int cpu)
 {
 	hardlockup_detector_perf_enable();
-	return 0;
 }
 
 void __weak watchdog_nmi_disable(unsigned int cpu)
-- 
2.34.1



* [PATCH v7 3/6] kernel/watchdog_hld: Ensure CPU-bound context when creating hardlockup detector event
  2022-09-03  9:34 [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for Lecopzer Chen
  2022-09-03  9:34 ` [PATCH v7 1/6] kernel/watchdog: remove WATCHDOG_DEFAULT Lecopzer Chen
  2022-09-03  9:34 ` [PATCH v7 2/6] kernel/watchdog: change watchdog_nmi_enable() to void Lecopzer Chen
@ 2022-09-03  9:34 ` Lecopzer Chen
  2022-09-03  9:34 ` [PATCH v7 4/6] kernel/watchdog: Adapt the watchdog_hld interface for async model Lecopzer Chen
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Lecopzer Chen @ 2022-09-03  9:34 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, linux-perf-users, mark.rutland, will
  Cc: lecopzer.chen, acme, akpm, alexander.shishkin, catalin.marinas,
	davem, jolsa, jthierry, keescook, kernelfans, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	pmladek, sparclinux, sumit.garg, wangqing, yj.chiang

From: Pingfan Liu <kernelfans@gmail.com>

hardlockup_detector_event_create() should create the perf_event on the
current CPU. Preemption cannot be disabled because
perf_event_create_kernel_counter() allocates memory. Instead,
CPU locality is achieved by running the code in a per-CPU
bound kthread.

Add a check to prevent mistakes when the code is called from another
code path.

Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Co-developed-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
---
 kernel/watchdog_hld.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
index 247bf0b1582c..96b717205952 100644
--- a/kernel/watchdog_hld.c
+++ b/kernel/watchdog_hld.c
@@ -165,10 +165,16 @@ static void watchdog_overflow_callback(struct perf_event *event,
 
 static int hardlockup_detector_event_create(void)
 {
-	unsigned int cpu = smp_processor_id();
+	unsigned int cpu;
 	struct perf_event_attr *wd_attr;
 	struct perf_event *evt;
 
+	/*
+	 * Preemption is not disabled because memory will be allocated.
+	 * Ensure CPU-locality by calling this in per-CPU kthread.
+	 */
+	WARN_ON(!is_percpu_thread());
+	cpu = raw_smp_processor_id();
 	wd_attr = &wd_hw_attr;
 	wd_attr->sample_period = hw_nmi_get_sample_period(watchdog_thresh);
 
-- 
2.34.1



* [PATCH v7 4/6] kernel/watchdog: Adapt the watchdog_hld interface for async model
  2022-09-03  9:34 [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for Lecopzer Chen
                   ` (2 preceding siblings ...)
  2022-09-03  9:34 ` [PATCH v7 3/6] kernel/watchdog_hld: Ensure CPU-bound context when creating hardlockup detector event Lecopzer Chen
@ 2022-09-03  9:34 ` Lecopzer Chen
  2022-09-03  9:34 ` [PATCH v7 5/6] arm64: add hw_nmi_get_sample_period for preparation of lockup detector Lecopzer Chen
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Lecopzer Chen @ 2022-09-03  9:34 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, linux-perf-users, mark.rutland, will
  Cc: lecopzer.chen, acme, akpm, alexander.shishkin, catalin.marinas,
	davem, jolsa, jthierry, keescook, kernelfans, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	pmladek, sparclinux, sumit.garg, wangqing, yj.chiang

At lockup_detector_init()->watchdog_nmi_probe() time, the PMU may not
be ready yet. E.g. on arm64, the PMU is not ready until
device_initcall(armv8_pmu_driver_init).  And it is deeply integrated
with the driver model and cpuhp. Hence it is hard to push this
initialization before smp_init().

But it is easy to take the opposite approach and try to initialize
the watchdog once again later.
The delayed probe is called using a workqueue. It needs to allocate
memory and must proceed in a normal context.
The delayed probe can be used whenever watchdog_nmi_probe() returns
non-zero, which is the return code used when the PMU is not ready yet.

Provide an API, retry_lockup_detector_init(), for anyone who needs
to delay the lockup detector init because they failed at
lockup_detector_init().

The original assumption is: nobody should use the delayed probe after
lockup_detector_check(), which has the __init attribute.
That is, anyone using this API must call it between lockup_detector_init()
and lockup_detector_check(), and the caller must have the __init attribute.

Reviewed-by: Petr Mladek <pmladek@suse.com>
Co-developed-by: Pingfan Liu <kernelfans@gmail.com>
Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Suggested-by: Petr Mladek <pmladek@suse.com>
Reported-by: kernel test robot <lkp@intel.com>
---
 include/linux/nmi.h |  2 ++
 kernel/watchdog.c   | 67 ++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 68 insertions(+), 1 deletion(-)

diff --git a/include/linux/nmi.h b/include/linux/nmi.h
index 81217ebbc4bd..7f128e3aae38 100644
--- a/include/linux/nmi.h
+++ b/include/linux/nmi.h
@@ -118,6 +118,8 @@ static inline int hardlockup_detector_perf_init(void) { return 0; }
 
 void watchdog_nmi_stop(void);
 void watchdog_nmi_start(void);
+
+void retry_lockup_detector_init(void);
 int watchdog_nmi_probe(void);
 void watchdog_nmi_enable(unsigned int cpu);
 void watchdog_nmi_disable(unsigned int cpu);
diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index c705a18b26bf..0b650d726e50 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -103,7 +103,13 @@ void __weak watchdog_nmi_disable(unsigned int cpu)
 	hardlockup_detector_perf_disable();
 }
 
-/* Return 0, if a NMI watchdog is available. Error code otherwise */
+/*
+ * Arch specific API.
+ *
+ * Return 0 when NMI watchdog is available, negative value otherwise.
+ * Note that the negative value means that a delayed probe might
+ * succeed later.
+ */
 int __weak __init watchdog_nmi_probe(void)
 {
 	return hardlockup_detector_perf_init();
@@ -850,6 +856,62 @@ static void __init watchdog_sysctl_init(void)
 #define watchdog_sysctl_init() do { } while (0)
 #endif /* CONFIG_SYSCTL */
 
+static void __init lockup_detector_delay_init(struct work_struct *work);
+static bool allow_lockup_detector_init_retry __initdata;
+
+static struct work_struct detector_work __initdata =
+		__WORK_INITIALIZER(detector_work, lockup_detector_delay_init);
+
+static void __init lockup_detector_delay_init(struct work_struct *work)
+{
+	int ret;
+
+	ret = watchdog_nmi_probe();
+	if (ret) {
+		pr_info("Delayed init of the lockup detector failed: %d\n", ret);
+		pr_info("Perf NMI watchdog permanently disabled\n");
+		return;
+	}
+
+	allow_lockup_detector_init_retry = false;
+
+	nmi_watchdog_available = true;
+	lockup_detector_setup();
+}
+
+/*
+ * retry_lockup_detector_init - retry init lockup detector if possible.
+ *
+ * Retry hardlockup detector init. It is useful when it requires some
+ * functionality that has to be initialized later on a particular
+ * platform.
+ */
+void __init retry_lockup_detector_init(void)
+{
+	/* Must be called before late init calls */
+	if (!allow_lockup_detector_init_retry)
+		return;
+
+	schedule_work(&detector_work);
+}
+
+/*
+ * Ensure that the optional delayed hardlockup init has finished before
+ * the init code and memory is freed.
+ */
+static int __init lockup_detector_check(void)
+{
+	/* Prevent any later retry. */
+	allow_lockup_detector_init_retry = false;
+
+	/* Make sure no work is pending. */
+	flush_work(&detector_work);
+
+	return 0;
+
+}
+late_initcall_sync(lockup_detector_check);
+
 void __init lockup_detector_init(void)
 {
 	if (tick_nohz_full_enabled())
@@ -860,6 +922,9 @@ void __init lockup_detector_init(void)
 
 	if (!watchdog_nmi_probe())
 		nmi_watchdog_available = true;
+	else
+		allow_lockup_detector_init_retry = true;
+
 	lockup_detector_setup();
 	watchdog_sysctl_init();
 }
-- 
2.34.1



* [PATCH v7 5/6] arm64: add hw_nmi_get_sample_period for preparation of lockup detector
  2022-09-03  9:34 [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for Lecopzer Chen
                   ` (3 preceding siblings ...)
  2022-09-03  9:34 ` [PATCH v7 4/6] kernel/watchdog: Adapt the watchdog_hld interface for async model Lecopzer Chen
@ 2022-09-03  9:34 ` Lecopzer Chen
  2022-09-03  9:34 ` [PATCH v7 6/6] arm64: Enable perf events based hard " Lecopzer Chen
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Lecopzer Chen @ 2022-09-03  9:34 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, linux-perf-users, mark.rutland, will
  Cc: lecopzer.chen, acme, akpm, alexander.shishkin, catalin.marinas,
	davem, jolsa, jthierry, keescook, kernelfans, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	pmladek, sparclinux, sumit.garg, wangqing, yj.chiang

Set a safe maximum CPU frequency of 5 GHz in case a particular platform
doesn't implement a cpufreq driver.
Although the architecture doesn't put any restriction on the
maximum frequency, 5 GHz seems to be a safe maximum given the Arm CPUs
available in the market, which are clocked well below 5 GHz.

On the other hand, we can't make it much higher, as that would lead to
a large hard-lockup detection timeout on parts which run slower
(e.g. 1 GHz on Developerbox) and don't have a cpufreq driver.

[1]: http://lore.kernel.org/linux-arm-kernel/1610712101-14929-1-git-send-email-sumit.garg@linaro.org

Co-developed-by: Sumit Garg <sumit.garg@linaro.org>
Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Co-developed-by: Pingfan Liu <kernelfans@gmail.com>
Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
---
 arch/arm64/kernel/Makefile       |  1 +
 arch/arm64/kernel/watchdog_hld.c | 25 +++++++++++++++++++++++++
 2 files changed, 26 insertions(+)
 create mode 100644 arch/arm64/kernel/watchdog_hld.c

diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 1add7b01efa7..122b50bfcc0e 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -51,6 +51,7 @@ obj-$(CONFIG_MODULES)			+= module.o
 obj-$(CONFIG_ARM64_MODULE_PLTS)		+= module-plts.o
 obj-$(CONFIG_PERF_EVENTS)		+= perf_regs.o perf_callchain.o
 obj-$(CONFIG_HW_PERF_EVENTS)		+= perf_event.o
+obj-$(CONFIG_HARDLOCKUP_DETECTOR_PERF)	+= watchdog_hld.o
 obj-$(CONFIG_HAVE_HW_BREAKPOINT)	+= hw_breakpoint.o
 obj-$(CONFIG_CPU_PM)			+= sleep.o suspend.o
 obj-$(CONFIG_CPU_IDLE)			+= cpuidle.o
diff --git a/arch/arm64/kernel/watchdog_hld.c b/arch/arm64/kernel/watchdog_hld.c
new file mode 100644
index 000000000000..de43318e4dd6
--- /dev/null
+++ b/arch/arm64/kernel/watchdog_hld.c
@@ -0,0 +1,25 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/cpufreq.h>
+
+/*
+ * Safe maximum CPU frequency in case a particular platform doesn't implement
+ * cpufreq driver. Although, architecture doesn't put any restrictions on
+ * maximum frequency but 5 GHz seems to be safe maximum given the available
+ * Arm CPUs in the market which are clocked much less than 5 GHz. On the other
+ * hand, we can't make it much higher as it would lead to a large hard-lockup
+ * detection timeout on parts which are running slower (eg. 1GHz on
+ * Developerbox) and doesn't possess a cpufreq driver.
+ */
+#define SAFE_MAX_CPU_FREQ	5000000000UL // 5 GHz
+u64 hw_nmi_get_sample_period(int watchdog_thresh)
+{
+	unsigned int cpu = smp_processor_id();
+	unsigned long max_cpu_freq;
+
+	max_cpu_freq = cpufreq_get_hw_max_freq(cpu) * 1000UL;
+	if (!max_cpu_freq)
+		max_cpu_freq = SAFE_MAX_CPU_FREQ;
+
+	return (u64)max_cpu_freq * watchdog_thresh;
+}
+
-- 
2.34.1



* [PATCH v7 6/6] arm64: Enable perf events based hard lockup detector
  2022-09-03  9:34 [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for Lecopzer Chen
                   ` (4 preceding siblings ...)
  2022-09-03  9:34 ` [PATCH v7 5/6] arm64: add hw_nmi_get_sample_period for preparation of lockup detector Lecopzer Chen
@ 2022-09-03  9:34 ` Lecopzer Chen
  2022-09-04 22:57   ` kernel test robot
  2022-09-03  9:45 ` [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for arm64 Lecopzer Chen
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 11+ messages in thread
From: Lecopzer Chen @ 2022-09-03  9:34 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, linux-perf-users, mark.rutland, will
  Cc: lecopzer.chen, acme, akpm, alexander.shishkin, catalin.marinas,
	davem, jolsa, jthierry, keescook, kernelfans, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	pmladek, sparclinux, sumit.garg, wangqing, yj.chiang

With the recent feature allowing perf events to use pseudo-NMIs
as interrupts on platforms which support GICv3 or later, it is now
possible to enable the hard lockup detector (or NMI watchdog) on arm64
platforms. So enable the corresponding support.

One thing to note here is that the lockup detector is normally initialized
just after the early initcalls, but the PMU on arm64 comes up much later,
as a device_initcall(). To cope with that, override watchdog_nmi_probe()
to let the watchdog framework know the PMU is not ready, and inform the
framework to re-initialize lockup detection once the PMU has been
initialized.

[1]: http://lore.kernel.org/linux-arm-kernel/1610712101-14929-1-git-send-email-sumit.garg@linaro.org

Co-developed-by: Sumit Garg <sumit.garg@linaro.org>
Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Co-developed-by: Pingfan Liu <kernelfans@gmail.com>
Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
---
 arch/arm64/Kconfig               |  2 ++
 arch/arm64/kernel/perf_event.c   | 12 ++++++++++--
 arch/arm64/kernel/watchdog_hld.c | 14 ++++++++++++++
 drivers/perf/arm_pmu.c           |  5 +++++
 include/linux/perf/arm_pmu.h     |  2 ++
 5 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 9fb9fff08c94..9ec7d3d7a0ac 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -189,6 +189,7 @@ config ARM64
 	select HAVE_FUNCTION_ERROR_INJECTION
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_GCC_PLUGINS
+	select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI
 	select HAVE_HW_BREAKPOINT if PERF_EVENTS
 	select HAVE_IOREMAP_PROT
 	select HAVE_IRQ_TIME_ACCOUNTING
@@ -196,6 +197,7 @@ config ARM64
 	select HAVE_NMI
 	select HAVE_PATA_PLATFORM
 	select HAVE_PERF_EVENTS
+	select HAVE_PERF_EVENTS_NMI if ARM64_PSEUDO_NMI
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_PREEMPT_DYNAMIC_KEY
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index cb69ff1e6138..d9eec8911bf0 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -23,6 +23,7 @@
 #include <linux/platform_device.h>
 #include <linux/sched_clock.h>
 #include <linux/smp.h>
+#include <linux/nmi.h>
 
 /* ARMv8 Cortex-A53 specific event types. */
 #define ARMV8_A53_PERFCTR_PREF_LINEFILL				0xC2
@@ -1390,10 +1391,17 @@ static struct platform_driver armv8_pmu_driver = {
 
 static int __init armv8_pmu_driver_init(void)
 {
+	int ret;
+
 	if (acpi_disabled)
-		return platform_driver_register(&armv8_pmu_driver);
+		ret = platform_driver_register(&armv8_pmu_driver);
 	else
-		return arm_pmu_acpi_probe(armv8_pmuv3_pmu_init);
+		ret = arm_pmu_acpi_probe(armv8_pmuv3_pmu_init);
+
+	if (!ret)
+		retry_lockup_detector_init();
+
+	return ret;
 }
 device_initcall(armv8_pmu_driver_init)
 
diff --git a/arch/arm64/kernel/watchdog_hld.c b/arch/arm64/kernel/watchdog_hld.c
index de43318e4dd6..c9c6ec889c15 100644
--- a/arch/arm64/kernel/watchdog_hld.c
+++ b/arch/arm64/kernel/watchdog_hld.c
@@ -1,5 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
+#include <linux/nmi.h>
 #include <linux/cpufreq.h>
+#include <linux/perf/arm_pmu.h>
 
 /*
  * Safe maximum CPU frequency in case a particular platform doesn't implement
@@ -23,3 +25,15 @@ u64 hw_nmi_get_sample_period(int watchdog_thresh)
 	return (u64)max_cpu_freq * watchdog_thresh;
 }
 
+int __init watchdog_nmi_probe(void)
+{
+	/*
+	 * hardlockup_detector_perf_init() can succeed even when Pseudo-NMI is off;
+	 * however, the PMU interrupts will act like normal interrupts instead of
+	 * NMIs and the hardlockup detector would be broken.
+	 */
+	if (!arm_pmu_irq_is_nmi())
+		return -ENODEV;
+
+	return hardlockup_detector_perf_init();
+}
diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 59d3980b8ca2..ceee2c55d436 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -697,6 +697,11 @@ static int armpmu_get_cpu_irq(struct arm_pmu *pmu, int cpu)
 	return per_cpu(hw_events->irq, cpu);
 }
 
+bool arm_pmu_irq_is_nmi(void)
+{
+	return has_nmi;
+}
+
 /*
  * PMU hardware loses all context when a CPU goes offline.
  * When a CPU is hotplugged back in, since some hardware registers are
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 0407a38b470a..29c56c92bab7 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -171,6 +171,8 @@ void kvm_host_pmu_init(struct arm_pmu *pmu);
 #define kvm_host_pmu_init(x)	do { } while(0)
 #endif
 
+bool arm_pmu_irq_is_nmi(void);
+
 /* Internal functions only for core arm_pmu code */
 struct arm_pmu *armpmu_alloc(void);
 struct arm_pmu *armpmu_alloc_atomic(void);
-- 
2.34.1



* RE: [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for arm64
  2022-09-03  9:34 [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for Lecopzer Chen
                   ` (5 preceding siblings ...)
  2022-09-03  9:34 ` [PATCH v7 6/6] arm64: Enable perf events based hard " Lecopzer Chen
@ 2022-09-03  9:45 ` Lecopzer Chen
  2022-11-07 15:18 ` [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for Will Deacon
  2023-05-04 22:41 ` Doug Anderson
  8 siblings, 0 replies; 11+ messages in thread
From: Lecopzer Chen @ 2022-09-03  9:45 UTC (permalink / raw)
  To: mark.rutland, will
  Cc: lecopzer.chen, acme, akpm, alexander.shishkin, catalin.marinas,
	davem, jolsa, jthierry, keescook, kernelfans, linux-arm-kernel,
	linux-kernel, linux-perf-users, masahiroy, matthias.bgg, maz,
	mcgrof, mingo, namhyung, nixiaoming, peterz, pmladek, sparclinux,
	sumit.garg, wangqing, yj.chiang

Hi Will, Mark,

Sorry for bothering you; this needs to be reviewed by the ARM perf
maintainers. Could you please help review this patchset or comment on it?

Thanks a lot.

 
> Hi Will, Mark
>
> Could you help review the arm parts of this patchset, please?
> 
> For the question mentioned in both [1] and [2]:
> 
> > I'd still like Mark's Ack on this, as the approach you have taken doesn't
> > really sit with what he was suggesting.
> >
> > I also don't understand how all the CPUs get initialised with your patch,
> > since the PMU driver will be initialised after SMP is up and running.
> 
> The hardlockup detector utilizes softlockup_start_all() to start all
> the CPUs in watchdog_allowed_mask, which will do watchdog_nmi_enable()
> and register a perf event on each CPU.
> Thus we simply need to retry lockup_detector_init() on a single CPU,
> which will reconfigure and call softlockup_start_all().
> 
> Also, CONFIG_HARDLOCKUP_DETECTOR_PERF selects SOFTLOCKUP_DETECTOR;
> IMO, this shows that the hardlockup detector builds on the softlockup
> detector.
> 
> 
> > We should know whether pNMIs are possible once we've completed
> > setup_arch() (and possibly init_IRQ()), long before SMP, so so I reckon
> > we should have all the information available once we get to
> > lockup_detector_init(), even if that requires some preparatory rework.
> 
> The hardlockup detector depends on the PMU driver. I think the only way
> is moving the PMU driver to setup_arch() or any point earlier than
> lockup_detector_init(), and I guess we would have to reorganize the
> architecture of the arm PMU.
> 
> The retry function should benefit all the architectures, not only arm64.
> Any arch which needs to probe its PMU as a module can use this without
> a chance of messing up the setup order.
> 
> 
> Please let me know if you have any concerns about this. Thank you.
> 
> 
> [1] https://lore.kernel.org/all/CAFA6WYPPgUvHCpN5=EpJ2Us5h5uVWCbBA59C-YwYQX2ovyVeEw@mail.gmail.com/
> [2] https://lore.kernel.org/linux-arm-kernel/20210419170331.GB31045@willie-the-truck/
> 
> 


* Re: [PATCH v7 6/6] arm64: Enable perf events based hard lockup detector
  2022-09-03  9:34 ` [PATCH v7 6/6] arm64: Enable perf events based hard " Lecopzer Chen
@ 2022-09-04 22:57   ` kernel test robot
  0 siblings, 0 replies; 11+ messages in thread
From: kernel test robot @ 2022-09-04 22:57 UTC (permalink / raw)
  To: Lecopzer Chen, linux-arm-kernel, linux-kernel, linux-perf-users,
	mark.rutland, will
  Cc: kbuild-all, lecopzer.chen, acme, akpm, alexander.shishkin,
	catalin.marinas, davem, jolsa, jthierry, keescook, kernelfans,
	masahiroy, matthias.bgg, maz, mcgrof, mingo, namhyung,
	nixiaoming, peterz, pmladek, sparclinux, sumit.garg, wangqing,
	yj.chiang

Hi Lecopzer,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on arm64/for-next/core]
[also build test ERROR on arm/for-next soc/for-next linus/master v6.0-rc4 next-20220901]
[cannot apply to xilinx-xlnx/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Lecopzer-Chen/Support-hld-delayed-init-based-on-Pseudo-NMI-for/20220903-173641
base:   https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
config: arm64-randconfig-c033-20220904 (https://download.01.org/0day-ci/archive/20220905/202209050639.jDaWd49E-lkp@intel.com/config)
compiler: aarch64-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/de75eba8785b631eb168737fbff6dc31418cb852
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Lecopzer-Chen/Support-hld-delayed-init-based-on-Pseudo-NMI-for/20220903-173641
        git checkout de75eba8785b631eb168737fbff6dc31418cb852
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=arm64 SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   aarch64-linux-ld: Unexpected GOT/PLT entries detected!
   aarch64-linux-ld: Unexpected run-time procedure linkages detected!
   aarch64-linux-ld: arch/arm64/kernel/perf_event.o: in function `armv8_pmu_driver_init':
>> arch/arm64/kernel/perf_event.c:1402: undefined reference to `retry_lockup_detector_init'


vim +1402 arch/arm64/kernel/perf_event.c

  1391	
  1392	static int __init armv8_pmu_driver_init(void)
  1393	{
  1394		int ret;
  1395	
  1396		if (acpi_disabled)
  1397			ret = platform_driver_register(&armv8_pmu_driver);
  1398		else
  1399			ret = arm_pmu_acpi_probe(armv8_pmuv3_pmu_init);
  1400	
  1401		if (!ret)
> 1402			retry_lockup_detector_init();
  1403	
  1404		return ret;
  1405	}
  1406	device_initcall(armv8_pmu_driver_init)
  1407	
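[Editor's note: the undefined reference above is the classic symptom of a function that is declared but not built in this configuration. The usual kernel remedy — sketched below as a standalone userspace demo, not the actual include/linux/nmi.h contents — is to provide a static inline no-op stub when the relevant config is disabled, so callers compile and link in every configuration. The `CONFIG_LOCKUP_DETECTOR` macro and `fake_pmu_driver_init()` here are purely illustrative.]

```c
#include <assert.h>

/* Toggle to emulate the Kconfig option being on/off; the real kernel
 * uses Kconfig, this macro exists only for this demo. */
#define CONFIG_LOCKUP_DETECTOR 1

static int retries; /* counts real invocations, for the demo only */

#if CONFIG_LOCKUP_DETECTOR
/* "Real" implementation, which would live in kernel/watchdog.c */
void retry_lockup_detector_init(void)
{
	retries++;
}
#else
/* Stub: callers still compile and link when the detector is off */
static inline void retry_lockup_detector_init(void) { }
#endif

/* A caller like armv8_pmu_driver_init() can now call it
 * unconditionally, regardless of configuration. */
static int fake_pmu_driver_init(void)
{
	retry_lockup_detector_init();
	return 0;
}
```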

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp


* Re: [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for
  2022-09-03  9:34 [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for Lecopzer Chen
                   ` (6 preceding siblings ...)
  2022-09-03  9:45 ` [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for arm64 Lecopzer Chen
@ 2022-11-07 15:18 ` Will Deacon
  2023-05-04 22:41 ` Doug Anderson
  8 siblings, 0 replies; 11+ messages in thread
From: Will Deacon @ 2022-11-07 15:18 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: linux-arm-kernel, linux-kernel, linux-perf-users, mark.rutland,
	acme, akpm, alexander.shishkin, catalin.marinas, davem, jolsa,
	jthierry, keescook, kernelfans, masahiroy, matthias.bgg, maz,
	mcgrof, mingo, namhyung, nixiaoming, peterz, pmladek, sparclinux,
	sumit.garg, wangqing, yj.chiang

On Sat, Sep 03, 2022 at 05:34:09PM +0800, Lecopzer Chen wrote:
> We have already used hld internally for arm64 since 2020, but there
> is still no proper commit upstream, and we badly need it.
> 
> This series is a rework on 5.17 of [1]; the original authors are
> Pingfan Liu <kernelfans@gmail.com> and
> Sumit Garg <sumit.garg@linaro.org>

I'd definitely want Mark's ack on this, as he previously had suggestions
when we reverted the old broken code back in:

https://lore.kernel.org/r/20210113130235.GB19011@C02TD0UTHF1T.local

Will


* Re: [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for
  2022-09-03  9:34 [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for Lecopzer Chen
                   ` (7 preceding siblings ...)
  2022-11-07 15:18 ` [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for Will Deacon
@ 2023-05-04 22:41 ` Doug Anderson
  8 siblings, 0 replies; 11+ messages in thread
From: Doug Anderson @ 2023-05-04 22:41 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: linux-arm-kernel, linux-kernel, linux-perf-users, mark.rutland,
	will, acme, akpm, alexander.shishkin, catalin.marinas, davem,
	jolsa, jthierry, keescook, kernelfans, masahiroy, matthias.bgg,
	maz, mcgrof, mingo, namhyung, nixiaoming, peterz, pmladek,
	sparclinux, sumit.garg, wangqing, yj.chiang

Hi,

On Sat, Sep 3, 2022 at 2:35 AM Lecopzer Chen <lecopzer.chen@mediatek.com> wrote:
>
> We have already used hld internally for arm64 since 2020, but there
> is still no proper commit upstream, and we badly need it.
>
> This series is a rework on 5.17 of [1]; the original authors are
> Pingfan Liu <kernelfans@gmail.com> and
> Sumit Garg <sumit.garg@linaro.org>
>
> Quote from [1]:
>
> > The hard lockup detector is helpful to diagnose unpaired irq
> > enable/disable. But the current watchdog framework cannot cope
> > with arm64 hw perf events easily.
>
> > On arm64, at lockup_detector_init()->watchdog_nmi_probe(), the PMU
> > is not ready until device_initcall(armv8_pmu_driver_init). And it
> > is deeply integrated with the driver model and cpuhp. Hence it is
> > hard to push the initialization of armv8_pmu_driver_init() before
> > smp_init().
>
> > But it is easy to take the opposite approach: enable watchdog_hld
> > to acquire the PMU capability asynchronously.
> > The async model is achieved by extending watchdog_nmi_probe() with
> > -EBUSY, and a re-initializing work_struct which waits on a
> > wait_queue_head.
>
> Provide an API, retry_lockup_detector_init(), for anyone who needs
> to delay init of the lockup detector.
>
> The original assumption is: nobody should use the delayed probe after
> lockup_detector_check() (which has the __init attribute).
> That is, anyone who uses this API must call it between
> lockup_detector_init() and lockup_detector_check(), and the caller
> must have the __init attribute
>
> The delayed init flow is:
> 1. lockup_detector_init() -> watchdog_nmi_probe() returns non-zero,
>    then set allow_lockup_detector_init_retry to true, which means a
>    delayed probe can be done later.
>
> 2. When the PMU arch code init is done, call
>    retry_lockup_detector_init().
>
> 3. retry_lockup_detector_init() queues the work only when
>    allow_lockup_detector_init_retry is true, which means nobody can
>    call it before lockup_detector_init().
>
> 4. The work item lockup_detector_delay_init() runs without a wait
>    event; if the probe succeeds, it sets
>    allow_lockup_detector_init_retry to false.
>
> 5. At late_initcall_sync(), lockup_detector_check() first sets
>    allow_lockup_detector_init_retry to false to avoid any later
>    retry, and then calls flush_work() to make sure the __init section
>    won't be freed before the work is done.
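[Editor's note: the five-step flow above boils down to a small state machine, sketched here as a userspace approximation. The real code uses a work_struct, workqueues, and late_initcall_sync(); here the probe result is faked via a variable and flush_work() is elided, so the `_sim` names are hypothetical stand-ins for the kernel functions they mirror.]

```c
#include <assert.h>
#include <stdbool.h>

static bool allow_retry;	/* allow_lockup_detector_init_retry */
static bool detector_running;
static int probe_result = -16;	/* fake -EBUSY: PMU not ready yet */

/* Step 1: the initial probe fails, so arm the retry path. */
static void lockup_detector_init_sim(void)
{
	if (probe_result != 0)
		allow_retry = true;
	else
		detector_running = true;
}

/* Steps 2-4: PMU init is done, retry; on success drop the flag. */
static void retry_lockup_detector_init_sim(void)
{
	if (!allow_retry)		/* step 3: not armed, do nothing */
		return;
	if (probe_result == 0) {	/* step 4: delayed probe succeeds */
		detector_running = true;
		allow_retry = false;
	}
}

/* Step 5: forbid any further retries (flush_work() elided here). */
static void lockup_detector_check_sim(void)
{
	allow_retry = false;
}
```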
>
> [1]
> https://lore.kernel.org/lkml/20211014024155.15253-1-kernelfans@gmail.com/
>
> v7:
>   rebase on v6.0-rc3
>
> v6:
>   fix build failure reported by kernel test robot <lkp@intel.com>
> https://lore.kernel.org/lkml/20220614062835.7196-1-lecopzer.chen@mediatek.com/
>
> v5:
>   1. rebase on v5.19-rc2
>   2. change to proper schedule api
>   3. return value checking before retry_lockup_detector_init()
> https://lore.kernel.org/lkml/20220613135956.15711-1-lecopzer.chen@mediatek.com/
>
> v4:
>   1. remove the -EBUSY protocol; let any non-zero value from
>      watchdog_nmi_probe() trigger a retry.
>   2. separate arm64 part patch into hw_nmi_get_sample_period and retry
>      delayed init
>   3. tweak commit msg that we don't have to limit to -EBUSY
>   4. rebase on v5.18-rc4
> https://lore.kernel.org/lkml/20220427161340.8518-1-lecopzer.chen@mediatek.com/
>
> v3:
>   1. Tweak commit message in patch 04
>   2. Remove wait event
>   3. s/lockup_detector_pending_init/allow_lockup_detector_init_retry/
>   4. provide api retry_lockup_detector_init()
> https://lore.kernel.org/lkml/20220324141405.10835-1-lecopzer.chen@mediatek.com/
>
> v2:
>   1. Tweak commit message in patch 01/02/04/05
>   2. Remove verbose WARN in patch 04 within the watchdog core.
>   3. Change from three states variable: detector_delay_init_state to
>      two states variable: allow_lockup_detector_init_retry
>
>      Thanks Petr Mladek <pmladek@suse.com> for the idea.
>      > 1.  lockup_detector_work() called before lockup_detector_check().
>      >     In this case, wait_event() will wait until
>      >     lockup_detector_check()
>      >     clears detector_delay_pending_init and calls wake_up().
>
>      > 2. lockup_detector_check() called before lockup_detector_work().
>      >    In this case, wait_event() will immediately continue because
>      >    it will see the cleared detector_delay_pending_init.
>   4. Add comment in code in patch 04/05 for two states variable
> changing.
> https://lore.kernel.org/lkml/20220307154729.13477-1-lecopzer.chen@mediatek.com/
>
>
> Lecopzer Chen (5):
>   kernel/watchdog: remove WATCHDOG_DEFAULT
>   kernel/watchdog: change watchdog_nmi_enable() to void
>   kernel/watchdog: Adapt the watchdog_hld interface for async model
>   arm64: add hw_nmi_get_sample_period for preparation of lockup detector
>   arm64: Enable perf events based hard lockup detector
>
> Pingfan Liu (1):
>   kernel/watchdog_hld: Ensure CPU-bound context when creating hardlockup
>     detector event
>
>  arch/arm64/Kconfig               |  2 +
>  arch/arm64/kernel/Makefile       |  1 +
>  arch/arm64/kernel/perf_event.c   | 12 +++++-
>  arch/arm64/kernel/watchdog_hld.c | 39 +++++++++++++++++
>  arch/sparc/kernel/nmi.c          |  8 ++--
>  drivers/perf/arm_pmu.c           |  5 +++
>  include/linux/nmi.h              |  4 +-
>  include/linux/perf/arm_pmu.h     |  2 +
>  kernel/watchdog.c                | 72 +++++++++++++++++++++++++++++---
>  kernel/watchdog_hld.c            |  8 +++-
>  10 files changed, 139 insertions(+), 14 deletions(-)

To leave some breadcrumbs here, I've included all the patches here in
my latest "buddy" hardlockup detector series. I'm hoping that the
cleanup patches that were part of your series can land as part of my
series. I'm not necessarily expecting that the arm64 perf hardlockup
detector patches will land as part of my series, though. See the cover
letter and "after-the-cut" notes on the later patches in my series for
details.

https://lore.kernel.org/r/20230504221349.1535669-1-dianders@chromium.org


end of thread, other threads:[~2023-05-04 22:42 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-09-03  9:34 [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for Lecopzer Chen
2022-09-03  9:34 ` [PATCH v7 1/6] kernel/watchdog: remove WATCHDOG_DEFAULT Lecopzer Chen
2022-09-03  9:34 ` [PATCH v7 2/6] kernel/watchdog: change watchdog_nmi_enable() to void Lecopzer Chen
2022-09-03  9:34 ` [PATCH v7 3/6] kernel/watchdog_hld: Ensure CPU-bound context when creating hardlockup detector event Lecopzer Chen
2022-09-03  9:34 ` [PATCH v7 4/6] kernel/watchdog: Adapt the watchdog_hld interface for async model Lecopzer Chen
2022-09-03  9:34 ` [PATCH v7 5/6] arm64: add hw_nmi_get_sample_period for preparation of lockup detector Lecopzer Chen
2022-09-03  9:34 ` [PATCH v7 6/6] arm64: Enable perf events based hard " Lecopzer Chen
2022-09-04 22:57   ` kernel test robot
2022-09-03  9:45 ` [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for arm64 Lecopzer Chen
2022-11-07 15:18 ` [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for Will Deacon
2023-05-04 22:41 ` Doug Anderson
