* [PATCH v3 0/5] Support hld delayed init based on Pseudo-NMI for arm64
@ 2022-03-24 14:14 Lecopzer Chen
  2022-03-24 14:14 ` [PATCH v3 1/5] kernel/watchdog: remove WATCHDOG_DEFAULT Lecopzer Chen
                   ` (4 more replies)
  0 siblings, 5 replies; 18+ messages in thread
From: Lecopzer Chen @ 2022-03-24 14:14 UTC (permalink / raw)
  To: linux-kernel
  Cc: lecopzer.chen, acme, akpm, alexander.shishkin, catalin.marinas,
	davem, jolsa, jthierry, keescook, kernelfans, linux-arm-kernel,
	linux-mediatek, linux-perf-users, mark.rutland, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	pmladek, sparclinux, sumit.garg, wangqing, will, yj.chiang

We have used the hld (hard lockup detector) internally for arm64 since
2020, but there is still no proper upstream support, and we badly need it.

This series is a rework of [1] on top of 5.17; the original authors are
Pingfan Liu <kernelfans@gmail.com>
Sumit Garg <sumit.garg@linaro.org>

Quote from [1]:

> Hard lockup detector is helpful to diagnose unpaired irq enable/disable.
> But the current watchdog framework can not cope with arm64 hw perf event
> easily.

> On arm64, when lockup_detector_init()->watchdog_nmi_probe(), PMU is not
> ready until device_initcall(armv8_pmu_driver_init).  And it is deeply
> integrated with the driver model and cpuhp. Hence it is hard to push the
> initialization of armv8_pmu_driver_init() before smp_init().

> But it is easy to take an opposite approach by enabling watchdog_hld to
> get the capability of PMU async. 
> The async model is achieved by expanding watchdog_nmi_probe() with
> -EBUSY, and a re-initializing work_struct which waits on a
> wait_queue_head.

Provide an API, retry_lockup_detector_init(), for anyone who needs to
delay the init of the lockup detector.

The original assumption is: nobody should use the delayed probe after
lockup_detector_check() (which has the __init attribute).
That is, anyone using this API must call it between lockup_detector_init()
and lockup_detector_check(), and the caller must have the __init attribute.

The delayed init flow is:
1. lockup_detector_init() gets -EBUSY from the arch code and sets
   allow_lockup_detector_init_retry=true.

2. The PMU arch code finishes its init and calls retry_lockup_detector_init().

3. retry_lockup_detector_init() queues the work only when
   allow_lockup_detector_init_retry=true, which means nobody can trigger
   it before lockup_detector_init().

4. The work item lockup_detector_delay_init() runs without a wait event.
   If the probe succeeds, it sets allow_lockup_detector_init_retry=false.

5. At late_initcall_sync(), lockup_detector_check() first calls flush_work()
   so that a work item queued by a previous retry_lockup_detector_init() does
   not stay unscheduled. It then tests allow_lockup_detector_init_retry; if it
   is still true, a pending init is unfinished, so it forcibly queues the work
   again and calls flush_work() to make sure the __init sections won't be
   freed before the work is done.
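
For the caller side, the arm64 usage in patch 5 boils down to the sketch
below (a minimal sketch only; the ACPI probe branch of the real patch is
omitted):

static int __init armv8_pmu_driver_init(void)
{
	int ret;

	/* Register the PMU driver as before. */
	ret = platform_driver_register(&armv8_pmu_driver);

	/*
	 * The PMU is up now. Ask the watchdog core to retry the probe
	 * that returned -EBUSY in lockup_detector_init(). This is a
	 * no-op unless a retry is pending.
	 */
	retry_lockup_detector_init();

	return ret;
}
device_initcall(armv8_pmu_driver_init)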

[1] https://lore.kernel.org/lkml/20211014024155.15253-1-kernelfans@gmail.com/

v3:
  1. Tweak commit message in patch 04
  2. Remove wait event
  3. s/lockup_detector_pending_init/allow_lockup_detector_init_retry/
  4. Provide API retry_lockup_detector_init()


v2:
  1. Tweak commit message in patches 01/02/04/05
  2. Remove verbose WARN in patch 04 within the watchdog core.
  3. Change from the three-state variable detector_delay_init_state to
     the two-state variable allow_lockup_detector_init_retry

     Thanks Petr Mladek <pmladek@suse.com> for the idea.
     > 1.  lockup_detector_work() called before lockup_detector_check().
     >     In this case, wait_event() will wait until lockup_detector_check()
     >     clears detector_delay_pending_init and calls wake_up().

     > 2. lockup_detector_check() called before lockup_detector_work().
     >    In this case, wait_event() will immediately continue because
     >    it will see cleared detector_delay_pending_init.
  4. Add comments in the code in patches 04/05 for the two-state variable changes.
https://lore.kernel.org/lkml/20220307154729.13477-1-lecopzer.chen@mediatek.com/

Lecopzer Chen (4):
  kernel/watchdog: remove WATCHDOG_DEFAULT
  kernel/watchdog: change watchdog_nmi_enable() to void
  kernel/watchdog: Adapt the watchdog_hld interface for async model
  arm64: Enable perf events based hard lockup detector

Pingfan Liu (1):
  kernel/watchdog_hld: Ensure CPU-bound context when creating hardlockup
    detector event

 arch/arm64/Kconfig             |  2 +
 arch/arm64/kernel/Makefile     |  1 +
 arch/arm64/kernel/perf_event.c | 12 +++++-
 arch/sparc/kernel/nmi.c        |  8 ++--
 drivers/perf/arm_pmu.c         |  5 +++
 include/linux/nmi.h            |  5 ++-
 include/linux/perf/arm_pmu.h   |  2 +
 kernel/watchdog.c              | 67 +++++++++++++++++++++++++++++++---
 kernel/watchdog_hld.c          |  8 +++-
 9 files changed, 95 insertions(+), 15 deletions(-)

-- 
2.25.1



* [PATCH v3 1/5] kernel/watchdog: remove WATCHDOG_DEFAULT
  2022-03-24 14:14 [PATCH v3 0/5] Support hld delayed init based on Pseudo-NMI for arm64 Lecopzer Chen
@ 2022-03-24 14:14 ` Lecopzer Chen
  2022-03-24 14:14 ` [PATCH v3 2/5] kernel/watchdog: change watchdog_nmi_enable() to void Lecopzer Chen
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 18+ messages in thread
From: Lecopzer Chen @ 2022-03-24 14:14 UTC (permalink / raw)
  To: linux-kernel
  Cc: lecopzer.chen, acme, akpm, alexander.shishkin, catalin.marinas,
	davem, jolsa, jthierry, keescook, kernelfans, linux-arm-kernel,
	linux-mediatek, linux-perf-users, mark.rutland, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	pmladek, sparclinux, sumit.garg, wangqing, will, yj.chiang

There is no reference to WATCHDOG_DEFAULT, so remove it.

Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
---
 kernel/watchdog.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index 99afb88d2e85..db451e943a04 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -30,10 +30,8 @@
 static DEFINE_MUTEX(watchdog_mutex);
 
 #if defined(CONFIG_HARDLOCKUP_DETECTOR) || defined(CONFIG_HAVE_NMI_WATCHDOG)
-# define WATCHDOG_DEFAULT	(SOFT_WATCHDOG_ENABLED | NMI_WATCHDOG_ENABLED)
 # define NMI_WATCHDOG_DEFAULT	1
 #else
-# define WATCHDOG_DEFAULT	(SOFT_WATCHDOG_ENABLED)
 # define NMI_WATCHDOG_DEFAULT	0
 #endif
 
-- 
2.25.1



* [PATCH v3 2/5] kernel/watchdog: change watchdog_nmi_enable() to void
  2022-03-24 14:14 [PATCH v3 0/5] Support hld delayed init based on Pseudo-NMI for arm64 Lecopzer Chen
  2022-03-24 14:14 ` [PATCH v3 1/5] kernel/watchdog: remove WATCHDOG_DEFAULT Lecopzer Chen
@ 2022-03-24 14:14 ` Lecopzer Chen
  2022-03-24 14:14 ` [PATCH v3 3/5] kernel/watchdog_hld: Ensure CPU-bound context when creating hardlockup detector event Lecopzer Chen
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 18+ messages in thread
From: Lecopzer Chen @ 2022-03-24 14:14 UTC (permalink / raw)
  To: linux-kernel
  Cc: lecopzer.chen, acme, akpm, alexander.shishkin, catalin.marinas,
	davem, jolsa, jthierry, keescook, kernelfans, linux-arm-kernel,
	linux-mediatek, linux-perf-users, mark.rutland, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	pmladek, sparclinux, sumit.garg, wangqing, will, yj.chiang

Nobody cares about the return value of watchdog_nmi_enable(),
so change its prototype to void.

Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
---
 arch/sparc/kernel/nmi.c | 8 +++-----
 include/linux/nmi.h     | 2 +-
 kernel/watchdog.c       | 3 +--
 3 files changed, 5 insertions(+), 8 deletions(-)

diff --git a/arch/sparc/kernel/nmi.c b/arch/sparc/kernel/nmi.c
index 060fff95a305..5dcf31f7e81f 100644
--- a/arch/sparc/kernel/nmi.c
+++ b/arch/sparc/kernel/nmi.c
@@ -282,11 +282,11 @@ __setup("nmi_watchdog=", setup_nmi_watchdog);
  * sparc specific NMI watchdog enable function.
  * Enables watchdog if it is not enabled already.
  */
-int watchdog_nmi_enable(unsigned int cpu)
+void watchdog_nmi_enable(unsigned int cpu)
 {
 	if (atomic_read(&nmi_active) == -1) {
 		pr_warn("NMI watchdog cannot be enabled or disabled\n");
-		return -1;
+		return;
 	}
 
 	/*
@@ -295,11 +295,9 @@ int watchdog_nmi_enable(unsigned int cpu)
 	 * process first.
 	 */
 	if (!nmi_init_done)
-		return 0;
+		return;
 
 	smp_call_function_single(cpu, start_nmi_watchdog, NULL, 1);
-
-	return 0;
 }
 /*
  * sparc specific NMI watchdog disable function.
diff --git a/include/linux/nmi.h b/include/linux/nmi.h
index 750c7f395ca9..b7bcd63c36b4 100644
--- a/include/linux/nmi.h
+++ b/include/linux/nmi.h
@@ -119,7 +119,7 @@ static inline int hardlockup_detector_perf_init(void) { return 0; }
 void watchdog_nmi_stop(void);
 void watchdog_nmi_start(void);
 int watchdog_nmi_probe(void);
-int watchdog_nmi_enable(unsigned int cpu);
+void watchdog_nmi_enable(unsigned int cpu);
 void watchdog_nmi_disable(unsigned int cpu);
 
 /**
diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index db451e943a04..b71d434cf648 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -93,10 +93,9 @@ __setup("nmi_watchdog=", hardlockup_panic_setup);
  * softlockup watchdog start and stop. The arch must select the
  * SOFTLOCKUP_DETECTOR Kconfig.
  */
-int __weak watchdog_nmi_enable(unsigned int cpu)
+void __weak watchdog_nmi_enable(unsigned int cpu)
 {
 	hardlockup_detector_perf_enable();
-	return 0;
 }
 
 void __weak watchdog_nmi_disable(unsigned int cpu)
-- 
2.25.1



* [PATCH v3 3/5] kernel/watchdog_hld: Ensure CPU-bound context when creating hardlockup detector event
  2022-03-24 14:14 [PATCH v3 0/5] Support hld delayed init based on Pseudo-NMI for arm64 Lecopzer Chen
  2022-03-24 14:14 ` [PATCH v3 1/5] kernel/watchdog: remove WATCHDOG_DEFAULT Lecopzer Chen
  2022-03-24 14:14 ` [PATCH v3 2/5] kernel/watchdog: change watchdog_nmi_enable() to void Lecopzer Chen
@ 2022-03-24 14:14 ` Lecopzer Chen
  2022-03-24 14:14 ` [PATCH v3 4/5] kernel/watchdog: Adapt the watchdog_hld interface for async model Lecopzer Chen
  2022-03-24 14:14 ` [PATCH v3 5/5] arm64: Enable perf events based hard lockup detector Lecopzer Chen
  4 siblings, 0 replies; 18+ messages in thread
From: Lecopzer Chen @ 2022-03-24 14:14 UTC (permalink / raw)
  To: linux-kernel
  Cc: lecopzer.chen, acme, akpm, alexander.shishkin, catalin.marinas,
	davem, jolsa, jthierry, keescook, kernelfans, linux-arm-kernel,
	linux-mediatek, linux-perf-users, mark.rutland, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	pmladek, sparclinux, sumit.garg, wangqing, will, yj.chiang

From: Pingfan Liu <kernelfans@gmail.com>

hardlockup_detector_event_create() should create the perf_event on the
current CPU. Preemption cannot be disabled here because
perf_event_create_kernel_counter() allocates memory. Instead,
CPU locality is achieved by running the code in a per-CPU-bound
kthread.

Add a check to prevent mistakes when the code is called from another
code path.

Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Co-developed-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
---
 kernel/watchdog_hld.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
index 247bf0b1582c..96b717205952 100644
--- a/kernel/watchdog_hld.c
+++ b/kernel/watchdog_hld.c
@@ -165,10 +165,16 @@ static void watchdog_overflow_callback(struct perf_event *event,
 
 static int hardlockup_detector_event_create(void)
 {
-	unsigned int cpu = smp_processor_id();
+	unsigned int cpu;
 	struct perf_event_attr *wd_attr;
 	struct perf_event *evt;
 
+	/*
+	 * Preemption is not disabled because memory will be allocated.
+	 * Ensure CPU-locality by calling this in per-CPU kthread.
+	 */
+	WARN_ON(!is_percpu_thread());
+	cpu = raw_smp_processor_id();
 	wd_attr = &wd_hw_attr;
 	wd_attr->sample_period = hw_nmi_get_sample_period(watchdog_thresh);
 
-- 
2.25.1



* [PATCH v3 4/5] kernel/watchdog: Adapt the watchdog_hld interface for async model
  2022-03-24 14:14 [PATCH v3 0/5] Support hld delayed init based on Pseudo-NMI for arm64 Lecopzer Chen
                   ` (2 preceding siblings ...)
  2022-03-24 14:14 ` [PATCH v3 3/5] kernel/watchdog_hld: Ensure CPU-bound context when creating hardlockup detector event Lecopzer Chen
@ 2022-03-24 14:14 ` Lecopzer Chen
  2022-04-04 14:41   ` Petr Mladek
  2022-03-24 14:14 ` [PATCH v3 5/5] arm64: Enable perf events based hard lockup detector Lecopzer Chen
  4 siblings, 1 reply; 18+ messages in thread
From: Lecopzer Chen @ 2022-03-24 14:14 UTC (permalink / raw)
  To: linux-kernel
  Cc: lecopzer.chen, acme, akpm, alexander.shishkin, catalin.marinas,
	davem, jolsa, jthierry, keescook, kernelfans, linux-arm-kernel,
	linux-mediatek, linux-perf-users, mark.rutland, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	pmladek, sparclinux, sumit.garg, wangqing, will, yj.chiang

When lockup_detector_init()->watchdog_nmi_probe() runs, the PMU may not
be ready yet. E.g. on arm64, the PMU is not ready until
device_initcall(armv8_pmu_driver_init). And it is deeply integrated
with the driver model and cpuhp. Hence it is hard to push this
initialization before smp_init().

But it is easy to take the opposite approach and try to initialize
the watchdog once again later.
The delayed probe is called using workqueues. It needs to allocate
memory and must proceed in a normal context.
The delayed probe is queued only when the early one returns -EBUSY,
the return code used when the PMU is not ready yet.

Provide an API, retry_lockup_detector_init(), for anyone who needs to
delay the init of the lockup detector.

The original assumption is: nobody should use the delayed probe after
lockup_detector_check(), which has the __init attribute.
That is, anyone using this API must call it between lockup_detector_init()
and lockup_detector_check(), and the caller must have the __init attribute.

Co-developed-by: Pingfan Liu <kernelfans@gmail.com>
Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Suggested-by: Petr Mladek <pmladek@suse.com>
---
 include/linux/nmi.h |  3 ++
 kernel/watchdog.c   | 69 +++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/include/linux/nmi.h b/include/linux/nmi.h
index b7bcd63c36b4..1d84c9a8b460 100644
--- a/include/linux/nmi.h
+++ b/include/linux/nmi.h
@@ -118,6 +118,9 @@ static inline int hardlockup_detector_perf_init(void) { return 0; }
 
 void watchdog_nmi_stop(void);
 void watchdog_nmi_start(void);
+
+extern bool allow_lockup_detector_init_retry;
+void retry_lockup_detector_init(void);
 int watchdog_nmi_probe(void);
 void watchdog_nmi_enable(unsigned int cpu);
 void watchdog_nmi_disable(unsigned int cpu);
diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index b71d434cf648..308ba29f8f0f 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -103,7 +103,13 @@ void __weak watchdog_nmi_disable(unsigned int cpu)
 	hardlockup_detector_perf_disable();
 }
 
-/* Return 0, if a NMI watchdog is available. Error code otherwise */
+/*
+ * Arch specific API.
+ *
+ * Return 0 when NMI watchdog is available, negative value otherwise.
+ * The error code -EBUSY is special. It means that a deferred probe
+ * might succeed later.
+ */
 int __weak __init watchdog_nmi_probe(void)
 {
 	return hardlockup_detector_perf_init();
@@ -839,16 +845,75 @@ static void __init watchdog_sysctl_init(void)
 #define watchdog_sysctl_init() do { } while (0)
 #endif /* CONFIG_SYSCTL */
 
+static void lockup_detector_delay_init(struct work_struct *work);
+bool allow_lockup_detector_init_retry __initdata;
+
+static struct work_struct detector_work __initdata =
+		__WORK_INITIALIZER(detector_work, lockup_detector_delay_init);
+
+static void __init lockup_detector_delay_init(struct work_struct *work)
+{
+	int ret;
+
+	ret = watchdog_nmi_probe();
+	if (ret) {
+		pr_info("Delayed init of the lockup detector failed: %d\n", ret);
+		pr_info("Perf NMI watchdog permanently disabled\n");
+		return;
+	}
+
+	nmi_watchdog_available = true;
+	lockup_detector_setup();
+	allow_lockup_detector_init_retry = false;
+}
+
+/*
+ * retry_lockup_detector_init - retry init lockup detector if possible.
+ *
+ * Only take effect when allow_lockup_detector_init_retry is true, which
+ * means it must call between lockup_detector_init() and lockup_detector_check().
+ * Be aware that caller must have __init attribute, relative functions
+ * will be freed after kernel initialization.
+ */
+void __init retry_lockup_detector_init(void)
+{
+	if (!allow_lockup_detector_init_retry)
+		return;
+
+	queue_work_on(__smp_processor_id(), system_wq, &detector_work);
+}
+
+/* Ensure the check is called after the initialization of driver */
+static int __init lockup_detector_check(void)
+{
+	/* Make sure no work is pending. */
+	flush_work(&detector_work);
+
+	if (!allow_lockup_detector_init_retry)
+		return 0;
+
+	allow_lockup_detector_init_retry = false;
+	pr_info("Delayed init checking failed, please check your driver.\n");
+	return 0;
+}
+late_initcall_sync(lockup_detector_check);
+
 void __init lockup_detector_init(void)
 {
+	int ret;
+
 	if (tick_nohz_full_enabled())
 		pr_info("Disabling watchdog on nohz_full cores by default\n");
 
 	cpumask_copy(&watchdog_cpumask,
 		     housekeeping_cpumask(HK_FLAG_TIMER));
 
-	if (!watchdog_nmi_probe())
+	ret = watchdog_nmi_probe();
+	if (!ret)
 		nmi_watchdog_available = true;
+	else if (ret == -EBUSY)
+		allow_lockup_detector_init_retry = true;
+
 	lockup_detector_setup();
 	watchdog_sysctl_init();
 }
-- 
2.25.1



* [PATCH v3 5/5] arm64: Enable perf events based hard lockup detector
  2022-03-24 14:14 [PATCH v3 0/5] Support hld delayed init based on Pseudo-NMI for arm64 Lecopzer Chen
                   ` (3 preceding siblings ...)
  2022-03-24 14:14 ` [PATCH v3 4/5] kernel/watchdog: Adapt the watchdog_hld interface for async model Lecopzer Chen
@ 2022-03-24 14:14 ` Lecopzer Chen
  2022-04-04 14:17   ` Petr Mladek
  4 siblings, 1 reply; 18+ messages in thread
From: Lecopzer Chen @ 2022-03-24 14:14 UTC (permalink / raw)
  To: linux-kernel
  Cc: lecopzer.chen, acme, akpm, alexander.shishkin, catalin.marinas,
	davem, jolsa, jthierry, keescook, kernelfans, linux-arm-kernel,
	linux-mediatek, linux-perf-users, mark.rutland, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	pmladek, sparclinux, sumit.garg, wangqing, will, yj.chiang

With the recent feature that enables perf events to use pseudo NMIs
as interrupts on platforms which support GICv3 or later, it is now
possible to enable the hard lockup detector (or NMI watchdog) on arm64
platforms. So enable the corresponding support.

One thing to note here is that normally the lockup detector is initialized
just after the early initcalls, but the PMU on arm64 comes up much later as
a device_initcall(). To cope with that, override watchdog_nmi_probe() to
let the watchdog framework know the PMU is not ready, and have the PMU
driver inform the framework to re-initialize lockup detection once the PMU
has been initialized.

[1]: http://lore.kernel.org/linux-arm-kernel/1610712101-14929-1-git-send-email-sumit.garg@linaro.org

Co-developed-by: Sumit Garg <sumit.garg@linaro.org>
Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Co-developed-by: Pingfan Liu <kernelfans@gmail.com>
Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
---
 arch/arm64/Kconfig               |  2 ++
 arch/arm64/kernel/Makefile       |  1 +
 arch/arm64/kernel/perf_event.c   | 10 +++++++--
 arch/arm64/kernel/watchdog_hld.c | 37 ++++++++++++++++++++++++++++++++
 drivers/perf/arm_pmu.c           |  5 +++++
 include/linux/perf/arm_pmu.h     |  2 ++
 6 files changed, 55 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/kernel/watchdog_hld.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index c842878f8133..9c24052c5360 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -190,6 +190,8 @@ config ARM64
 	select HAVE_NMI
 	select HAVE_PATA_PLATFORM
 	select HAVE_PERF_EVENTS
+	select HAVE_PERF_EVENTS_NMI if ARM64_PSEUDO_NMI
+	select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_REGS_AND_STACK_ACCESS_API
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 88b3e2a21408..3e62a8877ed7 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -46,6 +46,7 @@ obj-$(CONFIG_MODULES)			+= module.o
 obj-$(CONFIG_ARM64_MODULE_PLTS)		+= module-plts.o
 obj-$(CONFIG_PERF_EVENTS)		+= perf_regs.o perf_callchain.o
 obj-$(CONFIG_HW_PERF_EVENTS)		+= perf_event.o
+obj-$(CONFIG_HARDLOCKUP_DETECTOR_PERF)	+= watchdog_hld.o
 obj-$(CONFIG_HAVE_HW_BREAKPOINT)	+= hw_breakpoint.o
 obj-$(CONFIG_CPU_PM)			+= sleep.o suspend.o
 obj-$(CONFIG_CPU_IDLE)			+= cpuidle.o
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index cab678ed6618..f44ad669c18f 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -23,6 +23,7 @@
 #include <linux/platform_device.h>
 #include <linux/sched_clock.h>
 #include <linux/smp.h>
+#include <linux/nmi.h>
 
 /* ARMv8 Cortex-A53 specific event types. */
 #define ARMV8_A53_PERFCTR_PREF_LINEFILL				0xC2
@@ -1380,10 +1381,15 @@ static struct platform_driver armv8_pmu_driver = {
 
 static int __init armv8_pmu_driver_init(void)
 {
+	int ret;
+
 	if (acpi_disabled)
-		return platform_driver_register(&armv8_pmu_driver);
+		ret = platform_driver_register(&armv8_pmu_driver);
 	else
-		return arm_pmu_acpi_probe(armv8_pmuv3_pmu_init);
+		ret = arm_pmu_acpi_probe(armv8_pmuv3_pmu_init);
+
+	retry_lockup_detector_init();
+	return ret;
 }
 device_initcall(armv8_pmu_driver_init)
 
diff --git a/arch/arm64/kernel/watchdog_hld.c b/arch/arm64/kernel/watchdog_hld.c
new file mode 100644
index 000000000000..c085e99b3cd2
--- /dev/null
+++ b/arch/arm64/kernel/watchdog_hld.c
@@ -0,0 +1,37 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/nmi.h>
+#include <linux/cpufreq.h>
+#include <linux/perf/arm_pmu.h>
+
+/*
+ * Safe maximum CPU frequency in case a particular platform doesn't implement
+ * cpufreq driver. Although, architecture doesn't put any restrictions on
+ * maximum frequency but 5 GHz seems to be safe maximum given the available
+ * Arm CPUs in the market which are clocked much less than 5 GHz. On the other
+ * hand, we can't make it much higher as it would lead to a large hard-lockup
+ * detection timeout on parts which are running slower (eg. 1GHz on
+ * Developerbox) and doesn't possess a cpufreq driver.
+ */
+#define SAFE_MAX_CPU_FREQ	5000000000UL // 5 GHz
+u64 hw_nmi_get_sample_period(int watchdog_thresh)
+{
+	unsigned int cpu = smp_processor_id();
+	unsigned long max_cpu_freq;
+
+	max_cpu_freq = cpufreq_get_hw_max_freq(cpu) * 1000UL;
+	if (!max_cpu_freq)
+		max_cpu_freq = SAFE_MAX_CPU_FREQ;
+
+	return (u64)max_cpu_freq * watchdog_thresh;
+}
+
+int __init watchdog_nmi_probe(void)
+{
+	if (!allow_lockup_detector_init_retry)
+		return -EBUSY;
+
+	if (!arm_pmu_irq_is_nmi())
+		return -ENODEV;
+
+	return hardlockup_detector_perf_init();
+}
diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 295cc7952d0e..e77f4897fca2 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -697,6 +697,11 @@ static int armpmu_get_cpu_irq(struct arm_pmu *pmu, int cpu)
 	return per_cpu(hw_events->irq, cpu);
 }
 
+bool arm_pmu_irq_is_nmi(void)
+{
+	return has_nmi;
+}
+
 /*
  * PMU hardware loses all context when a CPU goes offline.
  * When a CPU is hotplugged back in, since some hardware registers are
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 2512e2f9cd4e..9325d01adc3e 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -169,6 +169,8 @@ void kvm_host_pmu_init(struct arm_pmu *pmu);
 #define kvm_host_pmu_init(x)	do { } while(0)
 #endif
 
+bool arm_pmu_irq_is_nmi(void);
+
 /* Internal functions only for core arm_pmu code */
 struct arm_pmu *armpmu_alloc(void);
 struct arm_pmu *armpmu_alloc_atomic(void);
-- 
2.25.1



* Re: [PATCH v3 5/5] arm64: Enable perf events based hard lockup detector
  2022-03-24 14:14 ` [PATCH v3 5/5] arm64: Enable perf events based hard lockup detector Lecopzer Chen
@ 2022-04-04 14:17   ` Petr Mladek
  2022-04-05 12:53     ` Lecopzer Chen
  0 siblings, 1 reply; 18+ messages in thread
From: Petr Mladek @ 2022-04-04 14:17 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: linux-kernel, acme, akpm, alexander.shishkin, catalin.marinas,
	davem, jolsa, jthierry, keescook, kernelfans, linux-arm-kernel,
	linux-mediatek, linux-perf-users, mark.rutland, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	sparclinux, sumit.garg, wangqing, will, yj.chiang

On Thu 2022-03-24 22:14:05, Lecopzer Chen wrote:
> With the recent feature that enables perf events to use pseudo NMIs
> as interrupts on platforms which support GICv3 or later, it is now
> possible to enable the hard lockup detector (or NMI watchdog) on arm64
> platforms. So enable the corresponding support.
> 
> One thing to note here is that normally the lockup detector is initialized
> just after the early initcalls, but the PMU on arm64 comes up much later as
> a device_initcall(). To cope with that, override watchdog_nmi_probe() to
> let the watchdog framework know the PMU is not ready, and have the PMU
> driver inform the framework to re-initialize lockup detection once the PMU
> has been initialized.
> 
> [1]: http://lore.kernel.org/linux-arm-kernel/1610712101-14929-1-git-send-email-sumit.garg@linaro.org
> 
> --- /dev/null
> +++ b/arch/arm64/kernel/watchdog_hld.c
> @@ -0,0 +1,37 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include <linux/nmi.h>
> +#include <linux/cpufreq.h>
> +#include <linux/perf/arm_pmu.h>
> +
> +/*
> + * Safe maximum CPU frequency in case a particular platform doesn't implement
> + * cpufreq driver. Although, architecture doesn't put any restrictions on
> + * maximum frequency but 5 GHz seems to be safe maximum given the available
> + * Arm CPUs in the market which are clocked much less than 5 GHz. On the other
> + * hand, we can't make it much higher as it would lead to a large hard-lockup
> + * detection timeout on parts which are running slower (eg. 1GHz on
> + * Developerbox) and doesn't possess a cpufreq driver.
> + */
> +#define SAFE_MAX_CPU_FREQ	5000000000UL // 5 GHz
> +u64 hw_nmi_get_sample_period(int watchdog_thresh)
> +{
> +	unsigned int cpu = smp_processor_id();
> +	unsigned long max_cpu_freq;
> +
> +	max_cpu_freq = cpufreq_get_hw_max_freq(cpu) * 1000UL;
> +	if (!max_cpu_freq)
> +		max_cpu_freq = SAFE_MAX_CPU_FREQ;
> +
> +	return (u64)max_cpu_freq * watchdog_thresh;
> +}

This change is not mentioned in the commit message.
Please, put it into a separate patch.

> +int __init watchdog_nmi_probe(void)
> +{
> +	if (!allow_lockup_detector_init_retry)
> +		return -EBUSY;

How do you know that you should return -EBUSY
when retry is not enabled?

I guess that it is an optimization to make it fast
during the first call. But the logic is far from
obvious.

> +
> +	if (!arm_pmu_irq_is_nmi())
> +		return -ENODEV;
> +
> +	return hardlockup_detector_perf_init();
> +}

Is this just an optimization or is it really needed?
Why this was not needed in v2 patchset?

If it is just an optimization then I would remove it.
IMHO, it just adds confusion and it is not worth it.

Best Regards,
Petr


* Re: [PATCH v3 4/5] kernel/watchdog: Adapt the watchdog_hld interface for async model
  2022-03-24 14:14 ` [PATCH v3 4/5] kernel/watchdog: Adapt the watchdog_hld interface for async model Lecopzer Chen
@ 2022-04-04 14:41   ` Petr Mladek
  2022-04-05 13:35     ` Lecopzer Chen
  0 siblings, 1 reply; 18+ messages in thread
From: Petr Mladek @ 2022-04-04 14:41 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: linux-kernel, acme, akpm, alexander.shishkin, catalin.marinas,
	davem, jolsa, jthierry, keescook, kernelfans, linux-arm-kernel,
	linux-mediatek, linux-perf-users, mark.rutland, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	sparclinux, sumit.garg, wangqing, will, yj.chiang

On Thu 2022-03-24 22:14:04, Lecopzer Chen wrote:
> When lockup_detector_init()->watchdog_nmi_probe() runs, the PMU may not
> be ready yet. E.g. on arm64, the PMU is not ready until
> device_initcall(armv8_pmu_driver_init). And it is deeply integrated
> with the driver model and cpuhp. Hence it is hard to push this
> initialization before smp_init().
> 
> But it is easy to take the opposite approach and try to initialize
> the watchdog once again later.
> The delayed probe is called using workqueues. It needs to allocate
> memory and must proceed in a normal context.
> The delayed probe is queued only when the early one returns -EBUSY,
> the return code used when the PMU is not ready yet.
> 
> Provide an API, retry_lockup_detector_init(), for anyone who needs to
> delay the init of the lockup detector.
> 
> The original assumption is: nobody should use the delayed probe after
> lockup_detector_check(), which has the __init attribute.
> That is, anyone using this API must call it between lockup_detector_init()
> and lockup_detector_check(), and the caller must have the __init attribute.
> 
> Co-developed-by: Pingfan Liu <kernelfans@gmail.com>
> Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
> Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> Suggested-by: Petr Mladek <pmladek@suse.com>
> ---
>  include/linux/nmi.h |  3 ++
>  kernel/watchdog.c   | 69 +++++++++++++++++++++++++++++++++++++++++++--
>  2 files changed, 70 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/nmi.h b/include/linux/nmi.h
> index b7bcd63c36b4..1d84c9a8b460 100644
> --- a/include/linux/nmi.h
> +++ b/include/linux/nmi.h
> @@ -118,6 +118,9 @@ static inline int hardlockup_detector_perf_init(void) { return 0; }
>  
>  void watchdog_nmi_stop(void);
>  void watchdog_nmi_start(void);
> +
> +extern bool allow_lockup_detector_init_retry;
> +void retry_lockup_detector_init(void);
>  int watchdog_nmi_probe(void);
>  void watchdog_nmi_enable(unsigned int cpu);
>  void watchdog_nmi_disable(unsigned int cpu);
> diff --git a/kernel/watchdog.c b/kernel/watchdog.c
> index b71d434cf648..308ba29f8f0f 100644
> --- a/kernel/watchdog.c
> +++ b/kernel/watchdog.c
> @@ -103,7 +103,13 @@ void __weak watchdog_nmi_disable(unsigned int cpu)
>  	hardlockup_detector_perf_disable();
>  }
>  
> -/* Return 0, if a NMI watchdog is available. Error code otherwise */
> +/*
> + * Arch specific API.
> + *
> + * Return 0 when NMI watchdog is available, negative value otherwise.
> + * The error code -EBUSY is special. It means that a deferred probe
> + * might succeed later.
> + */
>  int __weak __init watchdog_nmi_probe(void)
>  {
>  	return hardlockup_detector_perf_init();
> @@ -839,16 +845,75 @@ static void __init watchdog_sysctl_init(void)
>  #define watchdog_sysctl_init() do { } while (0)
>  #endif /* CONFIG_SYSCTL */
>  
> +static void lockup_detector_delay_init(struct work_struct *work);
> +bool allow_lockup_detector_init_retry __initdata;
> +
> +static struct work_struct detector_work __initdata =
> +		__WORK_INITIALIZER(detector_work, lockup_detector_delay_init);
> +
> +static void __init lockup_detector_delay_init(struct work_struct *work)
> +{
> +	int ret;
> +
> +	ret = watchdog_nmi_probe();
> +	if (ret) {
> +		pr_info("Delayed init of the lockup detector failed: %d\n", ret);
> +		pr_info("Perf NMI watchdog permanently disabled\n");
> +		return;
> +	}
> +
> +	nmi_watchdog_available = true;
> +	lockup_detector_setup();

The name of the variable "allow_lockup_detector_init_retry" is
slightly confusing in this context. I suggest adding a comment:

	/* Retry is not needed any longer. */
> +	allow_lockup_detector_init_retry = false;


> +}
> +
> +/*
> + * retry_lockup_detector_init - retry init lockup detector if possible.
> + *
> + * Only take effect when allow_lockup_detector_init_retry is true, which
> + * means it must call between lockup_detector_init() and lockup_detector_check().
> + * Be aware that caller must have __init attribute, relative functions
> + * will be freed after kernel initialization.
> + */
> +void __init retry_lockup_detector_init(void)
> +{
> +	if (!allow_lockup_detector_init_retry)
> +		return;
> +
> +	queue_work_on(__smp_processor_id(), system_wq, &detector_work);
> +}
> +
> +/* Ensure the check is called after the initialization of driver */
> +static int __init lockup_detector_check(void)
> +{
> +	/* Make sure no work is pending. */
> +	flush_work(&detector_work);

This is racy. We should first disable
"allow_lockup_detector_init_retry" to make sure
that retry_lockup_detector_init() will not queue
the work any longer.

> +	if (!allow_lockup_detector_init_retry)
> +		return 0;
> +
> +	allow_lockup_detector_init_retry = false;
> +	pr_info("Delayed init checking failed, please check your driver.\n");

This prints that the init failed without checking the state
of the watchdog. I guess that it works but it is far from
obvious and any further change might break it.

Is the message really needed?
Does it help?
What exact driver needs checking?

IMHO, it just makes the code more complicated and
it is not worth it.

I suggest keeping it simple:

/*
 * Ensure the check is called after the initialization of driver
 * and before removing init code.
 */
static int __init lockup_detector_check(void)
{
	allow_lockup_detector_init_retry = false;
	flush_work(&detector_work);

	return 0;
}

or if you really want that message then I would do:

/*
 * Ensure the check is called after the initialization of driver
 * and before removing init code.
 */
static int __init lockup_detector_check(void)
{
	bool delayed_init_allowed = allow_lockup_detector_init_retry;

	allow_lockup_detector_init_retry = false;
	flush_work(&detector_work);

	if (delayed_init_allowed && !nmi_watchdog_available)
		pr_info("Delayed init failed. Please, check your driver.\n");

	return 0;
}

Best Regards,
Petr


* Re: [PATCH v3 5/5] arm64: Enable perf events based hard lockup detector
  2022-04-04 14:17   ` Petr Mladek
@ 2022-04-05 12:53     ` Lecopzer Chen
  2022-04-05 14:36       ` Petr Mladek
  0 siblings, 1 reply; 18+ messages in thread
From: Lecopzer Chen @ 2022-04-05 12:53 UTC (permalink / raw)
  To: pmladek
  Cc: acme, akpm, alexander.shishkin, catalin.marinas, davem, jolsa,
	jthierry, keescook, kernelfans, lecopzer.chen, linux-arm-kernel,
	linux-kernel, linux-mediatek, linux-perf-users, mark.rutland,
	masahiroy, matthias.bgg, maz, mcgrof, mingo, namhyung,
	nixiaoming, peterz, sparclinux, sumit.garg, wangqing, will,
	yj.chiang

 
> On Thu 2022-03-24 22:14:05, Lecopzer Chen wrote:
> > With the recent feature that enables perf events to use pseudo NMIs
> > as interrupts on platforms which support GICv3 or later, it is now
> > possible to enable the hard lockup detector (or NMI watchdog) on arm64
> > platforms. So enable the corresponding support.
> > 
> > One thing to note here is that normally the lockup detector is initialized
> > just after the early initcalls, but the PMU on arm64 comes up much later as
> > a device_initcall(). To cope with that, override watchdog_nmi_probe() to
> > let the watchdog framework know the PMU is not ready, and have the PMU
> > driver inform the framework to re-initialize lockup detection once the PMU
> > has been initialized.
> > 
> > [1]: http://lore.kernel.org/linux-arm-kernel/1610712101-14929-1-git-send-email-sumit.garg@linaro.org
> > 
> > --- /dev/null
> > +++ b/arch/arm64/kernel/watchdog_hld.c
> > @@ -0,0 +1,37 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +#include <linux/nmi.h>
> > +#include <linux/cpufreq.h>
> > +#include <linux/perf/arm_pmu.h>
> > +
> > +/*
> > + * Safe maximum CPU frequency in case a particular platform doesn't implement
> > + * cpufreq driver. Although, architecture doesn't put any restrictions on
> > + * maximum frequency but 5 GHz seems to be safe maximum given the available
> > + * Arm CPUs in the market which are clocked much less than 5 GHz. On the other
> > + * hand, we can't make it much higher as it would lead to a large hard-lockup
> > + * detection timeout on parts which are running slower (eg. 1GHz on
> > + * Developerbox) and doesn't possess a cpufreq driver.
> > + */
> > +#define SAFE_MAX_CPU_FREQ	5000000000UL // 5 GHz
> > +u64 hw_nmi_get_sample_period(int watchdog_thresh)
> > +{
> > +	unsigned int cpu = smp_processor_id();
> > +	unsigned long max_cpu_freq;
> > +
> > +	max_cpu_freq = cpufreq_get_hw_max_freq(cpu) * 1000UL;
> > +	if (!max_cpu_freq)
> > +		max_cpu_freq = SAFE_MAX_CPU_FREQ;
> > +
> > +	return (u64)max_cpu_freq * watchdog_thresh;
> > +}
> 
> This change is not mentioned in the commit message.
> Please, put it into a separate patch.


Actually, this comes from
[1]: http://lore.kernel.org/linux-arm-kernel/1610712101-14929-1-git-send-email-sumit.garg@linaro.org
and I didn't touch the commit message of the original patch.
But of course, I could improve it with a proper description if
anyone thinks it's not good enough.

Do you mean putting the function hw_nmi_get_sample_period() into a 6th patch?
In the view of "arm64 uses delayed init with all the functionality it needs to set up",
IMO, it makes sense to me to put it all in a single patch.

But if you still think this should be put into a separate patch, I'll do it:)
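
(As an aside on what that function computes: the sample period is just
watchdog_thresh seconds converted into CPU cycles. With the default
watchdog_thresh of 10 and the 5 GHz fallback, it returns
5,000,000,000 * 10 = 5e10 cycles; on a part that really runs at 1 GHz
without a cpufreq driver, counting 5e10 cycles takes ~50 seconds of
wall-clock time, which is the "large hard-lockup detection timeout"
the comment warns about.)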


> 
> > +int __init watchdog_nmi_probe(void)
> > +{
> > +	if (!allow_lockup_detector_init_retry)
> > +		return -EBUSY;
> 
> How do you know that you should return -EBUSY
> when retry is not enabled?
> 
> I guess that it is an optimization to make it fast
> during the first call. But the logic is far from
> obvious.
> 

Yes, you can see this as an optimization: the arm64 PMU is not ready
during lockup_detector_init(), so that first watchdog_nmi_probe() must fail.

Thus we only want to do the real watchdog_nmi_probe() work in the delayed
init, so if we are not in that state (allow_lockup_detector_init_retry=true),
we just tell the core to retry later by returning -EBUSY.

If it's unclear, maybe a brief comment can be added like this:

+	/* arm64 is only able to initialize the lockup detector during delayed init */
+	if (!allow_lockup_detector_init_retry)
+		return -EBUSY;




> > +
> > +	if (!arm_pmu_irq_is_nmi())
> > +		return -ENODEV;
> > +
> > +	return hardlockup_detector_perf_init();
> > +}
> 
> Is this just an optimization or is it really needed?
> Why this was not needed in v2 patchset?
> 
> If it is just an optimization then I would remove it.
> IMHO, it just adds confusion and it is not worth it.
> 

It was a mistake when I rebased v2; this should have been included in v2
but I missed it.

As for the arm_pmu_irq_is_nmi() check, we do need it, because arm64 needs
Pseudo-NMI explicitly turned on to support the base functionality for NMIs.

hardlockup_detector_perf_init() will succeed even if we haven't turned
Pseudo-NMI on; however, the PMU interrupt will then act like a normal
interrupt instead of an NMI, and the hardlockup detector would be broken.
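
So maybe a commented version of the probe would help; this is the same
code as in this patch, just a sketch of the comments I would add:

int __init watchdog_nmi_probe(void)
{
	/*
	 * The arm64 PMU is never ready at lockup_detector_init() time,
	 * so only the delayed (retried) probe can succeed. Returning
	 * -EBUSY asks the watchdog core to retry later.
	 */
	if (!allow_lockup_detector_init_retry)
		return -EBUSY;

	/*
	 * If the PMU irq is not an NMI (Pseudo-NMI disabled), the perf
	 * event could still be created, but its overflow handler would
	 * run as a normal interrupt and could never catch a hard lockup.
	 */
	if (!arm_pmu_irq_is_nmi())
		return -ENODEV;

	return hardlockup_detector_perf_init();
}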


Thanks for all the comments.

BRs,
Lecopzer

* Re: [PATCH v3 4/5] kernel/watchdog: Adapt the watchdog_hld interface for async model
  2022-04-04 14:41   ` Petr Mladek
@ 2022-04-05 13:35     ` Lecopzer Chen
  2022-04-05 15:19       ` Petr Mladek
  0 siblings, 1 reply; 18+ messages in thread
From: Lecopzer Chen @ 2022-04-05 13:35 UTC (permalink / raw)
  To: pmladek
  Cc: acme, akpm, alexander.shishkin, catalin.marinas, davem, jolsa,
	jthierry, keescook, kernelfans, lecopzer.chen, linux-arm-kernel,
	linux-kernel, linux-mediatek, linux-perf-users, mark.rutland,
	masahiroy, matthias.bgg, maz, mcgrof, mingo, namhyung,
	nixiaoming, peterz, sparclinux, sumit.garg, wangqing, will,
	yj.chiang

> On Thu 2022-03-24 22:14:04, Lecopzer Chen wrote:
> > When lockup_detector_init()->watchdog_nmi_probe() runs, the PMU may not
> > be ready yet. E.g. on arm64, the PMU is not ready until
> > device_initcall(armv8_pmu_driver_init). And it is deeply integrated
> > with the driver model and cpuhp. Hence it is hard to push this
> > initialization before smp_init().
> > 
> > But it is easy to take the opposite approach and try to initialize
> > the watchdog once again later.
> > The delayed probe is called using workqueues. It needs to allocate
> > memory and must proceed in a normal context.
> > The delayed probe is queued only when the early one returns -EBUSY,
> > the return code used when the PMU is not ready yet.
> > 
> > Provide an API, retry_lockup_detector_init(), for anyone who needs to
> > delay the init of the lockup detector.
> > 
> > The original assumption is: nobody should use the delayed probe after
> > lockup_detector_check(), which has the __init attribute.
> > That is, anyone using this API must call it between lockup_detector_init()
> > and lockup_detector_check(), and the caller must have the __init attribute.
> > 
> > Co-developed-by: Pingfan Liu <kernelfans@gmail.com>
> > Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
> > Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> > Suggested-by: Petr Mladek <pmladek@suse.com>
> > ---
> >  include/linux/nmi.h |  3 ++
> >  kernel/watchdog.c   | 69 +++++++++++++++++++++++++++++++++++++++++++--
> >  2 files changed, 70 insertions(+), 2 deletions(-)
> > 
> > diff --git a/include/linux/nmi.h b/include/linux/nmi.h
> > index b7bcd63c36b4..1d84c9a8b460 100644
> > --- a/include/linux/nmi.h
> > +++ b/include/linux/nmi.h
> > @@ -118,6 +118,9 @@ static inline int hardlockup_detector_perf_init(void) { return 0; }
> >  
> >  void watchdog_nmi_stop(void);
> >  void watchdog_nmi_start(void);
> > +
> > +extern bool allow_lockup_detector_init_retry;
> > +void retry_lockup_detector_init(void);
> >  int watchdog_nmi_probe(void);
> >  void watchdog_nmi_enable(unsigned int cpu);
> >  void watchdog_nmi_disable(unsigned int cpu);
> > diff --git a/kernel/watchdog.c b/kernel/watchdog.c
> > index b71d434cf648..308ba29f8f0f 100644
> > --- a/kernel/watchdog.c
> > +++ b/kernel/watchdog.c
> > @@ -103,7 +103,13 @@ void __weak watchdog_nmi_disable(unsigned int cpu)
> >  	hardlockup_detector_perf_disable();
> >  }
> >  
> > -/* Return 0, if a NMI watchdog is available. Error code otherwise */
> > +/*
> > + * Arch specific API.
> > + *
> > + * Return 0 when NMI watchdog is available, negative value otherwise.
> > + * The error code -EBUSY is special. It means that a deferred probe
> > + * might succeed later.
> > + */
> >  int __weak __init watchdog_nmi_probe(void)
> >  {
> >  	return hardlockup_detector_perf_init();
> > @@ -839,16 +845,75 @@ static void __init watchdog_sysctl_init(void)
> >  #define watchdog_sysctl_init() do { } while (0)
> >  #endif /* CONFIG_SYSCTL */
> >  
> > +static void lockup_detector_delay_init(struct work_struct *work);
> > +bool allow_lockup_detector_init_retry __initdata;
> > +
> > +static struct work_struct detector_work __initdata =
> > +		__WORK_INITIALIZER(detector_work, lockup_detector_delay_init);
> > +
> > +static void __init lockup_detector_delay_init(struct work_struct *work)
> > +{
> > +	int ret;
> > +
> > +	ret = watchdog_nmi_probe();
> > +	if (ret) {
> > +		pr_info("Delayed init of the lockup detector failed: %d\n", ret);
> > +		pr_info("Perf NMI watchdog permanently disabled\n");
> > +		return;
> > +	}
> > +
> > +	nmi_watchdog_available = true;
> > +	lockup_detector_setup();
> 
> The name of the variable "allow_lockup_detector_init_retry" is
> slightly confusing in this context. I suggest adding a comment:
> 
> 	/* Retry is not needed any longer. */
> > +	allow_lockup_detector_init_retry = false;
> 

Got it, I'll add it, thanks.


> 
> > +}
> > +
> > +/*
> > + * retry_lockup_detector_init - retry init lockup detector if possible.
> > + *
> > + * Only take effect when allow_lockup_detector_init_retry is true, which
> > + * means it must call between lockup_detector_init() and lockup_detector_check().
> > + * Be aware that caller must have __init attribute, relative functions
> > + * will be freed after kernel initialization.
> > + */
> > +void __init retry_lockup_detector_init(void)
> > +{
> > +	if (!allow_lockup_detector_init_retry)
> > +		return;
> > +
> > +	queue_work_on(__smp_processor_id(), system_wq, &detector_work);
> > +}
> > +
> > +/* Ensure the check is called after the initialization of driver */
> > +static int __init lockup_detector_check(void)
> > +{
> > +	/* Make sure no work is pending. */
> > +	flush_work(&detector_work);
> 
> This is racy. We should first disable
> "allow_lockup_detector_init_retry" to make sure
> that retry_lockup_detector_init() will not queue
> the work any longer.

But disabling it before flush_work() will make
    lockup_detector_delay_init() ->
    watchdog_nmi_probe() ->
    +	if (!allow_lockup_detector_init_retry)
    +		return -EBUSY;
fail with -EBUSY.

Please check the code I provide below.


> 
> > +	if (!allow_lockup_detector_init_retry)
> > +		return 0;
> > +
> > +	allow_lockup_detector_init_retry = false;
> > +	pr_info("Delayed init checking failed, please check your driver.\n");
> 
> This prints that the init failed without checking the state
> of the watchdog. I guess that it works but it is far from
> obvious and any further change might break it.
> 
> Is the message really needed?
> Does it help?
> What exact driver needs checking?
> 
> IMHO, it just makes the code more complicated and
> it is not worth it.
> 

I think you're right; the message was needed in the v2 patch because we
did another retry in lockup_detector_check().
But now we only do the "checking", and the failure message in
lockup_detector_delay_init() should be enough.


> I suggest keeping it simple:
> 
> /*
>  * Ensure the check is called after the initialization of driver
>  * and before removing init code.
>  */
> static int __init lockup_detector_check(void)
> {
> 	allow_lockup_detector_init_retry = false;
> 	flush_work(&detector_work);
> 
> 	return 0;
> }
> 


Combined with the racy problem above, let me limit
retry_lockup_detector_init() so that it can be called only once.

How about:
...
static bool __init delayed_init_allowed = true;
...
/*
 * retry_lockup_detector_init - retry init lockup detector if possible.
 *
 * Only take effect when allow_lockup_detector_init_retry is true, which
 * means it must call between lockup_detector_init() and lockup_detector_check().
 * Be aware that caller must have __init attribute, relative functions
 * will be freed after kernel initialization.
 */
void __init retry_lockup_detector_init(void)
{
	if (!allow_lockup_detector_init_retry || !delayed_init_allowed)
		return;

	/*
	 * We shouldn't queue the delayed init work twice, to avoid
	 * any unwanted race.
	 */
	delayed_init_allowed = false;
	queue_work_on(__smp_processor_id(), system_wq, &detector_work);
}


/*
 * Ensure the check is called after the initialization of driver
 * and before removing init code.
 */
static int __init lockup_detector_check(void)
{
	delayed_init_allowed = false;
	flush_work(&detector_work);
	allow_lockup_detector_init_retry = false;

	return 0;
}



> or if you really want that message then I would do:
> 
> /*
>  * Ensure the check is called after the initialization of driver
>  * and before removing init code.
>  */
> static int __init lockup_detector_check(void)
> {
> 	bool delayed_init_allowed = allow_lockup_detector_init_retry;
> 
> 	allow_lockup_detector_init_retry = false;
> 	flush_work(&detector_work);
> 
> 	if (delayed_init_allowed && !nmi_watchdog_available)
> 		pr_info("Delayed init failed. Please, check your driver.\n");
> 
> 	return 0;
> }
> 


thanks
BRs,
Lecopzer


* Re: [PATCH v3 5/5] arm64: Enable perf events based hard lockup detector
  2022-04-05 12:53     ` Lecopzer Chen
@ 2022-04-05 14:36       ` Petr Mladek
  2022-04-07 16:59         ` Lecopzer Chen
  0 siblings, 1 reply; 18+ messages in thread
From: Petr Mladek @ 2022-04-05 14:36 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: acme, akpm, alexander.shishkin, catalin.marinas, davem, jolsa,
	jthierry, keescook, kernelfans, linux-arm-kernel, linux-kernel,
	linux-mediatek, linux-perf-users, mark.rutland, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	sparclinux, sumit.garg, wangqing, will, yj.chiang

On Tue 2022-04-05 20:53:04, Lecopzer Chen wrote:
>  
> > On Thu 2022-03-24 22:14:05, Lecopzer Chen wrote:
> > > With the recent feature that enables perf events to use pseudo NMIs
> > > as interrupts on platforms which support GICv3 or later, it is now
> > > possible to enable the hard lockup detector (or NMI watchdog) on arm64
> > > platforms. So enable the corresponding support.
> > > 
> > > One thing to note here is that normally the lockup detector is initialized
> > > just after the early initcalls, but the PMU on arm64 comes up much later as
> > > a device_initcall(). To cope with that, override watchdog_nmi_probe() to
> > > let the watchdog framework know the PMU is not ready, and have the PMU
> > > driver inform the framework to re-initialize lockup detection once the PMU
> > > has been initialized.
> > > 
> > > [1]: http://lore.kernel.org/linux-arm-kernel/1610712101-14929-1-git-send-email-sumit.garg@linaro.org
> > > 
> > > --- /dev/null
> > > +++ b/arch/arm64/kernel/watchdog_hld.c
> > > @@ -0,0 +1,37 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +#include <linux/nmi.h>
> > > +#include <linux/cpufreq.h>
> > > +#include <linux/perf/arm_pmu.h>
> > > +
> > > +/*
> > > + * Safe maximum CPU frequency in case a particular platform doesn't implement
> > > + * cpufreq driver. Although, architecture doesn't put any restrictions on
> > > + * maximum frequency but 5 GHz seems to be safe maximum given the available
> > > + * Arm CPUs in the market which are clocked much less than 5 GHz. On the other
> > > + * hand, we can't make it much higher as it would lead to a large hard-lockup
> > > + * detection timeout on parts which are running slower (eg. 1GHz on
> > > + * Developerbox) and doesn't possess a cpufreq driver.
> > > + */
> > > +#define SAFE_MAX_CPU_FREQ	5000000000UL // 5 GHz
> > > +u64 hw_nmi_get_sample_period(int watchdog_thresh)
> > > +{
> > > +	unsigned int cpu = smp_processor_id();
> > > +	unsigned long max_cpu_freq;
> > > +
> > > +	max_cpu_freq = cpufreq_get_hw_max_freq(cpu) * 1000UL;
> > > +	if (!max_cpu_freq)
> > > +		max_cpu_freq = SAFE_MAX_CPU_FREQ;
> > > +
> > > +	return (u64)max_cpu_freq * watchdog_thresh;
> > > +}
> > 
> > This change is not mentioned in the commit message.
> > Please, put it into a separate patch.
> 
> 
> Actually, this comes from
> [1]: http://lore.kernel.org/linux-arm-kernel/1610712101-14929-1-git-send-email-sumit.garg@linaro.org
> and I didn't touch the commit message of the original patch.
> But of course, I could improve it with a proper description if
> anyone thinks it's not good enough.

I see.

> Do you mean putting the function hw_nmi_get_sample_period() into a 6th
> patch?
> In the view of "arm64 uses delayed init with all the functionality it needs to set up",
> IMO, it makes sense to me to put it all in a single patch.

Or you could split it in two patches and add
hw_nmi_get_sample_period() in the earlier patch.


> But if you still think this should be put into a separate patch, I'll do it:)

It is always better to split the changes whenever possible. It makes
the review easier. And it also helps to find the real culprit of
a regression using bisection.


> > > +int __init watchdog_nmi_probe(void)
> > > +{
> > > +	if (!allow_lockup_detector_init_retry)
> > > +		return -EBUSY;
> > 
> > How do you know that you should return -EBUSY
> > when retry is not enabled?
> > 
> > I guess that it is an optimization to make it fast
> > during the first call. But the logic is far from
> > obvious.
> > 
> 
> Yes, you can see this as an optimization: the arm64 PMU is not ready
> during lockup_detector_init(), so that first watchdog_nmi_probe() must fail.
>
> Thus we only want to do the real watchdog_nmi_probe() work in the delayed
> init, so if we are not in that state (allow_lockup_detector_init_retry=true),
> we just tell the core to retry later by returning -EBUSY.
> 
> If it's unclear

Yes, it is far from obvious.

> maybe a brief comment can be added like this:
> 
> +	/* arm64 is only able to initialize the lockup detector during delayed init */
> +	if (!allow_lockup_detector_init_retry)
> +		return -EBUSY;

No, please, remove this optimization. It just causes problems:

   + it requires a comment here because the logic is far from obvious.

   + it is the reason why we need another variable to avoid the race in
     lockup_detector_check(), see the discussion about the 4th patch.


> > > +
> > > +	if (!arm_pmu_irq_is_nmi())
> > > +		return -ENODEV;
> > > +
> > > +	return hardlockup_detector_perf_init();
> > > +}
> > 
> As for the arm_pmu_irq_is_nmi() check, we do need it, because arm64 needs
> Pseudo-NMI explicitly turned on to support the base functionality for NMIs.
>
> hardlockup_detector_perf_init() will succeed even if we haven't turned
> Pseudo-NMI on; however, the PMU interrupt will then act like a normal
> interrupt instead of an NMI, and the hardlockup detector would be broken.

I see. Please, explain this in a comment. It is another thing
that is far from obvious.

Best Regards,
Petr


* Re: [PATCH v3 4/5] kernel/watchdog: Adapt the watchdog_hld interface for async model
  2022-04-05 13:35     ` Lecopzer Chen
@ 2022-04-05 15:19       ` Petr Mladek
  2022-04-07 16:21         ` Lecopzer Chen
  0 siblings, 1 reply; 18+ messages in thread
From: Petr Mladek @ 2022-04-05 15:19 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: acme, akpm, alexander.shishkin, catalin.marinas, davem, jolsa,
	jthierry, keescook, kernelfans, linux-arm-kernel, linux-kernel,
	linux-mediatek, linux-perf-users, mark.rutland, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	sparclinux, sumit.garg, wangqing, will, yj.chiang

On Tue 2022-04-05 21:35:03, Lecopzer Chen wrote:
> > On Thu 2022-03-24 22:14:04, Lecopzer Chen wrote:
> > > When lockup_detector_init()->watchdog_nmi_probe() runs, the PMU may not
> > > be ready yet. E.g. on arm64, the PMU is not ready until
> > > device_initcall(armv8_pmu_driver_init). And it is deeply integrated
> > > with the driver model and cpuhp. Hence it is hard to push this
> > > initialization before smp_init().
> > > 
> > > But it is easy to take the opposite approach and try to initialize
> > > the watchdog once again later.
> > > The delayed probe is called using workqueues. It needs to allocate
> > > memory and must proceed in a normal context.
> > > The delayed probe is queued only when the early one returns -EBUSY,
> > > the return code used when the PMU is not ready yet.
> > > 
> > > Provide an API, retry_lockup_detector_init(), for anyone who needs to
> > > delay the init of the lockup detector.
> > > 
> > > The original assumption is: nobody should use the delayed probe after
> > > lockup_detector_check(), which has the __init attribute.
> > > That is, anyone using this API must call it between lockup_detector_init()
> > > and lockup_detector_check(), and the caller must have the __init attribute.
> > > 
> > > --- a/kernel/watchdog.c
> > > +++ b/kernel/watchdog.c
> > > +}
> > > +
> > > +/*
> > > + * retry_lockup_detector_init - retry init lockup detector if possible.
> > > + *
> > > + * Only take effect when allow_lockup_detector_init_retry is true, which
> > > + * means it must call between lockup_detector_init() and lockup_detector_check().
> > > + * Be aware that caller must have __init attribute, relative functions
> > > + * will be freed after kernel initialization.
> > > + */
> > > +void __init retry_lockup_detector_init(void)
> > > +{
> > > +	if (!allow_lockup_detector_init_retry)
> > > +		return;
> > > +
> > > +	queue_work_on(__smp_processor_id(), system_wq, &detector_work);
> > > +}
> > > +
> > > +/* Ensure the check is called after the initialization of driver */
> > > +static int __init lockup_detector_check(void)
> > > +{
> > > +	/* Make sure no work is pending. */
> > > +	flush_work(&detector_work);
> > 
> > This is racy. We should first disable
> > "allow_lockup_detector_init_retry" to make sure
> > that retry_lockup_detector_init() will not queue
> > the work any longer.
> 
> But disabling it before flush_work() will make
>     lockup_detector_delay_init() ->
>     watchdog_nmi_probe() ->
>     +	if (!allow_lockup_detector_init_retry)
>     +		return -EBUSY;
> fail with -EBUSY.

I see. It is exactly the reason why I suggest to remove the
optimization and keep the code simple.
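To spell the race out, here is a hypothetical interleaving (the names
are the ones from the patch above; nothing below is new code):

/*
 * lockup_detector_check()              a later __init caller
 * ------------------------------       ------------------------------
 * flush_work(&detector_work);
 *                                      retry_lockup_detector_init();
 *                                        the flag is still true, so the
 *                                        work is queued AFTER the flush
 *
 * The queued work can then run after the late initcalls, i.e. after
 * the __init code and data it references have been freed.
 */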

> how about:
> ...
> static bool __initdata delayed_init_allowed = true;
> ...
> /*
>  * retry_lockup_detector_init - retry the lockup detector init if possible.
>  *
>  * Only takes effect when allow_lockup_detector_init_retry is true, which
>  * means it must be called between lockup_detector_init() and lockup_detector_check().
>  * Be aware that the caller must have the __init attribute; related
>  * functions will be freed after kernel initialization.
>  */
> void __init retry_lockup_detector_init(void)
> {
> 	if (!allow_lockup_detector_init_retry || !delayed_init_allowed)
> 		return;
> 
> 	/*
> 	 * We shouldn't queue the delayed init work twice, to avoid
> 	 * any unwanted races.
> 	 */
> 	delayed_init_allowed = false;

Grrr, this is so complicated and confusing. It might be because of
badly selected variable names or comments. But I think that it is
simply a bad approach.

OK, you suggest two variables. If I get it correctly:

    + The variable "delayed_init_allowed"
     tries to prevent the race in lockup_detector_check().

     It will make sure that the work could not be queued after
     flush_work() finishes.

     Is this obvious from the comment?
     Is this obvious from the variable name?

     I am sorry. But it is not obvious to me. I understand it only
     because I see it together in this mail. It will be pretty
     hard to get it from the code when I see it one year later.


   + The variable "allow_lockup_detector_init_retry" has an unclear
     meaning. It might mean:

	+ watchdog_nmi_probe() ended with -EBUSY in
	  lockup_detector_init() and we can try the delayed init.

	+ but it also means that watchdog_nmi_probe() succeeded in
	  lockup_detector_delay_init() and there is no need to
	  try the delayed init any longer.

       Is this obvious from the variable name?
       Is it explained anywhere?
       Is it easy to understand?

       No, from my POV. It is a really bad idea to have a single
       variable with so many meanings.


And this is my problem with this approach. There was one variable with
an unclear meaning. And you are trying to fix it with two variables
with unclear meanings.

> 	queue_work_on(__smp_processor_id(), system_wq, &detector_work);
> }
> 
> 
> /*
>  * Ensure the check is called after driver initialization
>  * and before the init code is removed.
>  */
> static int __init lockup_detector_check(void)
> {
> 	delayed_init_allowed = false;
> 	flush_work(&detector_work);
> 	allow_lockup_detector_init_retry = false;
> 
> 	return 0;
> }

No, please keep it simple. Just have one variable that will say
whether we are allowed to queue the work:

  + It will be allowed when watchdog_nmi_probe() ended
    with -EBUSY in lockup_detector_init()

  + It will no longer be allowed when watchdog_nmi_probe()
    succeeded or when lockup_detector_check() flushes
    the pending work.
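Restated as a lifecycle sketch (only an illustration of the proposal
above, not code from the series):

/*
 * boot default:
 *     allow_lockup_detector_init_retry = false;
 *
 * lockup_detector_init():
 *     if (watchdog_nmi_probe() == -EBUSY)
 *         allow_lockup_detector_init_retry = true;    retry allowed
 *
 * lockup_detector_delay_init() (the queued work):
 *     if (watchdog_nmi_probe() == 0)
 *         allow_lockup_detector_init_retry = false;   done, no retry
 *
 * lockup_detector_check() (late_initcall_sync):
 *     allow_lockup_detector_init_retry = false;       no more retries
 *     flush_work(&detector_work);
 */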


Best Regards,
Petr


* Re: [PATCH v3 4/5] kernel/watchdog: Adapt the watchdog_hld interface for async model
  2022-04-05 15:19       ` Petr Mladek
@ 2022-04-07 16:21         ` Lecopzer Chen
  0 siblings, 0 replies; 18+ messages in thread
From: Lecopzer Chen @ 2022-04-07 16:21 UTC (permalink / raw)
  To: pmladek
  Cc: acme, akpm, alexander.shishkin, catalin.marinas, davem, jolsa,
	jthierry, keescook, kernelfans, lecopzer.chen, linux-arm-kernel,
	linux-kernel, linux-mediatek, linux-perf-users, mark.rutland,
	masahiroy, matthias.bgg, maz, mcgrof, mingo, namhyung,
	nixiaoming, peterz, sparclinux, sumit.garg, wangqing, will,
	yj.chiang

> [...]
> 
> Grrr, this is so complicated and confusing. It might be because of
> badly selected variable names or comments. But I think that it is
> simply a bad approach.
> 
> OK, you suggest two variables. If I get it correctly:
> 
>     + The variable "delayed_init_allowed"
>      tries to prevent the race in lockup_detector_check().
> 
>      It will make sure that the work could not be queued after
>      flush_work() finishes.
> 
>      Is this obvious from the comment?
>      Is this obvious from the variable name?
> 
>      I am sorry. But it is not obvious to me. I understand it only
>      because I see it together in this mail. It will be pretty
>      hard to get it from the code when I see it one year later.
> 
> 
>    + The variable "allow_lockup_detector_init_retry" has an unclear
>      meaning. It might mean:
> 
> 	+ watchdog_nmi_probe() ended with -EBUSY in
> 	  lockup_detector_init() and we can try the delayed init.
> 
> 	+ but it also means that watchdog_nmi_probe() succeeded in
> 	  lockup_detector_delay_init() and there is no need to
> 	  try the delayed init any longer.
> 
>        Is this obvious from the variable name?
>        Is it explained anywhere?
>        Is it easy to understand?
> 
>        No, from my POV. It is a really bad idea to have a single
>        variable with so many meanings.
> 
> 
> And this is my problem with this approach. There was one variable with
> an unclear meaning. And you are trying to fix it with two variables
> with unclear meanings.
> 

I really appreciate your reply, many thanks.

From my point of view, the name "delayed_init_allowed" means that the
whole system is currently in a state where it is able (allowed) to do
the delayed init.
"allow_lockup_detector_init_retry" means that the delayed init is ready
to retry registering the NMI watchdog.

Thus the meaning of delayed_init_"allowed" is: we are allowed to
do the delayed init, including
    1. the initialization of the delayed init
    2. the retry of the delayed init

I'm sorry that I didn't express the meaning clearly.
In the next version of the patch I'll define it in detail, not only in
a brief comment, and I hope that will make the review much easier.
This only explains the thinking behind how I chose the names.
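Restating the intended meanings as documentation comments (just a
sketch of my intent, not code from the patch):

/*
 * delayed_init_allowed:
 *     The system is still in a phase where the delayed init may happen
 *     at all, i.e. both setting it up and retrying it. Cleared once the
 *     work has been queued, and in lockup_detector_check().
 *
 * allow_lockup_detector_init_retry:
 *     The early probe failed with -EBUSY, so the delayed init is ready
 *     to retry registering the NMI watchdog. Cleared when the retry
 *     succeeds.
 */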

So let me get back to the discussion below.



> > 	queue_work_on(__smp_processor_id(), system_wq, &detector_work);
> > }
> > 
> > 
> > /*
> >  * Ensure the check is called after driver initialization
> >  * and before the init code is removed.
> >  */
> > static int __init lockup_detector_check(void)
> > {
> > 	delayed_init_allowed = false;
> > 	flush_work(&detector_work);
> > 	allow_lockup_detector_init_retry = false;
> > 
> > 	return 0;
> > }
> 
> No, please keep it simple. Just have one variable that will say
> whether we are allowed to queue the work:
> 
>   + It will be allowed when watchdog_nmi_probe() ended
>     with -EBUSY in lockup_detector_init()
> 
> >   + It will no longer be allowed when watchdog_nmi_probe()
> >     succeeded or when lockup_detector_check() flushes
> >     the pending work.
> 

Okay, let me think about it. I'll try to find a better solution that
only uses one variable.
It strongly depends on how users use it in the 5th patch, so I'll give
a further reply there.


thanks
BRs,
Lecopzer






* Re: [PATCH v3 5/5] arm64: Enable perf events based hard lockup detector
  2022-04-05 14:36       ` Petr Mladek
@ 2022-04-07 16:59         ` Lecopzer Chen
  2022-04-13 10:25           ` Petr Mladek
  0 siblings, 1 reply; 18+ messages in thread
From: Lecopzer Chen @ 2022-04-07 16:59 UTC (permalink / raw)
  To: pmladek
  Cc: acme, akpm, alexander.shishkin, catalin.marinas, davem, jolsa,
	jthierry, keescook, kernelfans, lecopzer.chen, linux-arm-kernel,
	linux-kernel, linux-mediatek, linux-perf-users, mark.rutland,
	masahiroy, matthias.bgg, maz, mcgrof, mingo, namhyung,
	nixiaoming, peterz, sparclinux, sumit.garg, wangqing, will,
	yj.chiang


> On Tue 2022-04-05 20:53:04, Lecopzer Chen wrote:
> >  
> > > On Thu 2022-03-24 22:14:05, Lecopzer Chen wrote:
> > > > With the recent feature added to enable perf events to use pseudo NMIs
> > > > as interrupts on platforms which support GICv3 or later, it's now
> > > > possible to enable the hard lockup detector (or NMI watchdog) on arm64
> > > > platforms. So enable the corresponding support.
> > > > 
> > > > One thing to note here is that normally the lockup detector is initialized
> > > > just after the early initcalls, but the PMU on arm64 comes up much later as
> > > > a device_initcall(). To cope with that, override watchdog_nmi_probe() to
> > > > let the watchdog framework know the PMU is not ready, and inform the
> > > > framework to re-initialize lockup detection once the PMU has been initialized.
> > > > 
> > > > [1]: http://lore.kernel.org/linux-arm-kernel/1610712101-14929-1-git-send-email-sumit.garg@linaro.org
> > > > 
> > > > --- /dev/null
> > > > +++ b/arch/arm64/kernel/watchdog_hld.c
> > > > @@ -0,0 +1,37 @@
> > > > +// SPDX-License-Identifier: GPL-2.0
> > > > +#include <linux/nmi.h>
> > > > +#include <linux/cpufreq.h>
> > > > +#include <linux/perf/arm_pmu.h>
> > > > +
> > > > +/*
> > > > + * Safe maximum CPU frequency in case a particular platform doesn't implement
> > > > + * cpufreq driver. Although, architecture doesn't put any restrictions on
> > > > + * maximum frequency but 5 GHz seems to be safe maximum given the available
> > > > + * Arm CPUs in the market which are clocked much less than 5 GHz. On the other
> > > > + * hand, we can't make it much higher as it would lead to a large hard-lockup
> > > > + * detection timeout on parts which are running slower (eg. 1GHz on
> > > > + * Developerbox) and doesn't possess a cpufreq driver.
> > > > + */
> > > > +#define SAFE_MAX_CPU_FREQ	5000000000UL // 5 GHz
> > > > +u64 hw_nmi_get_sample_period(int watchdog_thresh)
> > > > +{
> > > > +	unsigned int cpu = smp_processor_id();
> > > > +	unsigned long max_cpu_freq;
> > > > +
> > > > +	max_cpu_freq = cpufreq_get_hw_max_freq(cpu) * 1000UL;
> > > > +	if (!max_cpu_freq)
> > > > +		max_cpu_freq = SAFE_MAX_CPU_FREQ;
> > > > +
> > > > +	return (u64)max_cpu_freq * watchdog_thresh;
> > > > +}
> > > 
> > > This change is not mentioned in the commit message.
> > > Please, put it into a separate patch.
> > 
> > 
> > Actually, this comes from
> > [1]: http://lore.kernel.org/linux-arm-kernel/1610712101-14929-1-git-send-email-sumit.garg@linaro.org
> > And I didn't touch the commit message from the original patch.
> > But of course, I could improve it with a proper description if
> > anyone thinks it's not good enough.
> 
> I see.
> 
> > Do you mean putting this function hw_nmi_get_sample_period() into a
> > 6th patch?
> > In the view of "arm64 uses delayed init with all the functionality it needs to set up",
> > IMO, it makes sense to me to put it into a single patch.
> 
> Or you could split it in two patches and add
> hw_nmi_get_sample_period() in the earlier patch.
> 
> 
> > But if you still think this should be put into a separate patch, I'll do it :)
> 
> It is always better to split the changes whenever possible. It makes
> the review easier. And it also helps to find the real culprit of
> a regression using bisection.

Okay, I'll split this part into another change, thanks.


> > > > +int __init watchdog_nmi_probe(void)
> > > > +{
> > > > +	if (!allow_lockup_detector_init_retry)
> > > > +		return -EBUSY;
> > > 
> > > How do you know that you should return -EBUSY
> > > when retry is not enabled?
> > > 
> > > I guess that it is an optimization to make it fast
> > > during the first call. But the logic is far from
> > > obvious.
> > > 
> > 
> > Yes, you can see this as an optimization, because arm64 PMU is not ready
> > during lockup_detector_init(), so the watchdog_nmi_probe() must fail.
> >
> > Thus we only want to do watchdog_nmi_probe() in the delayed init,
> > so if we are not in that state (allow_lockup_detector_init_retry=true),
> > we just tell the caller to retry later.
> > 
> > If it's unclear,
> 
> Yes, it is far from obvious.
> 
> > maybe a brief comment can be added like this:
> > 
> > +	/* arm64 is only able to initialize the lockup detector during delayed init */
> > +	if (!allow_lockup_detector_init_retry)
> > +		return -EBUSY;
> 
> No, please, remove this optimization. It just causes problems:
> 
>    + it requires a comment here because the logic is far from obvious.
> 
>    + it is the reason why we need another variable to avoid the race in
>      lockup_detector_check(), see the discussion about the 4th patch.

After some days of studying: if I remove this if-condition, the
following hardlockup_detector_perf_init() needs to return -EBUSY.
However, the default return value when the PMU is not ready is -ENOENT.
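For reference, the call chain I traced looks roughly like this (a
sketch from my reading of the code, so the details are my own
annotation):

/*
 * watchdog_nmi_probe()
 *   hardlockup_detector_perf_init()
 *     hardlockup_detector_event_create()
 *       perf_event_create_kernel_counter()
 *         perf_event_alloc()
 *           -> ERR_PTR(-ENOENT) when no suitable PMU is registered yet
 */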

The call path for hardlockup_detector_perf_init() is really complicated.

I have a few approaches for this:
  1. Abstract the second variable with Kconfig.
    a. Add an ARCH_SUPPORTS_HARDLOCKUP_DETECTOR_DELAYED_INIT option
       (the name is a little bit long, there may be a better one)
       in "lib/Kconfig.debug" for arches that know they need delayed
       init for the lockup detector.

       + select ARCH_SUPPORTS_HARDLOCKUP_DETECTOR_DELAYED_INIT if HAVE_HARDLOCKUP_DETECTOR_PERF

    b. Then watchdog_nmi_probe() would look like:

    +int __init watchdog_nmi_probe(void)
    +{
    +	int ret;
    +
    +	/* comment here... */
    +	if (!arm_pmu_irq_is_nmi())
    +		return -ENODEV;
    +
    +	ret = hardlockup_detector_perf_init();
    +	if (ret &&
    +	    IS_ENABLED(CONFIG_ARCH_SUPPORTS_HARDLOCKUP_DETECTOR_DELAYED_INIT))
    +		return -EBUSY;
    +
    +	return ret;
    +}

    and then we can have only one variable (allow_lockup_detector_init_retry)
    in the 4th patch.

  2. Based on ARCH_SUPPORTS_HARDLOCKUP_DETECTOR_DELAYED_INIT, change
     hardlockup_detector_perf_init() itself:

int __init hardlockup_detector_perf_init(void)
{
	int ret = hardlockup_detector_event_create();

	if (ret) {
		pr_info("Perf NMI watchdog permanently disabled\n");
+
+		/* comment here... */
+		if (IS_ENABLED(CONFIG_ARCH_SUPPORTS_HARDLOCKUP_DETECTOR_DELAYED_INIT))
+			ret = -EBUSY;
	} else {
		perf_event_release_kernel(this_cpu_read(watchdog_ev));
		this_cpu_write(watchdog_ev, NULL);
	}
	return ret;
}

  3. Don't add any other config; try to find a proper location
     to return -EBUSY in hardlockup_detector_event_create().
     IMHO, this may involve the PMU subsystem and would be
     the hardest approach.



> > > > +
> > > > +	if (!arm_pmu_irq_is_nmi())
> > > > +		return -ENODEV;
> > > > +
> > > > +	return hardlockup_detector_perf_init();
> > > > +}
> > > 
> > For the arm_pmu_irq_is_nmi() check, we do need it, because arm64 needs
> > to explicitly turn on Pseudo-NMI to support the base function for NMI.
> >
> > hardlockup_detector_perf_init() will succeed even if Pseudo-NMI is not
> > turned on; however, the PMU interrupts will act like normal interrupts
> > instead of NMIs and the hardlockup detector would be broken.
> 
> I see. Please, explain this in a comment. It is another thing
> that is far from obvious.
> 

Thank you, I'll just add the comment above it like this:

	/*
	 * hardlockup_detector_perf_init() will succeed even if Pseudo-NMI is
	 * not turned on; however, the PMU interrupts will act like normal
	 * interrupts instead of NMIs and the hardlockup detector would be
	 * broken.
	 */
	if (!arm_pmu_irq_is_nmi())
		return -ENODEV;
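For completeness, arm_pmu_irq_is_nmi() itself is only a query of how
the PMU IRQs were requested; a sketch of its shape (assuming a has_nmi
flag set when the PMU interrupt is successfully requested as a
pseudo-NMI):

/* drivers/perf/arm_pmu.c (sketch) */
static bool has_nmi;

bool arm_pmu_irq_is_nmi(void)
{
	return has_nmi;
}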


thanks
BRs,
Lecopzer


* Re: [PATCH v3 5/5] arm64: Enable perf events based hard lockup detector
  2022-04-07 16:59         ` Lecopzer Chen
@ 2022-04-13 10:25           ` Petr Mladek
  2022-04-21 16:30             ` Lecopzer Chen
  0 siblings, 1 reply; 18+ messages in thread
From: Petr Mladek @ 2022-04-13 10:25 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: acme, akpm, alexander.shishkin, catalin.marinas, davem, jolsa,
	jthierry, keescook, kernelfans, linux-arm-kernel, linux-kernel,
	linux-mediatek, linux-perf-users, mark.rutland, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	sparclinux, sumit.garg, wangqing, will, yj.chiang

On Fri 2022-04-08 00:59:49, Lecopzer Chen wrote:
> [...]
> 
> After some days of studying: if I remove this if-condition, the
> following hardlockup_detector_perf_init() needs to return -EBUSY.
> However, the default return value when the PMU is not ready is -ENOENT.

I see.

> [...]

Honestly, everything looks a bit ugly and complicated to me.

OKAY, is the return value actually important?

What about just introducing an API that allows trying to
initialize the hardlockup detector later:

/*
 * Retry hardlockup detector init. It is useful when it requires some
 * functionality that has to be initialized later on a particular
 * platform.
 */
void __init retry_lockup_detector_init(void)
{
	/* Must be called before late init calls. */
	if (!allow_lockup_detector_init_retry)
		return;

	queue_work_on(__smp_processor_id(), system_wq, &detector_work);
}

/*
 * Ensure that the optional delayed hardlockup init has proceeded before
 * the init code and memory is freed.
 */
static int __init lockup_detector_check(void)
{
	/* Prevent any later retry. */
	allow_lockup_detector_init_retry = false;

	/* Make sure no work is pending. */
	flush_work(&detector_work);

	return 0;
}
late_initcall_sync(lockup_detector_check);

You could leave lockup_detector_init() as it is. It does not really
matter what exact error value watchdog_nmi_probe() returned.

Then you could call retry_lockup_detector_init() in
armv8_pmu_driver_init() and be done with it.
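For example, the call site could look like this. The body of
armv8_pmu_driver_init() below is sketched from the existing driver and
is an assumption, not part of the proposal; only the
retry_lockup_detector_init() call is new:

static int __init armv8_pmu_driver_init(void)
{
	int ret;

	if (acpi_disabled)
		ret = platform_driver_register(&armv8_pmu_driver);
	else
		ret = arm_pmu_acpi_probe(armv8_pmuv3_init);

	/* The PMU is up; give the hardlockup detector another chance. */
	if (!ret)
		retry_lockup_detector_init();

	return ret;
}
device_initcall(armv8_pmu_driver_init)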

It will be a universal API that might be used on any architecture
for any reason. If nobody calls retry_lockup_detector_init()
then nothing will happen and the code will work as before.

It might make sense to provide the API only on architectures that
really need it. We could hide it under

   ARCH_NEED_DELAYED_HARDLOCKUP_DETECTOR_INIT

, similar to ARCH_NEEDS_CPU_IDLE_COUPLED.

Best Regards,
Petr


* Re: [PATCH v3 5/5] arm64: Enable perf events based hard lockup detector
  2022-04-13 10:25           ` Petr Mladek
@ 2022-04-21 16:30             ` Lecopzer Chen
  2022-04-26 16:38               ` Lecopzer Chen
  0 siblings, 1 reply; 18+ messages in thread
From: Lecopzer Chen @ 2022-04-21 16:30 UTC (permalink / raw)
  To: pmladek
  Cc: acme, akpm, alexander.shishkin, catalin.marinas, davem, jolsa,
	jthierry, keescook, kernelfans, lecopzer.chen, linux-arm-kernel,
	linux-kernel, linux-mediatek, linux-perf-users, mark.rutland,
	masahiroy, matthias.bgg, maz, mcgrof, mingo, namhyung,
	nixiaoming, peterz, sparclinux, sumit.garg, wangqing, will,
	yj.chiang

> [...]
> 
> What about just introducing an API that allows trying to
> initialize the hardlockup detector later:
> 
> [...]
> 

Sorry for the late reply.

It's really a good idea.

Since I already have lots of things to revise in v3, I'm now preparing v4.
I'll send it in the next few days.

Thanks a lot for your great idea.


BRs,
Lecopzer




* Re: [PATCH v3 5/5] arm64: Enable perf events based hard lockup detector
  2022-04-21 16:30             ` Lecopzer Chen
@ 2022-04-26 16:38               ` Lecopzer Chen
  2022-04-28  8:27                 ` Petr Mladek
  0 siblings, 1 reply; 18+ messages in thread
From: Lecopzer Chen @ 2022-04-26 16:38 UTC (permalink / raw)
  To: lecopzer.chen, pmladek
  Cc: acme, akpm, alexander.shishkin, catalin.marinas, davem, jolsa,
	jthierry, keescook, kernelfans, linux-arm-kernel, linux-kernel,
	linux-mediatek, linux-perf-users, mark.rutland, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	sparclinux, sumit.garg, wangqing, will, yj.chiang

> [...]
> 
> It might make sense to provide the API only on architectures that
> really need it. We could hide it under
> 
>    ARCH_NEED_DELAYED_HARDLOCKUP_DETECTOR_INIT
> 
> , similar to ARCH_NEEDS_CPU_IDLE_COUPLED.
> 

During implementation, if I add ARCH_NEED_DELAYED_..., there will be
many nested ifdef-endif blocks, which is a little bit ugly.
Also, I didn't find a must-have reason for this Kconfig after rebasing
onto your suggestion.

Whoever calls retry_lockup_detector_init() must already have failed in
lockup_detector_init(), so I think anyone who has failed at
lockup_detector_init() has the right to retry, no matter whether the
reason is HW, SW or arch-specific.
Thus I might not introduce a new Kconfig in v4, and I would be glad to
see any further comments on this.
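To show what I mean, every declaration would need to be wrapped in
stubs like this (a hypothetical sketch; the option name is taken from
your mail):

/* include/linux/nmi.h (sketch of the clutter) */
#ifdef CONFIG_ARCH_NEED_DELAYED_HARDLOCKUP_DETECTOR_INIT
void retry_lockup_detector_init(void);
#else
static inline void retry_lockup_detector_init(void) { }
#endif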








* Re: [PATCH v3 5/5] arm64: Enable perf events based hard lockup detector
  2022-04-26 16:38               ` Lecopzer Chen
@ 2022-04-28  8:27                 ` Petr Mladek
  0 siblings, 0 replies; 18+ messages in thread
From: Petr Mladek @ 2022-04-28  8:27 UTC (permalink / raw)
  To: Lecopzer Chen
  Cc: acme, akpm, alexander.shishkin, catalin.marinas, davem, jolsa,
	jthierry, keescook, kernelfans, linux-arm-kernel, linux-kernel,
	linux-mediatek, linux-perf-users, mark.rutland, masahiroy,
	matthias.bgg, maz, mcgrof, mingo, namhyung, nixiaoming, peterz,
	sparclinux, sumit.garg, wangqing, will, yj.chiang

On Wed 2022-04-27 00:38:40, Lecopzer Chen wrote:
> > [...]
> 
> During implementation, if I add ARCH_NEED_DELAYED_..., there will be
> many nested ifdef-endif blocks, which is a little bit ugly.
> Also, I didn't find a must-have reason for this Kconfig after rebasing
> onto your suggestion.
> 
> Whoever calls retry_lockup_detector_init() must already have failed in
> lockup_detector_init(), so I think anyone who has failed at
> lockup_detector_init() has the right to retry, no matter whether the
> reason is HW, SW or arch-specific.
> Thus I might not introduce a new Kconfig in v4, and I would be glad to
> see any further comments on this.

Sounds reasonable from my POV. There is not much code. It will be removed
after the init phase. It will most likely be removed even during
compilation when linked with gcc LTO and the symbols
are not used.

Best Regards,
Petr

PS: It might take a few days until I do the review of v4. I am a bit
    overloaded at the moment.


end of thread

Thread overview: 18+ messages
2022-03-24 14:14 [PATCH v3 0/5] Support hld delayed init based on Pseudo-NMI for arm64 Lecopzer Chen
2022-03-24 14:14 ` [PATCH v3 1/5] kernel/watchdog: remove WATCHDOG_DEFAULT Lecopzer Chen
2022-03-24 14:14 ` [PATCH v3 2/5] kernel/watchdog: change watchdog_nmi_enable() to void Lecopzer Chen
2022-03-24 14:14 ` [PATCH v3 3/5] kernel/watchdog_hld: Ensure CPU-bound context when creating hardlockup detector event Lecopzer Chen
2022-03-24 14:14 ` [PATCH v3 4/5] kernel/watchdog: Adapt the watchdog_hld interface for async model Lecopzer Chen
2022-04-04 14:41   ` Petr Mladek
2022-04-05 13:35     ` Lecopzer Chen
2022-04-05 15:19       ` Petr Mladek
2022-04-07 16:21         ` Lecopzer Chen
2022-03-24 14:14 ` [PATCH v3 5/5] arm64: Enable perf events based hard lockup detector Lecopzer Chen
2022-04-04 14:17   ` Petr Mladek
2022-04-05 12:53     ` Lecopzer Chen
2022-04-05 14:36       ` Petr Mladek
2022-04-07 16:59         ` Lecopzer Chen
2022-04-13 10:25           ` Petr Mladek
2022-04-21 16:30             ` Lecopzer Chen
2022-04-26 16:38               ` Lecopzer Chen
2022-04-28  8:27                 ` Petr Mladek
