* [PATCH v3 0/4] powerpc: watchdog fixes
@ 2021-11-10  2:50 Nicholas Piggin
From: Nicholas Piggin @ 2021-11-10  2:50 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Laurent Dufour, Nicholas Piggin

These are some watchdog fixes and improvements, in particular a
deadlock between the wd_smp_lock and console lock when the watchdog
fires, found by Laurent.

Thanks,
Nick

Since v2:
- Fix a false positive warning in patch 1 found by Laurent.
- Move a comment change hunk to the correct patch.
- Drop the patch that removed the unstuck backtrace, since that backtrace
  is considered useful.

Since v1:
- Fixes noticed by Laurent in v1.
- Correct the description of the ABBA deadlock I wrote incorrectly in
  v1.
- Made several other improvements (patches 2,4,5).

Nicholas Piggin (4):
  powerpc/watchdog: Fix missed watchdog reset due to memory ordering
    race
  powerpc/watchdog: tighten non-atomic read-modify-write access
  powerpc/watchdog: Avoid holding wd_smp_lock over printk and
    smp_send_nmi_ipi
  powerpc/watchdog: read TB close to where it is used

 arch/powerpc/kernel/watchdog.c | 182 ++++++++++++++++++++++++++-------
 1 file changed, 147 insertions(+), 35 deletions(-)

-- 
2.23.0


* [PATCH v3 1/4] powerpc/watchdog: Fix missed watchdog reset due to memory ordering race
From: Nicholas Piggin @ 2021-11-10  2:50 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Laurent Dufour, Nicholas Piggin

It is possible for all CPUs to miss the pending cpumask becoming clear,
and then nobody resetting it, which will cause the lockup detector to
stop working. It will eventually expire, but watchdog_smp_panic will
avoid doing anything if the pending mask is clear and it will never be
reset.

Order the cpumask clear vs the subsequent test to close this race.
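
As an illustration (a schematic only, not a literal excerpt; "pending" stands
for wd_smp_cpus_pending), the race between the last two CPUs clearing their
bits looks like this:

  CPU A                                  CPU B
  cpumask_clear_cpu(A, &pending);        cpumask_clear_cpu(B, &pending);

  if (cpumask_empty(&pending))           if (cpumask_empty(&pending))
          /* may still see B's bit */            /* may still see A's bit */

Without a barrier between the clear and the test, both CPUs can miss the
other's store, so neither sees the mask empty and neither resets it. Putting
smp_mb() between the two on each side pairs with the same barrier on the
other CPU and guarantees at least one of them observes the empty mask.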

Add an extra check for an empty pending mask when the watchdog fires and
finds its bit still clear, to try to catch any other possible races or
bugs here and keep the watchdog working. The extra test in
arch_touch_nmi_watchdog is required to prevent the new warning from
firing off.

Debugged-by: Laurent Dufour <ldufour@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/watchdog.c | 41 +++++++++++++++++++++++++++++++++-
 1 file changed, 40 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
index f9ea0e5357f9..3c60872b6a2c 100644
--- a/arch/powerpc/kernel/watchdog.c
+++ b/arch/powerpc/kernel/watchdog.c
@@ -135,6 +135,10 @@ static void set_cpumask_stuck(const struct cpumask *cpumask, u64 tb)
 {
 	cpumask_or(&wd_smp_cpus_stuck, &wd_smp_cpus_stuck, cpumask);
 	cpumask_andnot(&wd_smp_cpus_pending, &wd_smp_cpus_pending, cpumask);
+	/*
+	 * See wd_smp_clear_cpu_pending()
+	 */
+	smp_mb();
 	if (cpumask_empty(&wd_smp_cpus_pending)) {
 		wd_smp_last_reset_tb = tb;
 		cpumask_andnot(&wd_smp_cpus_pending,
@@ -215,13 +219,44 @@ static void wd_smp_clear_cpu_pending(int cpu, u64 tb)
 
 			cpumask_clear_cpu(cpu, &wd_smp_cpus_stuck);
 			wd_smp_unlock(&flags);
+		} else {
+			/*
+			 * The last CPU to clear pending should have reset the
+			 * watchdog so we generally should not find it empty
+			 * here if our CPU was clear. However it could happen
+			 * due to a rare race with another CPU taking the
+			 * last CPU out of the mask concurrently.
+			 *
+			 * We can't add a warning for it. But just in case
+			 * there is a problem with the watchdog that is causing
+			 * the mask to not be reset, try to kick it along here.
+			 */
+			if (unlikely(cpumask_empty(&wd_smp_cpus_pending)))
+				goto none_pending;
 		}
 		return;
 	}
+
 	cpumask_clear_cpu(cpu, &wd_smp_cpus_pending);
+
+	/*
+	 * Order the store to clear pending with the load(s) to check all
+	 * words in the pending mask to check they are all empty. This orders
+	 * with the same barrier on another CPU. This prevents two CPUs
+	 * clearing the last 2 pending bits, but neither seeing the other's
+	 * store when checking if the mask is empty, and missing an empty
+	 * mask, which ends with a false positive.
+	 */
+	smp_mb();
 	if (cpumask_empty(&wd_smp_cpus_pending)) {
 		unsigned long flags;
 
+none_pending:
+		/*
+		 * Double check under lock because more than one CPU could see
+		 * a clear mask with the lockless check after clearing their
+		 * pending bits.
+		 */
 		wd_smp_lock(&flags);
 		if (cpumask_empty(&wd_smp_cpus_pending)) {
 			wd_smp_last_reset_tb = tb;
@@ -312,8 +347,12 @@ void arch_touch_nmi_watchdog(void)
 {
 	unsigned long ticks = tb_ticks_per_usec * wd_timer_period_ms * 1000;
 	int cpu = smp_processor_id();
-	u64 tb = get_tb();
+	u64 tb;
 
+	if (!cpumask_test_cpu(cpu, &watchdog_cpumask))
+		return;
+
+	tb = get_tb();
 	if (tb - per_cpu(wd_timer_tb, cpu) >= ticks) {
 		per_cpu(wd_timer_tb, cpu) = tb;
 		wd_smp_clear_cpu_pending(cpu, tb);
-- 
2.23.0


* [PATCH v3 2/4] powerpc/watchdog: tighten non-atomic read-modify-write access
From: Nicholas Piggin @ 2021-11-10  2:50 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Laurent Dufour, Nicholas Piggin

Most updates to wd_smp_cpus_pending are under lock except the watchdog
interrupt bit clear.

This can race with non-atomic RMW updates to the mask under lock, which
can happen in two instances:

Firstly, if another CPU detects this one is stuck and removes it from
the mask, the mask becomes empty and is re-filled with non-atomic
stores. This is okay because the mask would be re-filled with this
CPU's bit clear anyway (because this CPU is now stuck), so it doesn't
matter that the bit clear update got "lost". Add a comment for this.

Secondly, another CPU may detect a different CPU is stuck and remove it
from the pending mask with a non-atomic store to bytes which also
include this CPU's bit. This can result in the bit clear being lost,
with the end result that the bit stays set. This should be so rare it
hardly matters, but to make things simpler to reason about, just avoid
the non-atomic access in that case.
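
Schematically, the lost update in that second case looks like this (an
illustration only; assume this CPU's bit, B, and the stuck CPU's bit, C,
live in the same word of wd_smp_cpus_pending, written "pending"):

  CPU A (under wd_smp_lock)            CPU B (watchdog interrupt, no lock)
  old = load of the pending word
                                       cpumask_clear_cpu(B, &pending); /* atomic */
  store (old & ~bit(C)) back
                                       /* B's clear is lost; its bit is set again */

Using cpumask_set_cpu()/cpumask_clear_cpu() (atomic per-bit RMWs) in
set_cpu_stuck() instead of the cpumask_or()/cpumask_andnot() full-mask
updates avoids this for the single-CPU case.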

Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/watchdog.c | 36 ++++++++++++++++++++++++----------
 1 file changed, 26 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
index 3c60872b6a2c..668ea1c13bef 100644
--- a/arch/powerpc/kernel/watchdog.c
+++ b/arch/powerpc/kernel/watchdog.c
@@ -131,10 +131,10 @@ static void wd_lockup_ipi(struct pt_regs *regs)
 	/* Do not panic from here because that can recurse into NMI IPI layer */
 }
 
-static void set_cpumask_stuck(const struct cpumask *cpumask, u64 tb)
+static bool set_cpu_stuck(int cpu, u64 tb)
 {
-	cpumask_or(&wd_smp_cpus_stuck, &wd_smp_cpus_stuck, cpumask);
-	cpumask_andnot(&wd_smp_cpus_pending, &wd_smp_cpus_pending, cpumask);
+	cpumask_set_cpu(cpu, &wd_smp_cpus_stuck);
+	cpumask_clear_cpu(cpu, &wd_smp_cpus_pending);
 	/*
 	 * See wd_smp_clear_cpu_pending()
 	 */
@@ -144,11 +144,9 @@ static void set_cpumask_stuck(const struct cpumask *cpumask, u64 tb)
 		cpumask_andnot(&wd_smp_cpus_pending,
 				&wd_cpus_enabled,
 				&wd_smp_cpus_stuck);
+		return true;
 	}
-}
-static void set_cpu_stuck(int cpu, u64 tb)
-{
-	set_cpumask_stuck(cpumask_of(cpu), tb);
+	return false;
 }
 
 static void watchdog_smp_panic(int cpu, u64 tb)
@@ -177,15 +175,17 @@ static void watchdog_smp_panic(int cpu, u64 tb)
 		 * get a backtrace on all of them anyway.
 		 */
 		for_each_cpu(c, &wd_smp_cpus_pending) {
+			bool empty;
 			if (c == cpu)
 				continue;
+			/* Take the stuck CPUs out of the watch group */
+			empty = set_cpu_stuck(c, tb);
 			smp_send_nmi_ipi(c, wd_lockup_ipi, 1000000);
+			if (empty)
+				break;
 		}
 	}
 
-	/* Take the stuck CPUs out of the watch group */
-	set_cpumask_stuck(&wd_smp_cpus_pending, tb);
-
 	wd_smp_unlock(&flags);
 
 	if (sysctl_hardlockup_all_cpu_backtrace)
@@ -237,6 +237,22 @@ static void wd_smp_clear_cpu_pending(int cpu, u64 tb)
 		return;
 	}
 
+	/*
+	 * All other updates to wd_smp_cpus_pending are performed under
+	 * wd_smp_lock. All of them are atomic except the case where the
+	 * mask becomes empty and is reset. This will not happen here because
+	 * cpu was tested to be in the bitmap (above), and a CPU only clears
+	 * its own bit. _Except_ in the case where another CPU has detected a
+	 * hard lockup on our CPU and takes us out of the pending mask. So in
+	 * normal operation there will be no race here, no problem.
+	 *
+	 * In the lockup case, this atomic clear-bit vs a store that refills
+	 * other bits in the accessed word will not be a problem. The bit clear
+	 * is atomic so it will not cause the store to get lost, and the store
+	 * will never set this bit so it will not overwrite the bit clear. The
+	 * only way for a stuck CPU to return to the pending bitmap is to
+	 * become unstuck itself.
+	 */
 	cpumask_clear_cpu(cpu, &wd_smp_cpus_pending);
 
 	/*
-- 
2.23.0


* [PATCH v3 3/4] powerpc/watchdog: Avoid holding wd_smp_lock over printk and smp_send_nmi_ipi
From: Nicholas Piggin @ 2021-11-10  2:50 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Laurent Dufour, Nicholas Piggin

There is a deadlock with the console_owner lock and the wd_smp_lock:

CPU x takes the console_owner lock
CPU y takes a watchdog timer interrupt and takes __wd_smp_lock
CPU x takes a soft-NMI interrupt, detects a hard lockup, spins on __wd_smp_lock
CPU y detects a hard lockup, tries to print something and spins on console_owner
-> deadlock

Change the watchdog locking scheme so wd_smp_lock protects the watchdog
internal data, but "reporting" (printing, issuing NMI IPIs, taking any
action outside of the watchdog) uses a non-waiting exclusion. If a CPU
detects a problem but cannot take the reporting lock, it just returns
because something else is already reporting. It will try again at some
point.
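
The resulting shape of the reporting paths is roughly (a sketch of the
pattern only; see the diff below for the real code):

	wd_smp_lock(&flags);
	if (!wd_try_report()) {
		/* Somebody else is already reporting; try again later */
		wd_smp_unlock(&flags);
		return;
	}
	/* ... update the pending/stuck masks under the lock ... */
	wd_smp_unlock(&flags);

	/* Slow, unbounded work now happens outside wd_smp_lock */
	pr_emerg(...);
	smp_send_nmi_ipi(...);
	if (hardlockup_panic)
		nmi_panic(...);

	wd_end_reporting();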

Typically the usefulness of a hard lockup watchdog report is not harmed
by failing to spew as much data as possible as quickly as possible, but
rather by messages getting garbled.

Laurent debugged this and found the deadlock, and this patch is based on
his general approach of avoiding expensive operations while holding the
lock, with the addition of the reporting exclusion.

Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
[np: rework to add reporting exclusion, update changelog]
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/watchdog.c | 93 +++++++++++++++++++++++++++-------
 1 file changed, 74 insertions(+), 19 deletions(-)

diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
index 668ea1c13bef..1b11c4b1c79e 100644
--- a/arch/powerpc/kernel/watchdog.c
+++ b/arch/powerpc/kernel/watchdog.c
@@ -85,10 +85,36 @@ static DEFINE_PER_CPU(u64, wd_timer_tb);
 
 /* SMP checker bits */
 static unsigned long __wd_smp_lock;
+static unsigned long __wd_reporting;
 static cpumask_t wd_smp_cpus_pending;
 static cpumask_t wd_smp_cpus_stuck;
 static u64 wd_smp_last_reset_tb;
 
+/*
+ * Try to take the exclusive watchdog action / NMI IPI / printing lock.
+ * wd_smp_lock must be held. If this fails, we should return and wait
+ * for the watchdog to kick in again (or another CPU to trigger it).
+ *
+ * Importantly, if hardlockup_panic is set, wd_try_report failure should
+ * not delay the panic, because whichever other CPU is reporting will
+ * call panic.
+ */
+static bool wd_try_report(void)
+{
+	if (__wd_reporting)
+		return false;
+	__wd_reporting = 1;
+	return true;
+}
+
+/* End printing after successful wd_try_report. wd_smp_lock not required. */
+static void wd_end_reporting(void)
+{
+	smp_mb(); /* End printing "critical section" */
+	WARN_ON_ONCE(__wd_reporting == 0);
+	WRITE_ONCE(__wd_reporting, 0);
+}
+
 static inline void wd_smp_lock(unsigned long *flags)
 {
 	/*
@@ -151,6 +177,7 @@ static bool set_cpu_stuck(int cpu, u64 tb)
 
 static void watchdog_smp_panic(int cpu, u64 tb)
 {
+	static cpumask_t wd_smp_cpus_ipi; // protected by reporting
 	unsigned long flags;
 	int c;
 
@@ -160,11 +187,26 @@ static void watchdog_smp_panic(int cpu, u64 tb)
 		goto out;
 	if (cpumask_test_cpu(cpu, &wd_smp_cpus_pending))
 		goto out;
-	if (cpumask_weight(&wd_smp_cpus_pending) == 0)
+	if (!wd_try_report())
 		goto out;
+	for_each_online_cpu(c) {
+		if (!cpumask_test_cpu(c, &wd_smp_cpus_pending))
+			continue;
+		if (c == cpu)
+			continue; // should not happen
+
+		__cpumask_set_cpu(c, &wd_smp_cpus_ipi);
+		if (set_cpu_stuck(c, tb))
+			break;
+	}
+	if (cpumask_empty(&wd_smp_cpus_ipi)) {
+		wd_end_reporting();
+		goto out;
+	}
+	wd_smp_unlock(&flags);
 
 	pr_emerg("CPU %d detected hard LOCKUP on other CPUs %*pbl\n",
-		 cpu, cpumask_pr_args(&wd_smp_cpus_pending));
+		 cpu, cpumask_pr_args(&wd_smp_cpus_ipi));
 	pr_emerg("CPU %d TB:%lld, last SMP heartbeat TB:%lld (%lldms ago)\n",
 		 cpu, tb, wd_smp_last_reset_tb,
 		 tb_to_ns(tb - wd_smp_last_reset_tb) / 1000000);
@@ -174,26 +216,20 @@ static void watchdog_smp_panic(int cpu, u64 tb)
 		 * Try to trigger the stuck CPUs, unless we are going to
 		 * get a backtrace on all of them anyway.
 		 */
-		for_each_cpu(c, &wd_smp_cpus_pending) {
-			bool empty;
-			if (c == cpu)
-				continue;
-			/* Take the stuck CPUs out of the watch group */
-			empty = set_cpu_stuck(c, tb);
+		for_each_cpu(c, &wd_smp_cpus_ipi) {
 			smp_send_nmi_ipi(c, wd_lockup_ipi, 1000000);
-			if (empty)
-				break;
+			__cpumask_clear_cpu(c, &wd_smp_cpus_ipi);
 		}
-	}
-
-	wd_smp_unlock(&flags);
-
-	if (sysctl_hardlockup_all_cpu_backtrace)
+	} else {
 		trigger_allbutself_cpu_backtrace();
+		cpumask_clear(&wd_smp_cpus_ipi);
+	}
 
 	if (hardlockup_panic)
 		nmi_panic(NULL, "Hard LOCKUP");
 
+	wd_end_reporting();
+
 	return;
 
 out:
@@ -207,8 +243,6 @@ static void wd_smp_clear_cpu_pending(int cpu, u64 tb)
 			struct pt_regs *regs = get_irq_regs();
 			unsigned long flags;
 
-			wd_smp_lock(&flags);
-
 			pr_emerg("CPU %d became unstuck TB:%lld\n",
 				 cpu, tb);
 			print_irqtrace_events(current);
@@ -217,6 +251,7 @@ static void wd_smp_clear_cpu_pending(int cpu, u64 tb)
 			else
 				dump_stack();
 
+			wd_smp_lock(&flags);
 			cpumask_clear_cpu(cpu, &wd_smp_cpus_stuck);
 			wd_smp_unlock(&flags);
 		} else {
@@ -312,13 +347,28 @@ DEFINE_INTERRUPT_HANDLER_NMI(soft_nmi_interrupt)
 
 	tb = get_tb();
 	if (tb - per_cpu(wd_timer_tb, cpu) >= wd_panic_timeout_tb) {
+		/*
+		 * Taking wd_smp_lock here means it is a soft-NMI lock, which
+		 * means we can't take any regular or irqsafe spin locks while
+		 * holding this lock. This is why timers can't printk while
+		 * holding the lock.
+		 */
 		wd_smp_lock(&flags);
 		if (cpumask_test_cpu(cpu, &wd_smp_cpus_stuck)) {
 			wd_smp_unlock(&flags);
 			return 0;
 		}
+		if (!wd_try_report()) {
+			wd_smp_unlock(&flags);
+			/* Couldn't report, try again in 100ms */
+			mtspr(SPRN_DEC, 100 * tb_ticks_per_usec * 1000);
+			return 0;
+		}
+
 		set_cpu_stuck(cpu, tb);
 
+		wd_smp_unlock(&flags);
+
 		pr_emerg("CPU %d self-detected hard LOCKUP @ %pS\n",
 			 cpu, (void *)regs->nip);
 		pr_emerg("CPU %d TB:%lld, last heartbeat TB:%lld (%lldms ago)\n",
@@ -328,14 +378,19 @@ DEFINE_INTERRUPT_HANDLER_NMI(soft_nmi_interrupt)
 		print_irqtrace_events(current);
 		show_regs(regs);
 
-		wd_smp_unlock(&flags);
-
 		if (sysctl_hardlockup_all_cpu_backtrace)
 			trigger_allbutself_cpu_backtrace();
 
 		if (hardlockup_panic)
 			nmi_panic(regs, "Hard LOCKUP");
+
+		wd_end_reporting();
 	}
+	/*
+	 * We are okay to change DEC in soft_nmi_interrupt because the masked
+	 * handler has marked a DEC as pending, so the timer interrupt will be
+	 * replayed as soon as local irqs are enabled again.
+	 */
 	if (wd_panic_timeout_tb < 0x7fffffff)
 		mtspr(SPRN_DEC, wd_panic_timeout_tb);
 
-- 
2.23.0


* [PATCH v3 4/4] powerpc/watchdog: read TB close to where it is used
From: Nicholas Piggin @ 2021-11-10  2:50 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Laurent Dufour, Nicholas Piggin

When taking watchdog actions, printing messages, comparing and
re-setting wd_smp_last_reset_tb, etc., read TB close to the point of use
and under wd_smp_lock or printing lock (if applicable).

This should keep the timebase mostly monotonic with respect to kernel log
messages, and could prevent (in theory) a laggy CPU from updating
wd_smp_last_reset_tb to something a long way in the past and making other
CPUs appear to be stuck.

These additional TB reads are all slowpath (lockup has been detected),
so performance does not matter.
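
For example, watchdog_smp_panic() no longer compares against a tb value
sampled by its caller; it samples the timebase once the lock is held
(sketched from the diff below):

	wd_smp_lock(&flags);
	/* Double check some things under lock */
	tb = get_tb();
	if ((s64)(tb - wd_smp_last_reset_tb) < (s64)wd_smp_panic_timeout_tb)
		goto out;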

Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/watchdog.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
index 1b11c4b1c79e..936f889995d3 100644
--- a/arch/powerpc/kernel/watchdog.c
+++ b/arch/powerpc/kernel/watchdog.c
@@ -157,7 +157,7 @@ static void wd_lockup_ipi(struct pt_regs *regs)
 	/* Do not panic from here because that can recurse into NMI IPI layer */
 }
 
-static bool set_cpu_stuck(int cpu, u64 tb)
+static bool set_cpu_stuck(int cpu)
 {
 	cpumask_set_cpu(cpu, &wd_smp_cpus_stuck);
 	cpumask_clear_cpu(cpu, &wd_smp_cpus_pending);
@@ -166,7 +166,7 @@ static bool set_cpu_stuck(int cpu, u64 tb)
 	 */
 	smp_mb();
 	if (cpumask_empty(&wd_smp_cpus_pending)) {
-		wd_smp_last_reset_tb = tb;
+		wd_smp_last_reset_tb = get_tb();
 		cpumask_andnot(&wd_smp_cpus_pending,
 				&wd_cpus_enabled,
 				&wd_smp_cpus_stuck);
@@ -175,14 +175,16 @@ static bool set_cpu_stuck(int cpu, u64 tb)
 	return false;
 }
 
-static void watchdog_smp_panic(int cpu, u64 tb)
+static void watchdog_smp_panic(int cpu)
 {
 	static cpumask_t wd_smp_cpus_ipi; // protected by reporting
 	unsigned long flags;
+	u64 tb;
 	int c;
 
 	wd_smp_lock(&flags);
 	/* Double check some things under lock */
+	tb = get_tb();
 	if ((s64)(tb - wd_smp_last_reset_tb) < (s64)wd_smp_panic_timeout_tb)
 		goto out;
 	if (cpumask_test_cpu(cpu, &wd_smp_cpus_pending))
@@ -196,7 +198,7 @@ static void watchdog_smp_panic(int cpu, u64 tb)
 			continue; // should not happen
 
 		__cpumask_set_cpu(c, &wd_smp_cpus_ipi);
-		if (set_cpu_stuck(c, tb))
+		if (set_cpu_stuck(c))
 			break;
 	}
 	if (cpumask_empty(&wd_smp_cpus_ipi)) {
@@ -236,7 +238,7 @@ static void watchdog_smp_panic(int cpu, u64 tb)
 	wd_smp_unlock(&flags);
 }
 
-static void wd_smp_clear_cpu_pending(int cpu, u64 tb)
+static void wd_smp_clear_cpu_pending(int cpu)
 {
 	if (!cpumask_test_cpu(cpu, &wd_smp_cpus_pending)) {
 		if (unlikely(cpumask_test_cpu(cpu, &wd_smp_cpus_stuck))) {
@@ -244,7 +246,7 @@ static void wd_smp_clear_cpu_pending(int cpu, u64 tb)
 			unsigned long flags;
 
 			pr_emerg("CPU %d became unstuck TB:%lld\n",
-				 cpu, tb);
+				 cpu, get_tb());
 			print_irqtrace_events(current);
 			if (regs)
 				show_regs(regs);
@@ -310,7 +312,7 @@ static void wd_smp_clear_cpu_pending(int cpu, u64 tb)
 		 */
 		wd_smp_lock(&flags);
 		if (cpumask_empty(&wd_smp_cpus_pending)) {
-			wd_smp_last_reset_tb = tb;
+			wd_smp_last_reset_tb = get_tb();
 			cpumask_andnot(&wd_smp_cpus_pending,
 					&wd_cpus_enabled,
 					&wd_smp_cpus_stuck);
@@ -325,10 +327,10 @@ static void watchdog_timer_interrupt(int cpu)
 
 	per_cpu(wd_timer_tb, cpu) = tb;
 
-	wd_smp_clear_cpu_pending(cpu, tb);
+	wd_smp_clear_cpu_pending(cpu);
 
 	if ((s64)(tb - wd_smp_last_reset_tb) >= (s64)wd_smp_panic_timeout_tb)
-		watchdog_smp_panic(cpu, tb);
+		watchdog_smp_panic(cpu);
 }
 
 DEFINE_INTERRUPT_HANDLER_NMI(soft_nmi_interrupt)
@@ -365,7 +367,7 @@ DEFINE_INTERRUPT_HANDLER_NMI(soft_nmi_interrupt)
 			return 0;
 		}
 
-		set_cpu_stuck(cpu, tb);
+		set_cpu_stuck(cpu);
 
 		wd_smp_unlock(&flags);
 
@@ -426,7 +428,7 @@ void arch_touch_nmi_watchdog(void)
 	tb = get_tb();
 	if (tb - per_cpu(wd_timer_tb, cpu) >= ticks) {
 		per_cpu(wd_timer_tb, cpu) = tb;
-		wd_smp_clear_cpu_pending(cpu, tb);
+		wd_smp_clear_cpu_pending(cpu);
 	}
 }
 EXPORT_SYMBOL(arch_touch_nmi_watchdog);
@@ -484,7 +486,7 @@ static void stop_watchdog(void *arg)
 	cpumask_clear_cpu(cpu, &wd_cpus_enabled);
 	wd_smp_unlock(&flags);
 
-	wd_smp_clear_cpu_pending(cpu, get_tb());
+	wd_smp_clear_cpu_pending(cpu);
 }
 
 static int stop_watchdog_on_cpu(unsigned int cpu)
-- 
2.23.0


* Re: [PATCH v3 1/4] powerpc/watchdog: Fix missed watchdog reset due to memory ordering race
From: Laurent Dufour @ 2021-11-15 15:09 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev

On 10/11/2021 at 03:50, Nicholas Piggin wrote:
> It is possible for all CPUs to miss the pending cpumask becoming clear,
> and then nobody resetting it, which will cause the lockup detector to
> stop working. It will eventually expire, but watchdog_smp_panic will
> avoid doing anything if the pending mask is clear and it will never be
> reset.
> 
> Order the cpumask clear vs the subsequent test to close this race.
> 
> Add an extra check for an empty pending mask when the watchdog fires and
> finds its bit still clear, to try to catch any other possible races or
> bugs here and keep the watchdog working. The extra test in
> arch_touch_nmi_watchdog is required to prevent the new warning from
> firing off.
> 
> Debugged-by: Laurent Dufour <ldufour@linux.ibm.com>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>   arch/powerpc/kernel/watchdog.c | 41 +++++++++++++++++++++++++++++++++-
>   1 file changed, 40 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
> index f9ea0e5357f9..3c60872b6a2c 100644
> --- a/arch/powerpc/kernel/watchdog.c
> +++ b/arch/powerpc/kernel/watchdog.c
> @@ -135,6 +135,10 @@ static void set_cpumask_stuck(const struct cpumask *cpumask, u64 tb)
>   {
>   	cpumask_or(&wd_smp_cpus_stuck, &wd_smp_cpus_stuck, cpumask);
>   	cpumask_andnot(&wd_smp_cpus_pending, &wd_smp_cpus_pending, cpumask);
> +	/*
> +	 * See wd_smp_clear_cpu_pending()
> +	 */
> +	smp_mb();
>   	if (cpumask_empty(&wd_smp_cpus_pending)) {
>   		wd_smp_last_reset_tb = tb;
>   		cpumask_andnot(&wd_smp_cpus_pending,
> @@ -215,13 +219,44 @@ static void wd_smp_clear_cpu_pending(int cpu, u64 tb)
>   
>   			cpumask_clear_cpu(cpu, &wd_smp_cpus_stuck);
>   			wd_smp_unlock(&flags);
> +		} else {
> +			/*
> +			 * The last CPU to clear pending should have reset the
> +			 * watchdog so we generally should not find it empty
> +			 * here if our CPU was clear. However it could happen
> +			 * due to a rare race with another CPU taking the
> +			 * last CPU out of the mask concurrently.
> +			 *
> +			 * We can't add a warning for it. But just in case
> +			 * there is a problem with the watchdog that is causing
> +			 * the mask to not be reset, try to kick it along here.
> +			 */
> +			if (unlikely(cpumask_empty(&wd_smp_cpus_pending)))
> +				goto none_pending;

If I understand correctly, that branch is a safety net in case the code is not
working as expected. But I'm really wondering whether it is needed: we could
end up with contention on the watchdog lock while this path should be
lockless, and I'd say that in most cases there is nothing to do after
grabbing that lock. Am I missing something risky here?

>   		}
>   		return;
>   	}
> +
>   	cpumask_clear_cpu(cpu, &wd_smp_cpus_pending);
> +
> +	/*
> +	 * Order the store to clear pending with the load(s) to check all
> +	 * words in the pending mask to check they are all empty. This orders
> +	 * with the same barrier on another CPU. This prevents two CPUs
> +	 * clearing the last 2 pending bits, but neither seeing the other's
> +	 * store when checking if the mask is empty, and missing an empty
> +	 * mask, which ends with a false positive.
> +	 */
> +	smp_mb();
>   	if (cpumask_empty(&wd_smp_cpus_pending)) {
>   		unsigned long flags;
>   
> +none_pending:
> +		/*
> +		 * Double check under lock because more than one CPU could see
> +		 * a clear mask with the lockless check after clearing their
> +		 * pending bits.
> +		 */
>   		wd_smp_lock(&flags);
>   		if (cpumask_empty(&wd_smp_cpus_pending)) {
>   			wd_smp_last_reset_tb = tb;
> @@ -312,8 +347,12 @@ void arch_touch_nmi_watchdog(void)
>   {
>   	unsigned long ticks = tb_ticks_per_usec * wd_timer_period_ms * 1000;
>   	int cpu = smp_processor_id();
> -	u64 tb = get_tb();
> +	u64 tb;
>   
> +	if (!cpumask_test_cpu(cpu, &watchdog_cpumask))
> +		return;
> +
> +	tb = get_tb();
>   	if (tb - per_cpu(wd_timer_tb, cpu) >= ticks) {
>   		per_cpu(wd_timer_tb, cpu) = tb;
>   		wd_smp_clear_cpu_pending(cpu, tb);
> 


* Re: [PATCH v3 1/4] powerpc/watchdog: Fix missed watchdog reset due to memory ordering race
From: Nicholas Piggin @ 2021-11-19  9:05 UTC (permalink / raw)
  To: Laurent Dufour, linuxppc-dev

Excerpts from Laurent Dufour's message of November 16, 2021 1:09 am:
> On 10/11/2021 at 03:50, Nicholas Piggin wrote:
>> It is possible for all CPUs to miss the pending cpumask becoming clear,
>> and then nobody resetting it, which will cause the lockup detector to
>> stop working. It will eventually expire, but watchdog_smp_panic will
>> avoid doing anything if the pending mask is clear and it will never be
>> reset.
>> 
>> Order the cpumask clear vs the subsequent test to close this race.
>> 
>> Add an extra check for an empty pending mask when the watchdog fires and
>> finds its bit still clear, to try to catch any other possible races or
>> bugs here and keep the watchdog working. The extra test in
>> arch_touch_nmi_watchdog is required to prevent the new warning from
>> firing off.
>> 
>> Debugged-by: Laurent Dufour <ldufour@linux.ibm.com>
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>>   arch/powerpc/kernel/watchdog.c | 41 +++++++++++++++++++++++++++++++++-
>>   1 file changed, 40 insertions(+), 1 deletion(-)
>> 
>> diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
>> index f9ea0e5357f9..3c60872b6a2c 100644
>> --- a/arch/powerpc/kernel/watchdog.c
>> +++ b/arch/powerpc/kernel/watchdog.c
>> @@ -135,6 +135,10 @@ static void set_cpumask_stuck(const struct cpumask *cpumask, u64 tb)
>>   {
>>   	cpumask_or(&wd_smp_cpus_stuck, &wd_smp_cpus_stuck, cpumask);
>>   	cpumask_andnot(&wd_smp_cpus_pending, &wd_smp_cpus_pending, cpumask);
>> +	/*
>> +	 * See wd_smp_clear_cpu_pending()
>> +	 */
>> +	smp_mb();
>>   	if (cpumask_empty(&wd_smp_cpus_pending)) {
>>   		wd_smp_last_reset_tb = tb;
>>   		cpumask_andnot(&wd_smp_cpus_pending,
>> @@ -215,13 +219,44 @@ static void wd_smp_clear_cpu_pending(int cpu, u64 tb)
>>   
>>   			cpumask_clear_cpu(cpu, &wd_smp_cpus_stuck);
>>   			wd_smp_unlock(&flags);
>> +		} else {
>> +			/*
>> +			 * The last CPU to clear pending should have reset the
>> +			 * watchdog so we generally should not find it empty
>> +			 * here if our CPU was clear. However it could happen
>> +			 * due to a rare race with another CPU taking the
>> +			 * last CPU out of the mask concurrently.
>> +			 *
>> +			 * We can't add a warning for it. But just in case
>> +			 * there is a problem with the watchdog that is causing
>> +			 * the mask to not be reset, try to kick it along here.
>> +			 */
>> +			if (unlikely(cpumask_empty(&wd_smp_cpus_pending)))
>> +				goto none_pending;
> 
> If I understand correctly, that branch is a safety net in case the code is not
> working as expected. But I'm really wondering whether it is needed: we could
> end up with contention on the watchdog lock while this path should be
> lockless, and I'd say that in most cases there is nothing to do after
> grabbing that lock. Am I missing something risky here?

I'm thinking it should not hit very much because of that first test

    if (!cpumask_test_cpu(cpu, &wd_smp_cpus_pending)) {

I think it should not be true too often; it would mean a CPU has taken
two timer interrupts while another one has not taken any, so hopefully
that's pretty rare in normal operation.

Thanks,
Nick

* Re: [PATCH v3 1/4] powerpc/watchdog: Fix missed watchdog reset due to memory ordering race
From: Laurent Dufour @ 2021-11-19  9:25 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev

On 19/11/2021 at 10:05, Nicholas Piggin wrote:
> Excerpts from Laurent Dufour's message of November 16, 2021 1:09 am:
>> On 10/11/2021 at 03:50, Nicholas Piggin wrote:
>>> It is possible for all CPUs to miss the pending cpumask becoming clear,
>>> and then nobody resetting it, which will cause the lockup detector to
>>> stop working. It will eventually expire, but watchdog_smp_panic will
>>> avoid doing anything if the pending mask is clear and it will never be
>>> reset.
>>>
>>> Order the cpumask clear vs the subsequent test to close this race.
>>>
>>> Add an extra check for an empty pending mask when the watchdog fires and
>>> finds its bit still clear, to try to catch any other possible races or
>>> bugs here and keep the watchdog working. The extra test in
>>> arch_touch_nmi_watchdog is required to prevent the new warning from
>>> firing off.
>>>
>>> Debugged-by: Laurent Dufour <ldufour@linux.ibm.com>
>>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>>> ---
>>>    arch/powerpc/kernel/watchdog.c | 41 +++++++++++++++++++++++++++++++++-
>>>    1 file changed, 40 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
>>> index f9ea0e5357f9..3c60872b6a2c 100644
>>> --- a/arch/powerpc/kernel/watchdog.c
>>> +++ b/arch/powerpc/kernel/watchdog.c
>>> @@ -135,6 +135,10 @@ static void set_cpumask_stuck(const struct cpumask *cpumask, u64 tb)
>>>    {
>>>    	cpumask_or(&wd_smp_cpus_stuck, &wd_smp_cpus_stuck, cpumask);
>>>    	cpumask_andnot(&wd_smp_cpus_pending, &wd_smp_cpus_pending, cpumask);
>>> +	/*
>>> +	 * See wd_smp_clear_cpu_pending()
>>> +	 */
>>> +	smp_mb();
>>>    	if (cpumask_empty(&wd_smp_cpus_pending)) {
>>>    		wd_smp_last_reset_tb = tb;
>>>    		cpumask_andnot(&wd_smp_cpus_pending,
>>> @@ -215,13 +219,44 @@ static void wd_smp_clear_cpu_pending(int cpu, u64 tb)
>>>    
>>>    			cpumask_clear_cpu(cpu, &wd_smp_cpus_stuck);
>>>    			wd_smp_unlock(&flags);
>>> +		} else {
>>> +			/*
>>> +			 * The last CPU to clear pending should have reset the
>>> +			 * watchdog so we generally should not find it empty
>>> +			 * here if our CPU was clear. However it could happen
>>> +			 * due to a rare race with another CPU taking the
>>> +			 * last CPU out of the mask concurrently.
>>> +			 *
>>> +			 * We can't add a warning for it. But just in case
>>> +			 * there is a problem with the watchdog that is causing
>>> +			 * the mask to not be reset, try to kick it along here.
>>> +			 */
>>> +			if (unlikely(cpumask_empty(&wd_smp_cpus_pending)))
>>> +				goto none_pending;
>>
>> If I understand correctly, that branch is a safety net in case the code is not
>> working as expected. But I'm really wondering whether it is needed: we could
>> end up with contention on the watchdog lock while this path should be
>> lockless, and I'd say that in most cases there is nothing to do after
>> grabbing that lock. Am I missing something risky here?
> 
> I'm thinking it should not hit very much because of that first test
> 
>      if (!cpumask_test_cpu(cpu, &wd_smp_cpus_pending)) {
> 
> I think it should not be true too often; it would mean a CPU has taken
> two timer interrupts while another one has not taken any, so hopefully
> that's pretty rare in normal operation.

Thanks, Nick, for the clarification.

Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com>

* Re: [PATCH v3 3/4] powerpc/watchdog: Avoid holding wd_smp_lock over printk and smp_send_nmi_ipi
From: Nicholas Piggin @ 2021-11-19 11:05 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Laurent Dufour

Excerpts from Nicholas Piggin's message of November 10, 2021 12:50 pm:
> @@ -160,11 +187,26 @@ static void watchdog_smp_panic(int cpu, u64 tb)
>  		goto out;
>  	if (cpumask_test_cpu(cpu, &wd_smp_cpus_pending))
>  		goto out;
> -	if (cpumask_weight(&wd_smp_cpus_pending) == 0)
> +	if (!wd_try_report())
>  		goto out;
> +	for_each_online_cpu(c) {
> +		if (!cpumask_test_cpu(c, &wd_smp_cpus_pending))
> +			continue;
> +		if (c == cpu)
> +			continue; // should not happen
> +
> +		__cpumask_set_cpu(c, &wd_smp_cpus_ipi);
> +		if (set_cpu_stuck(c, tb))
> +			break;
> +	}
> +	if (cpumask_empty(&wd_smp_cpus_ipi)) {
> +		wd_end_reporting();
> +		goto out;
> +	}
> +	wd_smp_unlock(&flags);
>  
>  	pr_emerg("CPU %d detected hard LOCKUP on other CPUs %*pbl\n",
> -		 cpu, cpumask_pr_args(&wd_smp_cpus_pending));
> +		 cpu, cpumask_pr_args(&wd_smp_cpus_ipi));
>  	pr_emerg("CPU %d TB:%lld, last SMP heartbeat TB:%lld (%lldms ago)\n",
>  		 cpu, tb, wd_smp_last_reset_tb,
>  		 tb_to_ns(tb - wd_smp_last_reset_tb) / 1000000);

Oops, this has a bug: wd_smp_last_reset_tb gets reset above by
set_cpu_stuck when all the stuck CPUs are taken out of the pending
mask, so this prints nonsense last-reset times.

I might just send out an updated series, because the fix has a slight
clash with the next patch. All I do is take a local copy of
wd_smp_last_reset_tb near the start of the function.
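
Roughly, the fix looks like this (a sketch, not the final patch):

	static void watchdog_smp_panic(int cpu)
	{
		u64 tb, last_reset;
		...
		wd_smp_lock(&flags);
		tb = get_tb();
		last_reset = wd_smp_last_reset_tb; /* snapshot before set_cpu_stuck() resets it */
		...
		pr_emerg("CPU %d TB:%lld, last SMP heartbeat TB:%lld (%lldms ago)\n",
			 cpu, tb, last_reset,
			 tb_to_ns(tb - last_reset) / 1000000);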

Thanks,
Nick

* Re: [PATCH v3 0/4] powerpc: watchdog fixes
From: Michael Ellerman @ 2021-11-25  9:36 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev; +Cc: Laurent Dufour

On Wed, 10 Nov 2021 12:50:52 +1000, Nicholas Piggin wrote:
> These are some watchdog fixes and improvements, in particular a
> deadlock between the wd_smp_lock and console lock when the watchdog
> fires, found by Laurent.
> 
> Thanks,
> Nick
> 
> [...]

Applied to powerpc/next.

[1/4] powerpc/watchdog: Fix missed watchdog reset due to memory ordering race
      https://git.kernel.org/powerpc/c/5dad4ba68a2483fc80d70b9dc90bbe16e1f27263
[2/4] powerpc/watchdog: tighten non-atomic read-modify-write access
      https://git.kernel.org/powerpc/c/858c93c31504ac1507084493d7eafbe7e2302dc2
[3/4] powerpc/watchdog: Avoid holding wd_smp_lock over printk and smp_send_nmi_ipi
      https://git.kernel.org/powerpc/c/76521c4b0291ad25723638ade5a0ff4d5f659771
[4/4] powerpc/watchdog: read TB close to where it is used
      https://git.kernel.org/powerpc/c/1f01bf90765fa5f88fbae452c131c1edf5cda7ba

cheers

* Re: [PATCH v3 0/4] powerpc: watchdog fixes
From: Laurent Dufour @ 2021-11-25 15:11 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, linuxppc-dev

On 25/11/2021, 10:36:43, Michael Ellerman wrote:
> On Wed, 10 Nov 2021 12:50:52 +1000, Nicholas Piggin wrote:
>> These are some watchdog fixes and improvements, in particular a
>> deadlock between the wd_smp_lock and console lock when the watchdog
>> fires, found by Laurent.
>>
>> Thanks,
>> Nick
>>
>> [...]
> 
> Applied to powerpc/next.
> 
> [1/4] powerpc/watchdog: Fix missed watchdog reset due to memory ordering race
>       https://git.kernel.org/powerpc/c/5dad4ba68a2483fc80d70b9dc90bbe16e1f27263
> [2/4] powerpc/watchdog: tighten non-atomic read-modify-write access
>       https://git.kernel.org/powerpc/c/858c93c31504ac1507084493d7eafbe7e2302dc2
> [3/4] powerpc/watchdog: Avoid holding wd_smp_lock over printk and smp_send_nmi_ipi
>       https://git.kernel.org/powerpc/c/76521c4b0291ad25723638ade5a0ff4d5f659771
> [4/4] powerpc/watchdog: read TB close to where it is used
>       https://git.kernel.org/powerpc/c/1f01bf90765fa5f88fbae452c131c1edf5cda7ba
> 
> cheers
> 

Hi Michael,

This series has been superseded by v4:
http://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=272865

Do you plan to apply that v4?

Thanks,
Laurent.

* Re: [PATCH v3 0/4] powerpc: watchdog fixes
From: Michal Suchánek @ 2021-11-25 15:26 UTC (permalink / raw)
  To: Laurent Dufour; +Cc: Michael Ellerman, linuxppc-dev, Nicholas Piggin

Hello,

On Thu, Nov 25, 2021 at 04:11:03PM +0100, Laurent Dufour wrote:
> On 25/11/2021, 10:36:43, Michael Ellerman wrote:
> > On Wed, 10 Nov 2021 12:50:52 +1000, Nicholas Piggin wrote:
> >> These are some watchdog fixes and improvements, in particular a
> >> deadlock between the wd_smp_lock and console lock when the watchdog
> >> fires, found by Laurent.
> >>
> >> Thanks,
> >> Nick
> >>
> >> [...]
> > 
> > Applied to powerpc/next.
> > 
> > [1/4] powerpc/watchdog: Fix missed watchdog reset due to memory ordering race
> >       https://git.kernel.org/powerpc/c/5dad4ba68a2483fc80d70b9dc90bbe16e1f27263
> > [2/4] powerpc/watchdog: tighten non-atomic read-modify-write access
> >       https://git.kernel.org/powerpc/c/858c93c31504ac1507084493d7eafbe7e2302dc2
> > [3/4] powerpc/watchdog: Avoid holding wd_smp_lock over printk and smp_send_nmi_ipi
> >       https://git.kernel.org/powerpc/c/76521c4b0291ad25723638ade5a0ff4d5f659771
> > [4/4] powerpc/watchdog: read TB close to where it is used
> >       https://git.kernel.org/powerpc/c/1f01bf90765fa5f88fbae452c131c1edf5cda7ba
> > 
> > cheers
> > 
> 
> Hi Michael,
> 
> This series has been superseded by v4:
> http://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=272865
> 
> Do you plan to apply that v4?

It has been fixed up in

https://lore.kernel.org/linuxppc-dev/20211125103346.1188958-1-npiggin@gmail.com/

Thanks

Michal

* Re: [PATCH v3 0/4] powerpc: watchdog fixes
From: Laurent Dufour @ 2021-11-25 17:20 UTC (permalink / raw)
  To: Michal Suchánek; +Cc: Michael Ellerman, linuxppc-dev, Nicholas Piggin

On 25/11/2021, 16:26:33, Michal Suchánek wrote:
> Hello,
> 
> On Thu, Nov 25, 2021 at 04:11:03PM +0100, Laurent Dufour wrote:
>> On 25/11/2021, 10:36:43, Michael Ellerman wrote:
>>> On Wed, 10 Nov 2021 12:50:52 +1000, Nicholas Piggin wrote:
>>>> These are some watchdog fixes and improvements, in particular a
>>>> deadlock between the wd_smp_lock and console lock when the watchdog
>>>> fires, found by Laurent.
>>>>
>>>> Thanks,
>>>> Nick
>>>>
>>>> [...]
>>>
>>> Applied to powerpc/next.
>>>
>>> [1/4] powerpc/watchdog: Fix missed watchdog reset due to memory ordering race
>>>       https://git.kernel.org/powerpc/c/5dad4ba68a2483fc80d70b9dc90bbe16e1f27263
>>> [2/4] powerpc/watchdog: tighten non-atomic read-modify-write access
>>>       https://git.kernel.org/powerpc/c/858c93c31504ac1507084493d7eafbe7e2302dc2
>>> [3/4] powerpc/watchdog: Avoid holding wd_smp_lock over printk and smp_send_nmi_ipi
>>>       https://git.kernel.org/powerpc/c/76521c4b0291ad25723638ade5a0ff4d5f659771
>>> [4/4] powerpc/watchdog: read TB close to where it is used
>>>       https://git.kernel.org/powerpc/c/1f01bf90765fa5f88fbae452c131c1edf5cda7ba
>>>
>>> cheers
>>>
>>
>> Hi Michael,
>>
>> This series has been superseded by v4:
>> http://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=272865
>>
>> Do you plan to apply that v4?
> 
> It has been fixed up in
> 
> https://lore.kernel.org/linuxppc-dev/20211125103346.1188958-1-npiggin@gmail.com/

Thanks Michal, I forgot that one.



