* [PATCH 0/4 V2] Print traces on softlockups
@ 2014-04-04 20:47 ` Don Zickus
  0 siblings, 0 replies; 12+ messages in thread
From: Don Zickus @ 2014-04-04 20:47 UTC (permalink / raw)
  To: LKML; +Cc: akpm, x86, davem, sparclinux, mguzik, Don Zickus

Version 2 of this patchset.

Added more patches to handle the 'uniprocessor' panic case by sending NMIs
to every cpu but self.  Only affects x86, sparc.

Aaron Tomlin (4):
  nmi: Provide the option to issue an NMI back trace to every cpu but
    current
  x86, nmi: Add more flexible NMI back trace support
  sparc64, nmi: Add more flexible NMI back trace support
  watchdog: Printing traces for all cpus on lockup detection

 Documentation/kernel-parameters.txt |    5 +++++
 Documentation/sysctl/kernel.txt     |   17 +++++++++++++++++
 arch/sparc/include/asm/irq_64.h     |    2 +-
 arch/sparc/kernel/process_64.c      |   14 +++++++++-----
 arch/x86/include/asm/irq.h          |    2 +-
 arch/x86/kernel/apic/hw_nmi.c       |   16 +++++++++++++---
 include/linux/nmi.h                 |   12 +++++++++++-
 kernel/sysctl.c                     |    9 +++++++++
 kernel/watchdog.c                   |   32 ++++++++++++++++++++++++++++++++
 9 files changed, 98 insertions(+), 11 deletions(-)



* [PATCH 1/4 v2] nmi: Provide the option to issue an NMI back trace to every cpu but current
  2014-04-04 20:47 ` Don Zickus
@ 2014-04-04 20:47   ` Don Zickus
  -1 siblings, 0 replies; 12+ messages in thread
From: Don Zickus @ 2014-04-04 20:47 UTC (permalink / raw)
  To: LKML; +Cc: akpm, x86, davem, sparclinux, mguzik, Aaron Tomlin, Don Zickus

From: Aaron Tomlin <atomlin@redhat.com>

Sometimes it is preferable not to use the
trigger_all_cpu_backtrace() routine when one wants
to avoid capturing a back trace for current, for
instance because one was already captured recently.

This patch provides a new routine,
trigger_allbutself_cpu_backtrace(), which issues
an NMI to every cpu but current and captures a
back trace on each of them.

[Added stub in #else clause - dcz]

Signed-off-by: Aaron Tomlin <atomlin@redhat.com>
Signed-off-by: Don Zickus <dzickus@redhat.com>
---
 include/linux/nmi.h |   11 ++++++++++-
 1 files changed, 10 insertions(+), 1 deletions(-)

diff --git a/include/linux/nmi.h b/include/linux/nmi.h
index 6a45fb5..a17ab63 100644
--- a/include/linux/nmi.h
+++ b/include/linux/nmi.h
@@ -32,15 +32,24 @@ static inline void touch_nmi_watchdog(void)
 #ifdef arch_trigger_all_cpu_backtrace
 static inline bool trigger_all_cpu_backtrace(void)
 {
-	arch_trigger_all_cpu_backtrace();
+	arch_trigger_all_cpu_backtrace(true);
 
 	return true;
 }
+static inline bool trigger_allbutself_cpu_backtrace(void)
+{
+	arch_trigger_all_cpu_backtrace(false);
+	return true;
+}
 #else
 static inline bool trigger_all_cpu_backtrace(void)
 {
 	return false;
 }
+static inline bool trigger_allbutself_cpu_backtrace(void)
+{
+	return false;
+}
 #endif
 
 #ifdef CONFIG_LOCKUP_DETECTOR
-- 
1.7.1
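[Editor's note] For readers outside the kernel tree, the effect of the new boolean argument can be sketched with a small userspace analogue. The helper below is hypothetical (it is not a kernel API); it only models how the include_self flag changes the set of CPUs that would receive the NMI, mirroring the cpumask manipulation the later arch patches perform:

```c
#include <stdbool.h>

/* Userspace model of the kernel's backtrace_mask: one bit per online CPU.
 * Hypothetical helper that illustrates how the include_self flag changes
 * the set of NMI targets. */
static unsigned long model_backtrace_targets(int num_cpus, int self,
                                             bool include_self)
{
	unsigned long mask = (num_cpus >= 64) ? ~0UL : ((1UL << num_cpus) - 1UL);

	if (!include_self)
		mask &= ~(1UL << self);  /* mirrors cpumask_clear_cpu(self, mask) */

	return mask;  /* an empty mask means no IPI needs to be sent at all */
}
```

Note the uniprocessor corner case: with one online CPU and include_self false, the mask drains to empty and nothing is sent, which is why the x86 patch checks cpumask_empty() before printing anything.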



* [PATCH 2/4 v2] x86, nmi: Add more flexible NMI back trace support
  2014-04-04 20:47 ` Don Zickus
@ 2014-04-04 20:47   ` Don Zickus
  -1 siblings, 0 replies; 12+ messages in thread
From: Don Zickus @ 2014-04-04 20:47 UTC (permalink / raw)
  To: LKML; +Cc: akpm, x86, davem, sparclinux, mguzik, Aaron Tomlin, Don Zickus

From: Aaron Tomlin <atomlin@redhat.com>

This patch introduces the x86 specific implementation
changes to the arch_trigger_all_cpu_backtrace() routine.
Now users have the ability to choose whether or not to
issue an NMI back trace which includes current.

[Don't print message in single processor case - dcz]

Signed-off-by: Aaron Tomlin <atomlin@redhat.com>
Signed-off-by: Don Zickus <dzickus@redhat.com>
---
 arch/x86/include/asm/irq.h    |    2 +-
 arch/x86/kernel/apic/hw_nmi.c |   16 +++++++++++++---
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/irq.h b/arch/x86/include/asm/irq.h
index cb6cfcd..a80cbb8 100644
--- a/arch/x86/include/asm/irq.h
+++ b/arch/x86/include/asm/irq.h
@@ -43,7 +43,7 @@ extern int vector_used_by_percpu_irq(unsigned int vector);
 extern void init_ISA_irqs(void);
 
 #ifdef CONFIG_X86_LOCAL_APIC
-void arch_trigger_all_cpu_backtrace(void);
+void arch_trigger_all_cpu_backtrace(bool);
 #define arch_trigger_all_cpu_backtrace arch_trigger_all_cpu_backtrace
 #endif
 
diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
index a698d71..3614e34 100644
--- a/arch/x86/kernel/apic/hw_nmi.c
+++ b/arch/x86/kernel/apic/hw_nmi.c
@@ -33,7 +33,7 @@ static DECLARE_BITMAP(backtrace_mask, NR_CPUS) __read_mostly;
 /* "in progress" flag of arch_trigger_all_cpu_backtrace */
 static unsigned long backtrace_flag;
 
-void arch_trigger_all_cpu_backtrace(void)
+void arch_trigger_all_cpu_backtrace(bool include_self)
 {
 	int i;
 
@@ -46,8 +46,18 @@ void arch_trigger_all_cpu_backtrace(void)
 
 	cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
 
-	printk(KERN_INFO "sending NMI to all CPUs:\n");
-	apic->send_IPI_all(NMI_VECTOR);
+	if (include_self) {
+		printk(KERN_INFO "sending NMI to all CPUs:\n");
+		apic->send_IPI_all(NMI_VECTOR);
+	} else {
+		cpumask_clear_cpu(smp_processor_id(),
+			to_cpumask(backtrace_mask));
+
+		if (!cpumask_empty(to_cpumask(backtrace_mask))) {
+			printk(KERN_INFO "sending NMI to other CPUs:\n");
+			apic->send_IPI_allbutself(NMI_VECTOR);
+		}
+	}
 
 	/* Wait for up to 10 seconds for all CPUs to do the backtrace */
 	for (i = 0; i < 10 * 1000; i++) {
-- 
1.7.1
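[Editor's note] The tail of arch_trigger_all_cpu_backtrace() (the hunk cut off above) waits for each targeted CPU to clear its own bit from backtrace_mask after printing its trace. A minimal userspace model of that bounded poll, with hypothetical names and the kernel's mdelay(1) sleep omitted:

```c
#include <stdbool.h>

/* Model of the bounded wait at the end of arch_trigger_all_cpu_backtrace():
 * each CPU clears its own bit once it has printed its trace; the initiating
 * CPU polls until the mask drains or it gives up.  The kernel iterates
 * 10 * 1000 times with mdelay(1) between iterations, i.e. up to ~10 s. */
static bool wait_for_mask_drain(const volatile unsigned long *mask,
                                int max_iters)
{
	int i;

	for (i = 0; i < max_iters; i++) {
		if (*mask == 0)
			return true;   /* every targeted CPU finished its dump */
		/* the kernel sleeps mdelay(1) here; omitted in this model */
	}
	return false;              /* timed out: some CPU never responded */
}
```

The timeout matters because a CPU wedged with interrupts disabled in a way that even blocks the NMI path would otherwise stall the reporter forever.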



* [PATCH 3/4 v2] sparc64, nmi: Add more flexible NMI back trace support
  2014-04-04 20:47 ` Don Zickus
@ 2014-04-04 20:47   ` Don Zickus
  -1 siblings, 0 replies; 12+ messages in thread
From: Don Zickus @ 2014-04-04 20:47 UTC (permalink / raw)
  To: LKML; +Cc: akpm, x86, davem, sparclinux, mguzik, Aaron Tomlin, Don Zickus

From: Aaron Tomlin <atomlin@redhat.com>

This patch introduces the sparc specific implementation
changes to the arch_trigger_all_cpu_backtrace() routine.
Now users have the ability to choose whether or not to
issue an NMI back trace which includes current.

Update sysrq_handle_globreg() to use the new interface

[squash two sparc patches, update changelog - dcz]

Signed-off-by: Aaron Tomlin <atomlin@redhat.com>
Signed-off-by: Don Zickus <dzickus@redhat.com>
---
 arch/sparc/include/asm/irq_64.h |    2 +-
 arch/sparc/kernel/process_64.c  |   14 +++++++++-----
 2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/sparc/include/asm/irq_64.h b/arch/sparc/include/asm/irq_64.h
index abf6afe..4f072b9 100644
--- a/arch/sparc/include/asm/irq_64.h
+++ b/arch/sparc/include/asm/irq_64.h
@@ -89,7 +89,7 @@ static inline unsigned long get_softint(void)
 	return retval;
 }
 
-void arch_trigger_all_cpu_backtrace(void);
+void arch_trigger_all_cpu_backtrace(bool);
 #define arch_trigger_all_cpu_backtrace arch_trigger_all_cpu_backtrace
 
 extern void *hardirq_stack[NR_CPUS];
diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c
index 32a280e..3d61b98 100644
--- a/arch/sparc/kernel/process_64.c
+++ b/arch/sparc/kernel/process_64.c
@@ -237,7 +237,7 @@ static void __global_reg_poll(struct global_reg_snapshot *gp)
 	}
 }
 
-void arch_trigger_all_cpu_backtrace(void)
+void arch_trigger_all_cpu_backtrace(bool include_self)
 {
 	struct thread_info *tp = current_thread_info();
 	struct pt_regs *regs = get_irq_regs();
@@ -249,15 +249,19 @@ void arch_trigger_all_cpu_backtrace(void)
 
 	spin_lock_irqsave(&global_cpu_snapshot_lock, flags);
 
-	memset(global_cpu_snapshot, 0, sizeof(global_cpu_snapshot));
-
 	this_cpu = raw_smp_processor_id();
 
-	__global_reg_self(tp, regs, this_cpu);
+	memset(global_cpu_snapshot, 0, sizeof(global_cpu_snapshot));
+
+	if (include_self)
+		__global_reg_self(tp, regs, this_cpu);
 
 	smp_fetch_global_regs();
 
 	for_each_online_cpu(cpu) {
+		if (!include_self && cpu == this_cpu)
+			continue;
+
 		struct global_reg_snapshot *gp = &global_cpu_snapshot[cpu].reg;
 
 		__global_reg_poll(gp);
@@ -290,7 +294,7 @@ void arch_trigger_all_cpu_backtrace(void)
 
 static void sysrq_handle_globreg(int key)
 {
-	arch_trigger_all_cpu_backtrace();
+	arch_trigger_all_cpu_backtrace(true);
 }
 
 static struct sysrq_key_op sparc_globalreg_op = {
-- 
1.7.1



* [PATCH 4/4 v2] watchdog: Printing traces for all cpus on lockup detection
  2014-04-04 20:47 ` Don Zickus
@ 2014-04-04 20:47   ` Don Zickus
  -1 siblings, 0 replies; 12+ messages in thread
From: Don Zickus @ 2014-04-04 20:47 UTC (permalink / raw)
  To: LKML; +Cc: akpm, x86, davem, sparclinux, mguzik, Aaron Tomlin, Don Zickus

From: Aaron Tomlin <atomlin@redhat.com>

A 'softlockup' is defined as a bug that causes the kernel to
loop in kernel mode for more than a predefined period of
time, without giving other tasks a chance to run.

Currently, upon detection of this condition by the per-cpu
watchdog task, debug information (including a stack trace)
is sent to the system log.

On some occasions, we have observed that the "victim" rather
than the actual "culprit" (i.e. the owner/holder of the
contended resource) is reported to the user. That information
alone has often proven insufficient to assist debugging
efforts.

To avoid loss of useful debug information, for architectures
which support NMI, this patch makes it possible to improve
soft lockup reporting. This is accomplished by issuing an
NMI to each cpu to obtain a stack trace.

If NMI is not supported we simply fall back to the old method.
A sysctl and a boot-time parameter are available to toggle
this feature.

V2: review cleanups, added arch_trigger_allbutself patches

Suggested-by: Mateusz Guzik <mguzik@redhat.com>
Signed-off-by: Aaron Tomlin <atomlin@redhat.com>
Signed-off-by: Don Zickus <dzickus@redhat.com>
---
 Documentation/kernel-parameters.txt |    5 +++++
 Documentation/sysctl/kernel.txt     |   17 +++++++++++++++++
 include/linux/nmi.h                 |    1 +
 kernel/sysctl.c                     |    9 +++++++++
 kernel/watchdog.c                   |   32 ++++++++++++++++++++++++++++++++
 5 files changed, 64 insertions(+), 0 deletions(-)

diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 7116fda..80f2a21 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -3047,6 +3047,11 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			[KNL] Should the soft-lockup detector generate panics.
 			Format: <integer>
 
+	softlockup_all_cpu_backtrace=
+			[KNL] Should the soft-lockup detector generate
+			backtraces on all cpus.
+			Format: <integer>
+
 	sonypi.*=	[HW] Sony Programmable I/O Control Device driver
 			See Documentation/laptops/sonypi.txt
 
diff --git a/Documentation/sysctl/kernel.txt b/Documentation/sysctl/kernel.txt
index e55124e..b6873b2 100644
--- a/Documentation/sysctl/kernel.txt
+++ b/Documentation/sysctl/kernel.txt
@@ -75,6 +75,7 @@ show up in /proc/sys/kernel:
 - shmall
 - shmmax                      [ sysv ipc ]
 - shmmni
+- softlockup_all_cpu_backtrace
 - stop-a                      [ SPARC only ]
 - sysrq                       ==> Documentation/sysrq.txt
 - tainted
@@ -768,6 +769,22 @@ without users and with a dead originative process will be destroyed.
 
 ==============================================================
 
+softlockup_all_cpu_backtrace:
+
+This value controls the soft lockup detector thread's behavior
+when a soft lockup condition is detected as to whether or not
+to gather further debug information. If enabled, each cpu will
+be issued an NMI and instructed to capture a stack trace.
+
+This feature is only applicable for architectures which support
+NMI.
+
+0: do nothing. This is the default behavior.
+
+1: on detection capture more debug information.
+
+==============================================================
+
 tainted:
 
 Non-zero if the kernel has been tainted.  Numeric values, which
diff --git a/include/linux/nmi.h b/include/linux/nmi.h
index a17ab63..447775e 100644
--- a/include/linux/nmi.h
+++ b/include/linux/nmi.h
@@ -57,6 +57,7 @@ int hw_nmi_is_cpu_stuck(struct pt_regs *);
 u64 hw_nmi_get_sample_period(int watchdog_thresh);
 extern int watchdog_user_enabled;
 extern int watchdog_thresh;
+extern int sysctl_softlockup_all_cpu_backtrace;
 struct ctl_table;
 extern int proc_dowatchdog(struct ctl_table *, int ,
 			   void __user *, size_t *, loff_t *);
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 49e13e1..e3e84f1 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -855,6 +855,15 @@ static struct ctl_table kern_table[] = {
 		.extra2		= &one,
 	},
 	{
+		.procname	= "softlockup_all_cpu_backtrace",
+		.data		= &sysctl_softlockup_all_cpu_backtrace,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec_minmax,
+		.extra1		= &zero,
+		.extra2		= &one,
+	},
+	{
 		.procname       = "nmi_watchdog",
 		.data           = &watchdog_user_enabled,
 		.maxlen         = sizeof (int),
diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index 4431610..9e661de 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -31,6 +31,7 @@
 
 int watchdog_user_enabled = 1;
 int __read_mostly watchdog_thresh = 10;
+int __read_mostly sysctl_softlockup_all_cpu_backtrace;
 static int __read_mostly watchdog_running;
 static u64 __read_mostly sample_period;
 
@@ -47,6 +48,7 @@ static DEFINE_PER_CPU(bool, watchdog_nmi_touch);
 static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts_saved);
 static DEFINE_PER_CPU(struct perf_event *, watchdog_ev);
 #endif
+static unsigned long soft_lockup_nmi_warn;
 
 /* boot commands */
 /*
@@ -95,6 +97,13 @@ static int __init nosoftlockup_setup(char *str)
 }
 __setup("nosoftlockup", nosoftlockup_setup);
 /*  */
+static int __init softlockup_all_cpu_backtrace_setup(char *str)
+{
+	sysctl_softlockup_all_cpu_backtrace =
+		!!simple_strtol(str, NULL, 0);
+	return 1;
+}
+__setup("softlockup_all_cpu_backtrace=", softlockup_all_cpu_backtrace_setup);
 
 /*
  * Hard-lockup warnings should be triggered after just a few seconds. Soft-
@@ -267,6 +276,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
 	unsigned long touch_ts = __this_cpu_read(watchdog_touch_ts);
 	struct pt_regs *regs = get_irq_regs();
 	int duration;
+	int softlockup_all_cpu_backtrace = sysctl_softlockup_all_cpu_backtrace;
 
 	/* kick the hardlockup detector */
 	watchdog_interrupt_count();
@@ -313,6 +323,17 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
 		if (__this_cpu_read(soft_watchdog_warn) == true)
 			return HRTIMER_RESTART;
 
+		if (softlockup_all_cpu_backtrace) {
+			/* Prevent multiple soft-lockup reports if one cpu is already
+			 * engaged in dumping cpu back traces
+			 */
+			if (test_and_set_bit(0, &soft_lockup_nmi_warn)) {
+				/* Someone else will report us. Let's give up */
+				__this_cpu_write(soft_watchdog_warn, true);
+				return HRTIMER_RESTART;
+			}
+		}
+
 		printk(KERN_EMERG "BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n",
 			smp_processor_id(), duration,
 			current->comm, task_pid_nr(current));
@@ -323,6 +344,17 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
 		else
 			dump_stack();
 
+		if (softlockup_all_cpu_backtrace) {
+			/* Avoid generating two back traces for current
+			 * given that one is already made above
+			 */
+			trigger_allbutself_cpu_backtrace();
+
+			clear_bit(0, &soft_lockup_nmi_warn);
+			/* Barrier to sync with other cpus */
+			smp_mb__after_clear_bit();
+		}
+
 		if (softlockup_panic)
 			panic("softlockup: hung tasks");
 		__this_cpu_write(soft_watchdog_warn, true);
-- 
1.7.1
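[Editor's note] The serialization around soft_lockup_nmi_warn is an ordinary test-and-set gate: the first CPU to trip the lockup claims the right to dump all back traces, and any CPU that loses the race suppresses its own report until the bit is cleared. A userspace analogue with C11 atomics (hypothetical names; the kernel uses test_and_set_bit()/clear_bit() plus an explicit smp_mb__after_clear_bit(), whereas seq_cst atomics fold the barrier into the operation):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Userspace analogue of the soft_lockup_nmi_warn gate.  Zero-initialized,
 * like the kernel's static unsigned long. */
static atomic_ulong soft_lockup_nmi_warn_model;

/* Mirrors test_and_set_bit(0, &soft_lockup_nmi_warn): atomically set bit 0
 * and report whether it was already set (i.e. someone else is reporting). */
static bool reporter_already_active(void)
{
	return (atomic_fetch_or(&soft_lockup_nmi_warn_model, 1UL) & 1UL) != 0;
}

/* Mirrors clear_bit(0, ...) followed by the barrier: reopen the gate. */
static void reporter_done(void)
{
	atomic_fetch_and(&soft_lockup_nmi_warn_model, ~1UL);
}
```

This is why a CPU that loses the race sets its own soft_watchdog_warn and returns: its traces will appear anyway in the all-CPU dump issued by the winner.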



* Re: [PATCH 3/4 v2] sparc64, nmi: Add more flexible NMI back trace support
  2014-04-04 20:47   ` Don Zickus
@ 2014-04-04 20:53     ` David Miller
  -1 siblings, 0 replies; 12+ messages in thread
From: David Miller @ 2014-04-04 20:53 UTC (permalink / raw)
  To: dzickus; +Cc: linux-kernel, akpm, x86, sparclinux, mguzik, atomlin

From: Don Zickus <dzickus@redhat.com>
Date: Fri,  4 Apr 2014 16:47:09 -0400

> From: Aaron Tomlin <atomlin@redhat.com>
> 
> This patch introduces the sparc specific implementation
> changes to the arch_trigger_all_cpu_backtrace() routine.
> Now users have the ability to choose whether or not to
> issue an NMI back trace which includes current.
> 
> Update sysrq_handle_globreg() to use the new interface
> 
> [squash two sparc patches, update changelog - dcz]
> 
> Signed-off-by: Aaron Tomlin <atomlin@redhat.com>
> Signed-off-by: Don Zickus <dzickus@redhat.com>

Acked-by: David S. Miller <davem@davemloft.net>


end of thread, other threads:[~2014-04-04 20:53 UTC | newest]

Thread overview: 12+ messages
-- links below jump to the message on this page --
2014-04-04 20:47 [PATCH 0/4 V2] Print traces on softlockups Don Zickus
2014-04-04 20:47 ` [PATCH 1/4 v2] nmi: Provide the option to issue an NMI back trace to every cpu but current Don Zickus
2014-04-04 20:47 ` [PATCH 2/4 v2] x86, nmi: Add more flexible NMI back trace support Don Zickus
2014-04-04 20:47 ` [PATCH 3/4 v2] sparc64, " Don Zickus
2014-04-04 20:53   ` David Miller
2014-04-04 20:47 ` [PATCH 4/4 v2] watchdog: Printing traces for all cpus on lockup detection Don Zickus
