From: Petr Mladek <pmladek@suse.cz>
To: Steven Rostedt <rostedt@goodmis.org>
Cc: linux-kernel@vger.kernel.org, Ingo Molnar <mingo@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Jiri Kosina <jkosina@suse.cz>, "H. Peter Anvin" <hpa@zytor.com>,
Thomas Gleixner <tglx@linutronix.de>,
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Subject: Re: [RFC][PATCH 12/12 v3] x86/nmi: Perform a safe NMI stack trace on all CPUs
Date: Thu, 6 Nov 2014 19:41:55 +0100 [thread overview]
Message-ID: <20141106184155.GB28294@dhcp128.suse.cz> (raw)
In-Reply-To: <20141104160223.310714394@goodmis.org>
On Tue 2014-11-04 10:52:49, Steven Rostedt wrote:
> From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>
>
> When trigger_all_cpu_backtrace() is called on x86, it will trigger an
> NMI on each CPU and call show_regs(). But this can lead to a hard lock
> up if the NMI comes in on another printk().
>
> In order to avoid this, when the NMI triggers, it switches the printk
> routine for that CPU to call an NMI-safe printk function that records the
> printk in a per_cpu seq_buf descriptor. After all NMIs have finished
> recording their data, the seq_bufs are printed in a safe context.
>
> Link: http://lkml.kernel.org/p/20140619213952.360076309@goodmis.org
>
> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
> ---
> arch/x86/kernel/apic/hw_nmi.c | 90 ++++++++++++++++++++++++++++++++++++++++---
> 1 file changed, 85 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
> index 6a1e71bde323..6e7bb0bc6fcd 100644
> --- a/arch/x86/kernel/apic/hw_nmi.c
> +++ b/arch/x86/kernel/apic/hw_nmi.c
> @@ -18,6 +18,7 @@
> #include <linux/nmi.h>
> #include <linux/module.h>
> #include <linux/delay.h>
> +#include <linux/seq_buf.h>
>
> #ifdef CONFIG_HARDLOCKUP_DETECTOR
> u64 hw_nmi_get_sample_period(int watchdog_thresh)
> @@ -29,14 +30,35 @@ u64 hw_nmi_get_sample_period(int watchdog_thresh)
> #ifdef arch_trigger_all_cpu_backtrace
> /* For reliability, we're prepared to waste bits here. */
> static DECLARE_BITMAP(backtrace_mask, NR_CPUS) __read_mostly;
> +static cpumask_var_t printtrace_mask;
> +
> +#define NMI_BUF_SIZE 4096
> +
> +struct nmi_seq_buf {
> + unsigned char buffer[NMI_BUF_SIZE];
> + struct seq_buf seq;
> +};
> +
> +/* Safe printing in NMI context */
> +static DEFINE_PER_CPU(struct nmi_seq_buf, nmi_print_seq);
>
> /* "in progress" flag of arch_trigger_all_cpu_backtrace */
> static unsigned long backtrace_flag;
>
> +static void print_seq_line(struct nmi_seq_buf *s, int last, int pos)
I would rename the arguments:
	"last" -> "first"
	"pos"  -> "last"
or, maybe better, pass the first position and the length.
> +{
> + const char *buf = s->buffer + last;
> +
> + printk("%.*s", (pos - last) + 1, buf);
> +}
> +
> void arch_trigger_all_cpu_backtrace(bool include_self)
> {
> + struct nmi_seq_buf *s;
> + int len;
> + int cpu;
> int i;
> - int cpu = get_cpu();
> + int this_cpu = get_cpu();
>
> if (test_and_set_bit(0, &backtrace_flag)) {
> /*
> @@ -49,7 +71,17 @@ void arch_trigger_all_cpu_backtrace(bool include_self)
>
> cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
> if (!include_self)
> - cpumask_clear_cpu(cpu, to_cpumask(backtrace_mask));
> + cpumask_clear_cpu(this_cpu, to_cpumask(backtrace_mask));
> +
> + cpumask_copy(printtrace_mask, to_cpumask(backtrace_mask));
> + /*
> + * Set up per_cpu seq_buf buffers that the NMIs running on the other
> + * CPUs will write to.
> + */
> + for_each_cpu(cpu, to_cpumask(backtrace_mask)) {
> + s = &per_cpu(nmi_print_seq, cpu);
> + seq_buf_init(&s->seq, s->buffer, NMI_BUF_SIZE);
> + }
>
> if (!cpumask_empty(to_cpumask(backtrace_mask))) {
> pr_info("sending NMI to %s CPUs:\n",
> @@ -65,11 +97,57 @@ void arch_trigger_all_cpu_backtrace(bool include_self)
> touch_softlockup_watchdog();
> }
>
> + /*
> + * Now that all the NMIs have triggered, we can dump out their
> + * back traces safely to the console.
> + */
> + for_each_cpu(cpu, printtrace_mask) {
> + int last_i = 0;
> +
> + s = &per_cpu(nmi_print_seq, cpu);
> + len = s->seq.len;
If there is a seq_buf overflow, the len might be size + 1, so we need to do:
	len = min(s->seq.len, s->seq.size);
Well, we should create a function for this in seq_buf.h.
Alternatively, we might reconsider the overflow state and
use len == size plus an extra "overflow" flag in the seq_buf struct.
> + if (!len)
> + continue;
> +
> + /* Print line by line. */
> + for (i = 0; i < len; i++) {
> + if (s->buffer[i] == '\n') {
> + print_seq_line(s, last_i, i);
> + last_i = i + 1;
> + }
> + }
>
> + if (last_i < i - 1) {
IMHO, this should be:
	if (last_i < i)
because last_i = i + 1 after each newline. Otherwise, we would ignore the
case where there is exactly one character after the last newline. For
example, imagine the following:
	buffer = "a\nb";
	len = 3;
It will end with:
	last_i = 2;
	i = 3;
and we still need to print the "b".
> + print_seq_line(s, last_i, i);
If I get it correctly, (i == len) here, and print_seq_line() prints the
characters up to and including the "pos" value. So, we should call:
	print_seq_line(s, last_i, i - 1)
> + pr_cont("\n");
> + }
> + }
> +
I hope that I have got it right. It is getting late here and I am too
tired to see the off-by-one problems clearly ;-)
Best Regards,
Petr