* [PATCH] x86: avoid calling arch_trigger_all_cpu_backtrace() at the same time on SMP
@ 2010-10-11 10:31 Dongdong Deng
2010-10-18 11:00 ` DDD
0 siblings, 1 reply; 15+ messages in thread
From: Dongdong Deng @ 2010-10-11 10:31 UTC (permalink / raw)
To: tglx, mingo, hpa; +Cc: x86, linux-kernel, dongdong.deng, bruce.ashfield
The spin_lock_debug/rcu_cpu_stall detectors use
trigger_all_cpu_backtrace() to dump CPU backtraces.
It is therefore possible for trigger_all_cpu_backtrace()
to be called at the same time on different CPUs, which
triggers an 'unknown reason NMI' warning. The following case
illustrates the problem:
CPU1 CPU2 ... CPU N
trigger_all_cpu_backtrace()
set "backtrace_mask" to cpu mask
|
generate NMI interrupts generate NMI interrupts ...
\ | /
\ | /
The "backtrace_mask" will be cleared by the first NMI interrupt
at nmi_watchdog_tick(), then the following NMI interrupts generated
by other CPUs' arch_trigger_all_cpu_backtrace() will be taken as
unknown reason NMI interrupts.
This patch uses a lock to avoid the problem, stopping the
arch_trigger_all_cpu_backtrace() call so that duplicate CPU
backtrace info is not dumped when there is already a
trigger_all_cpu_backtrace() in progress.
Signed-off-by: Dongdong Deng <dongdong.deng@windriver.com>
Reviewed-by: Bruce Ashfield <bruce.ashfield@windriver.com>
CC: Thomas Gleixner <tglx@linutronix.de>
CC: Ingo Molnar <mingo@redhat.com>
CC: "H. Peter Anvin" <hpa@zytor.com>
CC: x86@kernel.org
CC: linux-kernel@vger.kernel.org
---
arch/x86/kernel/apic/hw_nmi.c | 14 ++++++++++++++
arch/x86/kernel/apic/nmi.c | 14 ++++++++++++++
2 files changed, 28 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
index cefd694..3aea0a5 100644
--- a/arch/x86/kernel/apic/hw_nmi.c
+++ b/arch/x86/kernel/apic/hw_nmi.c
@@ -29,6 +29,16 @@ u64 hw_nmi_get_sample_period(void)
void arch_trigger_all_cpu_backtrace(void)
{
int i;
+ static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
+ unsigned long flags;
+
+ local_irq_save(flags);
+ if (!arch_spin_trylock(&lock))
+ /*
+ * If there is already a trigger_all_cpu_backtrace()
+ * in progress, don't output double cpu dump infos.
+ */
+ goto out_restore_irq;
cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
@@ -41,6 +51,10 @@ void arch_trigger_all_cpu_backtrace(void)
break;
mdelay(1);
}
+
+ arch_spin_unlock(&lock);
+out_restore_irq:
+ local_irq_restore(flags);
}
static int __kprobes
diff --git a/arch/x86/kernel/apic/nmi.c b/arch/x86/kernel/apic/nmi.c
index a43f71c..5fa8a13 100644
--- a/arch/x86/kernel/apic/nmi.c
+++ b/arch/x86/kernel/apic/nmi.c
@@ -552,6 +552,16 @@ int do_nmi_callback(struct pt_regs *regs, int cpu)
void arch_trigger_all_cpu_backtrace(void)
{
int i;
+ static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
+ unsigned long flags;
+
+ local_irq_save(flags);
+ if (!arch_spin_trylock(&lock))
+ /*
+ * If there is already a trigger_all_cpu_backtrace()
+ * in progress, don't output double cpu dump infos.
+ */
+ goto out_restore_irq;
cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
@@ -564,4 +574,8 @@ void arch_trigger_all_cpu_backtrace(void)
break;
mdelay(1);
}
+
+ arch_spin_unlock(&lock);
+out_restore_irq:
+ local_irq_restore(flags);
}
--
1.6.0.4
* Re: [PATCH] x86: avoid calling arch_trigger_all_cpu_backtrace() at the same time on SMP
2010-10-11 10:31 [PATCH] x86: avoid calling arch_trigger_all_cpu_backtrace() at the same time on SMP Dongdong Deng
@ 2010-10-18 11:00 ` DDD
2010-10-18 18:03 ` Don Zickus
0 siblings, 1 reply; 15+ messages in thread
From: DDD @ 2010-10-18 11:00 UTC (permalink / raw)
To: tglx, hpa, mingo; +Cc: Dongdong Deng, x86, linux-kernel, bruce.ashfield
CC'ing Ingo's mingo@elte.hu address and adding some code explanations for this patch.
Dongdong
Dongdong Deng wrote:
> The spin_lock_debug/rcu_cpu_stall detectors use
> trigger_all_cpu_backtrace() to dump CPU backtraces.
> It is therefore possible for trigger_all_cpu_backtrace()
> to be called at the same time on different CPUs, which
> triggers an 'unknown reason NMI' warning. The following case
> illustrates the problem:
>
> CPU1 CPU2 ... CPU N
> trigger_all_cpu_backtrace()
> set "backtrace_mask" to cpu mask
> |
> generate NMI interrupts generate NMI interrupts ...
> \ | /
> \ | /
> The "backtrace_mask" will be cleared by the first NMI interrupt
> at nmi_watchdog_tick(), then the following NMI interrupts generated
> by other CPUs' arch_trigger_all_cpu_backtrace() will be taken as
> unknown reason NMI interrupts.
>
> This patch uses a lock to avoid the problem, stopping the
> arch_trigger_all_cpu_backtrace() call so that duplicate CPU
> backtrace info is not dumped when there is already a
> trigger_all_cpu_backtrace() in progress.
>
> Signed-off-by: Dongdong Deng <dongdong.deng@windriver.com>
> Reviewed-by: Bruce Ashfield <bruce.ashfield@windriver.com>
> CC: Thomas Gleixner <tglx@linutronix.de>
> CC: Ingo Molnar <mingo@redhat.com>
> CC: "H. Peter Anvin" <hpa@zytor.com>
> CC: x86@kernel.org
> CC: linux-kernel@vger.kernel.org
> ---
> arch/x86/kernel/apic/hw_nmi.c | 14 ++++++++++++++
> arch/x86/kernel/apic/nmi.c | 14 ++++++++++++++
> 2 files changed, 28 insertions(+), 0 deletions(-)
>
> diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
> index cefd694..3aea0a5 100644
> --- a/arch/x86/kernel/apic/hw_nmi.c
> +++ b/arch/x86/kernel/apic/hw_nmi.c
> @@ -29,6 +29,16 @@ u64 hw_nmi_get_sample_period(void)
> void arch_trigger_all_cpu_backtrace(void)
> {
> int i;
> + static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
Why an arch spin lock vs just using a raw spin lock?
For example: static DEFINE_RAW_SPINLOCK(lock);
The spin_lock_debug detector is used by raw_spinlock too.
arch_trigger_all_cpu_backtrace() -->
raw_spin_lock(lock) -->
_raw_spin_lock(lock) -->
__raw_spin_lock(lock) -->
do_raw_spin_lock(lock) -->
__spin_lock_debug(lock) -->
trigger_all_cpu_backtrace()
Therefore, we have to use an arch spin lock here.
> + unsigned long flags;
> +
> + local_irq_save(flags);
Why do the irqs have to be saved here?
When arch_trigger_all_cpu_backtrace() is triggered by
"spin_lock()"'s spin_lock_debug detector, it is possible that
irqs are enabled, thus we have to save and disable them here.
> + if (!arch_spin_trylock(&lock))
> + /*
> + * If there is already a trigger_all_cpu_backtrace()
> + * in progress, don't output double cpu dump infos.
> + */
> + goto out_restore_irq;
>
> cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
>
> @@ -41,6 +51,10 @@ void arch_trigger_all_cpu_backtrace(void)
> break;
> mdelay(1);
> }
> +
> + arch_spin_unlock(&lock);
> +out_restore_irq:
> + local_irq_restore(flags);
> }
>
> static int __kprobes
> diff --git a/arch/x86/kernel/apic/nmi.c b/arch/x86/kernel/apic/nmi.c
> index a43f71c..5fa8a13 100644
> --- a/arch/x86/kernel/apic/nmi.c
> +++ b/arch/x86/kernel/apic/nmi.c
> @@ -552,6 +552,16 @@ int do_nmi_callback(struct pt_regs *regs, int cpu)
> void arch_trigger_all_cpu_backtrace(void)
> {
> int i;
> + static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
> + unsigned long flags;
> +
> + local_irq_save(flags);
> + if (!arch_spin_trylock(&lock))
> + /*
> + * If there is already a trigger_all_cpu_backtrace()
> + * in progress, don't output double cpu dump infos.
> + */
> + goto out_restore_irq;
>
> cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
>
> @@ -564,4 +574,8 @@ void arch_trigger_all_cpu_backtrace(void)
> break;
> mdelay(1);
> }
> +
> + arch_spin_unlock(&lock);
> +out_restore_irq:
> + local_irq_restore(flags);
> }
* Re: [PATCH] x86: avoid calling arch_trigger_all_cpu_backtrace() at the same time on SMP
2010-10-18 11:00 ` DDD
@ 2010-10-18 18:03 ` Don Zickus
2010-10-21 5:17 ` DDD
0 siblings, 1 reply; 15+ messages in thread
From: Don Zickus @ 2010-10-18 18:03 UTC (permalink / raw)
To: DDD; +Cc: tglx, hpa, mingo, x86, linux-kernel, bruce.ashfield
On Mon, Oct 18, 2010 at 07:00:15PM +0800, DDD wrote:
> CC'ing Ingo's mingo@elte.hu address and adding some code explanations for this patch.
>
> Dongdong
>
>
> Dongdong Deng wrote:
> >The spin_lock_debug/rcu_cpu_stall detectors use
> >trigger_all_cpu_backtrace() to dump CPU backtraces.
> >It is therefore possible for trigger_all_cpu_backtrace()
> >to be called at the same time on different CPUs, which
> >triggers an 'unknown reason NMI' warning. The following case
> >illustrates the problem:
The patch seems reasonable. I can queue it up. Ingo wrote the original
code and knows which spin locks to use better than me, perhaps he can find
a moment to comment.
Cheers,
Don
> >
> > CPU1 CPU2 ... CPU N
> > trigger_all_cpu_backtrace()
> > set "backtrace_mask" to cpu mask
> > |
> >generate NMI interrupts generate NMI interrupts ...
> > \ | /
> > \ | /
> > The "backtrace_mask" will be cleared by the first NMI interrupt
> > at nmi_watchdog_tick(), then the following NMI interrupts generated
> >by other CPUs' arch_trigger_all_cpu_backtrace() will be taken as
> >unknown reason NMI interrupts.
> >
> >This patch uses a lock to avoid the problem, stopping the
> >arch_trigger_all_cpu_backtrace() call so that duplicate CPU
> >backtrace info is not dumped when there is already a
> >trigger_all_cpu_backtrace() in progress.
> >
> >Signed-off-by: Dongdong Deng <dongdong.deng@windriver.com>
> >Reviewed-by: Bruce Ashfield <bruce.ashfield@windriver.com>
> >CC: Thomas Gleixner <tglx@linutronix.de>
> >CC: Ingo Molnar <mingo@redhat.com>
> >CC: "H. Peter Anvin" <hpa@zytor.com>
> >CC: x86@kernel.org
> >CC: linux-kernel@vger.kernel.org
> >---
> > arch/x86/kernel/apic/hw_nmi.c | 14 ++++++++++++++
> > arch/x86/kernel/apic/nmi.c | 14 ++++++++++++++
> > 2 files changed, 28 insertions(+), 0 deletions(-)
> >
> >diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
> >index cefd694..3aea0a5 100644
> >--- a/arch/x86/kernel/apic/hw_nmi.c
> >+++ b/arch/x86/kernel/apic/hw_nmi.c
> >@@ -29,6 +29,16 @@ u64 hw_nmi_get_sample_period(void)
> > void arch_trigger_all_cpu_backtrace(void)
> > {
> > int i;
> >+ static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
>
> Why an arch spin lock vs just using a raw spin lock?
> For example: static DEFINE_RAW_SPINLOCK(lock);
>
> The spin_lock_debug detector is used by raw_spinlock too.
>
> arch_trigger_all_cpu_backtrace() -->
> raw_spin_lock(lock) -->
> _raw_spin_lock(lock) -->
> __raw_spin_lock(lock) -->
> do_raw_spin_lock(lock) -->
> __spin_lock_debug(lock) -->
> trigger_all_cpu_backtrace()
>
> Therefore, we have to use an arch spin lock here.
>
>
>
> >+ unsigned long flags;
> >+
> >+ local_irq_save(flags);
>
> Why do the irqs have to be saved here?
>
> When arch_trigger_all_cpu_backtrace() is triggered by
> "spin_lock()"'s spin_lock_debug detector, it is possible that
> irqs are enabled, thus we have to save and disable them here.
>
>
>
> >+ if (!arch_spin_trylock(&lock))
> >+ /*
> >+ * If there is already a trigger_all_cpu_backtrace()
> >+ * in progress, don't output double cpu dump infos.
> >+ */
> >+ goto out_restore_irq;
> > cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
> >@@ -41,6 +51,10 @@ void arch_trigger_all_cpu_backtrace(void)
> > break;
> > mdelay(1);
> > }
> >+
> >+ arch_spin_unlock(&lock);
> >+out_restore_irq:
> >+ local_irq_restore(flags);
> > }
> > static int __kprobes
> >diff --git a/arch/x86/kernel/apic/nmi.c b/arch/x86/kernel/apic/nmi.c
> >index a43f71c..5fa8a13 100644
> >--- a/arch/x86/kernel/apic/nmi.c
> >+++ b/arch/x86/kernel/apic/nmi.c
> >@@ -552,6 +552,16 @@ int do_nmi_callback(struct pt_regs *regs, int cpu)
> > void arch_trigger_all_cpu_backtrace(void)
> > {
> > int i;
> >+ static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
> >+ unsigned long flags;
> >+
> >+ local_irq_save(flags);
> >+ if (!arch_spin_trylock(&lock))
> >+ /*
> >+ * If there is already a trigger_all_cpu_backtrace()
> >+ * in progress, don't output double cpu dump infos.
> >+ */
> >+ goto out_restore_irq;
> > cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
> >@@ -564,4 +574,8 @@ void arch_trigger_all_cpu_backtrace(void)
> > break;
> > mdelay(1);
> > }
> >+
> >+ arch_spin_unlock(&lock);
> >+out_restore_irq:
> >+ local_irq_restore(flags);
> > }
>
>
>
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
* Re: [PATCH] x86: avoid calling arch_trigger_all_cpu_backtrace() at the same time on SMP
2010-10-18 18:03 ` Don Zickus
@ 2010-10-21 5:17 ` DDD
0 siblings, 0 replies; 15+ messages in thread
From: DDD @ 2010-10-21 5:17 UTC (permalink / raw)
To: Don Zickus; +Cc: tglx, hpa, mingo, x86, linux-kernel, bruce.ashfield
Don Zickus wrote:
> On Mon, Oct 18, 2010 at 07:00:15PM +0800, DDD wrote:
>> CC'ing Ingo's mingo@elte.hu address and adding some code explanations for this patch.
>>
>> Dongdong
>>
>>
>> Dongdong Deng wrote:
>>> The spin_lock_debug/rcu_cpu_stall detectors use
>>> trigger_all_cpu_backtrace() to dump CPU backtraces.
>>> It is therefore possible for trigger_all_cpu_backtrace()
>>> to be called at the same time on different CPUs, which
>>> triggers an 'unknown reason NMI' warning. The following case
>>> illustrates the problem:
>
> The patch seems reasonable. I can queue it up. Ingo wrote the original
> code and knows which spin locks to use better than me, perhaps he can find
> a moment to comment.
Hello Don,
Thanks for taking care and queuing this patch. :-)
Dongdong
>
> Cheers,
> Don
>
>>> CPU1 CPU2 ... CPU N
>>> trigger_all_cpu_backtrace()
>>> set "backtrace_mask" to cpu mask
>>> |
>>> generate NMI interrupts generate NMI interrupts ...
>>> \ | /
>>> \ | /
>>> The "backtrace_mask" will be cleared by the first NMI interrupt
>>> at nmi_watchdog_tick(), then the following NMI interrupts generated
>>> by other CPUs' arch_trigger_all_cpu_backtrace() will be taken as
>>> unknown reason NMI interrupts.
>>>
>>> This patch uses a lock to avoid the problem, stopping the
>>> arch_trigger_all_cpu_backtrace() call so that duplicate CPU
>>> backtrace info is not dumped when there is already a
>>> trigger_all_cpu_backtrace() in progress.
>>>
>>> Signed-off-by: Dongdong Deng <dongdong.deng@windriver.com>
>>> Reviewed-by: Bruce Ashfield <bruce.ashfield@windriver.com>
>>> CC: Thomas Gleixner <tglx@linutronix.de>
>>> CC: Ingo Molnar <mingo@redhat.com>
>>> CC: "H. Peter Anvin" <hpa@zytor.com>
>>> CC: x86@kernel.org
>>> CC: linux-kernel@vger.kernel.org
>>> ---
>>> arch/x86/kernel/apic/hw_nmi.c | 14 ++++++++++++++
>>> arch/x86/kernel/apic/nmi.c | 14 ++++++++++++++
>>> 2 files changed, 28 insertions(+), 0 deletions(-)
>>>
>>> diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
>>> index cefd694..3aea0a5 100644
>>> --- a/arch/x86/kernel/apic/hw_nmi.c
>>> +++ b/arch/x86/kernel/apic/hw_nmi.c
>>> @@ -29,6 +29,16 @@ u64 hw_nmi_get_sample_period(void)
>>> void arch_trigger_all_cpu_backtrace(void)
>>> {
>>> int i;
>>> + static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
>> Why an arch spin lock vs just using a raw spin lock?
>> For example: static DEFINE_RAW_SPINLOCK(lock);
>>
>> The spin_lock_debug detector is used by raw_spinlock too.
>>
>> arch_trigger_all_cpu_backtrace() -->
>> raw_spin_lock(lock) -->
>> _raw_spin_lock(lock) -->
>> __raw_spin_lock(lock) -->
>> do_raw_spin_lock(lock) -->
>> __spin_lock_debug(lock) -->
>> trigger_all_cpu_backtrace()
>>
>> Therefore, we have to use an arch spin lock here.
>>
>>
>>
>>> + unsigned long flags;
>>> +
>>> + local_irq_save(flags);
>> Why do the irqs have to be saved here?
>>
>> When arch_trigger_all_cpu_backtrace() is triggered by
>> "spin_lock()"'s spin_lock_debug detector, it is possible that
>> irqs are enabled, thus we have to save and disable them here.
>>
>>
>>
>>> + if (!arch_spin_trylock(&lock))
>>> + /*
>>> + * If there is already a trigger_all_cpu_backtrace()
>>> + * in progress, don't output double cpu dump infos.
>>> + */
>>> + goto out_restore_irq;
>>> cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
>>> @@ -41,6 +51,10 @@ void arch_trigger_all_cpu_backtrace(void)
>>> break;
>>> mdelay(1);
>>> }
>>> +
>>> + arch_spin_unlock(&lock);
>>> +out_restore_irq:
>>> + local_irq_restore(flags);
>>> }
>>> static int __kprobes
>>> diff --git a/arch/x86/kernel/apic/nmi.c b/arch/x86/kernel/apic/nmi.c
>>> index a43f71c..5fa8a13 100644
>>> --- a/arch/x86/kernel/apic/nmi.c
>>> +++ b/arch/x86/kernel/apic/nmi.c
>>> @@ -552,6 +552,16 @@ int do_nmi_callback(struct pt_regs *regs, int cpu)
>>> void arch_trigger_all_cpu_backtrace(void)
>>> {
>>> int i;
>>> + static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
>>> + unsigned long flags;
>>> +
>>> + local_irq_save(flags);
>>> + if (!arch_spin_trylock(&lock))
>>> + /*
>>> + * If there is already a trigger_all_cpu_backtrace()
>>> + * in progress, don't output double cpu dump infos.
>>> + */
>>> + goto out_restore_irq;
>>> cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
>>> @@ -564,4 +574,8 @@ void arch_trigger_all_cpu_backtrace(void)
>>> break;
>>> mdelay(1);
>>> }
>>> +
>>> + arch_spin_unlock(&lock);
>>> +out_restore_irq:
>>> + local_irq_restore(flags);
>>> }
>>
>>
>>
>>
>
* Re: [PATCH] x86: avoid calling arch_trigger_all_cpu_backtrace() at the same time on SMP
2010-11-11 10:01 ` Ingo Molnar
@ 2010-11-11 11:00 ` DDD
0 siblings, 0 replies; 15+ messages in thread
From: DDD @ 2010-11-11 11:00 UTC (permalink / raw)
To: Ingo Molnar; +Cc: dzickus, peterz, tglx, mingo, hpa, x86, linux-kernel
Ingo Molnar wrote:
> * DDD <dongdong.deng@windriver.com> wrote:
>
>> Ingo Molnar wrote:
>>> * Dongdong Deng <dongdong.deng@windriver.com> wrote:
>>>
>>>> +static int backtrace_flag;
>>>> + if (cmpxchg(&backtrace_flag, 0, 1) != 0)
>>> Sorry to be a PITA, but i asked for test_and_set() because that's
>>> the simplest primitive. cmpxchg() semantics is not nearly as
>>> obvious and people regularly get it wrong :-/
>> As 'backtrace_flag' could be accessed by multiple CPUs on SMP at
>> the same time, I used cmpxchg() to get an atomic/memory-barrier
>> operation on the 'backtrace_flag' variable.
>>
>> If we use test_and_set, maybe we need an smp_wmb() after test_and_set.
>> (If I am wrong, please correct me, thanks. :-) )
>
> No, test_and_set_bit() is SMP safe and is an implicit barrier as well - so no
> smp_wmb() or other barriers are needed.
Yep, the atomic locked operation in test_and_set_bit() makes sure of that.
Thank you very much,
I will send out the new patch quickly. :-)
Dongdong
>
> Thanks,
>
> Ingo
>
* Re: [PATCH] x86: avoid calling arch_trigger_all_cpu_backtrace() at the same time on SMP
2010-11-11 9:51 ` DDD
@ 2010-11-11 10:01 ` Ingo Molnar
2010-11-11 11:00 ` DDD
0 siblings, 1 reply; 15+ messages in thread
From: Ingo Molnar @ 2010-11-11 10:01 UTC (permalink / raw)
To: DDD; +Cc: dzickus, peterz, tglx, mingo, hpa, x86, linux-kernel
* DDD <dongdong.deng@windriver.com> wrote:
> Ingo Molnar wrote:
> >* Dongdong Deng <dongdong.deng@windriver.com> wrote:
> >
> >>+static int backtrace_flag;
> >
> >>+ if (cmpxchg(&backtrace_flag, 0, 1) != 0)
> >
> >Sorry to be a PITA, but i asked for test_and_set() because that's
> >the simplest primitive. cmpxchg() semantics is not nearly as
> >obvious and people regularly get it wrong :-/
>
> As 'backtrace_flag' could be accessed by multiple CPUs on SMP at
> the same time, I used cmpxchg() to get an atomic/memory-barrier
> operation on the 'backtrace_flag' variable.
>
> If we use test_and_set, maybe we need an smp_wmb() after test_and_set.
> (If I am wrong, please correct me, thanks. :-) )
No, test_and_set_bit() is SMP safe and is an implicit barrier as well - so no
smp_wmb() or other barriers are needed.
Thanks,
Ingo
* Re: [PATCH] x86: avoid calling arch_trigger_all_cpu_backtrace() at the same time on SMP
2010-11-11 9:51 ` Eric Dumazet
@ 2010-11-11 9:57 ` Ingo Molnar
0 siblings, 0 replies; 15+ messages in thread
From: Ingo Molnar @ 2010-11-11 9:57 UTC (permalink / raw)
To: Eric Dumazet
Cc: Dongdong Deng, dzickus, peterz, tglx, mingo, hpa, x86, linux-kernel
* Eric Dumazet <eric.dumazet@gmail.com> wrote:
> On Thursday 11 November 2010 at 10:23 +0100, Ingo Molnar wrote:
>
> > Also, variables that cmpxchg or test_and_set operates on need to be long, not int.
>
> Hmm, ok for test_and_set(), it operates on a long.
>
> cmpxchg() is ok on an int AFAIK. If not we have to make some changes :(
>
> btrfs_orphan_cleanup() for example does this :
>
> if (cmpxchg(&root->orphan_cleanup_state, 0, ORPHAN_CLEANUP_STARTED))
> ...
>
>
> Same in build_ehash_secret() (net/ipv4/af_inet.c)
>
> cmpxchg(&inet_ehash_secret, 0, rnd);
You are right - cmpxchg() auto-detects the word size and thus should work on int
too.
Thanks,
Ingo
* Re: [PATCH] x86: avoid calling arch_trigger_all_cpu_backtrace() at the same time on SMP
2010-11-11 9:23 ` Ingo Molnar
2010-11-11 9:51 ` DDD
@ 2010-11-11 9:51 ` Eric Dumazet
2010-11-11 9:57 ` Ingo Molnar
1 sibling, 1 reply; 15+ messages in thread
From: Eric Dumazet @ 2010-11-11 9:51 UTC (permalink / raw)
To: Ingo Molnar
Cc: Dongdong Deng, dzickus, peterz, tglx, mingo, hpa, x86, linux-kernel
On Thursday 11 November 2010 at 10:23 +0100, Ingo Molnar wrote:
> Also, variables that cmpxchg or test_and_set operates on need to be long, not int.
Hmm, ok for test_and_set(), it operates on a long.
cmpxchg() is ok on an int AFAIK. If not we have to make some changes :(
btrfs_orphan_cleanup() for example does this :
if (cmpxchg(&root->orphan_cleanup_state, 0, ORPHAN_CLEANUP_STARTED))
...
Same in build_ehash_secret() (net/ipv4/af_inet.c)
cmpxchg(&inet_ehash_secret, 0, rnd);
* Re: [PATCH] x86: avoid calling arch_trigger_all_cpu_backtrace() at the same time on SMP
2010-11-11 9:23 ` Ingo Molnar
@ 2010-11-11 9:51 ` DDD
2010-11-11 10:01 ` Ingo Molnar
2010-11-11 9:51 ` Eric Dumazet
1 sibling, 1 reply; 15+ messages in thread
From: DDD @ 2010-11-11 9:51 UTC (permalink / raw)
To: Ingo Molnar; +Cc: dzickus, peterz, tglx, mingo, hpa, x86, linux-kernel
Ingo Molnar wrote:
> * Dongdong Deng <dongdong.deng@windriver.com> wrote:
>
>> +static int backtrace_flag;
>
>> + if (cmpxchg(&backtrace_flag, 0, 1) != 0)
>
> Sorry to be a PITA, but i asked for test_and_set() because that's the simplest
> primitive. cmpxchg() semantics is not nearly as obvious and people regularly get it
> wrong :-/
As 'backtrace_flag' could be accessed by multiple CPUs on SMP at the
same time, I used cmpxchg() to get an atomic/memory-barrier operation
on the 'backtrace_flag' variable.
If we use test_and_set, maybe we need an smp_wmb() after test_and_set.
(If I am wrong, please correct me, thanks. :-) )
Do we still need to use test_and_set?
If so, I will use test_and_set in my next patch.
>
> Also, variables that cmpxchg or test_and_set operates on need to be long, not int.
Got it, I will change it to the 'unsigned long' type.
Thanks for the explanation.
Dongdong
>
> Ingo
>
* Re: [PATCH] x86: avoid calling arch_trigger_all_cpu_backtrace() at the same time on SMP
2010-11-11 2:20 Dongdong Deng
@ 2010-11-11 9:23 ` Ingo Molnar
2010-11-11 9:51 ` DDD
2010-11-11 9:51 ` Eric Dumazet
0 siblings, 2 replies; 15+ messages in thread
From: Ingo Molnar @ 2010-11-11 9:23 UTC (permalink / raw)
To: Dongdong Deng; +Cc: dzickus, peterz, tglx, mingo, hpa, x86, linux-kernel
* Dongdong Deng <dongdong.deng@windriver.com> wrote:
> +static int backtrace_flag;
> + if (cmpxchg(&backtrace_flag, 0, 1) != 0)
Sorry to be a PITA, but i asked for test_and_set() because that's the simplest
primitive. cmpxchg() semantics is not nearly as obvious and people regularly get it
wrong :-/
Also, variables that cmpxchg or test_and_set operates on need to be long, not int.
Ingo
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH] x86: avoid calling arch_trigger_all_cpu_backtrace() at the same time on SMP
@ 2010-11-11 2:20 Dongdong Deng
2010-11-11 9:23 ` Ingo Molnar
0 siblings, 1 reply; 15+ messages in thread
From: Dongdong Deng @ 2010-11-11 2:20 UTC (permalink / raw)
To: mingo, dzickus; +Cc: peterz, tglx, mingo, hpa, x86, linux-kernel, dongdong.deng
The spin_lock_debug/rcu_cpu_stall detectors use
trigger_all_cpu_backtrace() to dump CPU backtraces.
It is therefore possible for trigger_all_cpu_backtrace()
to be called at the same time on different CPUs, which
triggers an 'unknown reason NMI' warning. The following case
illustrates the problem:
CPU1 CPU2 ... CPU N
trigger_all_cpu_backtrace()
set "backtrace_mask" to cpu mask
|
generate NMI interrupts generate NMI interrupts ...
\ | /
\ | /
The "backtrace_mask" will be cleared by the first NMI interrupt
at nmi_watchdog_tick(), then the following NMI interrupts generated
by other CPUs' arch_trigger_all_cpu_backtrace() will be taken as
unknown reason NMI interrupts.
This patch uses a lock to avoid the problem, stopping the
arch_trigger_all_cpu_backtrace() call so that duplicate CPU
backtrace info is not dumped when there is already a
trigger_all_cpu_backtrace() in progress.
Signed-off-by: Dongdong Deng <dongdong.deng@windriver.com>
Reviewed-by: Bruce Ashfield <bruce.ashfield@windriver.com>
CC: Thomas Gleixner <tglx@linutronix.de>
CC: Ingo Molnar <mingo@redhat.com>
CC: "H. Peter Anvin" <hpa@zytor.com>
CC: x86@kernel.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Don Zickus <dzickus@redhat.com>
---
arch/x86/kernel/apic/hw_nmi.c | 23 +++++++++++++++++++++++
arch/x86/kernel/apic/nmi.c | 23 +++++++++++++++++++++++
2 files changed, 46 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
index cefd694..6084c3b 100644
--- a/arch/x86/kernel/apic/hw_nmi.c
+++ b/arch/x86/kernel/apic/hw_nmi.c
@@ -26,9 +26,27 @@ u64 hw_nmi_get_sample_period(void)
}
#ifdef ARCH_HAS_NMI_WATCHDOG
+/* "in progress" flag of arch_trigger_all_cpu_backtrace */
+static int backtrace_flag;
+
void arch_trigger_all_cpu_backtrace(void)
{
int i;
+ unsigned long flags;
+
+ /*
+ * Have to disable irq here, as the
+ * arch_trigger_all_cpu_backtrace() could be
+ * triggered by "spin_lock()" with irqs on.
+ */
+ local_irq_save(flags);
+
+ if (cmpxchg(&backtrace_flag, 0, 1) != 0)
+ /*
+ * If there is already a trigger_all_cpu_backtrace() in progress
+ * (backtrace_flag == 1), don't output double cpu dump infos.
+ */
+ goto out_restore_irq;
cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
@@ -41,6 +59,11 @@ void arch_trigger_all_cpu_backtrace(void)
break;
mdelay(1);
}
+
+ cmpxchg(&backtrace_flag, 1, 0);
+
+out_restore_irq:
+ local_irq_restore(flags);
}
static int __kprobes
diff --git a/arch/x86/kernel/apic/nmi.c b/arch/x86/kernel/apic/nmi.c
index c90041c..2d4b3a1 100644
--- a/arch/x86/kernel/apic/nmi.c
+++ b/arch/x86/kernel/apic/nmi.c
@@ -549,9 +549,27 @@ int do_nmi_callback(struct pt_regs *regs, int cpu)
return 0;
}
+/* "in progress" flag of arch_trigger_all_cpu_backtrace */
+static int backtrace_flag;
+
void arch_trigger_all_cpu_backtrace(void)
{
int i;
+ unsigned long flags;
+
+ /*
+ * Have to disable irq here, as the
+ * arch_trigger_all_cpu_backtrace() could be
+ * triggered by "spin_lock()" with irqs on.
+ */
+ local_irq_save(flags);
+
+ if (cmpxchg(&backtrace_flag, 0, 1) != 0)
+ /*
+ * If there is already a trigger_all_cpu_backtrace() in progress
+ * (backtrace_flag == 1), don't output double cpu dump infos.
+ */
+ goto out_restore_irq;
cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
@@ -564,4 +582,9 @@ void arch_trigger_all_cpu_backtrace(void)
break;
mdelay(1);
}
+
+ cmpxchg(&backtrace_flag, 1, 0);
+
+out_restore_irq:
+ local_irq_restore(flags);
}
--
1.6.0.4
* Re: [PATCH] x86: avoid calling arch_trigger_all_cpu_backtrace() at the same time on SMP
2010-11-10 8:35 ` DDD
@ 2010-11-10 8:39 ` Ingo Molnar
0 siblings, 0 replies; 15+ messages in thread
From: Ingo Molnar @ 2010-11-10 8:39 UTC (permalink / raw)
To: DDD
Cc: Don Zickus, Peter Zijlstra, Thomas Gleixner, Ingo Molnar,
H. Peter Anvin, x86, linux-kernel
* DDD <dongdong.deng@windriver.com> wrote:
> Ingo Molnar wrote:
> >* Don Zickus <dzickus@redhat.com> wrote:
> >
> >>From: Dongdong Deng <dongdong.deng@windriver.com>
> >>
> >>The spin_lock_debug/rcu_cpu_stall detectors use
> >>trigger_all_cpu_backtrace() to dump CPU backtraces.
> >>It is therefore possible for trigger_all_cpu_backtrace()
> >>to be called at the same time on different CPUs, which
> >>triggers an 'unknown reason NMI' warning. The following case
> >>illustrates the problem:
> >>
> >> CPU1 CPU2 ... CPU N
> >> trigger_all_cpu_backtrace()
> >> set "backtrace_mask" to cpu mask
> >> |
> >>generate NMI interrupts generate NMI interrupts ...
> >> \ | /
> >> \ | /
> >> The "backtrace_mask" will be cleared by the first NMI interrupt
> >> at nmi_watchdog_tick(), then the following NMI interrupts generated
> >>by other CPUs' arch_trigger_all_cpu_backtrace() will be taken as
> >>unknown reason NMI interrupts.
> >>
> >>This patch uses a lock to avoid the problem, stopping the
> >>arch_trigger_all_cpu_backtrace() call so that duplicate CPU
> >>backtrace info is not dumped when there is already a
> >>trigger_all_cpu_backtrace() in progress.
> >>
> >>Signed-off-by: Dongdong Deng <dongdong.deng@windriver.com>
> >>Reviewed-by: Bruce Ashfield <bruce.ashfield@windriver.com>
> >>CC: Thomas Gleixner <tglx@linutronix.de>
> >>CC: Ingo Molnar <mingo@redhat.com>
> >>CC: "H. Peter Anvin" <hpa@zytor.com>
> >>CC: x86@kernel.org
> >>CC: linux-kernel@vger.kernel.org
> >>Signed-off-by: Don Zickus <dzickus@redhat.com>
> >>---
> >> arch/x86/kernel/apic/hw_nmi.c | 14 ++++++++++++++
> >> arch/x86/kernel/apic/nmi.c | 14 ++++++++++++++
> >> 2 files changed, 28 insertions(+), 0 deletions(-)
> >>
> >>diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
> >>index cefd694..3aea0a5 100644
> >>--- a/arch/x86/kernel/apic/hw_nmi.c
> >>+++ b/arch/x86/kernel/apic/hw_nmi.c
> >>@@ -29,6 +29,16 @@ u64 hw_nmi_get_sample_period(void)
> >> void arch_trigger_all_cpu_backtrace(void)
> >> {
> >> int i;
> >>+ static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
> >
> >Please dont put statics into the middle of local variables - put
> >them into file scope in a visible way.
>
> Got it. will change it.
>
> >
> >>+ unsigned long flags;
> >>+
> >>+ local_irq_save(flags);
> >>+ if (!arch_spin_trylock(&lock))
> >>+ /*
> >>+ * If there is already a trigger_all_cpu_backtrace()
> >>+ * in progress, don't output double cpu dump infos.
> >>+ */
> >>+ goto out_restore_irq;
> >> cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
> >>@@ -41,6 +51,10 @@ void arch_trigger_all_cpu_backtrace(void)
> >> break;
> >> mdelay(1);
> >> }
> >>+
> >>+ arch_spin_unlock(&lock);
> >>+out_restore_irq:
> >>+ local_irq_restore(flags);
> >> }
> >> static int __kprobes
> >>diff --git a/arch/x86/kernel/apic/nmi.c b/arch/x86/kernel/apic/nmi.c
> >>index a43f71c..5fa8a13 100644
> >>--- a/arch/x86/kernel/apic/nmi.c
> >>+++ b/arch/x86/kernel/apic/nmi.c
> >>@@ -552,6 +552,16 @@ int do_nmi_callback(struct pt_regs *regs, int cpu)
> >> void arch_trigger_all_cpu_backtrace(void)
> >> {
> >> int i;
> >>+ static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
> >>+ unsigned long flags;
> >>+
> >>+ local_irq_save(flags);
> >>+ if (!arch_spin_trylock(&lock))
> >>+ /*
> >>+ * If there is already a trigger_all_cpu_backtrace()
> >>+ * in progress, don't output double cpu dump infos.
> >>+ */
> >>+ goto out_restore_irq;
> >> cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
> >>@@ -564,4 +574,8 @@ void arch_trigger_all_cpu_backtrace(void)
> >> break;
> >> mdelay(1);
> >> }
> >>+
> >>+ arch_spin_unlock(&lock);
> >>+out_restore_irq:
> >>+ local_irq_restore(flags);
> >
> >This spinlock is never actually used as a spinlock - it's an "in
> >progress" flag. Why not use a flag and test_and_set_bit()?
>
> Yep, it's an "in progress" flag; I will change it to use a flag in
> place of the spinlock.
Thanks.
> >Also, is the irq disabling really needed? Will this code ever be called with irqs on?
>
> This code could be called with irqs on.
>
> If arch_trigger_all_cpu_backtrace() was triggered by
> spin_lock()'s spin_lock_debug detector, it is possible that
> irqs are enabled, thus we have to save and disable them here.
Ok - please document that fact in the code.
Thanks,
Ingo
* Re: [PATCH] x86: avoid calling arch_trigger_all_cpu_backtrace() at the same time on SMP
2010-11-10 7:43 ` Ingo Molnar
@ 2010-11-10 8:35 ` DDD
2010-11-10 8:39 ` Ingo Molnar
0 siblings, 1 reply; 15+ messages in thread
From: DDD @ 2010-11-10 8:35 UTC (permalink / raw)
To: Ingo Molnar, Don Zickus
Cc: Peter Zijlstra, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
x86, linux-kernel
Ingo Molnar wrote:
> * Don Zickus <dzickus@redhat.com> wrote:
>
>> From: Dongdong Deng <dongdong.deng@windriver.com>
>>
>> The spin_lock_debug/rcu_cpu_stall detectors use
>> trigger_all_cpu_backtrace() to dump cpu backtraces.
>> Therefore it is possible that trigger_all_cpu_backtrace()
>> could be called at the same time on different CPUs, which
>> triggers an 'unknown reason NMI' warning. The following case
>> illustrates the problem:
>>
>> CPU1 CPU2 ... CPU N
>> trigger_all_cpu_backtrace()
>> set "backtrace_mask" to cpu mask
>> |
>> generate NMI interrupts generate NMI interrupts ...
>> \ | /
>> \ | /
>> The "backtrace_mask" will be cleaned by the first NMI interrupt
>> at nmi_watchdog_tick(), then the following NMI interrupts generated
>> by other cpus's arch_trigger_all_cpu_backtrace() will be took as
>> unknown reason NMI interrupts.
>>
>> This patch uses a lock to avoid the problem, and stop the
>> arch_trigger_all_cpu_backtrace() calling to avoid dumping double cpu
>> backtrace info when there is already a trigger_all_cpu_backtrace()
>> in progress.
>>
>> Signed-off-by: Dongdong Deng <dongdong.deng@windriver.com>
>> Reviewed-by: Bruce Ashfield <bruce.ashfield@windriver.com>
>> CC: Thomas Gleixner <tglx@linutronix.de>
>> CC: Ingo Molnar <mingo@redhat.com>
>> CC: "H. Peter Anvin" <hpa@zytor.com>
>> CC: x86@kernel.org
>> CC: linux-kernel@vger.kernel.org
>> Signed-off-by: Don Zickus <dzickus@redhat.com>
>> ---
>> arch/x86/kernel/apic/hw_nmi.c | 14 ++++++++++++++
>> arch/x86/kernel/apic/nmi.c | 14 ++++++++++++++
>> 2 files changed, 28 insertions(+), 0 deletions(-)
>>
>> diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
>> index cefd694..3aea0a5 100644
>> --- a/arch/x86/kernel/apic/hw_nmi.c
>> +++ b/arch/x86/kernel/apic/hw_nmi.c
>> @@ -29,6 +29,16 @@ u64 hw_nmi_get_sample_period(void)
>> void arch_trigger_all_cpu_backtrace(void)
>> {
>> int i;
>> + static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
>
> Please dont put statics into the middle of local variables - put them into file
> scope in a visible way.
Got it. Will change it.
>
>> + unsigned long flags;
>> +
>> + local_irq_save(flags);
>> + if (!arch_spin_trylock(&lock))
>> + /*
>> + * If there is already a trigger_all_cpu_backtrace()
>> + * in progress, don't output duplicate cpu dump info.
>> + */
>> + goto out_restore_irq;
>>
>> cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
>>
>> @@ -41,6 +51,10 @@ void arch_trigger_all_cpu_backtrace(void)
>> break;
>> mdelay(1);
>> }
>> +
>> + arch_spin_unlock(&lock);
>> +out_restore_irq:
>> + local_irq_restore(flags);
>> }
>>
>> static int __kprobes
>> diff --git a/arch/x86/kernel/apic/nmi.c b/arch/x86/kernel/apic/nmi.c
>> index a43f71c..5fa8a13 100644
>> --- a/arch/x86/kernel/apic/nmi.c
>> +++ b/arch/x86/kernel/apic/nmi.c
>> @@ -552,6 +552,16 @@ int do_nmi_callback(struct pt_regs *regs, int cpu)
>> void arch_trigger_all_cpu_backtrace(void)
>> {
>> int i;
>> + static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
>> + unsigned long flags;
>> +
>> + local_irq_save(flags);
>> + if (!arch_spin_trylock(&lock))
>> + /*
>> + * If there is already a trigger_all_cpu_backtrace()
>> + * in progress, don't output duplicate cpu dump info.
>> + */
>> + goto out_restore_irq;
>>
>> cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
>>
>> @@ -564,4 +574,8 @@ void arch_trigger_all_cpu_backtrace(void)
>> break;
>> mdelay(1);
>> }
>> +
>> + arch_spin_unlock(&lock);
>> +out_restore_irq:
>> + local_irq_restore(flags);
>
> This spinlock is never actually used as a spinlock - it's an "in progress" flag. Why
> not use a flag and test_and_set_bit()?
Yep, it's an "in progress" flag; I will change it to use a flag in place of
the spinlock.
> Also, is the irq disabling really needed? Will this code ever be called with irqs on?
This code could be called with irqs on.
If arch_trigger_all_cpu_backtrace() was triggered by
spin_lock()'s spin_lock_debug detector, it is possible that
irqs are enabled, thus we have to save and disable them here.
Thanks,
Dongdong
>
> Thanks,
>
> Ingo
>
* Re: [PATCH] x86: avoid calling arch_trigger_all_cpu_backtrace() at the same time on SMP
2010-11-02 18:16 Don Zickus
@ 2010-11-10 7:43 ` Ingo Molnar
2010-11-10 8:35 ` DDD
0 siblings, 1 reply; 15+ messages in thread
From: Ingo Molnar @ 2010-11-10 7:43 UTC (permalink / raw)
To: Don Zickus
Cc: Peter Zijlstra, Dongdong Deng, Thomas Gleixner, Ingo Molnar,
H. Peter Anvin, x86, linux-kernel
* Don Zickus <dzickus@redhat.com> wrote:
> From: Dongdong Deng <dongdong.deng@windriver.com>
>
> The spin_lock_debug/rcu_cpu_stall detectors use
> trigger_all_cpu_backtrace() to dump cpu backtraces.
> Therefore it is possible that trigger_all_cpu_backtrace()
> could be called at the same time on different CPUs, which
> triggers an 'unknown reason NMI' warning. The following case
> illustrates the problem:
>
> CPU1 CPU2 ... CPU N
> trigger_all_cpu_backtrace()
> set "backtrace_mask" to cpu mask
> |
> generate NMI interrupts generate NMI interrupts ...
> \ | /
> \ | /
> The "backtrace_mask" will be cleaned by the first NMI interrupt
> at nmi_watchdog_tick(), then the following NMI interrupts generated
> by other cpus's arch_trigger_all_cpu_backtrace() will be took as
> unknown reason NMI interrupts.
>
> This patch uses a lock to avoid the problem, and stop the
> arch_trigger_all_cpu_backtrace() calling to avoid dumping double cpu
> backtrace info when there is already a trigger_all_cpu_backtrace()
> in progress.
>
> Signed-off-by: Dongdong Deng <dongdong.deng@windriver.com>
> Reviewed-by: Bruce Ashfield <bruce.ashfield@windriver.com>
> CC: Thomas Gleixner <tglx@linutronix.de>
> CC: Ingo Molnar <mingo@redhat.com>
> CC: "H. Peter Anvin" <hpa@zytor.com>
> CC: x86@kernel.org
> CC: linux-kernel@vger.kernel.org
> Signed-off-by: Don Zickus <dzickus@redhat.com>
> ---
> arch/x86/kernel/apic/hw_nmi.c | 14 ++++++++++++++
> arch/x86/kernel/apic/nmi.c | 14 ++++++++++++++
> 2 files changed, 28 insertions(+), 0 deletions(-)
>
> diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
> index cefd694..3aea0a5 100644
> --- a/arch/x86/kernel/apic/hw_nmi.c
> +++ b/arch/x86/kernel/apic/hw_nmi.c
> @@ -29,6 +29,16 @@ u64 hw_nmi_get_sample_period(void)
> void arch_trigger_all_cpu_backtrace(void)
> {
> int i;
> + static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
Please dont put statics into the middle of local variables - put them into file
scope in a visible way.
> + unsigned long flags;
> +
> + local_irq_save(flags);
> + if (!arch_spin_trylock(&lock))
> + /*
> + * If there is already a trigger_all_cpu_backtrace()
> + * in progress, don't output duplicate cpu dump info.
> + */
> + goto out_restore_irq;
>
> cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
>
> @@ -41,6 +51,10 @@ void arch_trigger_all_cpu_backtrace(void)
> break;
> mdelay(1);
> }
> +
> + arch_spin_unlock(&lock);
> +out_restore_irq:
> + local_irq_restore(flags);
> }
>
> static int __kprobes
> diff --git a/arch/x86/kernel/apic/nmi.c b/arch/x86/kernel/apic/nmi.c
> index a43f71c..5fa8a13 100644
> --- a/arch/x86/kernel/apic/nmi.c
> +++ b/arch/x86/kernel/apic/nmi.c
> @@ -552,6 +552,16 @@ int do_nmi_callback(struct pt_regs *regs, int cpu)
> void arch_trigger_all_cpu_backtrace(void)
> {
> int i;
> + static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
> + unsigned long flags;
> +
> + local_irq_save(flags);
> + if (!arch_spin_trylock(&lock))
> + /*
> + * If there is already a trigger_all_cpu_backtrace()
> + * in progress, don't output duplicate cpu dump info.
> + */
> + goto out_restore_irq;
>
> cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
>
> @@ -564,4 +574,8 @@ void arch_trigger_all_cpu_backtrace(void)
> break;
> mdelay(1);
> }
> +
> + arch_spin_unlock(&lock);
> +out_restore_irq:
> + local_irq_restore(flags);
This spinlock is never actually used as a spinlock - it's an "in progress" flag. Why
not use a flag and test_and_set_bit()?
Also, is the irq disabling really needed? Will this code ever be called with irqs on?
Thanks,
Ingo
* [PATCH] x86: avoid calling arch_trigger_all_cpu_backtrace() at the same time on SMP
@ 2010-11-02 18:16 Don Zickus
2010-11-10 7:43 ` Ingo Molnar
0 siblings, 1 reply; 15+ messages in thread
From: Don Zickus @ 2010-11-02 18:16 UTC (permalink / raw)
To: Ingo Molnar
Cc: Peter Zijlstra, Dongdong Deng, Thomas Gleixner, Ingo Molnar,
H. Peter Anvin, x86, linux-kernel, Don Zickus
From: Dongdong Deng <dongdong.deng@windriver.com>
The spin_lock_debug/rcu_cpu_stall detectors use
trigger_all_cpu_backtrace() to dump cpu backtraces.
Therefore it is possible that trigger_all_cpu_backtrace()
could be called at the same time on different CPUs, which
triggers an 'unknown reason NMI' warning. The following case
illustrates the problem:
CPU1 CPU2 ... CPU N
trigger_all_cpu_backtrace()
set "backtrace_mask" to cpu mask
|
generate NMI interrupts generate NMI interrupts ...
\ | /
\ | /
The "backtrace_mask" will be cleaned by the first NMI interrupt
at nmi_watchdog_tick(), then the following NMI interrupts generated
by other cpus's arch_trigger_all_cpu_backtrace() will be took as
unknown reason NMI interrupts.
This patch uses a lock to avoid the problem, and stop the
arch_trigger_all_cpu_backtrace() calling to avoid dumping double cpu
backtrace info when there is already a trigger_all_cpu_backtrace()
in progress.
Signed-off-by: Dongdong Deng <dongdong.deng@windriver.com>
Reviewed-by: Bruce Ashfield <bruce.ashfield@windriver.com>
CC: Thomas Gleixner <tglx@linutronix.de>
CC: Ingo Molnar <mingo@redhat.com>
CC: "H. Peter Anvin" <hpa@zytor.com>
CC: x86@kernel.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Don Zickus <dzickus@redhat.com>
---
arch/x86/kernel/apic/hw_nmi.c | 14 ++++++++++++++
arch/x86/kernel/apic/nmi.c | 14 ++++++++++++++
2 files changed, 28 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
index cefd694..3aea0a5 100644
--- a/arch/x86/kernel/apic/hw_nmi.c
+++ b/arch/x86/kernel/apic/hw_nmi.c
@@ -29,6 +29,16 @@ u64 hw_nmi_get_sample_period(void)
void arch_trigger_all_cpu_backtrace(void)
{
int i;
+ static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
+ unsigned long flags;
+
+ local_irq_save(flags);
+ if (!arch_spin_trylock(&lock))
+ /*
+ * If there is already a trigger_all_cpu_backtrace()
+ * in progress, don't output duplicate cpu dump info.
+ */
+ goto out_restore_irq;
cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
@@ -41,6 +51,10 @@ void arch_trigger_all_cpu_backtrace(void)
break;
mdelay(1);
}
+
+ arch_spin_unlock(&lock);
+out_restore_irq:
+ local_irq_restore(flags);
}
static int __kprobes
diff --git a/arch/x86/kernel/apic/nmi.c b/arch/x86/kernel/apic/nmi.c
index a43f71c..5fa8a13 100644
--- a/arch/x86/kernel/apic/nmi.c
+++ b/arch/x86/kernel/apic/nmi.c
@@ -552,6 +552,16 @@ int do_nmi_callback(struct pt_regs *regs, int cpu)
void arch_trigger_all_cpu_backtrace(void)
{
int i;
+ static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
+ unsigned long flags;
+
+ local_irq_save(flags);
+ if (!arch_spin_trylock(&lock))
+ /*
+ * If there is already a trigger_all_cpu_backtrace()
+ * in progress, don't output duplicate cpu dump info.
+ */
+ goto out_restore_irq;
cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
@@ -564,4 +574,8 @@ void arch_trigger_all_cpu_backtrace(void)
break;
mdelay(1);
}
+
+ arch_spin_unlock(&lock);
+out_restore_irq:
+ local_irq_restore(flags);
}
--
1.7.2.3
Thread overview: 15+ messages
2010-10-11 10:31 [PATCH] x86: avoid calling arch_trigger_all_cpu_backtrace() at the same time on SMP Dongdong Deng
2010-10-18 11:00 ` DDD
2010-10-18 18:03 ` Don Zickus
2010-10-21 5:17 ` DDD
2010-11-02 18:16 Don Zickus
2010-11-10 7:43 ` Ingo Molnar
2010-11-10 8:35 ` DDD
2010-11-10 8:39 ` Ingo Molnar
2010-11-11 2:20 Dongdong Deng
2010-11-11 9:23 ` Ingo Molnar
2010-11-11 9:51 ` DDD
2010-11-11 10:01 ` Ingo Molnar
2010-11-11 11:00 ` DDD
2010-11-11 9:51 ` Eric Dumazet
2010-11-11 9:57 ` Ingo Molnar