* [PATCH V2 0/1] lib/atomic64 using raw_spin_lock_irq[save|restore] for atomicity
@ 2011-09-02  1:10 Shan Hai
  2011-09-02  1:10 ` [PATCH V3 1/1] " Shan Hai
  0 siblings, 1 reply; 5+ messages in thread
From: Shan Hai @ 2011-09-02  1:10 UTC (permalink / raw)
  To: akpm, tglx; +Cc: eric.dumazet, vapier, asharma, linux-kernel, linux-rt-users


[PATCH V3 0/1] lib/atomic64 using raw_spin_lock_irq[save|restore] for atomicity

The command 'perf top -e L1-dcache-loads' causes an OOPS or a whole-system
hang in the following configurations.

PowerPC e500 core
OOPS on: 		2.6.34.9 PREEMPT-RT
Whole system hangs up:	3.0.3 PREEMPT-RT

The panic info is listed below.

The cause of the problem is that atomic64_add_return is interrupted by the
performance counter interrupt; the interrupt handler calls atomic64_read,
which fails at /build/linux/kernel/rtmutex.c:832! because it tries to
acquire the same lock twice in the same context.

Replacing spin_lock_irq[save|restore] with raw_spin_lock_irq[save|restore]
guarantees the atomicity of the atomic64_* operations, because the raw
variant of the spin lock disables interrupts for the duration of the
atomic64_* operations.
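
To make the failing sequence concrete, here is an annotated sketch of the
pre-patch atomic64_add_return (illustrative only; lock_addr() is the hash
in lib/atomic64.c that maps a counter to one of the NR_LOCKS spinlocks):

	long long atomic64_add_return(long long a, atomic64_t *v)
	{
		unsigned long flags;
		spinlock_t *lock = lock_addr(v);  /* sleeping rtmutex on RT */
		long long val;

		spin_lock_irqsave(lock, flags);   /* owns the rtmutex now; on
						   * RT this does NOT disable
						   * hard interrupts */
		/*
		 * The perf counter interrupt can fire here; it is not
		 * threaded, so perf_event_interrupt() -> atomic64_read(v)
		 * runs right away and tries to take the very same rtmutex:
		 * BUG at kernel/rtmutex.c:832.
		 */
		val = v->counter += a;
		spin_unlock_irqrestore(lock, flags);
		return val;
	}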

---
 lib/atomic64.c |   40 ++++++++++++++++++++--------------------
 1 files changed, 20 insertions(+), 20 deletions(-)
---

OOPS on 2.6.34.9+PREEMPT-RT:
------------[ cut here ]------------
kernel BUG at /build/linux/kernel/rtmutex.c:832!
Oops: Exception in kernel mode, sig: 5 [#2]
PREEMPT SMP NR_CPUS=8 LTT NESTING LEVEL : 0
P4080 DS
last sysfs file: /sys/devices/system/cpu/online
Modules linked in: ipv6(+) [last unloaded: scsi_wait_scan]
NIP: c068b218 LR: c068b1e0 CTR: 00000000
REGS: eb459ae0 TRAP: 0700   Tainted: G      D     (2.6.34.9-rt)
MSR: 00021002 <ME,CE>  CR: 28000488  XER: 00000000
TASK = ea43d3b0[968] 'perf' THREAD: eb458000 CPU: 0
GPR00: 00000001 eb459b90 ea43d3b0 00021002 00000000 00000000 00000000 00000001
GPR08: 00000001 ea43d3b0 c068b1e0 00000000 28000482 10092c4c 7fffffff 80000000
GPR16: eb459d40 eb459c68 00000001 c2fa2098 eb459ec0 eac5a8e8 eac5a900 c0906308
GPR24: c0906334 00000000 eb459b9c c090d0ec 00021002 c09062e0 c09062e0 eb459b90
NIP [c068b218] rt_spin_lock_slowlock+0x78/0x3a8
LR [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8
Call Trace:
[eb459b90] [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8 (unreliable)
[eb459c20] [c068bdb0] rt_spin_lock+0x40/0x98
[eb459c40] [c03d2a14] atomic64_read+0x48/0x84
[eb459c60] [c001aaf4] perf_event_interrupt+0xec/0x28c
[eb459d10] [c0010138] performance_monitor_exception+0x7c/0x150
[eb459d30] [c0014170] ret_from_except_full+0x0/0x4c
--- Exception: 2060 at lock_acquire+0x94/0x130
    LR = lock_acquire+0x8c/0x130
[eb459e30] [c068bdf0] rt_spin_lock+0x80/0x98
[eb459e50] [c03d2884] atomic64_add_return+0x50/0x98
[eb459e70] [c00ff888] T.902+0x150/0x4f8
[eb459eb0] [c00ffedc] sys_perf_event_open+0x2ac/0x508
[eb459f40] [c0013b6c] ret_from_syscall+0x0/0x4
--- Exception: c00 at 0xfa8abe4
    LR = 0x10016034



* [PATCH V3 1/1] lib/atomic64 using raw_spin_lock_irq[save|restore] for atomicity
  2011-09-02  1:10 [PATCH V2 0/1] lib/atomic64 using raw_spin_lock_irq[save|restore] for atomicity Shan Hai
@ 2011-09-02  1:10 ` Shan Hai
  2011-09-02 22:57   ` Andrew Morton
  0 siblings, 1 reply; 5+ messages in thread
From: Shan Hai @ 2011-09-02  1:10 UTC (permalink / raw)
  To: akpm, tglx
  Cc: eric.dumazet, vapier, asharma, linux-kernel, linux-rt-users, Shan Hai

Use the non-sleeping raw_spin_lock_irq in the atomic64 implementation because
it is IRQ safe; the sleeping spin_lock_irq is IRQ unsafe and causes the
deadlock shown below.

spin_lock_irq[save|restore] causes the following OOPS on 2.6.34.9+rt:

OOPS on 2.6.34.9+PREEMPT-RT:
------------[ cut here ]------------
kernel BUG at /build/linux/kernel/rtmutex.c:832!
Oops: Exception in kernel mode, sig: 5 [#2]
PREEMPT SMP NR_CPUS=8 LTT NESTING LEVEL : 0
P4080 DS
last sysfs file: /sys/devices/system/cpu/online
Modules linked in: ipv6(+) [last unloaded: scsi_wait_scan]
NIP: c068b218 LR: c068b1e0 CTR: 00000000
REGS: eb459ae0 TRAP: 0700   Tainted: G      D     (2.6.34.9-rt)
MSR: 00021002 <ME,CE>  CR: 28000488  XER: 00000000
TASK = ea43d3b0[968] 'perf' THREAD: eb458000 CPU: 0
GPR00: 00000001 eb459b90 ea43d3b0 00021002 00000000 00000000 00000000 00000001
GPR08: 00000001 ea43d3b0 c068b1e0 00000000 28000482 10092c4c 7fffffff 80000000
GPR16: eb459d40 eb459c68 00000001 c2fa2098 eb459ec0 eac5a8e8 eac5a900 c0906308
GPR24: c0906334 00000000 eb459b9c c090d0ec 00021002 c09062e0 c09062e0 eb459b90
NIP [c068b218] rt_spin_lock_slowlock+0x78/0x3a8
LR [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8
Call Trace:
[eb459b90] [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8 (unreliable)
[eb459c20] [c068bdb0] rt_spin_lock+0x40/0x98
[eb459c40] [c03d2a14] atomic64_read+0x48/0x84
[eb459c60] [c001aaf4] perf_event_interrupt+0xec/0x28c
[eb459d10] [c0010138] performance_monitor_exception+0x7c/0x150
[eb459d30] [c0014170] ret_from_except_full+0x0/0x4c
--- Exception: 2060 at lock_acquire+0x94/0x130
    LR = lock_acquire+0x8c/0x130
[eb459e30] [c068bdf0] rt_spin_lock+0x80/0x98
[eb459e50] [c03d2884] atomic64_add_return+0x50/0x98
[eb459e70] [c00ff888] T.902+0x150/0x4f8
[eb459eb0] [c00ffedc] sys_perf_event_open+0x2ac/0x508
[eb459f40] [c0013b6c] ret_from_syscall+0x0/0x4
--- Exception: c00 at 0xfa8abe4
    LR = 0x10016034

Signed-off-by: Shan Hai <haishan.bai@gmail.com>
---
 lib/atomic64.c |   44 ++++++++++++++++++++++----------------------
 1 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/lib/atomic64.c b/lib/atomic64.c
index e12ae0d..82b3313 100644
--- a/lib/atomic64.c
+++ b/lib/atomic64.c
@@ -29,7 +29,7 @@
  * Ensure each lock is in a separate cacheline.
  */
 static union {
-	spinlock_t lock;
+	raw_spinlock_t lock;
 	char pad[L1_CACHE_BYTES];
 } atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp;
 
@@ -48,9 +48,9 @@ long long atomic64_read(const atomic64_t *v)
 	spinlock_t *lock = lock_addr(v);
 	long long val;
 
-	spin_lock_irqsave(lock, flags);
+	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter;
-	spin_unlock_irqrestore(lock, flags);
+	raw_spin_unlock_irqrestore(lock, flags);
 	return val;
 }
 EXPORT_SYMBOL(atomic64_read);
@@ -60,9 +60,9 @@ void atomic64_set(atomic64_t *v, long long i)
 	unsigned long flags;
 	spinlock_t *lock = lock_addr(v);
 
-	spin_lock_irqsave(lock, flags);
+	raw_spin_lock_irqsave(lock, flags);
 	v->counter = i;
-	spin_unlock_irqrestore(lock, flags);
+	raw_spin_unlock_irqrestore(lock, flags);
 }
 EXPORT_SYMBOL(atomic64_set);
 
@@ -71,9 +71,9 @@ void atomic64_add(long long a, atomic64_t *v)
 	unsigned long flags;
 	spinlock_t *lock = lock_addr(v);
 
-	spin_lock_irqsave(lock, flags);
+	raw_spin_lock_irqsave(lock, flags);
 	v->counter += a;
-	spin_unlock_irqrestore(lock, flags);
+	raw_spin_unlock_irqrestore(lock, flags);
 }
 EXPORT_SYMBOL(atomic64_add);
 
@@ -83,9 +83,9 @@ long long atomic64_add_return(long long a, atomic64_t *v)
 	spinlock_t *lock = lock_addr(v);
 	long long val;
 
-	spin_lock_irqsave(lock, flags);
+	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter += a;
-	spin_unlock_irqrestore(lock, flags);
+	raw_spin_unlock_irqrestore(lock, flags);
 	return val;
 }
 EXPORT_SYMBOL(atomic64_add_return);
@@ -95,9 +95,9 @@ void atomic64_sub(long long a, atomic64_t *v)
 	unsigned long flags;
 	spinlock_t *lock = lock_addr(v);
 
-	spin_lock_irqsave(lock, flags);
+	raw_spin_lock_irqsave(lock, flags);
 	v->counter -= a;
-	spin_unlock_irqrestore(lock, flags);
+	raw_spin_unlock_irqrestore(lock, flags);
 }
 EXPORT_SYMBOL(atomic64_sub);
 
@@ -107,9 +107,9 @@ long long atomic64_sub_return(long long a, atomic64_t *v)
 	spinlock_t *lock = lock_addr(v);
 	long long val;
 
-	spin_lock_irqsave(lock, flags);
+	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter -= a;
-	spin_unlock_irqrestore(lock, flags);
+	raw_spin_unlock_irqrestore(lock, flags);
 	return val;
 }
 EXPORT_SYMBOL(atomic64_sub_return);
@@ -120,11 +120,11 @@ long long atomic64_dec_if_positive(atomic64_t *v)
 	spinlock_t *lock = lock_addr(v);
 	long long val;
 
-	spin_lock_irqsave(lock, flags);
+	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter - 1;
 	if (val >= 0)
 		v->counter = val;
-	spin_unlock_irqrestore(lock, flags);
+	raw_spin_unlock_irqrestore(lock, flags);
 	return val;
 }
 EXPORT_SYMBOL(atomic64_dec_if_positive);
@@ -135,11 +135,11 @@ long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n)
 	spinlock_t *lock = lock_addr(v);
 	long long val;
 
-	spin_lock_irqsave(lock, flags);
+	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter;
 	if (val == o)
 		v->counter = n;
-	spin_unlock_irqrestore(lock, flags);
+	raw_spin_unlock_irqrestore(lock, flags);
 	return val;
 }
 EXPORT_SYMBOL(atomic64_cmpxchg);
@@ -150,10 +150,10 @@ long long atomic64_xchg(atomic64_t *v, long long new)
 	spinlock_t *lock = lock_addr(v);
 	long long val;
 
-	spin_lock_irqsave(lock, flags);
+	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter;
 	v->counter = new;
-	spin_unlock_irqrestore(lock, flags);
+	raw_spin_unlock_irqrestore(lock, flags);
 	return val;
 }
 EXPORT_SYMBOL(atomic64_xchg);
@@ -164,12 +164,12 @@ int atomic64_add_unless(atomic64_t *v, long long a, long long u)
 	spinlock_t *lock = lock_addr(v);
 	int ret = 0;
 
-	spin_lock_irqsave(lock, flags);
+	raw_spin_lock_irqsave(lock, flags);
 	if (v->counter != u) {
 		v->counter += a;
 		ret = 1;
 	}
-	spin_unlock_irqrestore(lock, flags);
+	raw_spin_unlock_irqrestore(lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL(atomic64_add_unless);
@@ -179,7 +179,7 @@ static int init_atomic64_lock(void)
 	int i;
 
 	for (i = 0; i < NR_LOCKS; ++i)
-		spin_lock_init(&atomic64_lock[i].lock);
+		raw_spin_lock_init(&atomic64_lock[i].lock);
 	return 0;
 }
 
-- 
1.7.4.1



* Re: [PATCH V3 1/1] lib/atomic64 using raw_spin_lock_irq[save|restore] for atomicity
  2011-09-02  1:10 ` [PATCH V3 1/1] " Shan Hai
@ 2011-09-02 22:57   ` Andrew Morton
  2011-09-03 10:09     ` Thomas Gleixner
  0 siblings, 1 reply; 5+ messages in thread
From: Andrew Morton @ 2011-09-02 22:57 UTC (permalink / raw)
  To: Shan Hai
  Cc: tglx, eric.dumazet, vapier, asharma, linux-kernel, linux-rt-users

On Fri,  2 Sep 2011 09:10:51 +0800
Shan Hai <haishan.bai@gmail.com> wrote:

> Use the non-sleeping raw_spin_lock_irq in the atomic64 implementation because
> it is IRQ safe; the sleeping spin_lock_irq is IRQ unsafe and causes the
> deadlock shown below.
> 
> spin_lock_irq[save|restore] causes the following OOPS on 2.6.34.9+rt:

This changelog makes no sense.  I used this instead:

: The command 'perf top -e L1-dcache-loads' causes an OOPS or a whole-system
: hang in the following configurations.
: 
: PowerPC e500 core
: OOPS on: 		2.6.34.9 PREEMPT-RT
: Whole system hangs up:	3.0.3 PREEMPT-RT
: 
: The panic info is listed below.
: 
: The cause of the problem is that atomic64_add_return is interrupted by
: the performance counter interrupt; the interrupt handler calls
: atomic64_read, which fails at /build/linux/kernel/rtmutex.c:832! because
: it tries to acquire the same lock twice in the same context.
: 
: Replacing spin_lock_irq[save|restore] with
: raw_spin_lock_irq[save|restore] guarantees the atomicity of the
: atomic64_* operations, because the raw variant of the spin lock disables
: interrupts during the atomic64_* operations.
: 
: OOPS on 2.6.34.9+PREEMPT-RT:
: 
: kernel BUG at /build/linux/kernel/rtmutex.c:832!
: Oops: Exception in kernel mode, sig: 5 [#2]
: PREEMPT SMP NR_CPUS=8 LTT NESTING LEVEL : 0
: P4080 DS
: last sysfs file: /sys/devices/system/cpu/online
: Modules linked in: ipv6(+) [last unloaded: scsi_wait_scan]
: NIP: c068b218 LR: c068b1e0 CTR: 00000000
: REGS: eb459ae0 TRAP: 0700   Tainted: G      D     (2.6.34.9-rt)
: MSR: 00021002 <ME,CE>  CR: 28000488  XER: 00000000
: TASK = ea43d3b0[968] 'perf' THREAD: eb458000 CPU: 0
: GPR00: 00000001 eb459b90 ea43d3b0 00021002 00000000 00000000 00000000 00000001
: GPR08: 00000001 ea43d3b0 c068b1e0 00000000 28000482 10092c4c 7fffffff 80000000
: GPR16: eb459d40 eb459c68 00000001 c2fa2098 eb459ec0 eac5a8e8 eac5a900 c0906308
: GPR24: c0906334 00000000 eb459b9c c090d0ec 00021002 c09062e0 c09062e0 eb459b90
: NIP [c068b218] rt_spin_lock_slowlock+0x78/0x3a8
: LR [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8
: Call Trace:
: [eb459b90] [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8 (unreliable)
: [eb459c20] [c068bdb0] rt_spin_lock+0x40/0x98
: [eb459c40] [c03d2a14] atomic64_read+0x48/0x84
: [eb459c60] [c001aaf4] perf_event_interrupt+0xec/0x28c
: [eb459d10] [c0010138] performance_monitor_exception+0x7c/0x150
: [eb459d30] [c0014170] ret_from_except_full+0x0/0x4c

But I don't think the patch is applicable to mainline anyway, is it?


What's actually going on here?  A spin_lock_irq() which doesn't disable
interrupts sounds fairly moronic.  Is it the case that preempt-rt
converts interrupt code to thread context, so spin_lock_irq() only
needs to lock against other thread-context code?

If so, what's special about the ppc performance counter interrupt? 
Shouldn't that be converted to thread context as well?

If that isn't possible, does this mean that all locking/atomic code
which runs in the perf counter interrupt needs to be converted to use
raw_irq_foo()?

> @@ -29,7 +29,7 @@
>  * Ensure each lock is in a separate cacheline.
>  */
> static union {
> -	spinlock_t lock;
> +	raw_spinlock_t lock;
>  	char pad[L1_CACHE_BYTES];
>  } atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp;

There is no way on this little green earth that a reader of this code
will be able to work out that a raw_spinlock_t was used because of
something to do with ppc performance counter interrupts!

Therefore a code comment is needed.


* Re: [PATCH V3 1/1] lib/atomic64 using raw_spin_lock_irq[save|restore] for atomicity
  2011-09-02 22:57   ` Andrew Morton
@ 2011-09-03 10:09     ` Thomas Gleixner
  0 siblings, 0 replies; 5+ messages in thread
From: Thomas Gleixner @ 2011-09-03 10:09 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Shan Hai, eric.dumazet, vapier, asharma, linux-kernel, linux-rt-users

On Fri, 2 Sep 2011, Andrew Morton wrote:
> On Fri,  2 Sep 2011 09:10:51 +0800
> Shan Hai <haishan.bai@gmail.com> wrote:
> : [eb459b90] [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8 (unreliable)
> : [eb459c20] [c068bdb0] rt_spin_lock+0x40/0x98
> : [eb459c40] [c03d2a14] atomic64_read+0x48/0x84
> : [eb459c60] [c001aaf4] perf_event_interrupt+0xec/0x28c
> : [eb459d10] [c0010138] performance_monitor_exception+0x7c/0x150
> : [eb459d30] [c0014170] ret_from_except_full+0x0/0x4c
> 
> But I don't think the patch is applicable to mainline anyway, is it?

It is not necessary for mainline, but we already have quite a lot of
these raw_spinlock conversions in tree. That helps us keep the size and
intrusiveness of RT down.

> What's actually going on here?  A spin_lock_irq() which doesn't disable
> interrupts sounds fairly moronic.  Is it the case that preempt-rt
> converts interrupt code to thread context, so spin_lock_irq() only
> needs to lock against other thread-context code?

Right, as RT enforces threaded interrupts for almost everything, we can
convert the spinlocks (not the raw_ variant) to "sleeping spinlocks",
i.e. rtmutexes.

Now some interrupts cannot be threaded for various reasons and there
we need to look at the locking and convert those locks to
raw_spinlocks.
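
As a rough sketch of the resulting semantics (simplified, not the actual
implementation):

	/* Assuming PREEMPT-RT: */

	spinlock_t lock;
	spin_lock_irq(&lock);		/* rtmutex underneath: may sleep and
					 * does NOT disable hard interrupts;
					 * it only serializes against other
					 * task/threaded-IRQ context */

	raw_spinlock_t rlock;
	raw_spin_lock_irq(&rlock);	/* true spinning lock: never sleeps
					 * and really disables interrupts, so
					 * it also serializes against code in
					 * non-threaded hard IRQ context */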
 
> If so, what's special about the ppc performance counter interrupt? 
> Shouldn't that be converted to thread context as well?

No, we can't. We need to read out the counters when the interrupt happens,
not at some random point afterwards. Also, perf interrupts can come in at
high frequency when used for profiling.

> If that isn't possible, does this mean that all locking/atomic code
> which runs in the perf counter interrupt needs to be converted to use
> raw_irq_foo()?

Pretty much. The perf code itself already uses the raw variants; we
just missed this atomic64 emulation muck.

> > @@ -29,7 +29,7 @@
> >  * Ensure each lock is in a separate cacheline.
> >  */
> > static union {
> > -	spinlock_t lock;
> > +	raw_spinlock_t lock;
> >  	char pad[L1_CACHE_BYTES];
> >  } atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp;
> 
> There is no way on this little green earth that a reader of this code
> will be able to work out that a raw_spinlock_t was used because of
> something to do with ppc performance counter interrupts!
> 
> Therefore a code comment is needed.

There is another reason why it makes sense to convert this to a raw
spinlock. The protected code sections are just a few instructions, so
making it a sleeping lock would be overkill - even if it weren't used in
the ppc performance counter interrupt.
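
Putting the two together, a comment along these lines above the lock
definition would do (suggested wording only):

	/*
	 * We use a raw_spinlock_t here because:
	 *  - atomic64_* must work from non-threaded (hard) interrupt
	 *    context, e.g. the powerpc perf counter interrupt on RT,
	 *    where a sleeping lock would deadlock, and
	 *  - the critical sections are only a few instructions long,
	 *    so a sleeping lock would be overkill anyway.
	 */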

Thanks,

	tglx


* [PATCH V2 0/1] lib/atomic64 using raw_spin_lock_irq[save|restore] for atomicity
@ 2011-09-01  3:32 Shan Hai
  0 siblings, 0 replies; 5+ messages in thread
From: Shan Hai @ 2011-09-01  3:32 UTC (permalink / raw)
  To: akpm, tglx; +Cc: eric.dumazet, vapier, asharma, linux-kernel, linux-rt-users


The command 'perf top -e L1-dcache-loads' causes an OOPS or a whole-system
hang in the following configurations.

PowerPC e500 core
OOPS on: 		2.6.34.9 PREEMPT-RT
Whole system hangs up:	3.0.3 PREEMPT-RT

The panic info is listed below.

The cause of the problem is that atomic64_add_return is interrupted by the
performance counter interrupt; the interrupt handler calls atomic64_read,
which fails at /build/linux/kernel/rtmutex.c:832! because it tries to
acquire the same lock twice in the same context.

Replacing spin_lock_irq[save|restore] with raw_spin_lock_irq[save|restore]
guarantees the atomicity of the atomic64_* operations, because the raw
variant of the spin lock disables interrupts for the duration of the
atomic64_* operations.

---
 lib/atomic64.c |   40 ++++++++++++++++++++--------------------
 1 files changed, 20 insertions(+), 20 deletions(-)
---

OOPS on 2.6.34.9+PREEMPT-RT:
------------[ cut here ]------------
kernel BUG at /build/linux/kernel/rtmutex.c:832!
Oops: Exception in kernel mode, sig: 5 [#2]
PREEMPT SMP NR_CPUS=8 LTT NESTING LEVEL : 0
P4080 DS
last sysfs file: /sys/devices/system/cpu/online
Modules linked in: ipv6(+) [last unloaded: scsi_wait_scan]
NIP: c068b218 LR: c068b1e0 CTR: 00000000
REGS: eb459ae0 TRAP: 0700   Tainted: G      D     (2.6.34.9-rt)
MSR: 00021002 <ME,CE>  CR: 28000488  XER: 00000000
TASK = ea43d3b0[968] 'perf' THREAD: eb458000 CPU: 0
GPR00: 00000001 eb459b90 ea43d3b0 00021002 00000000 00000000 00000000 00000001
GPR08: 00000001 ea43d3b0 c068b1e0 00000000 28000482 10092c4c 7fffffff 80000000
GPR16: eb459d40 eb459c68 00000001 c2fa2098 eb459ec0 eac5a8e8 eac5a900 c0906308
GPR24: c0906334 00000000 eb459b9c c090d0ec 00021002 c09062e0 c09062e0 eb459b90
NIP [c068b218] rt_spin_lock_slowlock+0x78/0x3a8
LR [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8
Call Trace:
[eb459b90] [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8 (unreliable)
[eb459c20] [c068bdb0] rt_spin_lock+0x40/0x98
[eb459c40] [c03d2a14] atomic64_read+0x48/0x84
[eb459c60] [c001aaf4] perf_event_interrupt+0xec/0x28c
[eb459d10] [c0010138] performance_monitor_exception+0x7c/0x150
[eb459d30] [c0014170] ret_from_except_full+0x0/0x4c
--- Exception: 2060 at lock_acquire+0x94/0x130
    LR = lock_acquire+0x8c/0x130
[eb459e30] [c068bdf0] rt_spin_lock+0x80/0x98
[eb459e50] [c03d2884] atomic64_add_return+0x50/0x98
[eb459e70] [c00ff888] T.902+0x150/0x4f8
[eb459eb0] [c00ffedc] sys_perf_event_open+0x2ac/0x508
[eb459f40] [c0013b6c] ret_from_syscall+0x0/0x4
--- Exception: c00 at 0xfa8abe4
    LR = 0x10016034



