From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
	Victor Kaplansky <VICTORK@il.ibm.com>,
	Oleg Nesterov <oleg@redhat.com>,
	Anton Blanchard <anton@samba.org>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Frederic Weisbecker <fweisbec@gmail.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux PPC dev <linuxppc-dev@ozlabs.org>,
	Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>,
	Michael Ellerman <michael@ellerman.id.au>,
	Michael Neuling <mikey@neuling.org>
Subject: Re: [RFC] arch: Introduce new TSO memory barrier smp_tmb()
Date: Mon, 4 Nov 2013 08:27:32 -0800	[thread overview]
Message-ID: <20131104162732.GN3947@linux.vnet.ibm.com> (raw)
In-Reply-To: <20131104112254.GK28601@twins.programming.kicks-ass.net>

On Mon, Nov 04, 2013 at 12:22:54PM +0100, Peter Zijlstra wrote:
> On Mon, Nov 04, 2013 at 02:51:00AM -0800, Paul E. McKenney wrote:
> > OK, something like this for the definitions (though PowerPC might want
> > to locally abstract the lwsync expansion):
> > 
> > 	#define smp_store_with_release_semantics(p, v) /* x86, s390, etc. */ \
> > 	do { \
> > 		barrier(); \
> > 		ACCESS_ONCE(p) = (v); \
> > 	} while (0)
> > 
> > 	#define smp_store_with_release_semantics(p, v) /* PowerPC. */ \
> > 	do { \
> > 		__asm__ __volatile__ (stringify_in_c(LWSYNC) : : :"memory"); \
> > 		ACCESS_ONCE(p) = (v); \
> > 	} while (0)
> > 
> > 	#define smp_load_with_acquire_semantics(p) /* x86, s390, etc. */ \
> > 	({ \
> > 		typeof(p) _________p1 = ACCESS_ONCE(p); \
> > 		barrier(); \
> > 		_________p1; \
> > 	})
> > 
> > 	#define smp_load_with_acquire_semantics(p) /* PowerPC. */ \
> > 	({ \
> > 		typeof(p) _________p1 = ACCESS_ONCE(p); \
> > 		__asm__ __volatile__ (stringify_in_c(LWSYNC) : : :"memory"); \
> > 		_________p1; \
> > 	})
> > 
> > For ARM, smp_load_with_acquire_semantics() is a wrapper around the ARM
> > "ldar" instruction and smp_store_with_release_semantics() is a wrapper
> > around the ARM "stlr" instruction.
> 
> This still leaves me confused as to what to do with my case :/
> 
> Slightly modified since last time -- as the simplified version was maybe
> simplified too far.
> 
> To recap, I'd like to get rid of barrier A where possible, since that's
> now a full barrier for every event written.
> 
> However, there's no immediate store I can attach it to; the obvious one
> would be the kbuf->head store, but that's complicated by the
> local_cmpxchg() thing.
> 
> And we need that cmpxchg loop because a hardware NMI event can
> interleave with a software event.
> 
> And to be honest, I'm still totally confused about memory barriers vs
> control flow vs C/C++. The only way we're ever getting to that memcpy is
> if we've already observed ubuf->tail, so that LOAD has to be fully
> processed and completed.
> 
> I'm really not seeing how a STORE from the memcpy() could possibly go
> wrong; and if C/C++ can hoist the memcpy() over a compiler barrier()
> then I suppose we should all just go home.
> 
> /me who wants A to be a barrier() but is terminally confused.

Well, let's see...

> ---
> 
> 
> /*
>  * One important detail is that the kbuf part and the kbuf_writer() are
>  * strictly per cpu and we can thus rely on program order for those.
>  *
>  * Only the userspace consumer can possibly run on another cpu, and thus we
>  * need to ensure data consistency for those.
>  */
> 
> struct buffer {
>         u64 size;
>         u64 tail;
>         u64 head;
>         void *data;
> };
> 
> struct buffer *kbuf, *ubuf;
> 
> /*
>  * If there's space in the buffer, store the data @buf; otherwise
>  * discard it.
>  */
> void kbuf_write(int sz, void *buf)
> {
> 	u64 tail, head, offset;
> 
> 	do {
> 		tail = ACCESS_ONCE(ubuf->tail);

So the above load is the key load.  It determines whether or not we
have space in the buffer.  This of course assumes that only this CPU
writes to ->head.

If so, then:

		tail = smp_load_with_acquire_semantics(ubuf->tail); /* A -> D */

> 		offset = head = kbuf->head;
> 		if (CIRC_SPACE(head, tail, kbuf->size) < sz) {
> 			/* discard @buf */
> 			return;
> 		}
> 		head += sz;
> 	} while (local_cmpxchg(&kbuf->head, offset, head) != offset);

If there is an issue with kbuf->head, presumably local_cmpxchg() fails
and we retry.

But sheesh, do you think we could have buried the definitions of
local_cmpxchg() under a few more layers of macro expansion just to
keep things more obscure?  Anyway, griping aside...

o	__cmpxchg_local_generic() in include/asm-generic/cmpxchg-local.h
	doesn't seem to exclude NMIs, so is not safe for this usage.

o	__cmpxchg_local() in ARM handles NMI as long as the
	argument is 32 bits; otherwise, it uses the aforementioned
	__cmpxchg_local_generic(), which does not handle NMI.  Given your
	u64, this does not look good...

	And some other architectures (e.g., metag) seem to fail to
	handle NMI even in the 32-bit case.

o	FRV and M32r seem to act similarly to ARM.

Or maybe these architectures don't do NMIs?  If they do, local_cmpxchg()
does not seem to be safe against NMIs in general.  :-/

That said, powerpc, 64-bit s390, sparc, and x86 seem to handle it.

Of course, x86's local_cmpxchg() has full memory barriers implicitly.

> 
>         /*
>          * Ensure that if we see the userspace tail (ubuf->tail) such
>          * that there is space to write @buf without overwriting data
>          * userspace hasn't seen yet, we won't in fact store data before
>          * that read completes.
>          */
> 
>         smp_mb(); /* A, matches with D */

Given a change to smp_load_with_acquire_semantics() above, you should not
need this smp_mb().

>         memcpy(kbuf->data + offset, buf, sz);
> 
>         /*
>          * Ensure that we write all the @buf data before we update the
>          * userspace visible ubuf->head pointer.
>          */
>         smp_wmb(); /* B, matches with C */
> 
>         ubuf->head = kbuf->head;

Replace the smp_wmb() and the assignment with:

	smp_store_with_release_semantics(ubuf->head, kbuf->head); /* B -> C */

> }
> 
> /*
>  * Consume the buffer data and update the tail pointer to indicate to
>  * kernel space there's 'free' space.
>  */
> void ubuf_read(void)
> {
>         u64 head, tail;
> 
>         tail = ACCESS_ONCE(ubuf->tail);

Does anyone else write tail?  Or is this defense against NMIs?

If no one else writes to tail and if NMIs cannot muck things up, then
the above ACCESS_ONCE() is not needed, though I would not object to its
staying.

>         head = ACCESS_ONCE(ubuf->head);

Make the above be:

	head = smp_load_with_acquire_semantics(ubuf->head);  /* C -> B */

>         /*
>          * Ensure we read the buffer boundaries before the actual buffer
>          * data...
>          */
>         smp_rmb(); /* C, matches with B */

And drop the above memory barrier.

>         while (tail != head) {
>                 obj = ubuf->data + tail;
>                 /* process obj */
>                 tail += obj->size;
>                 tail %= ubuf->size;
>         }
> 
>         /*
>          * Ensure all data reads are complete before we issue the
>          * ubuf->tail update; once that update hits, kbuf_write() can
>          * observe and overwrite data.
>          */
>         smp_mb(); /* D, matches with A */
> 
>         ubuf->tail = tail;

Replace the above barrier and the assignment with:

	smp_store_with_release_semantics(ubuf->tail, tail); /* D -> A */

> }

All this is leading me to suggest the following shortenings of names:

	smp_load_with_acquire_semantics() -> smp_load_acquire()

	smp_store_with_release_semantics() -> smp_store_release()

But names aside, the above gets rid of explicit barriers on TSO architectures,
allows ARM to avoid full DMB, and allows PowerPC to use lwsync instead of
the heavier-weight sync.

								Thanx, Paul

