* [PATCH v3 0/4] x86: faster mb()+documentation tweaks
@ 2016-01-13 20:12 Michael S. Tsirkin
From: Michael S. Tsirkin @ 2016-01-13 20:12 UTC (permalink / raw)
  To: linux-kernel, Linus Torvalds
  Cc: Davidlohr Bueso, Peter Zijlstra, Ingo Molnar, Thomas Gleixner,
	Paul E. McKenney, the arch/x86 maintainers, Davidlohr Bueso,
	H. Peter Anvin, virtualization, Borislav Petkov

mb() typically uses mfence on modern x86, but a micro-benchmark shows that it's
2 to 3 times slower than the lock; addl sequence that we use on older CPUs.

So let's use the locked variant everywhere.
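
For reference, the two forms being compared look roughly like this
(a simplified sketch, not the literal barrier.h code - the real 32-bit
macro picks between them with alternative(), and the helper names below
are made up for illustration):

	/* what CPUs with SSE2 execute today */
	#define mb_mfence()	asm volatile("mfence" ::: "memory")
	/* the fallback used on older CPUs */
	#define mb_locked()	asm volatile("lock; addl $0,0(%%esp)" ::: "memory")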

While I was at it, I found some inconsistencies in comments in
arch/x86/include/asm/barrier.h

The documentation fixes are included first - I verified that
they do not change the generated code at all. They should be
safe to apply directly.

The last patch changes mb() to lock addl. I was unable to
measure a speed difference on a macro benchmark,
but I noted that even doing
	#define mb() barrier()
seems to make no difference for most benchmarks
(it causes hangs sometimes, of course).
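
(For completeness: barrier() here is just the compiler barrier, roughly

	#define barrier()	asm volatile("" ::: "memory")

so it emits no fence instruction at all - which is exactly why dropping
the real barrier can hang SMP code that needs the CPU-level ordering.)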

HPA asked that the last patch be deferred until we hear back from
Intel, which of course makes sense. So it needs HPA's ack.

I hope I'm not splitting this up too much - the reason is that I wanted to
isolate the code changes (which people might want to test for performance)
from the comment changes approved by Linus, and those in turn from the
(so far unreviewed) changes I came up with myself.

Changes from v2:
	add a patch adding a "cc" clobber for addl
	tweak the commit log for patch 2
	use addl at SP-4 (as opposed to SP) to reduce data dependencies - see the sketch below
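
As a rough illustration of the first and last items, the locked variant
ends up looking more or less like this (a sketch only, not the literal
patch text; the 64-bit version would use %rsp):

	#define mb_locked()	asm volatile("lock; addl $0,-4(%%esp)" ::: "memory", "cc")

The "cc" clobber documents that addl rewrites the flags, and adding at
SP-4 rather than at SP is meant to avoid a dependency on the value
currently sitting at the top of the stack.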

Michael S. Tsirkin (4):
  x86: add cc clobber for addl
  x86: drop a comment left over from X86_OOSTORE
  x86: tweak the comment about use of wmb for IO
  x86: drop mfence in favor of lock+addl

 arch/x86/include/asm/barrier.h | 20 +++++++++-----------
 1 file changed, 9 insertions(+), 11 deletions(-)

-- 
MST
