[PATCH 0/3] Hook up qspinlock for arm64
From: Will Deacon @ 2018-06-26 11:00 UTC
  To: linux-arm-kernel

Hi everybody,

With my recent changes to the core qspinlock code, it now performs well
enough on arm64 to replace our ticket-based approach.
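
For anyone not following the locking code closely, the net effect of the
series is that the hand-rolled arm64 ticket lock goes away and the arch
headers simply pull in the generic queued implementations. The snippet
below is only an illustrative sketch of that wiring (the select and the
includes follow the usual asm-generic conventions rather than quoting
the patches verbatim):

  /*
   * Sketch only: arch/arm64/Kconfig grows a
   * "select ARCH_USE_QUEUED_SPINLOCKS" and asm/Kbuild a
   * "generic-y += qspinlock.h" line.
   */

  /* Sketch of arch/arm64/include/asm/spinlock.h after the switch */
  #include <asm/qrwlock.h>
  #include <asm/qspinlock.h>

  /* Sketch of arch/arm64/include/asm/spinlock_types.h after the switch */
  #include <asm-generic/qspinlock_types.h>
  #include <asm-generic/qrwlock_types.h>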

Testing welcome,

Will

--->8

Will Deacon (3):
  arm64: barrier: Implement smp_cond_load_relaxed
  arm64: locking: Replace ticket lock implementation with qspinlock
  arm64: kconfig: Ensure spinlock fastpaths are inlined if !PREEMPT

 arch/arm64/Kconfig                      |  11 +++
 arch/arm64/include/asm/Kbuild           |   1 +
 arch/arm64/include/asm/barrier.h        |  13 ++++
 arch/arm64/include/asm/spinlock.h       | 117 +-------------------------------
 arch/arm64/include/asm/spinlock_types.h |  17 +----
 5 files changed, 27 insertions(+), 132 deletions(-)
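
Patch 1/3 adds an arm64 implementation of smp_cond_load_relaxed(), which
the core qspinlock code uses (via its atomic_cond_read_*() wrappers) to
wait for the lock word to change. As a rough reminder of the interface,
the asm-generic fallback is essentially the spin loop below; this is a
sketch of its shape, not the arm64 version, which can wait on a
load-exclusive/WFE pair instead of burning cycles in cpu_relax():

  /*
   * Shape of the asm-generic fallback (sketch): re-read *ptr with
   * READ_ONCE() until cond_expr, which may refer to the last value
   * read as VAL, becomes true. No ordering beyond the relaxed loads
   * is implied.
   */
  #define smp_cond_load_relaxed(ptr, cond_expr) ({	\
  	typeof(ptr) __PTR = (ptr);			\
  	typeof(*ptr) VAL;				\
  	for (;;) {					\
  		VAL = READ_ONCE(*__PTR);		\
  		if (cond_expr)				\
  			break;				\
  		cpu_relax();				\
  	}						\
  	VAL;						\
  })

Callers name the last value read as VAL in the condition, so a wait looks
something like smp_cond_load_relaxed(&lock->val, !(VAL & PENDING_MASK)),
where PENDING_MASK stands in for whatever bits the caller cares about.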

-- 
2.1.4



Thread overview:
2018-06-26 11:00 [PATCH 0/3] Hook up qspinlock for arm64 Will Deacon
2018-06-26 11:00 ` [PATCH 1/3] arm64: barrier: Implement smp_cond_load_relaxed Will Deacon
2018-06-26 11:00 ` [PATCH 2/3] arm64: locking: Replace ticket lock implementation with qspinlock Will Deacon
2018-06-26 11:00 ` [PATCH 3/3] arm64: kconfig: Ensure spinlock fastpaths are inlined if !PREEMPT Will Deacon
2018-07-20  9:07 ` [PATCH 0/3] Hook up qspinlock for arm64 John Garry
