* [RFC PATCH 0/7] locking/rtqspinlock: Realtime queued spinlocks
@ 2017-01-03 18:00 Waiman Long
  2017-01-03 18:00 ` [RFC PATCH 1/7] locking/spinlock: Remove the unused spin_lock_bh_nested API Waiman Long
                   ` (7 more replies)
  0 siblings, 8 replies; 24+ messages in thread
From: Waiman Long @ 2017-01-03 18:00 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Thomas Gleixner, H. Peter Anvin
  Cc: linux-kernel, Waiman Long

This patchset introduces a new variant of queued spinlocks - the
realtime queued spinlock. The purpose of this new variant is to
support spinlock usage in a realtime environment where high-priority
RT tasks should be allowed to complete their work as soon as
possible. This means keeping their waiting time on spinlocks as short
as possible.

Non-RT tasks will wait for spinlocks in the MCS waiting queue as
usual. RT tasks and interrupts will spin directly on the spinlocks
and use the priority value in the pending byte to arbitrate who gets
the lock first (see the sketch after the Patch 2 description below).

Patch 1 removes the unused spin_lock_bh_nested() API.

Patch 2 introduces the basic realtime queued spinlocks where the
pending byte is used for storing the priority of the highest priority
RT task that is waiting on the spinlock. All the RT tasks will spin
directly on the spinlock instead of waiting in the queue.
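
As an illustration of the idea, here is a minimal user-space sketch
of how RT waiters could arbitrate on the pending byte. This is not
the actual patch code; the lock-word layout, masks and function names
below are made up for the example, and non-RT waiters, which keep
going through the MCS queue, are not modeled:

/*
 * Conceptual user-space model of RT spinning on the pending byte.
 * Byte 0 of the lock word is the locked byte, byte 1 holds the
 * priority of the highest-priority RT waiter.
 */
#include <stdatomic.h>
#include <stdint.h>

#define _Q_LOCKED_MASK		0x000000ffU
#define _Q_PENDING_SHIFT	8
#define _Q_PENDING_MASK		0x0000ff00U

static void rt_spin_lock(atomic_uint *lock, uint8_t prio)
{
	unsigned int val, new;
	uint8_t top;

	for (;;) {
		val = atomic_load_explicit(lock, memory_order_relaxed);
		top = (val & _Q_PENDING_MASK) >> _Q_PENDING_SHIFT;

		if (top < prio) {
			/* Advertise ourselves as the highest-priority waiter. */
			new = (val & ~_Q_PENDING_MASK) |
			      ((unsigned int)prio << _Q_PENDING_SHIFT);
			atomic_compare_exchange_weak_explicit(lock, &val, new,
				memory_order_relaxed, memory_order_relaxed);
			continue;
		}
		if (top > prio || (val & _Q_LOCKED_MASK))
			continue;	/* higher-priority waiter, or lock held */

		/*
		 * We are the top waiter and the lock is free: take the lock
		 * and clear the pending byte so that the remaining waiters
		 * re-arbitrate among themselves.
		 */
		new = (val & ~_Q_PENDING_MASK) | 1;
		if (atomic_compare_exchange_weak_explicit(lock, &val, new,
				memory_order_acquire, memory_order_relaxed))
			return;
	}
}

static void rt_spin_unlock(atomic_uint *lock)
{
	/* Release only the locked byte; waiter priorities stay intact. */
	atomic_fetch_and_explicit(lock, ~_Q_LOCKED_MASK, memory_order_release);
}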

Patch 3 moves all interrupt context lock waiters to RT spinning.
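
Conceptually, the priority value advertised by a waiter could be
picked along these lines (illustrative sketch only, not the patch
code; the helper name is made up, while in_interrupt(), rt_task() and
MAX_RT_PRIO are existing kernel symbols):

#include <linux/preempt.h>
#include <linux/sched.h>

/* Illustrative helper: which priority does a lock waiter advertise? */
static int rt_waiter_prio(void)
{
	if (in_interrupt())
		return MAX_RT_PRIO;		/* static value above any RT task */
	if (rt_task(current))
		return current->rt_priority;	/* 1..MAX_RT_PRIO-1 */
	return 0;				/* non-RT task: queue as usual */
}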

Patch 4 overrides the spin_lock_nested() call with special code to
enable RT lock spinning for nested spinlocks.
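
For reference, spin_lock_nested() is the existing lockdep annotation
used when taking two locks of the same lock class, e.g. the usual
pattern (simplified):

#include <linux/spinlock.h>

static void lock_pair(spinlock_t *outer, spinlock_t *inner)
{
	spin_lock(outer);
	/* Tell lockdep the second same-class acquisition is intentional. */
	spin_lock_nested(inner, SINGLE_DEPTH_NESTING);
}

Patch 4 overrides this API so the nested acquisition can use RT
spinning; as noted below, a later version would also change such call
sites to pass in the outer lock of the pair.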

Patch 5 handles priority boosting by having a queued waiter
periodically check its priority, unqueue from the waiting queue and
switch to RT spinning if applicable.
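
The shape of that check might look something like this (purely
illustrative; every name here is made up for the sketch and the real
unqueue logic in the patch is more involved):

#include <stdbool.h>

#define BOOST_CHECK_PERIOD	256	/* poll for a boost every 256 spins */

static void queued_wait(bool (*lock_is_mine)(void),
			bool (*boosted_to_rt)(void),
			void (*unqueue_and_rt_spin)(void))
{
	unsigned int spins = 0;

	while (!lock_is_mine()) {
		if (++spins % BOOST_CHECK_PERIOD == 0 && boosted_to_rt()) {
			/*
			 * We were boosted to an RT priority while queued:
			 * leave the MCS queue and contend directly on the
			 * pending byte instead.
			 */
			unqueue_and_rt_spin();
			return;
		}
	}
	/* ... normal queued acquisition continues here ... */
}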

Patch 6 allows voluntary CPU preemption to happen when a CPU is
waiting for a spinlock.
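
In user-space terms the idea is roughly the following (analogy only;
the kernel side would key off need_resched() and the scheduler's
preemption machinery rather than sched_yield()):

#include <sched.h>
#include <stdbool.h>

static void spin_wait(bool (*lock_available)(void),
		      bool (*resched_pending)(void))
{
	while (!lock_available()) {
		if (resched_pending())
			sched_yield();	/* give the CPU away instead of burning it */
	}
}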

Patch 7 enables event counts to be collected by the qspinlock stat
package so that we can monitor what has happened within the kernel.
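
If it helps to picture it, the stat collection amounts to cheap
per-event counters along these lines (illustrative only; the event
names below are invented, and the existing qspinlock stat code in
qspinlock_stat.h actually uses per-cpu counters exposed through
debugfs):

#include <stdatomic.h>

enum rtqspinlock_stat {
	STAT_RT_SPIN,		/* waiter spun directly on the lock */
	STAT_RT_UNQUEUE,	/* waiter left the MCS queue after a boost */
	STAT_RT_YIELD,		/* waiter yielded the CPU while waiting */
	STAT_NR
};

static atomic_ulong stat_count[STAT_NR];

static inline void stat_inc(enum rtqspinlock_stat event)
{
	atomic_fetch_add_explicit(&stat_count[event], 1, memory_order_relaxed);
}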

With a locking microbenchmark running on a 2-socket 36-core E5-2699
v3 system, the elapsed times for each non-RT thread to complete a
2M-iteration locking loop were as follows:

   # of threads   qspinlock   rt-qspinlock  % change
   ------------   ---------   ------------  --------
        2           0.29s        1.97s       +580%
        3           1.46s        2.05s        +40%
        4           1.81s        2.38s        +31%
        5           2.36s        2.87s        +22%
        6           2.73s        3.58s        +31%
        7           3.17s        3.74s        +18%
        8           3.67s        4.70s        +28%
        9           3.89s        5.28s        +36%
       10           4.35s        6.58s        +51%

The RT qspinlock is slower than the non-RT qspinlock, which is expected.

This patchset doesn't yet include any patch to modify the call sites
of spin_lock_nested() to include the outer lock of the nested
spinlock pair. That will be included in a later version of this
patchset once it is determined that the RT qspinlock is worth
pursuing.

Only minimal testing (building and booting the patched kernel) has
been done. More extensive testing will be done with later versions of
this patchset.

Waiman Long (7):
  locking/spinlock: Remove the unused spin_lock_bh_nested API
  locking/rtqspinlock: Introduce realtime queued spinlocks
  locking/rtqspinlock: Use static RT priority when in interrupt context
  locking/rtqspinlock: Override spin_lock_nested with special RT variants
  locking/rtqspinlock: Handle priority boosting
  locking/rtqspinlock: Voluntarily yield CPU when need_sched()
  locking/rtqspinlock: Enable collection of event counts

 arch/x86/Kconfig                 |  18 +-
 include/linux/spinlock.h         |  43 +++-
 include/linux/spinlock_api_smp.h |   9 +-
 include/linux/spinlock_api_up.h  |   1 -
 kernel/Kconfig.locks             |   9 +
 kernel/locking/qspinlock.c       |  51 +++-
 kernel/locking/qspinlock_rt.h    | 543 +++++++++++++++++++++++++++++++++++++++
 kernel/locking/qspinlock_stat.h  |  81 +++++-
 kernel/locking/spinlock.c        |   8 -
 9 files changed, 721 insertions(+), 42 deletions(-)
 create mode 100644 kernel/locking/qspinlock_rt.h

-- 
1.8.3.1

Thread overview: 24+ messages
2017-01-03 18:00 [RFC PATCH 0/7] locking/rtqspinlock: Realtime queued spinlocks Waiman Long
2017-01-03 18:00 ` [RFC PATCH 1/7] locking/spinlock: Remove the unused spin_lock_bh_nested API Waiman Long
2017-01-03 18:00 ` [RFC PATCH 2/7] locking/rtqspinlock: Introduce realtime queued spinlocks Waiman Long
2017-01-03 18:00 ` [RFC PATCH 3/7] locking/rtqspinlock: Use static RT priority when in interrupt context Waiman Long
2017-01-03 18:00 ` [RFC PATCH 4/7] locking/rtqspinlock: Override spin_lock_nested with special RT variants Waiman Long
2017-01-03 18:00 ` [RFC PATCH 5/7] locking/rtqspinlock: Handle priority boosting Waiman Long
2017-01-03 18:00 ` [RFC PATCH 6/7] locking/rtqspinlock: Voluntarily yield CPU when need_sched() Waiman Long
2017-01-04 10:07   ` Boqun Feng
2017-01-04 21:57     ` Waiman Long
2017-01-05 10:16   ` Daniel Bristot de Oliveira
2017-01-03 18:00 ` [RFC PATCH 7/7] locking/rtqspinlock: Enable collection of event counts Waiman Long
2017-01-04 12:49 ` [RFC PATCH 0/7] locking/rtqspinlock: Realtime queued spinlocks Peter Zijlstra
2017-01-04 15:25   ` Waiman Long
2017-01-04 15:55     ` Steven Rostedt
2017-01-04 20:02       ` Waiman Long
2017-01-05 18:43         ` Steven Rostedt
2017-01-05  9:26     ` Daniel Bristot de Oliveira
2017-01-05  9:44     ` Peter Zijlstra
2017-01-05 15:55       ` Waiman Long
2017-01-05 16:08         ` Peter Zijlstra
2017-01-05 17:07           ` Waiman Long
2017-01-05 18:50             ` Steven Rostedt
2017-01-05 19:24               ` Waiman Long
2017-01-05 18:05           ` Daniel Bristot de Oliveira
