[PATCH v3 0/5] Add NUMA-awareness to qspinlock
From: Alex Kogan @ 2019-07-15 19:25 UTC
  To: linux, peterz, mingo, will.deacon, arnd, longman, linux-arch,
	linux-arm-kernel, linux-kernel, tglx, bp, hpa, x86, guohanjun,
	jglauber
  Cc: alex.kogan, dave.dice, rahul.x.yadav, steven.sistare, daniel.m.jordan

Changes from v2:
----------------
- Patch in the NUMA-aware qspinlock at boot time on machines with
multiple NUMA nodes running a kernel compiled with NUMA_AWARE_SPINLOCKS,
as suggested by Peter and Longman.

- CNA queue nodes encapsulate MCS queue nodes, similarly to paravirt nodes,
as suggested by Peter. MCS queue node size has been increased by 4 bytes.
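
For reference, a minimal sketch of that encapsulation (field names
are illustrative; see patch 3/5 for the actual layout):

/* The CNA node embeds the MCS node as its first member, so the
 * generic MCS code keeps operating on the inner struct while CNA
 * adds its own per-waiter state around it. */
struct mcs_spinlock {			/* kernel/locking/mcs_spinlock.h */
	struct mcs_spinlock *next;
	int locked;
	int count;
};

struct cna_node {
	struct mcs_spinlock mcs;	/* must stay first for casts */
	int numa_node;			/* NUMA node this waiter runs on */
};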

- Use the existing next_pseudo_random32() instead of a custom xorshift
pseudo-random number generator, as suggested by Peter.
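
For reference, next_pseudo_random32() is the one-line linear
congruential generator from include/linux/random.h, cheap enough
to call from the lock slow path:

static inline u32 next_pseudo_random32(u32 seed)
{
	return seed * 1664525 + 1013904223;
}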

- Use cpu_to_node() to look up the NUMA node of a thread, as suggested
by Hanjun.

- Rewrote cna_pass_mcs_lock(), as suggested by Peter.

- We evaluated the patch on a single-node machine as well as in a paravirt 
environment (with virtme/qemu), as suggested by Peter and Longman.
Details are below.

- Our evaluation shows that CNA also improves performance of user
applications that have hot pthread mutexes, as the latter create contention
on spin locks protecting futex chains in the kernel when waiting threads
park and unpark. Details are below.


Summary
-------

Lock throughput can be increased by handing a lock to a waiter on the
same NUMA node as the lock holder, provided care is taken to avoid
starvation of waiters on other NUMA nodes. This series introduces CNA
(compact NUMA-aware lock) as the slow path for qspinlock. It is
enabled through a configuration option (NUMA_AWARE_SPINLOCKS).
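
As a rough illustration of the boot-time switch (the function name
and exact condition below are made up; the series wires this into
the x86 alternatives code), the paravirt indirection that x86
already uses for qspinlock can be repointed on multi-node machines:

/* Sketch: pick the CNA slow path at boot on multi-node machines.
 * nr_node_ids and pv_ops are existing x86 kernel symbols; the
 * function name and condition are illustrative. */
static void __init cna_configure_spin_lock_slowpath(void)
{
	if (nr_node_ids > 1)
		pv_ops.lock.queued_spin_lock_slowpath =
			cna_queued_spin_lock_slowpath;
}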

CNA is a NUMA-aware version of the MCS spin-lock. Spinning threads are
organized in two queues: a main queue for threads running on the same
node as the current lock holder, and a secondary queue for threads
running on other nodes. Each thread stores the ID of the node on which
it is running in its queue node. At unlock time, the lock
holder scans the main queue looking for a thread running on the same
node. If one is found (call it thread T), all threads in the main queue
between the current lock holder and T are moved to the end of the
secondary queue, and the lock is passed to T. If no such thread is
found, the lock is passed to the thread at the head of the secondary
queue. Finally, if the secondary queue is empty, the lock is passed to
the next thread in the main queue. To avoid starvation of threads in
the secondary queue, those threads are moved back to the head of the
main queue after a certain expected number of intra-node lock hand-offs.
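
A condensed sketch of that unlock-time decision is below. The helper
names (cna_splice_to_secondary(), cna_secondary_head()) and the
details are illustrative; the actual code is in patches 3/5 and 4/5.

/* Sketch of the hand-off choice made by the lock holder at unlock time. */

/* move main-queue waiters [first..last] to the secondary queue tail */
extern void cna_splice_to_secondary(struct cna_node *me,
				    struct cna_node *first,
				    struct cna_node *last);
/* head of the secondary queue, or NULL if it is empty */
extern struct cna_node *cna_secondary_head(struct cna_node *me);

static struct cna_node *cna_choose_successor(struct cna_node *me)
{
	struct cna_node *head = (struct cna_node *)READ_ONCE(me->mcs.next);
	struct cna_node *prev = NULL, *cur;

	/* Scan the main queue for a waiter on our NUMA node (thread T). */
	for (cur = head; cur; prev = cur,
	     cur = (struct cna_node *)READ_ONCE(cur->mcs.next)) {
		if (cur->numa_node == me->numa_node) {
			/* Waiters we skipped over move to the end of
			 * the secondary queue; the lock goes to T. */
			if (cur != head)
				cna_splice_to_secondary(me, head, prev);
			return cur;
		}
	}

	/* No same-node waiter: prefer the head of the secondary queue,
	 * else fall back to the next thread in the main queue. */
	return cna_secondary_head(me) ?: head;
}

The starvation-avoidance threshold (patch 4/5) sits in front of this
scan: once the expected number of intra-node hand-offs is exceeded,
the secondary queue is moved back to the head of the main queue instead.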

More details are available at https://arxiv.org/abs/1810.05600.

We have evaluated performance with the locktorture module
as well as with several benchmarks from the will-it-scale repo.
The following locktorture results are from an Oracle X5-4 server
(four Intel Xeon E7-8895 v3 @ 2.60GHz sockets with 18 hyperthreaded
cores each). Each number represents an average (over 25 runs) of the
total number of ops (x10^7) reported at the end of each run. The
standard deviation is reported in parentheses, and is generally about
3% of the average. The 'stock' kernel is v5.2.0-rc2,
commit f782099a96a0 ("Merge branch 'perf/core'"),
compiled in the default configuration. 'patch' is the modified
kernel compiled with NUMA_AWARE_SPINLOCKS not set; it is included to show
that any performance changes to the existing qspinlock implementation are
essentially noise. 'patch-CNA' is the modified kernel with
NUMA_AWARE_SPINLOCKS set; the speedup is calculated by dividing
'patch-CNA' by 'stock'.

#thr     stock          patch          patch-CNA     speedup (patch-CNA/stock)
  1  2.687 (0.104)  2.655 (0.099)   2.706 (0.119)  1.007
  2  3.085 (0.104)  3.140 (0.128)   3.111 (0.147)  1.009
  4  4.230 (0.125)  4.217 (0.129)   4.482 (0.121)  1.060
  8  5.480 (0.159)  5.411 (0.183)   7.064 (0.218)  1.289
 16  6.733 (0.196)  6.764 (0.155)   8.666 (0.161)  1.287
 32  7.557 (0.148)  7.488 (0.133)   9.519 (0.253)  1.260
 36  7.667 (0.222)  7.654 (0.211)   9.530 (0.218)  1.243
 72  6.931 (0.172)  6.931 (0.187)  10.030 (0.217)  1.447
108  6.478 (0.098)  6.423 (0.107)  10.157 (0.250)  1.568
142  6.041 (0.102)  6.058 (0.111)  10.102 (0.260)  1.672

The following tables contain throughput results (ops/us) from the same
setup for will-it-scale/open1_threads: 

#thr     stock          patch          patch-CNA     speedup (patch-CNA/stock)
  1  0.536 (0.001)  0.540 (0.003)  0.538 (0.001)  1.002
  2  0.833 (0.020)  0.842 (0.028)  0.827 (0.025)  0.993
  4  1.464 (0.031)  1.473 (0.025)  1.465 (0.033)  1.001
  8  1.685 (0.087)  1.707 (0.078)  1.708 (0.104)  1.013
 16  1.715 (0.091)  1.777 (0.100)  1.766 (0.070)  1.029
 32  0.937 (0.065)  0.930 (0.078)  1.752 (0.072)  1.869
 36  0.930 (0.079)  0.927 (0.092)  1.731 (0.068)  1.862
 72  0.871 (0.037)  0.855 (0.038)  1.758 (0.071)  2.019
108  0.856 (0.044)  0.865 (0.042)  1.747 (0.063)  2.040
142  0.810 (0.051)  0.815 (0.041)  1.776 (0.064)  2.193

and will-it-scale/lock2_threads:

#thr     stock          patch          patch-CNA     speedup (patch-CNA/stock)
  1  1.631 (0.002)  1.638 (0.002)  1.637 (0.002)  1.004
  2  2.756 (0.076)  2.761 (0.063)  2.778 (0.081)  1.008
  4  5.119 (0.411)  5.256 (0.331)  5.138 (0.388)  1.004
  8  4.147 (0.215)  4.299 (0.264)  4.126 (0.322)  0.995
 16  4.214 (0.111)  4.234 (0.133)  4.133 (0.128)  0.981
 32  2.485 (0.095)  2.473 (0.117)  4.015 (0.115)  1.616
 36  2.423 (0.099)  2.451 (0.117)  3.963 (0.129)  1.636
 72  2.026 (0.102)  1.983 (0.108)  4.000 (0.122)  1.975
108  2.102 (0.088)  2.145 (0.080)  3.927 (0.108)  1.868
142  1.923 (0.128)  1.894 (0.100)  3.879 (0.081)  2.018

We also evaluated the patch on a single-node machine (Intel i7-4770 with 
4 hyperthreaded cores) with will-it-scale, and observed no meaningful
performance impact, as expected. For instance, below are results for 
will-it-scale/open1_threads:

#thr     stock          patch-CNA      speedup (patch-CNA/stock)
  1  0.861 (0.006)  0.867 (0.005)  1.007
  2  1.481 (0.015)  1.511 (0.017)  1.020
  4  2.671 (0.041)  2.697 (0.049)  1.010
  6  2.889 (0.064)  2.910 (0.060)  1.007

Furthermore, we evaluated the patch in a paravirt setup, booting the
kernel with virtme (qemu) and $(nproc) cores on the same Oracle X5-4
server as above. We ran the will-it-scale benchmarks, and once again
observed no meaningful performance impact. For instance, below are
results for will-it-scale/open1_threads:

#thr     stock          patch-CNA      speedup (patch-CNA/stock)
  1  0.761 (0.009)  0.763 (0.009)  1.003
  2  0.652 (0.043)  0.666 (0.033)  1.022
  4  0.591 (0.036)  0.596 (0.033)  1.008
  8  0.582 (0.019)  0.575 (0.020)  0.989
 16  0.680 (0.021)  0.685 (0.018)  1.007
 32  0.566 (0.031)  0.548 (0.049)  0.968
 36  0.549 (0.053)  0.531 (0.053)  0.966
 72  0.363 (0.012)  0.364 (0.008)  1.002
108  0.359 (0.010)  0.361 (0.009)  1.004
142  0.355 (0.011)  0.362 (0.011)  1.020

Our evaluation shows that CNA also improves performance of user
applications that have hot pthread mutexes. Those mutexes are
blocking, so waiting threads park and unpark via the futex
mechanism in the kernel. Since kernel futex chains, which
are hashed by the mutex address, are each protected by a
chain-specific spin lock, contention on a user-mode mutex
translates into contention on a kernel-level spin lock.
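
For context, each futex hash bucket looks roughly like this
(simplified from kernel/futex.c); hb->lock is the chain-specific
spin lock that ends up contended:

/* Futexes hashing to the same bucket share one chain and one
 * spinlock, so many threads parking/unparking on a hot pthread
 * mutex all funnel into this qspinlock-backed lock. */
struct futex_hash_bucket {
	atomic_t	  waiters;	/* waiters queued on this bucket */
	spinlock_t	  lock;		/* protects the chain below */
	struct plist_head chain;	/* list of futex_q waiters */
};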

Here are the results for the leveldb 'readrandom' benchmark:

#thr     stock          patch-CNA      speedup (patch-CNA/stock)
  1  0.479 (0.036)  0.533 (0.010)   1.113
  2  0.653 (0.022)  0.680 (0.027)   1.042
  4  0.705 (0.016)  0.701 (0.019)   0.995
  8  0.686 (0.021)  0.690 (0.024)   1.006
 16  0.708 (0.025)  0.719 (0.020)   1.016
 32  0.728 (0.023)  1.011 (0.117)   1.389
 36  0.720 (0.038)  1.073 (0.127)   1.491
 72  0.652 (0.018)  1.195 (0.017)   1.833
108  0.624 (0.016)  1.178 (0.028)   1.888
142  0.604 (0.015)  1.163 (0.024)   1.925

Further comments are welcome and appreciated.

Alex Kogan (5):
  locking/qspinlock: Make arch_mcs_spin_unlock_contended more generic
  locking/qspinlock: Refactor the qspinlock slow path
  locking/qspinlock: Introduce CNA into the slow path of qspinlock
  locking/qspinlock: Introduce starvation avoidance into CNA
  locking/qspinlock: Introduce the shuffle reduction optimization into
    CNA

 arch/arm/include/asm/mcs_spinlock.h |   4 +-
 arch/x86/Kconfig                    |  18 +++
 arch/x86/include/asm/qspinlock.h    |   4 +
 arch/x86/kernel/alternative.c       |  12 ++
 kernel/locking/mcs_spinlock.h       |   8 +-
 kernel/locking/qspinlock.c          |  81 +++++++++++---
 kernel/locking/qspinlock_cna.h      | 218 ++++++++++++++++++++++++++++++++++++
 7 files changed, 326 insertions(+), 19 deletions(-)
 create mode 100644 kernel/locking/qspinlock_cna.h

-- 
2.11.0 (Apple Git-81)


