* [PATCH v10 00/19] qspinlock: a 4-byte queue spinlock with PV support
From: Waiman Long @ 2014-05-07 15:01 UTC
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Waiman Long
v9->v10:
- Make some minor changes to qspinlock.c to accommodate review feedback.
- Change author to PeterZ for 2 of the patches.
- Include Raghavendra KT's test results in patch 18.
v8->v9:
- Integrate PeterZ's version of the queue spinlock patch with some
  modifications:
http://lkml.kernel.org/r/20140310154236.038181843@infradead.org
- Break the more complex patches into smaller ones to ease review effort.
- Fix a race condition in the PV qspinlock code.
v7->v8:
- Remove one unneeded atomic operation from the slowpath, thus
improving performance.
- Simplify some of the code and add more comments.
- Test for X86_FEATURE_HYPERVISOR CPU feature bit to enable/disable
unfair lock.
- Reduce the lock-stealing frequency in the unfair lock slowpath
  depending on the waiter's distance from the queue head.
- Add performance data for IvyBridge-EX CPU.
v6->v7:
- Remove an atomic operation from the 2-task contending code
- Shorten the names of some macros
- Make the queue waiter attempt to steal the lock when the unfair lock
  is enabled.
- Remove lock holder kick from the PV code and fix a race condition
- Run the unfair lock & PV code on overcommitted KVM guests to collect
performance data.
v5->v6:
- Change the optimized 2-task contending code to make it fairer at the
expense of a bit of performance.
- Add a patch to support unfair queue spinlock for Xen.
- Modify the PV qspinlock code to follow what was done in the PV
ticketlock.
- Add performance data for the unfair lock as well as the PV
support code.
v4->v5:
- Move the optimized 2-task contending code to the generic file to
enable more architectures to use it without code duplication.
- Address some of the style-related comments by PeterZ.
- Allow the use of unfair queue spinlock in a real para-virtualized
execution environment.
- Add para-virtualization support to the qspinlock code by ensuring
that the lock holder and queue head stay alive as much as possible.
v3->v4:
- Remove debugging code and fix a configuration error
- Simplify the qspinlock structure and streamline the code to make it
perform a bit better
- Add an x86 version of asm/qspinlock.h for holding x86-specific
  optimizations.
- Add an optimized x86 code path for 2 contending tasks to improve
low contention performance.
v2->v3:
- Simplify the code by using the numerous CPU mode only, without an unfair option.
- Use the latest smp_load_acquire()/smp_store_release() barriers.
- Move the queue spinlock code to kernel/locking.
- Make the use of queue spinlock the default for x86-64 without user
configuration.
- Additional performance tuning.
v1->v2:
- Add some more comments to document what the code does.
- Add a numerous CPU mode to support >= 16K CPUs.
- Add a configuration option to allow lock stealing which can further
improve performance in many cases.
- Enable wakeup of the queue head CPU at unlock time in the
  non-numerous CPU mode.
This patch set has 3 different sections:
1) Patches 1-7: Introduce a queue-based spinlock implementation that
can replace the default ticket spinlock without increasing the
size of the spinlock data structure. As a result, critical kernel
data structures that embed spinlocks won't increase in size or
break data alignment.
2) Patches 8-13: Enable the use of an unfair queue spinlock in a
virtual guest. This can mitigate some of the locking-related
performance issues caused by the next CPU in line for the lock
having been scheduled out for a period of time.
3) Patches 14-19: Enable qspinlock para-virtualization support
by halting the waiting CPUs after spinning for a certain amount of
time. The unlock code will detect a sleeping waiter and wake it
up; this is essentially the same logic as the PV ticketlock code
(a rough sketch of the waiting loop follows this list).
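For illustration, the halt-after-spinning behavior described in 3)
looks roughly like the sketch below. This is a hedged sketch only; the
helper names (SPIN_THRESHOLD, lock_available, mark_self_halted,
pv_halt) are illustrative placeholders, not the actual API of this
series:

	/* Sketch of a PV waiter: spin for a while, then halt until the
	 * unlock path notices the halted state and kicks this CPU.
	 * All helpers here are hypothetical placeholders.
	 */
	static void pv_wait_for_lock(struct qspinlock *lock)
	{
		int spin_count = 0;

		for (;;) {
			if (lock_available(lock))
				break;			/* lock is free; go grab it */
			if (++spin_count < SPIN_THRESHOLD) {
				cpu_relax();		/* keep spinning */
				continue;
			}
			mark_self_halted(lock);		/* let unlock see a sleeper */
			pv_halt();			/* sleep until kicked */
			spin_count = 0;			/* spin again after wakeup */
		}
	}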
The queue spinlock performs slightly better than the ticket spinlock
in the uncontended case and can perform much better under moderate
to heavy contention. This patch set therefore has the potential to
improve the performance of any workload with moderate to heavy
spinlock contention.
The queue spinlock is especially suitable for NUMA machines with at
least 2 sockets, though a noticeable performance benefit probably
won't show up on machines with fewer than 4 sockets.
The purpose of this patch set is not to solve any particular spinlock
contention problem. Such problems need to be solved by refactoring the
code to use the lock more efficiently or by switching to finer-grained
locks. The main purpose is to make lock contention problems more
tolerable until someone can spend the time and effort to fix them.
Peter Zijlstra (2):
qspinlock: Add pending bit
qspinlock: Optimize for smaller NR_CPUS
Waiman Long (17):
qspinlock: A simple generic 4-byte queue spinlock
qspinlock, x86: Enable x86-64 to use queue spinlock
qspinlock: Extract out the exchange of tail code word
qspinlock: prolong the stay in the pending bit path
qspinlock: Use a simple write to grab the lock, if applicable
qspinlock: Make a new qnode structure to support virtualization
qspinlock: Prepare for unfair lock support
qspinlock, x86: Allow unfair spinlock in a virtual guest
qspinlock: Split the MCS queuing code into a separate slowerpath
unfair qspinlock: Variable frequency lock stealing mechanism
unfair qspinlock: Enable lock stealing in lock waiters
pvqspinlock, x86: Rename paravirt_ticketlocks_enabled
pvqspinlock, x86: Add PV data structure & methods
pvqspinlock: Enable coexistence with the unfair lock
pvqspinlock: Add qspinlock para-virtualization support
pvqspinlock, x86: Enable PV qspinlock for KVM
pvqspinlock, x86: Enable PV qspinlock for XEN
arch/x86/Kconfig | 12 +
arch/x86/include/asm/paravirt.h | 18 +-
arch/x86/include/asm/paravirt_types.h | 17 +
arch/x86/include/asm/pvqspinlock.h | 306 +++++++++++
arch/x86/include/asm/qspinlock.h | 141 +++++
arch/x86/include/asm/spinlock.h | 9 +-
arch/x86/include/asm/spinlock_types.h | 4 +
arch/x86/kernel/Makefile | 1 +
arch/x86/kernel/kvm.c | 137 +++++-
arch/x86/kernel/paravirt-spinlocks.c | 36 ++-
arch/x86/xen/spinlock.c | 149 +++++-
include/asm-generic/qspinlock.h | 118 +++++
include/asm-generic/qspinlock_types.h | 82 +++
kernel/Kconfig.locks | 7 +
kernel/locking/Makefile | 1 +
kernel/locking/mcs_spinlock.h | 1 +
kernel/locking/qspinlock.c | 918 +++++++++++++++++++++++++++++++++
17 files changed, 1945 insertions(+), 12 deletions(-)
create mode 100644 arch/x86/include/asm/pvqspinlock.h
create mode 100644 arch/x86/include/asm/qspinlock.h
create mode 100644 include/asm-generic/qspinlock.h
create mode 100644 include/asm-generic/qspinlock_types.h
create mode 100644 kernel/locking/qspinlock.c
* [PATCH v10 01/19] qspinlock: A simple generic 4-byte queue spinlock
From: Waiman Long @ 2014-05-07 15:01 UTC
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Waiman Long, Peter Zijlstra
This patch introduces a new generic queue spinlock implementation that
can serve as an alternative to the default ticket spinlock. Compared
with the ticket spinlock, this queue spinlock should be almost as fair
as the ticket spinlock. It has about the same speed in the
single-threaded case and can be much faster in high-contention
situations, especially when the spinlock is embedded within the data
structure being protected. Only under light to moderate contention,
where the average queue depth is around 1-3, may this queue spinlock
be a bit slower due to the higher slowpath overhead.
This queue spinlock is especially suited to NUMA machines with a large
number of cores as the chance of spinlock contention is much higher
in those machines. The cost of contention is also higher because of
slower inter-node memory traffic.
Because spinlocks are acquired with preemption disabled, a process
will not be migrated to another CPU while it is trying to get a
spinlock. Ignoring interrupt handling, a CPU can only be contending
on one spinlock at any one time. Counting soft IRQ, hard IRQ and NMI
contexts, a CPU can have at most 4 concurrent lock-waiting activities.
By allocating a set of per-CPU queue nodes and using them to form a
waiting queue, we can encode the queue node address into a much
smaller 24-bit value (CPU number and queue node index), leaving one
byte for the lock.
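To make the encoding concrete, here is a minimal stand-alone sketch
(illustrative only, not part of the patch; it assumes the bit layout
defined in qspinlock_types.h below, i.e. locked byte in bits 0-7,
queue node index in bits 8-9, CPU number plus one in bits 10-31, and
the example values cpu = 5, idx = 2 are made up):

	/* Pack the per-CPU node identity into the tail bits. The CPU
	 * number is stored +1 so that tail == 0 means "no queue" even
	 * when CPU 0 at index 0 is a valid waiter.
	 */
	u32 tail = ((u32)(cpu + 1) << 10) | ((u32)idx << 8);
	/* cpu = 5, idx = 2  ->  (6 << 10) | (2 << 8) = 0x1a00 */

	/* Unpack: recover the waiting CPU and its nesting level. */
	int tail_cpu = (int)(tail >> 10) - 1;	/* -> 5 */
	int tail_idx = (tail >> 8) & 0x3;	/* -> 2 */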
Please note that the queue node is only needed when waiting for the
lock. Once the lock is acquired, the queue node can be released to
be used later.
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
include/asm-generic/qspinlock.h | 118 ++++++++++++++++++++
include/asm-generic/qspinlock_types.h | 61 ++++++++++
kernel/Kconfig.locks | 7 +
kernel/locking/Makefile | 1 +
kernel/locking/mcs_spinlock.h | 1 +
kernel/locking/qspinlock.c | 197 +++++++++++++++++++++++++++++++++
6 files changed, 385 insertions(+), 0 deletions(-)
create mode 100644 include/asm-generic/qspinlock.h
create mode 100644 include/asm-generic/qspinlock_types.h
create mode 100644 kernel/locking/qspinlock.c
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
new file mode 100644
index 0000000..e8a7ae8
--- /dev/null
+++ b/include/asm-generic/qspinlock.h
@@ -0,0 +1,118 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#ifndef __ASM_GENERIC_QSPINLOCK_H
+#define __ASM_GENERIC_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+/**
+ * queue_spin_is_locked - is the spinlock locked?
+ * @lock: Pointer to queue spinlock structure
+ * Return: 1 if it is locked, 0 otherwise
+ */
+static __always_inline int queue_spin_is_locked(struct qspinlock *lock)
+{
+	return atomic_read(&lock->val);
+}
+
+/**
+ * queue_spin_value_unlocked - is the spinlock structure unlocked?
+ * @lock: queue spinlock structure
+ * Return: 1 if it is unlocked, 0 otherwise
+ *
+ * N.B. Whenever there are tasks waiting for the lock, it is considered
+ * locked wrt the lockref code so that the lockref code cannot steal the
+ * lock and change things underneath it. This also allows some
+ * optimizations to be applied without conflict with lockref.
+ */
+static __always_inline int queue_spin_value_unlocked(struct qspinlock lock)
+{
+	return !atomic_read(&lock.val);
+}
+
+/**
+ * queue_spin_is_contended - check if the lock is contended
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock contended, 0 otherwise
+ */
+static __always_inline int queue_spin_is_contended(struct qspinlock *lock)
+{
+	return atomic_read(&lock->val) & ~_Q_LOCKED_MASK;
+}
+
+/**
+ * queue_spin_trylock - try to acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 if failed
+ */
+static __always_inline int queue_spin_trylock(struct qspinlock *lock)
+{
+	if (!atomic_read(&lock->val) &&
+	    (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) == 0))
+		return 1;
+	return 0;
+}
+
+extern void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+
+/**
+ * queue_spin_lock - acquire a queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_lock(struct qspinlock *lock)
+{
+	u32 val;
+
+	val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
+	if (likely(val == 0))
+		return;
+	queue_spin_lock_slowpath(lock, val);
+}
+
+#ifndef queue_spin_unlock
+/**
+ * queue_spin_unlock - release a queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_unlock(struct qspinlock *lock)
+{
+	/*
+	 * smp_mb__before_atomic_dec() in order to guarantee release semantics
+	 */
+	smp_mb__before_atomic_dec();
+	atomic_sub(_Q_LOCKED_VAL, &lock->val);
+}
+#endif
+
+/*
+ * Initializer
+ */
+#define __ARCH_SPIN_LOCK_UNLOCKED { ATOMIC_INIT(0) }
+
+/*
+ * Remapping spinlock architecture specific functions to the corresponding
+ * queue spinlock functions.
+ */
+#define arch_spin_is_locked(l) queue_spin_is_locked(l)
+#define arch_spin_is_contended(l) queue_spin_is_contended(l)
+#define arch_spin_value_unlocked(l) queue_spin_value_unlocked(l)
+#define arch_spin_lock(l) queue_spin_lock(l)
+#define arch_spin_trylock(l) queue_spin_trylock(l)
+#define arch_spin_unlock(l) queue_spin_unlock(l)
+#define arch_spin_lock_flags(l, f) queue_spin_lock(l)
+
+#endif /* __ASM_GENERIC_QSPINLOCK_H */
diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
new file mode 100644
index 0000000..f66f845
--- /dev/null
+++ b/include/asm-generic/qspinlock_types.h
@@ -0,0 +1,61 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#ifndef __ASM_GENERIC_QSPINLOCK_TYPES_H
+#define __ASM_GENERIC_QSPINLOCK_TYPES_H
+
+/*
+ * Including atomic.h with PARAVIRT on will cause compilation errors because
+ * of recursive header file inclusion via paravirt_types.h. A workaround is
+ * to include paravirt_types.h here.
+ */
+#ifdef CONFIG_PARAVIRT
+#include <asm/paravirt_types.h>
+#else
+#include <linux/types.h>
+#include <linux/atomic.h>
+#include <linux/bug.h>
+#endif
+
+typedef struct qspinlock {
+	atomic_t	val;
+} arch_spinlock_t;
+
+/*
+ * Bitfields in the atomic value:
+ *
+ * 0- 7: locked byte
+ * 8- 9: tail index
+ * 10-31: tail cpu (+1)
+ */
+#define _Q_SET_MASK(type) (((1U << _Q_ ## type ## _BITS) - 1)\
+ << _Q_ ## type ## _OFFSET)
+#define _Q_LOCKED_OFFSET 0
+#define _Q_LOCKED_BITS 8
+#define _Q_LOCKED_MASK _Q_SET_MASK(LOCKED)
+
+#define _Q_TAIL_IDX_OFFSET (_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
+#define _Q_TAIL_IDX_BITS 2
+#define _Q_TAIL_IDX_MASK _Q_SET_MASK(TAIL_IDX)
+
+#define _Q_TAIL_CPU_OFFSET (_Q_TAIL_IDX_OFFSET + _Q_TAIL_IDX_BITS)
+#define _Q_TAIL_CPU_BITS (32 - _Q_TAIL_CPU_OFFSET)
+#define _Q_TAIL_CPU_MASK _Q_SET_MASK(TAIL_CPU)
+
+#define _Q_LOCKED_VAL (1U << _Q_LOCKED_OFFSET)
+
+#endif /* __ASM_GENERIC_QSPINLOCK_TYPES_H */
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index d2b32ac..f185584 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -223,3 +223,10 @@ endif
 config MUTEX_SPIN_ON_OWNER
 	def_bool y
 	depends on SMP && !DEBUG_MUTEXES
+
+config ARCH_USE_QUEUE_SPINLOCK
+	bool
+
+config QUEUE_SPINLOCK
+	def_bool y if ARCH_USE_QUEUE_SPINLOCK
+	depends on SMP && !PARAVIRT_SPINLOCKS
diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile
index b8bdcd4..e6741ac 100644
--- a/kernel/locking/Makefile
+++ b/kernel/locking/Makefile
@@ -16,6 +16,7 @@ endif
obj-$(CONFIG_SMP) += spinlock.o
obj-$(CONFIG_SMP) += lglock.o
obj-$(CONFIG_PROVE_LOCKING) += spinlock.o
+obj-$(CONFIG_QUEUE_SPINLOCK) += qspinlock.o
obj-$(CONFIG_RT_MUTEXES) += rtmutex.o
obj-$(CONFIG_DEBUG_RT_MUTEXES) += rtmutex-debug.o
obj-$(CONFIG_RT_MUTEX_TESTER) += rtmutex-tester.o
diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
index a2dbac4..a59b677 100644
--- a/kernel/locking/mcs_spinlock.h
+++ b/kernel/locking/mcs_spinlock.h
@@ -17,6 +17,7 @@
 struct mcs_spinlock {
 	struct mcs_spinlock *next;
 	int locked; /* 1 if lock acquired */
+	int count;
 };
#ifndef arch_mcs_spin_lock_contended
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
new file mode 100644
index 0000000..b97a1ad
--- /dev/null
+++ b/kernel/locking/qspinlock.c
@@ -0,0 +1,197 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ * Peter Zijlstra <pzijlstr@redhat.com>
+ */
+#include <linux/smp.h>
+#include <linux/bug.h>
+#include <linux/cpumask.h>
+#include <linux/percpu.h>
+#include <linux/hardirq.h>
+#include <linux/mutex.h>
+#include <asm/qspinlock.h>
+
+/*
+ * The basic principle of a queue-based spinlock can best be understood
+ * by studying a classic queue-based spinlock implementation called the
+ * MCS lock. The paper below provides a good description for this kind
+ * of lock.
+ *
+ * http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf
+ *
+ * This queue spinlock implementation is based on the MCS lock, however to make
+ * it fit the 4 bytes we assume spinlock_t to be, and preserve its existing
+ * API, we must modify it some.
+ *
+ * In particular; where the traditional MCS lock consists of a tail pointer
+ * (8 bytes) and needs the next pointer (another 8 bytes) of its own node to
+ * unlock the next pending (next->locked), we compress both these: {tail,
+ * next->locked} into a single u32 value.
+ *
+ * Since a spinlock disables recursion of its own context and there is a limit
+ * to the contexts that can nest; namely: task, softirq, hardirq, nmi, we can
+ * encode the tail as an index indicating this context and a cpu number.
+ *
+ * We can further change the first spinner to spin on a bit in the lock word
+ * instead of its node; thereby avoiding the need to carry a node from lock to
+ * unlock, and preserving API.
+ */
+
+#include "mcs_spinlock.h"
+
+/*
+ * Per-CPU queue node structures; we can never have more than 4 nested
+ * contexts: task, softirq, hardirq, nmi.
+ *
+ * Exactly fits one cacheline.
+ */
+static DEFINE_PER_CPU_ALIGNED(struct mcs_spinlock, mcs_nodes[4]);
+
+/*
+ * We must be able to distinguish between no-tail and the tail at 0:0,
+ * therefore increment the cpu number by one.
+ */
+
+static inline u32 encode_tail(int cpu, int idx)
+{
+	u32 tail;
+
+	tail  = (cpu + 1) << _Q_TAIL_CPU_OFFSET;
+	tail |= idx << _Q_TAIL_IDX_OFFSET; /* assume < 4 */
+
+	return tail;
+}
+
+static inline struct mcs_spinlock *decode_tail(u32 tail)
+{
+	int cpu = (tail >> _Q_TAIL_CPU_OFFSET) - 1;
+	int idx = (tail & _Q_TAIL_IDX_MASK) >> _Q_TAIL_IDX_OFFSET;
+
+	return per_cpu_ptr(&mcs_nodes[idx], cpu);
+}
+
+/**
+ * queue_spin_lock_slowpath - acquire the queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ * @val: Current value of the queue spinlock 32-bit word
+ *
+ * (queue tail, lock bit)
+ *
+ *              fast      :    slow                                  :    unlock
+ *                        :                                          :
+ * uncontended  (0,0)   --:--> (0,1) --------------------------------:--> (*,0)
+ *                        :       | ^--------.                    /  :
+ *                        :       v           \                   |  :
+ * uncontended            :    (n,x) --+--> (n,0)                 |  :
+ *   queue                :       | ^--'                          |  :
+ *                        :       v                               |  :
+ * contended              :    (*,x) --+--> (*,0) -----> (*,1) ---'  :
+ *   queue                :         ^--'                             :
+ *
+ */
+void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
+{
+	struct mcs_spinlock *prev, *next, *node;
+	u32 new, old, tail;
+	int idx;
+
+	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
+
+	node = this_cpu_ptr(&mcs_nodes[0]);
+	idx = node->count++;
+	tail = encode_tail(smp_processor_id(), idx);
+
+	node += idx;
+	node->locked = 0;
+	node->next = NULL;
+
+	/*
+	 * trylock || xchg(lock, node)
+	 *
+	 * 0,0 -> 0,1 ; trylock
+	 * p,x -> n,x ; prev = xchg(lock, node)
+	 */
+	for (;;) {
+		new = _Q_LOCKED_VAL;
+		if (val)
+			new = tail | (val & _Q_LOCKED_MASK);
+
+		old = atomic_cmpxchg(&lock->val, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	/*
+	 * we won the trylock; forget about queueing.
+	 */
+	if (new == _Q_LOCKED_VAL)
+		goto release;
+
+	/*
+	 * if there was a previous node; link it and wait.
+	 */
+	if (old & ~_Q_LOCKED_MASK) {
+		prev = decode_tail(old);
+		ACCESS_ONCE(prev->next) = node;
+
+		arch_mcs_spin_lock_contended(&node->locked);
+	}
+
+	/*
+	 * we're at the head of the waitqueue, wait for the owner to go away.
+	 *
+	 * *,x -> *,0
+	 */
+	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
+		arch_mutex_cpu_relax();
+
+	/*
+	 * claim the lock:
+	 *
+	 * n,0 -> 0,1 : lock, uncontended
+	 * *,0 -> *,1 : lock, contended
+	 */
+	for (;;) {
+		new = _Q_LOCKED_VAL;
+		if (val != tail)
+			new |= val;
+
+		old = atomic_cmpxchg(&lock->val, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	/*
+	 * contended path; wait for next, release.
+	 */
+	if (new != _Q_LOCKED_VAL) {
+		while (!(next = ACCESS_ONCE(node->next)))
+			arch_mutex_cpu_relax();
+
+		arch_mcs_spin_unlock_contended(&next->locked);
+	}
+
+release:
+	/*
+	 * release the node
+	 */
+	this_cpu_dec(mcs_nodes[0].count);
+}
+EXPORT_SYMBOL(queue_spin_lock_slowpath);
--
1.7.1
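For context, the generic header above is written so that an
architecture can override individual operations; the #ifndef
queue_spin_unlock guard is the hook. Below is a minimal sketch of what
an arch-specific asm/qspinlock.h override could look like. This is an
illustration only, not the actual x86 header added later in this
series; it assumes a little-endian layout in which clearing the locked
byte (bits 0-7 of the lock word) releases the lock:

	#ifndef _ASM_X86_QSPINLOCK_H
	#define _ASM_X86_QSPINLOCK_H

	#include <asm-generic/qspinlock_types.h>

	#define queue_spin_unlock queue_spin_unlock
	/*
	 * A plain byte store is sufficient to release the lock because
	 * only the lock holder ever writes the locked byte.
	 */
	static inline void queue_spin_unlock(struct qspinlock *lock)
	{
		barrier();
		ACCESS_ONCE(*(u8 *)&lock->val) = 0;
		barrier();
	}

	#include <asm-generic/qspinlock.h>

	#endif /* _ASM_X86_QSPINLOCK_H */

An architecture then opts in by selecting ARCH_USE_QUEUE_SPINLOCK in
its Kconfig, which turns on the QUEUE_SPINLOCK option added above.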
^ permalink raw reply related [flat|nested] 163+ messages in thread
* [PATCH v10 01/19] qspinlock: A simple generic 4-byte queue spinlock
@ 2014-05-07 15:01 ` Waiman Long
0 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin
Cc: linux-arch, Waiman Long, Rik van Riel, Raghavendra K T,
Gleb Natapov, kvm, Konrad Rzeszutek Wilk, Peter Zijlstra,
Scott J Norton, x86, Paolo Bonzini, linux-kernel, virtualization,
Chegu Vinod, David Vrabel, Oleg Nesterov, xen-devel,
Boris Ostrovsky, Paul E. McKenney, Linus Torvalds
This patch introduces a new generic queue spinlock implementation that
can serve as an alternative to the default ticket spinlock. Compared
with the ticket spinlock, this queue spinlock should be almost as fair
as the ticket spinlock. It has about the same speed in single-thread
and it can be much faster in high contention situations especially when
the spinlock is embedded within the data structure to be protected.
Only in light to moderate contention where the average queue depth
is around 1-3 will this queue spinlock be potentially a bit slower
due to the higher slowpath overhead.
This queue spinlock is especially suit to NUMA machines with a large
number of cores as the chance of spinlock contention is much higher
in those machines. The cost of contention is also higher because of
slower inter-node memory traffic.
Due to the fact that spinlocks are acquired with preemption disabled,
the process will not be migrated to another CPU while it is trying
to get a spinlock. Ignoring interrupt handling, a CPU can only be
contending in one spinlock at any one time. Counting soft IRQ, hard
IRQ and NMI, a CPU can only have a maximum of 4 concurrent lock waiting
activities. By allocating a set of per-cpu queue nodes and used them
to form a waiting queue, we can encode the queue node address into a
much smaller 24-bit size (including CPU number and queue node index)
leaving one byte for the lock.
Please note that the queue node is only needed when waiting for the
lock. Once the lock is acquired, the queue node can be released to
be used later.
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
include/asm-generic/qspinlock.h | 118 ++++++++++++++++++++
include/asm-generic/qspinlock_types.h | 61 ++++++++++
kernel/Kconfig.locks | 7 +
kernel/locking/Makefile | 1 +
kernel/locking/mcs_spinlock.h | 1 +
kernel/locking/qspinlock.c | 197 +++++++++++++++++++++++++++++++++
6 files changed, 385 insertions(+), 0 deletions(-)
create mode 100644 include/asm-generic/qspinlock.h
create mode 100644 include/asm-generic/qspinlock_types.h
create mode 100644 kernel/locking/qspinlock.c
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
new file mode 100644
index 0000000..e8a7ae8
--- /dev/null
+++ b/include/asm-generic/qspinlock.h
@@ -0,0 +1,118 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#ifndef __ASM_GENERIC_QSPINLOCK_H
+#define __ASM_GENERIC_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+/**
+ * queue_spin_is_locked - is the spinlock locked?
+ * @lock: Pointer to queue spinlock structure
+ * Return: 1 if it is locked, 0 otherwise
+ */
+static __always_inline int queue_spin_is_locked(struct qspinlock *lock)
+{
+ return atomic_read(&lock->val);
+}
+
+/**
+ * queue_spin_value_unlocked - is the spinlock structure unlocked?
+ * @lock: queue spinlock structure
+ * Return: 1 if it is unlocked, 0 otherwise
+ *
+ * N.B. Whenever there are tasks waiting for the lock, it is considered
+ * locked wrt the lockref code to avoid lock stealing by the lockref
+ * code and change things underneath the lock. This also allows some
+ * optimizations to be applied without conflict with lockref.
+ */
+static __always_inline int queue_spin_value_unlocked(struct qspinlock lock)
+{
+ return !atomic_read(&lock.val);
+}
+
+/**
+ * queue_spin_is_contended - check if the lock is contended
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock contended, 0 otherwise
+ */
+static __always_inline int queue_spin_is_contended(struct qspinlock *lock)
+{
+ return atomic_read(&lock->val) & ~_Q_LOCKED_MASK;
+}
+/**
+ * queue_spin_trylock - try to acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 if failed
+ */
+static __always_inline int queue_spin_trylock(struct qspinlock *lock)
+{
+ if (!atomic_read(&lock->val) &&
+ (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) == 0))
+ return 1;
+ return 0;
+}
+
+extern void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+
+/**
+ * queue_spin_lock - acquire a queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_lock(struct qspinlock *lock)
+{
+ u32 val;
+
+ val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
+ if (likely(val == 0))
+ return;
+ queue_spin_lock_slowpath(lock, val);
+}
+
+#ifndef queue_spin_unlock
+/**
+ * queue_spin_unlock - release a queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_unlock(struct qspinlock *lock)
+{
+ /*
+ * smp_mb__before_atomic() in order to guarantee release semantics
+ */
+ smp_mb__before_atomic_dec();
+ atomic_sub(_Q_LOCKED_VAL, &lock->val);
+}
+#endif
+
+/*
+ * Initializier
+ */
+#define __ARCH_SPIN_LOCK_UNLOCKED { ATOMIC_INIT(0) }
+
+/*
+ * Remapping spinlock architecture specific functions to the corresponding
+ * queue spinlock functions.
+ */
+#define arch_spin_is_locked(l) queue_spin_is_locked(l)
+#define arch_spin_is_contended(l) queue_spin_is_contended(l)
+#define arch_spin_value_unlocked(l) queue_spin_value_unlocked(l)
+#define arch_spin_lock(l) queue_spin_lock(l)
+#define arch_spin_trylock(l) queue_spin_trylock(l)
+#define arch_spin_unlock(l) queue_spin_unlock(l)
+#define arch_spin_lock_flags(l, f) queue_spin_lock(l)
+
+#endif /* __ASM_GENERIC_QSPINLOCK_H */
diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
new file mode 100644
index 0000000..f66f845
--- /dev/null
+++ b/include/asm-generic/qspinlock_types.h
@@ -0,0 +1,61 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#ifndef __ASM_GENERIC_QSPINLOCK_TYPES_H
+#define __ASM_GENERIC_QSPINLOCK_TYPES_H
+
+/*
+ * Including atomic.h with PARAVIRT on will cause compilation errors because
+ * of recursive header file incluson via paravirt_types.h. A workaround is
+ * to include paravirt_types.h here.
+ */
+#ifdef CONFIG_PARAVIRT
+#include <asm/paravirt_types.h>
+#else
+#include <linux/types.h>
+#include <linux/atomic.h>
+#include <linux/bug.h>
+#endif
+
+typedef struct qspinlock {
+ atomic_t val;
+} arch_spinlock_t;
+
+/*
+ * Bitfields in the atomic value:
+ *
+ * 0- 7: locked byte
+ * 8- 9: tail index
+ * 10-31: tail cpu (+1)
+ */
+#define _Q_SET_MASK(type) (((1U << _Q_ ## type ## _BITS) - 1)\
+ << _Q_ ## type ## _OFFSET)
+#define _Q_LOCKED_OFFSET 0
+#define _Q_LOCKED_BITS 8
+#define _Q_LOCKED_MASK _Q_SET_MASK(LOCKED)
+
+#define _Q_TAIL_IDX_OFFSET (_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
+#define _Q_TAIL_IDX_BITS 2
+#define _Q_TAIL_IDX_MASK _Q_SET_MASK(TAIL_IDX)
+
+#define _Q_TAIL_CPU_OFFSET (_Q_TAIL_IDX_OFFSET + _Q_TAIL_IDX_BITS)
+#define _Q_TAIL_CPU_BITS (32 - _Q_TAIL_CPU_OFFSET)
+#define _Q_TAIL_CPU_MASK _Q_SET_MASK(TAIL_CPU)
+
+#define _Q_LOCKED_VAL (1U << _Q_LOCKED_OFFSET)
+
+#endif /* __ASM_GENERIC_QSPINLOCK_TYPES_H */
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index d2b32ac..f185584 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -223,3 +223,10 @@ endif
config MUTEX_SPIN_ON_OWNER
def_bool y
depends on SMP && !DEBUG_MUTEXES
+
+config ARCH_USE_QUEUE_SPINLOCK
+ bool
+
+config QUEUE_SPINLOCK
+ def_bool y if ARCH_USE_QUEUE_SPINLOCK
+ depends on SMP && !PARAVIRT_SPINLOCKS
diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile
index b8bdcd4..e6741ac 100644
--- a/kernel/locking/Makefile
+++ b/kernel/locking/Makefile
@@ -16,6 +16,7 @@ endif
obj-$(CONFIG_SMP) += spinlock.o
obj-$(CONFIG_SMP) += lglock.o
obj-$(CONFIG_PROVE_LOCKING) += spinlock.o
+obj-$(CONFIG_QUEUE_SPINLOCK) += qspinlock.o
obj-$(CONFIG_RT_MUTEXES) += rtmutex.o
obj-$(CONFIG_DEBUG_RT_MUTEXES) += rtmutex-debug.o
obj-$(CONFIG_RT_MUTEX_TESTER) += rtmutex-tester.o
diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
index a2dbac4..a59b677 100644
--- a/kernel/locking/mcs_spinlock.h
+++ b/kernel/locking/mcs_spinlock.h
@@ -17,6 +17,7 @@
struct mcs_spinlock {
struct mcs_spinlock *next;
int locked; /* 1 if lock acquired */
+ int count;
};
#ifndef arch_mcs_spin_lock_contended
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
new file mode 100644
index 0000000..b97a1ad
--- /dev/null
+++ b/kernel/locking/qspinlock.c
@@ -0,0 +1,197 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ * Peter Zijlstra <pzijlstr@redhat.com>
+ */
+#include <linux/smp.h>
+#include <linux/bug.h>
+#include <linux/cpumask.h>
+#include <linux/percpu.h>
+#include <linux/hardirq.h>
+#include <linux/mutex.h>
+#include <asm/qspinlock.h>
+
+/*
+ * The basic principle of a queue-based spinlock can best be understood
+ * by studying a classic queue-based spinlock implementation called the
+ * MCS lock. The paper below provides a good description for this kind
+ * of lock.
+ *
+ * http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf
+ *
+ * This queue spinlock implementation is based on the MCS lock, however to make
+ * it fit the 4 bytes we assume spinlock_t to be, and preserve its existing
+ * API, we must modify it some.
+ *
+ * In particular; where the traditional MCS lock consists of a tail pointer
+ * (8 bytes) and needs the next pointer (another 8 bytes) of its own node to
+ * unlock the next pending (next->locked), we compress both these: {tail,
+ * next->locked} into a single u32 value.
+ *
+ * Since a spinlock disables recursion of its own context and there is a limit
+ * to the contexts that can nest; namely: task, softirq, hardirq, nmi, we can
+ * encode the tail as and index indicating this context and a cpu number.
+ *
+ * We can further change the first spinner to spin on a bit in the lock word
+ * instead of its node; whereby avoiding the need to carry a node from lock to
+ * unlock, and preserving API.
+ */
+
+#include "mcs_spinlock.h"
+
+/*
+ * Per-CPU queue node structures; we can never have more than 4 nested
+ * contexts: task, softirq, hardirq, nmi.
+ *
+ * Exactly fits one cacheline.
+ */
+static DEFINE_PER_CPU_ALIGNED(struct mcs_spinlock, mcs_nodes[4]);
+
+/*
+ * We must be able to distinguish between no-tail and the tail at 0:0,
+ * therefore increment the cpu number by one.
+ */
+
+static inline u32 encode_tail(int cpu, int idx)
+{
+ u32 tail;
+
+ tail = (cpu + 1) << _Q_TAIL_CPU_OFFSET;
+ tail |= idx << _Q_TAIL_IDX_OFFSET; /* assume < 4 */
+
+ return tail;
+}
+
+static inline struct mcs_spinlock *decode_tail(u32 tail)
+{
+ int cpu = (tail >> _Q_TAIL_CPU_OFFSET) - 1;
+ int idx = (tail & _Q_TAIL_IDX_MASK) >> _Q_TAIL_IDX_OFFSET;
+
+ return per_cpu_ptr(&mcs_nodes[idx], cpu);
+}
+
+/**
+ * queue_spin_lock_slowpath - acquire the queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ * @val: Current value of the queue spinlock 32-bit word
+ *
+ * (queue tail, lock bit)
+ *
+ * fast : slow : unlock
+ * : :
+ * uncontended (0,0) --:--> (0,1) --------------------------------:--> (*,0)
+ * : | ^--------. / :
+ * : v \ | :
+ * uncontended : (n,x) --+--> (n,0) | :
+ * queue : | ^--' | :
+ * : v | :
+ * contended : (*,x) --+--> (*,0) -----> (*,1) ---' :
+ * queue : ^--' :
+ *
+ */
+void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
+{
+ struct mcs_spinlock *prev, *next, *node;
+ u32 new, old, tail;
+ int idx;
+
+ BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
+
+ node = this_cpu_ptr(&mcs_nodes[0]);
+ idx = node->count++;
+ tail = encode_tail(smp_processor_id(), idx);
+
+ node += idx;
+ node->locked = 0;
+ node->next = NULL;
+
+ /*
+ * trylock || xchg(lock, node)
+ *
+ * 0,0 -> 0,1 ; trylock
+ * p,x -> n,x ; prev = xchg(lock, node)
+ */
+ for (;;) {
+ new = _Q_LOCKED_VAL;
+ if (val)
+ new = tail | (val & _Q_LOCKED_MASK);
+
+ old = atomic_cmpxchg(&lock->val, val, new);
+ if (old == val)
+ break;
+
+ val = old;
+ }
+
+ /*
+ * we won the trylock; forget about queueing.
+ */
+ if (new == _Q_LOCKED_VAL)
+ goto release;
+
+ /*
+ * if there was a previous node; link it and wait.
+ */
+ if (old & ~_Q_LOCKED_MASK) {
+ prev = decode_tail(old);
+ ACCESS_ONCE(prev->next) = node;
+
+ arch_mcs_spin_lock_contended(&node->locked);
+ }
+
+ /*
+ * we're at the head of the waitqueue, wait for the owner to go away.
+ *
+ * *,x -> *,0
+ */
+ while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
+ arch_mutex_cpu_relax();
+
+ /*
+ * claim the lock:
+ *
+ * n,0 -> 0,1 : lock, uncontended
+ * *,0 -> *,1 : lock, contended
+ */
+ for (;;) {
+ new = _Q_LOCKED_VAL;
+ if (val != tail)
+ new |= val;
+
+ old = atomic_cmpxchg(&lock->val, val, new);
+ if (old == val)
+ break;
+
+ val = old;
+ }
+
+ /*
+ * contended path; wait for next, release.
+ */
+ if (new != _Q_LOCKED_VAL) {
+ while (!(next = ACCESS_ONCE(node->next)))
+ arch_mutex_cpu_relax();
+
+ arch_mcs_spin_unlock_contended(&next->locked);
+ }
+
+release:
+ /*
+ * release the node
+ */
+ this_cpu_dec(mcs_nodes[0].count);
+}
+EXPORT_SYMBOL(queue_spin_lock_slowpath);
--
1.7.1
^ permalink raw reply related [flat|nested] 163+ messages in thread
* [PATCH v10 01/19] qspinlock: A simple generic 4-byte queue spinlock
@ 2014-05-07 15:01 ` Waiman Long
0 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Waiman Long
This patch introduces a new generic queue spinlock implementation that
can serve as an alternative to the default ticket spinlock. Compared
with the ticket spinlock, this queue spinlock should be almost as fair
as the ticket spinlock. It has about the same speed in single-thread
and it can be much faster in high contention situations especially when
the spinlock is embedded within the data structure to be protected.
Only in light to moderate contention where the average queue depth
is around 1-3 will this queue spinlock be potentially a bit slower
due to the higher slowpath overhead.
This queue spinlock is especially suit to NUMA machines with a large
number of cores as the chance of spinlock contention is much higher
in those machines. The cost of contention is also higher because of
slower inter-node memory traffic.
Due to the fact that spinlocks are acquired with preemption disabled,
the process will not be migrated to another CPU while it is trying
to get a spinlock. Ignoring interrupt handling, a CPU can only be
contending in one spinlock at any one time. Counting soft IRQ, hard
IRQ and NMI, a CPU can only have a maximum of 4 concurrent lock waiting
activities. By allocating a set of per-cpu queue nodes and used them
to form a waiting queue, we can encode the queue node address into a
much smaller 24-bit size (including CPU number and queue node index)
leaving one byte for the lock.
Please note that the queue node is only needed when waiting for the
lock. Once the lock is acquired, the queue node can be released to
be used later.
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
include/asm-generic/qspinlock.h | 118 ++++++++++++++++++++
include/asm-generic/qspinlock_types.h | 61 ++++++++++
kernel/Kconfig.locks | 7 +
kernel/locking/Makefile | 1 +
kernel/locking/mcs_spinlock.h | 1 +
kernel/locking/qspinlock.c | 197 +++++++++++++++++++++++++++++++++
6 files changed, 385 insertions(+), 0 deletions(-)
create mode 100644 include/asm-generic/qspinlock.h
create mode 100644 include/asm-generic/qspinlock_types.h
create mode 100644 kernel/locking/qspinlock.c
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
new file mode 100644
index 0000000..e8a7ae8
--- /dev/null
+++ b/include/asm-generic/qspinlock.h
@@ -0,0 +1,118 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#ifndef __ASM_GENERIC_QSPINLOCK_H
+#define __ASM_GENERIC_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+/**
+ * queue_spin_is_locked - is the spinlock locked?
+ * @lock: Pointer to queue spinlock structure
+ * Return: 1 if it is locked, 0 otherwise
+ */
+static __always_inline int queue_spin_is_locked(struct qspinlock *lock)
+{
+ return atomic_read(&lock->val);
+}
+
+/**
+ * queue_spin_value_unlocked - is the spinlock structure unlocked?
+ * @lock: queue spinlock structure
+ * Return: 1 if it is unlocked, 0 otherwise
+ *
+ * N.B. Whenever there are tasks waiting for the lock, it is considered
+ * locked wrt the lockref code to avoid lock stealing by the lockref
+ * code and change things underneath the lock. This also allows some
+ * optimizations to be applied without conflict with lockref.
+ */
+static __always_inline int queue_spin_value_unlocked(struct qspinlock lock)
+{
+ return !atomic_read(&lock.val);
+}
+
+/**
+ * queue_spin_is_contended - check if the lock is contended
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock contended, 0 otherwise
+ */
+static __always_inline int queue_spin_is_contended(struct qspinlock *lock)
+{
+ return atomic_read(&lock->val) & ~_Q_LOCKED_MASK;
+}
+/**
+ * queue_spin_trylock - try to acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 if failed
+ */
+static __always_inline int queue_spin_trylock(struct qspinlock *lock)
+{
+ if (!atomic_read(&lock->val) &&
+ (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) == 0))
+ return 1;
+ return 0;
+}
+
+extern void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+
+/**
+ * queue_spin_lock - acquire a queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_lock(struct qspinlock *lock)
+{
+ u32 val;
+
+ val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
+ if (likely(val == 0))
+ return;
+ queue_spin_lock_slowpath(lock, val);
+}
+
+#ifndef queue_spin_unlock
+/**
+ * queue_spin_unlock - release a queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_unlock(struct qspinlock *lock)
+{
+ /*
+ * smp_mb__before_atomic() in order to guarantee release semantics
+ */
+ smp_mb__before_atomic_dec();
+ atomic_sub(_Q_LOCKED_VAL, &lock->val);
+}
+#endif
+
+/*
+ * Initializier
+ */
+#define __ARCH_SPIN_LOCK_UNLOCKED { ATOMIC_INIT(0) }
+
+/*
+ * Remapping spinlock architecture specific functions to the corresponding
+ * queue spinlock functions.
+ */
+#define arch_spin_is_locked(l) queue_spin_is_locked(l)
+#define arch_spin_is_contended(l) queue_spin_is_contended(l)
+#define arch_spin_value_unlocked(l) queue_spin_value_unlocked(l)
+#define arch_spin_lock(l) queue_spin_lock(l)
+#define arch_spin_trylock(l) queue_spin_trylock(l)
+#define arch_spin_unlock(l) queue_spin_unlock(l)
+#define arch_spin_lock_flags(l, f) queue_spin_lock(l)
+
+#endif /* __ASM_GENERIC_QSPINLOCK_H */
diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
new file mode 100644
index 0000000..f66f845
--- /dev/null
+++ b/include/asm-generic/qspinlock_types.h
@@ -0,0 +1,61 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#ifndef __ASM_GENERIC_QSPINLOCK_TYPES_H
+#define __ASM_GENERIC_QSPINLOCK_TYPES_H
+
+/*
+ * Including atomic.h with PARAVIRT on will cause compilation errors because
+ * of recursive header file incluson via paravirt_types.h. A workaround is
+ * to include paravirt_types.h here.
+ */
+#ifdef CONFIG_PARAVIRT
+#include <asm/paravirt_types.h>
+#else
+#include <linux/types.h>
+#include <linux/atomic.h>
+#include <linux/bug.h>
+#endif
+
+typedef struct qspinlock {
+ atomic_t val;
+} arch_spinlock_t;
+
+/*
+ * Bitfields in the atomic value:
+ *
+ * 0- 7: locked byte
+ * 8- 9: tail index
+ * 10-31: tail cpu (+1)
+ */
+#define _Q_SET_MASK(type) (((1U << _Q_ ## type ## _BITS) - 1)\
+ << _Q_ ## type ## _OFFSET)
+#define _Q_LOCKED_OFFSET 0
+#define _Q_LOCKED_BITS 8
+#define _Q_LOCKED_MASK _Q_SET_MASK(LOCKED)
+
+#define _Q_TAIL_IDX_OFFSET (_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
+#define _Q_TAIL_IDX_BITS 2
+#define _Q_TAIL_IDX_MASK _Q_SET_MASK(TAIL_IDX)
+
+#define _Q_TAIL_CPU_OFFSET (_Q_TAIL_IDX_OFFSET + _Q_TAIL_IDX_BITS)
+#define _Q_TAIL_CPU_BITS (32 - _Q_TAIL_CPU_OFFSET)
+#define _Q_TAIL_CPU_MASK _Q_SET_MASK(TAIL_CPU)
+
+#define _Q_LOCKED_VAL (1U << _Q_LOCKED_OFFSET)
+
+#endif /* __ASM_GENERIC_QSPINLOCK_TYPES_H */
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index d2b32ac..f185584 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -223,3 +223,10 @@ endif
config MUTEX_SPIN_ON_OWNER
def_bool y
depends on SMP && !DEBUG_MUTEXES
+
+config ARCH_USE_QUEUE_SPINLOCK
+ bool
+
+config QUEUE_SPINLOCK
+ def_bool y if ARCH_USE_QUEUE_SPINLOCK
+ depends on SMP && !PARAVIRT_SPINLOCKS
diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile
index b8bdcd4..e6741ac 100644
--- a/kernel/locking/Makefile
+++ b/kernel/locking/Makefile
@@ -16,6 +16,7 @@ endif
obj-$(CONFIG_SMP) += spinlock.o
obj-$(CONFIG_SMP) += lglock.o
obj-$(CONFIG_PROVE_LOCKING) += spinlock.o
+obj-$(CONFIG_QUEUE_SPINLOCK) += qspinlock.o
obj-$(CONFIG_RT_MUTEXES) += rtmutex.o
obj-$(CONFIG_DEBUG_RT_MUTEXES) += rtmutex-debug.o
obj-$(CONFIG_RT_MUTEX_TESTER) += rtmutex-tester.o
diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
index a2dbac4..a59b677 100644
--- a/kernel/locking/mcs_spinlock.h
+++ b/kernel/locking/mcs_spinlock.h
@@ -17,6 +17,7 @@
struct mcs_spinlock {
struct mcs_spinlock *next;
int locked; /* 1 if lock acquired */
+ int count;
};
#ifndef arch_mcs_spin_lock_contended
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
new file mode 100644
index 0000000..b97a1ad
--- /dev/null
+++ b/kernel/locking/qspinlock.c
@@ -0,0 +1,197 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ * Peter Zijlstra <pzijlstr@redhat.com>
+ */
+#include <linux/smp.h>
+#include <linux/bug.h>
+#include <linux/cpumask.h>
+#include <linux/percpu.h>
+#include <linux/hardirq.h>
+#include <linux/mutex.h>
+#include <asm/qspinlock.h>
+
+/*
+ * The basic principle of a queue-based spinlock can best be understood
+ * by studying a classic queue-based spinlock implementation called the
+ * MCS lock. The paper below provides a good description for this kind
+ * of lock.
+ *
+ * http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf
+ *
+ * This queue spinlock implementation is based on the MCS lock, however to make
+ * it fit the 4 bytes we assume spinlock_t to be, and preserve its existing
+ * API, we must modify it some.
+ *
+ * In particular; where the traditional MCS lock consists of a tail pointer
+ * (8 bytes) and needs the next pointer (another 8 bytes) of its own node to
+ * unlock the next pending (next->locked), we compress both these: {tail,
+ * next->locked} into a single u32 value.
+ *
+ * Since a spinlock disables recursion of its own context and there is a limit
+ * to the contexts that can nest; namely: task, softirq, hardirq, nmi, we can
+ * encode the tail as and index indicating this context and a cpu number.
+ *
+ * We can further change the first spinner to spin on a bit in the lock word
+ * instead of its node; whereby avoiding the need to carry a node from lock to
+ * unlock, and preserving API.
+ */
+
+#include "mcs_spinlock.h"
+
+/*
+ * Per-CPU queue node structures; we can never have more than 4 nested
+ * contexts: task, softirq, hardirq, nmi.
+ *
+ * Exactly fits one cacheline.
+ */
+static DEFINE_PER_CPU_ALIGNED(struct mcs_spinlock, mcs_nodes[4]);
+
+/*
+ * We must be able to distinguish between no-tail and a tail at 0:0 (cpu 0,
+ * idx 0); therefore, increment the cpu number by one.
+ */
+
+static inline u32 encode_tail(int cpu, int idx)
+{
+ u32 tail;
+
+ tail = (cpu + 1) << _Q_TAIL_CPU_OFFSET;
+ tail |= idx << _Q_TAIL_IDX_OFFSET; /* assume < 4 */
+
+ return tail;
+}
+
+static inline struct mcs_spinlock *decode_tail(u32 tail)
+{
+ int cpu = (tail >> _Q_TAIL_CPU_OFFSET) - 1;
+ int idx = (tail & _Q_TAIL_IDX_MASK) >> _Q_TAIL_IDX_OFFSET;
+
+ return per_cpu_ptr(&mcs_nodes[idx], cpu);
+}
+
+/**
+ * queue_spin_lock_slowpath - acquire the queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ * @val: Current value of the queue spinlock 32-bit word
+ *
+ * (queue tail, lock bit)
+ *
+ * fast : slow : unlock
+ * : :
+ * uncontended (0,0) --:--> (0,1) --------------------------------:--> (*,0)
+ * : | ^--------. / :
+ * : v \ | :
+ * uncontended : (n,x) --+--> (n,0) | :
+ * queue : | ^--' | :
+ * : v | :
+ * contended : (*,x) --+--> (*,0) -----> (*,1) ---' :
+ * queue : ^--' :
+ *
+ */
+void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
+{
+ struct mcs_spinlock *prev, *next, *node;
+ u32 new, old, tail;
+ int idx;
+
+ BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
+
+ node = this_cpu_ptr(&mcs_nodes[0]);
+ idx = node->count++;
+ tail = encode_tail(smp_processor_id(), idx);
+
+ node += idx;
+ node->locked = 0;
+ node->next = NULL;
+
+ /*
+ * trylock || xchg(lock, node)
+ *
+ * 0,0 -> 0,1 ; trylock
+ * p,x -> n,x ; prev = xchg(lock, node)
+ */
+ for (;;) {
+ new = _Q_LOCKED_VAL;
+ if (val)
+ new = tail | (val & _Q_LOCKED_MASK);
+
+ old = atomic_cmpxchg(&lock->val, val, new);
+ if (old == val)
+ break;
+
+ val = old;
+ }
+
+ /*
+ * we won the trylock; forget about queueing.
+ */
+ if (new == _Q_LOCKED_VAL)
+ goto release;
+
+ /*
+ * if there was a previous node; link it and wait.
+ */
+ if (old & ~_Q_LOCKED_MASK) {
+ prev = decode_tail(old);
+ ACCESS_ONCE(prev->next) = node;
+
+ arch_mcs_spin_lock_contended(&node->locked);
+ }
+
+ /*
+ * we're at the head of the waitqueue, wait for the owner to go away.
+ *
+ * *,x -> *,0
+ */
+ while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
+ arch_mutex_cpu_relax();
+
+ /*
+ * claim the lock:
+ *
+ * n,0 -> 0,1 : lock, uncontended
+ * *,0 -> *,1 : lock, contended
+ */
+ for (;;) {
+ new = _Q_LOCKED_VAL;
+ if (val != tail)
+ new |= val;
+
+ old = atomic_cmpxchg(&lock->val, val, new);
+ if (old == val)
+ break;
+
+ val = old;
+ }
+
+ /*
+ * contended path; wait for next, release.
+ */
+ if (new != _Q_LOCKED_VAL) {
+ while (!(next = ACCESS_ONCE(node->next)))
+ arch_mutex_cpu_relax();
+
+ arch_mcs_spin_unlock_contended(&next->locked);
+ }
+
+release:
+ /*
+ * release the node
+ */
+ this_cpu_dec(mcs_nodes[0].count);
+}
+EXPORT_SYMBOL(queue_spin_lock_slowpath);
--
1.7.1
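[Editorial note: the encode_tail()/decode_tail() pair above packs (cpu + 1, idx)
into the upper 24 bits of the lock word, with the +1 bias making an all-zero
tail mean "no queue". A minimal stand-alone C sketch of the round trip follows;
the _Q_* constants mirror qspinlock_types.h, but the main() harness and names
are illustrative only, not part of the patch.]
/* Stand-alone illustration of the qspinlock tail encoding; not kernel code. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#define _Q_LOCKED_BITS      8
#define _Q_TAIL_IDX_OFFSET  _Q_LOCKED_BITS
#define _Q_TAIL_IDX_BITS    2
#define _Q_TAIL_IDX_MASK    (((1U << _Q_TAIL_IDX_BITS) - 1) << _Q_TAIL_IDX_OFFSET)
#define _Q_TAIL_CPU_OFFSET  (_Q_TAIL_IDX_OFFSET + _Q_TAIL_IDX_BITS)
static uint32_t encode_tail(int cpu, int idx)
{
	/* cpu is stored +1 so that a zero tail means "no queue" */
	return ((uint32_t)(cpu + 1) << _Q_TAIL_CPU_OFFSET) |
	       ((uint32_t)idx << _Q_TAIL_IDX_OFFSET);	/* idx assumed < 4 */
}
static void decode_tail(uint32_t tail, int *cpu, int *idx)
{
	*cpu = (int)(tail >> _Q_TAIL_CPU_OFFSET) - 1;
	*idx = (int)((tail & _Q_TAIL_IDX_MASK) >> _Q_TAIL_IDX_OFFSET);
}
int main(void)
{
	int cpu, idx;
	decode_tail(encode_tail(5, 2), &cpu, &idx);
	assert(cpu == 5 && idx == 2);
	/* cpu 0, idx 0 still encodes non-zero, distinguishable from "no tail" */
	assert(encode_tail(0, 0) != 0);
	printf("cpu=%d idx=%d\n", cpu, idx);
	return 0;
}
The +1 bias is what lets the slowpath treat a zero upper word as an empty
queue when it cmpxchg()s itself in as the new tail.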
* [PATCH v10 02/19] qspinlock, x86: Enable x86-64 to use queue spinlock
@ 2014-05-07 15:01 ` Waiman Long
0 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Waiman Long, Peter Zijlstra
This patch makes the necessary changes at the x86 architecture-specific
layer to enable the use of the queue spinlock for x86-64. As x86-32
machines are typically not multi-socket, the benefit of the queue
spinlock may not be apparent there, so it is not enabled for x86-32.
Currently, there are some incompatibilities between the para-virtualized
spinlock code (which hard-codes the use of the ticket spinlock) and the
queue spinlock. Therefore, the queue spinlock is disabled when
para-virtualized spinlocks are enabled.
The arch/x86/include/asm/qspinlock.h header file includes some
x86-specific optimizations that make the queue spinlock code perform
better than the generic implementation.
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
arch/x86/Kconfig | 1 +
arch/x86/include/asm/qspinlock.h | 29 +++++++++++++++++++++++++++++
arch/x86/include/asm/spinlock.h | 5 +++++
arch/x86/include/asm/spinlock_types.h | 4 ++++
4 files changed, 39 insertions(+), 0 deletions(-)
create mode 100644 arch/x86/include/asm/qspinlock.h
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 25d2c6f..95c9c4e 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -29,6 +29,7 @@ config X86
select ARCH_SUPPORTS_NUMA_BALANCING
select ARCH_SUPPORTS_INT128 if X86_64
select ARCH_WANTS_PROT_NUMA_PROT_NONE
+ select ARCH_USE_QUEUE_SPINLOCK
select HAVE_IDE
select HAVE_OPROFILE
select HAVE_PCSPKR_PLATFORM
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
new file mode 100644
index 0000000..e4a4f5d
--- /dev/null
+++ b/arch/x86/include/asm/qspinlock.h
@@ -0,0 +1,29 @@
+#ifndef _ASM_X86_QSPINLOCK_H
+#define _ASM_X86_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+#if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE)
+
+#define queue_spin_unlock queue_spin_unlock
+/**
+ * queue_spin_unlock - release a queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ *
+ * No special memory barrier other than a compiler one is needed for the
+ * x86 architecture. A compiler barrier is added at the end to make sure
+ * that clearing the lock bit is done ASAP without artificial delay
+ * due to compiler optimization.
+ */
+static inline void queue_spin_unlock(struct qspinlock *lock)
+{
+ barrier();
+ ACCESS_ONCE(*(u8 *)lock) = 0;
+ barrier();
+}
+
+#endif /* !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE */
+
+#include <asm-generic/qspinlock.h>
+
+#endif /* _ASM_X86_QSPINLOCK_H */
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 0f62f54..958d20f 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -42,6 +42,10 @@
extern struct static_key paravirt_ticketlocks_enabled;
static __always_inline bool static_key_false(struct static_key *key);
+#ifdef CONFIG_QUEUE_SPINLOCK
+#include <asm/qspinlock.h>
+#else
+
#ifdef CONFIG_PARAVIRT_SPINLOCKS
static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
@@ -180,6 +184,7 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
{
arch_spin_lock(lock);
}
+#endif /* CONFIG_QUEUE_SPINLOCK */
static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
{
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index 4f1bea1..7960268 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -23,6 +23,9 @@ typedef u32 __ticketpair_t;
#define TICKET_SHIFT (sizeof(__ticket_t) * 8)
+#ifdef CONFIG_QUEUE_SPINLOCK
+#include <asm-generic/qspinlock_types.h>
+#else
typedef struct arch_spinlock {
union {
__ticketpair_t head_tail;
@@ -33,6 +36,7 @@ typedef struct arch_spinlock {
} arch_spinlock_t;
#define __ARCH_SPIN_LOCK_UNLOCKED { { 0 } }
+#endif /* CONFIG_QUEUE_SPINLOCK */
#include <asm/rwlock.h>
--
1.7.1
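[Editorial note: the byte-store unlock above works because _Q_LOCKED_OFFSET is
0 and _Q_LOCKED_BITS is 8, so on little-endian x86 the locked byte is the
lowest-addressed byte of the 32-bit word and can be cleared without disturbing
the tail bits. A single-threaded user-space sketch of that layout assumption;
the union and names are illustrative, not kernel code.]
/* Why "ACCESS_ONCE(*(u8 *)lock) = 0" releases the lock without touching
 * the tail: the locked byte occupies bits 0-7 (little-endian byte 0).
 * The kernel version additionally relies on x86 stores having release
 * semantics; this demo is single-threaded, so no barriers are needed.
 */
#include <assert.h>
#include <stdint.h>
union qsketch {
	uint32_t val;		/* whole lock word: tail | locked */
	uint8_t  bytes[4];	/* bytes[0] is the locked byte on little-endian */
};
int main(void)
{
	union qsketch lock = { .val = (0x123u << 8) | 1u }; /* queued + locked */
	lock.bytes[0] = 0;			/* the byte-store unlock */
	assert((lock.val & 0xffu) == 0);	/* lock byte cleared */
	assert(lock.val == 0x123u << 8);	/* tail bits untouched */
	return 0;
}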
* [PATCH v10 03/19] qspinlock: Add pending bit
@ 2014-05-07 15:01 ` Waiman Long
0 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Peter Zijlstra, Waiman Long
From: Peter Zijlstra <peterz@infradead.org>
Because the qspinlock needs to touch a second cacheline (the per-CPU
queue node), add a pending bit and allow a single in-word spinner
before we punt to that second cacheline.
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
include/asm-generic/qspinlock_types.h | 12 +++-
kernel/locking/qspinlock.c | 121 +++++++++++++++++++++++++++------
2 files changed, 110 insertions(+), 23 deletions(-)
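[Editorial note: as a reading aid for the diff below, here is a user-space
sketch of the pending-bit handoff that the new trylock_pending() implements,
with C11 atomics standing in for the kernel's atomic_t/cmpxchg and a busy loop
for cpu_relax(). The function and constant names mirror the patch, but this is
an illustration under those assumptions, not the patch itself.]
/* (tail, pending, locked) transitions exercised here:
 *   0,0,0 -> 0,0,1  trylock
 *   0,0,1 -> 0,1,1  become the single in-word spinner
 *   *,1,0 -> *,0,1  owner left: clear pending, take the lock
 * Anything else (tail or pending already set) means we must queue.
 */
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>
#define _Q_LOCKED_VAL	(1U << 0)
#define _Q_PENDING_VAL	(1U << 8)
#define _Q_LOCKED_MASK	0xffU
static int trylock_pending_sketch(_Atomic uint32_t *lock)
{
	uint32_t old, new, val = atomic_load(lock);
	for (;;) {
		if (val & ~_Q_LOCKED_MASK)	/* pending or tail set: queue */
			return 0;
		new = _Q_LOCKED_VAL;		/* 0,0,0 -> 0,0,1 */
		if (val == new)
			new |= _Q_PENDING_VAL;	/* 0,0,1 -> 0,1,1 */
		old = val;
		if (atomic_compare_exchange_strong(lock, &old, new))
			break;
		val = old;			/* lost the race; re-evaluate */
	}
	if (new == _Q_LOCKED_VAL)		/* won the trylock outright */
		return 1;
	/* we own the pending bit; wait for the owner to drop the lock byte */
	while ((val = atomic_load(lock)) & _Q_LOCKED_MASK)
		;				/* cpu_relax() in the kernel */
	for (;;) {				/* *,1,0 -> *,0,1 */
		new = (val & ~_Q_PENDING_VAL) | _Q_LOCKED_VAL;
		old = val;
		if (atomic_compare_exchange_strong(lock, &old, new))
			return 1;
		val = old;
	}
}
int main(void)
{
	_Atomic uint32_t lock = 0;
	assert(trylock_pending_sketch(&lock) == 1);	/* uncontended trylock */
	assert(atomic_load(&lock) == _Q_LOCKED_VAL);
	return 0;
}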
diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
index f66f845..bd25081 100644
--- a/include/asm-generic/qspinlock_types.h
+++ b/include/asm-generic/qspinlock_types.h
@@ -39,8 +39,9 @@ typedef struct qspinlock {
* Bitfields in the atomic value:
*
* 0- 7: locked byte
- * 8- 9: tail index
- * 10-31: tail cpu (+1)
+ * 8: pending
+ * 9-10: tail index
+ * 11-31: tail cpu (+1)
*/
#define _Q_SET_MASK(type) (((1U << _Q_ ## type ## _BITS) - 1)\
<< _Q_ ## type ## _OFFSET)
@@ -48,7 +49,11 @@ typedef struct qspinlock {
#define _Q_LOCKED_BITS 8
#define _Q_LOCKED_MASK _Q_SET_MASK(LOCKED)
-#define _Q_TAIL_IDX_OFFSET (_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
+#define _Q_PENDING_OFFSET (_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
+#define _Q_PENDING_BITS 1
+#define _Q_PENDING_MASK _Q_SET_MASK(PENDING)
+
+#define _Q_TAIL_IDX_OFFSET (_Q_PENDING_OFFSET + _Q_PENDING_BITS)
#define _Q_TAIL_IDX_BITS 2
#define _Q_TAIL_IDX_MASK _Q_SET_MASK(TAIL_IDX)
@@ -57,5 +62,6 @@ typedef struct qspinlock {
#define _Q_TAIL_CPU_MASK _Q_SET_MASK(TAIL_CPU)
#define _Q_LOCKED_VAL (1U << _Q_LOCKED_OFFSET)
+#define _Q_PENDING_VAL (1U << _Q_PENDING_OFFSET)
#endif /* __ASM_GENERIC_QSPINLOCK_TYPES_H */
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index b97a1ad..6467bfc 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -83,23 +83,97 @@ static inline struct mcs_spinlock *decode_tail(u32 tail)
return per_cpu_ptr(&mcs_nodes[idx], cpu);
}
+#define _Q_LOCKED_PENDING_MASK (_Q_LOCKED_MASK | _Q_PENDING_MASK)
+
+/**
+ * trylock_pending - try to acquire queue spinlock using the pending bit
+ * @lock : Pointer to queue spinlock structure
+ * @pval : Pointer to value of the queue spinlock 32-bit word
+ * Return: 1 if lock acquired, 0 otherwise
+ */
+static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
+{
+ u32 old, new, val = *pval;
+
+ /*
+ * trylock || pending
+ *
+ * 0,0,0 -> 0,0,1 ; trylock
+ * 0,0,1 -> 0,1,1 ; pending
+ */
+ for (;;) {
+ /*
+ * If we observe any contention; queue.
+ */
+ if (val & ~_Q_LOCKED_MASK)
+ return 0;
+
+ new = _Q_LOCKED_VAL;
+ if (val == new)
+ new |= _Q_PENDING_VAL;
+
+ old = atomic_cmpxchg(&lock->val, val, new);
+ if (old == val)
+ break;
+
+ *pval = val = old;
+ }
+
+ /*
+ * we won the trylock
+ */
+ if (new == _Q_LOCKED_VAL)
+ return 1;
+
+ /*
+ * we're pending, wait for the owner to go away.
+ *
+ * *,1,1 -> *,1,0
+ */
+ while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
+ arch_mutex_cpu_relax();
+
+ /*
+ * take ownership and clear the pending bit.
+ *
+ * *,1,0 -> *,0,1
+ */
+ for (;;) {
+ new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
+
+ old = atomic_cmpxchg(&lock->val, val, new);
+ if (old == val)
+ break;
+
+ val = old;
+ }
+ return 1;
+}
+
/**
* queue_spin_lock_slowpath - acquire the queue spinlock
* @lock: Pointer to queue spinlock structure
* @val: Current value of the queue spinlock 32-bit word
*
- * (queue tail, lock bit)
+ * (queue tail, pending bit, lock bit)
+ *
+ * fast : slow : unlock
+ * : :
+ * uncontended (0,0,0) -:--> (0,0,1) ------------------------------:--> (*,*,0)
+ * : | ^--------.------. / :
+ * : v \ \ | :
+ * pending : (0,1,1) +--> (0,1,0) \ | :
+ * : | ^--' | | :
+ * : v | | :
+ * uncontended : (n,x,y) +--> (n,0,0) --' | :
+ * queue : | ^--' | :
+ * : v | :
+ * contended : (*,x,y) +--> (*,0,0) ---> (*,0,1) -' :
+ * queue : ^--' :
*
- * fast : slow : unlock
- * : :
- * uncontended (0,0) --:--> (0,1) --------------------------------:--> (*,0)
- * : | ^--------. / :
- * : v \ | :
- * uncontended : (n,x) --+--> (n,0) | :
- * queue : | ^--' | :
- * : v | :
- * contended : (*,x) --+--> (*,0) -----> (*,1) ---' :
- * queue : ^--' :
+ * The pending bit processing is in the trylock_pending() function
+ * whereas the uncontended and contended queue processing is in the
+ * queue_spin_lock_slowpath() function.
*
*/
void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
@@ -110,6 +184,9 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
+ if (trylock_pending(lock, &val))
+ return; /* Lock acquired */
+
node = this_cpu_ptr(&mcs_nodes[0]);
idx = node->count++;
tail = encode_tail(smp_processor_id(), idx);
@@ -119,15 +196,18 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
node->next = NULL;
/*
+ * we already touched the queueing cacheline; don't bother with pending
+ * stuff.
+ *
* trylock || xchg(lock, node)
*
- * 0,0 -> 0,1 ; trylock
- * p,x -> n,x ; prev = xchg(lock, node)
+ * 0,0,0 -> 0,0,1 ; trylock
+ * p,y,x -> n,y,x ; prev = xchg(lock, node)
*/
for (;;) {
new = _Q_LOCKED_VAL;
if (val)
- new = tail | (val & _Q_LOCKED_MASK);
+ new = tail | (val & _Q_LOCKED_PENDING_MASK);
old = atomic_cmpxchg(&lock->val, val, new);
if (old == val)
@@ -145,7 +225,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
/*
* if there was a previous node; link it and wait.
*/
- if (old & ~_Q_LOCKED_MASK) {
+ if (old & ~_Q_LOCKED_PENDING_MASK) {
prev = decode_tail(old);
ACCESS_ONCE(prev->next) = node;
@@ -153,18 +233,19 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
}
/*
- * we're at the head of the waitqueue, wait for the owner to go away.
+ * we're at the head of the waitqueue, wait for the owner & pending to
+ * go away.
*
- * *,x -> *,0
+ * *,x,y -> *,0,0
*/
- while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
+ while ((val = atomic_read(&lock->val)) & _Q_LOCKED_PENDING_MASK)
arch_mutex_cpu_relax();
/*
* claim the lock:
*
- * n,0 -> 0,1 : lock, uncontended
- * *,0 -> *,1 : lock, contended
+ * n,0,0 -> 0,0,1 : lock, uncontended
+ * *,0,0 -> *,0,1 : lock, contended
*/
for (;;) {
new = _Q_LOCKED_VAL;
--
1.7.1
^ permalink raw reply related [flat|nested] 163+ messages in thread
* [PATCH v10 04/19] qspinlock: Extract out the exchange of tail code word
2014-05-07 15:01 ` Waiman Long
@ 2014-05-07 15:01 ` Waiman Long
0 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Waiman Long
This patch extracts the logic for the exchange of new and previous tail
code words into a new xchg_tail() function which can be optimized in a
later patch.
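For reference, the invariant the new helper has to preserve can be
sketched in isolation (a toy C11 version over a plain 32-bit word; the
mask parameter is a placeholder for _Q_LOCKED_PENDING_MASK from the
hunk below):

#include <stdatomic.h>

/* p,*,* -> n,*,* : swap in the new tail, leave locked/pending alone. */
static unsigned int toy_xchg_tail(atomic_uint *lock, unsigned int tail,
                                  unsigned int locked_pending_mask)
{
        unsigned int old = atomic_load(lock);
        unsigned int new;

        do {
                new = (old & locked_pending_mask) | tail;
        } while (!atomic_compare_exchange_weak(lock, &old, new));

        return old & ~locked_pending_mask;  /* previous tail code word */
}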
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
include/asm-generic/qspinlock_types.h | 2 +
kernel/locking/qspinlock.c | 61 +++++++++++++++++++++------------
2 files changed, 41 insertions(+), 22 deletions(-)
diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
index bd25081..ed5d89a 100644
--- a/include/asm-generic/qspinlock_types.h
+++ b/include/asm-generic/qspinlock_types.h
@@ -61,6 +61,8 @@ typedef struct qspinlock {
#define _Q_TAIL_CPU_BITS (32 - _Q_TAIL_CPU_OFFSET)
#define _Q_TAIL_CPU_MASK _Q_SET_MASK(TAIL_CPU)
+#define _Q_TAIL_MASK (_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK)
+
#define _Q_LOCKED_VAL (1U << _Q_LOCKED_OFFSET)
#define _Q_PENDING_VAL (1U << _Q_PENDING_OFFSET)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 6467bfc..a49b82b 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -86,6 +86,34 @@ static inline struct mcs_spinlock *decode_tail(u32 tail)
#define _Q_LOCKED_PENDING_MASK (_Q_LOCKED_MASK | _Q_PENDING_MASK)
/**
+ * xchg_tail - Put in the new queue tail code word & retrieve previous one
+ * @lock : Pointer to queue spinlock structure
+ * @tail : The new queue tail code word
+ * @pval : Pointer to current value of the queue spinlock 32-bit word
+ * Return: The previous queue tail code word
+ *
+ * xchg(lock, tail)
+ *
+ * p,*,* -> n,*,* ; prev = xchg(lock, node)
+ */
+static __always_inline u32
+xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval)
+{
+ u32 old, new, val = *pval;
+
+ for (;;) {
+ new = (val & _Q_LOCKED_PENDING_MASK) | tail;
+ old = atomic_cmpxchg(&lock->val, val, new);
+ if (old == val)
+ break;
+
+ val = old;
+ }
+ *pval = new;
+ return old;
+}
+
+/**
* trylock_pending - try to acquire queue spinlock using the pending bit
* @lock : Pointer to queue spinlock structure
* @pval : Pointer to value of the queue spinlock 32-bit word
@@ -196,36 +224,25 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
node->next = NULL;
/*
- * we already touched the queueing cacheline; don't bother with pending
- * stuff.
- *
- * trylock || xchg(lock, node)
- *
- * 0,0,0 -> 0,0,1 ; trylock
- * p,y,x -> n,y,x ; prev = xchg(lock, node)
+ * We touched a (possibly) cold cacheline in the per-cpu queue node;
+ * attempt the trylock once more in the hope someone let go while we
+ * weren't watching.
*/
- for (;;) {
- new = _Q_LOCKED_VAL;
- if (val)
- new = tail | (val & _Q_LOCKED_PENDING_MASK);
-
- old = atomic_cmpxchg(&lock->val, val, new);
- if (old == val)
- break;
-
- val = old;
- }
+ if (queue_spin_trylock(lock))
+ goto release;
/*
- * we won the trylock; forget about queueing.
+ * we already touched the queueing cacheline; don't bother with pending
+ * stuff.
+ *
+ * p,*,* -> n,*,*
*/
- if (new == _Q_LOCKED_VAL)
- goto release;
+ old = xchg_tail(lock, tail, &val);
/*
* if there was a previous node; link it and wait.
*/
- if (old & ~_Q_LOCKED_PENDING_MASK) {
+ if (old & _Q_TAIL_MASK) {
prev = decode_tail(old);
ACCESS_ONCE(prev->next) = node;
--
1.7.1
^ permalink raw reply related [flat|nested] 163+ messages in thread
* [PATCH v10 05/19] qspinlock: Optimize for smaller NR_CPUS
2014-05-07 15:01 ` Waiman Long
@ 2014-05-07 15:01 ` Waiman Long
0 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Peter Zijlstra, Waiman Long
From: Peter Zijlstra <peterz@infradead.org>
When we allow for a max NR_CPUS < 2^14 we can optimize the pending
wait-acquire and the xchg_tail() operations.
By growing the pending bit to a byte, we reduce the tail to 16 bits.
This means we can use xchg16 for the tail part and do away with all
the repeated cmpxchg() operations.
This in turn allows us to unconditionally acquire; the locked state
as observed by the wait loops cannot change. And because both locked
and pending are now a full byte we can use simple stores for the
state transition, obviating one atomic operation entirely.
All this is horribly broken on Alpha pre EV56 (and any other arch that
cannot do single-copy atomic byte stores).
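The layout trick can be sketched outside the kernel like this (a
little-endian toy in C11 mirroring the __qspinlock union added below;
the patch itself handles both endiannesses via asm/byteorder.h):

#include <stdatomic.h>
#include <stdint.h>

struct toy_qspinlock {
        union {
                _Atomic uint32_t val;                    /* whole lock word */
                struct {
                        _Atomic uint16_t locked_pending; /* locked+pending  */
                        _Atomic uint16_t tail;           /* tail idx + cpu  */
                };
        };
};

/* *,1,0 -> *,0,1 : one plain 16-bit store, no cmpxchg loop needed. */
static inline void toy_clear_pending_set_locked(struct toy_qspinlock *l)
{
        atomic_store(&l->locked_pending, 1);    /* 1 == toy locked value */
}

/* p,*,* -> n,*,* : a single 16-bit exchange replaces the cmpxchg loop. */
static inline uint16_t toy_xchg_tail(struct toy_qspinlock *l, uint16_t tail)
{
        return atomic_exchange(&l->tail, tail);
}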
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
include/asm-generic/qspinlock_types.h | 13 ++++
kernel/locking/qspinlock.c | 107 +++++++++++++++++++++++++++++---
2 files changed, 110 insertions(+), 10 deletions(-)
diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
index ed5d89a..4914abe 100644
--- a/include/asm-generic/qspinlock_types.h
+++ b/include/asm-generic/qspinlock_types.h
@@ -38,6 +38,14 @@ typedef struct qspinlock {
/*
* Bitfields in the atomic value:
*
+ * When NR_CPUS < 16K
+ * 0- 7: locked byte
+ * 8: pending
+ * 9-15: not used
+ * 16-17: tail index
+ * 18-31: tail cpu (+1)
+ *
+ * When NR_CPUS >= 16K
* 0- 7: locked byte
* 8: pending
* 9-10: tail index
@@ -50,7 +58,11 @@ typedef struct qspinlock {
#define _Q_LOCKED_MASK _Q_SET_MASK(LOCKED)
#define _Q_PENDING_OFFSET (_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
+#if CONFIG_NR_CPUS < (1U << 14)
+#define _Q_PENDING_BITS 8
+#else
#define _Q_PENDING_BITS 1
+#endif
#define _Q_PENDING_MASK _Q_SET_MASK(PENDING)
#define _Q_TAIL_IDX_OFFSET (_Q_PENDING_OFFSET + _Q_PENDING_BITS)
@@ -61,6 +73,7 @@ typedef struct qspinlock {
#define _Q_TAIL_CPU_BITS (32 - _Q_TAIL_CPU_OFFSET)
#define _Q_TAIL_CPU_MASK _Q_SET_MASK(TAIL_CPU)
+#define _Q_TAIL_OFFSET _Q_TAIL_IDX_OFFSET
#define _Q_TAIL_MASK (_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK)
#define _Q_LOCKED_VAL (1U << _Q_LOCKED_OFFSET)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index a49b82b..3e908f7 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -22,6 +22,7 @@
#include <linux/percpu.h>
#include <linux/hardirq.h>
#include <linux/mutex.h>
+#include <asm/byteorder.h>
#include <asm/qspinlock.h>
/*
@@ -48,6 +49,9 @@
* We can further change the first spinner to spin on a bit in the lock word
* instead of its node; whereby avoiding the need to carry a node from lock to
* unlock, and preserving API.
+ *
+ * N.B. The current implementation only supports architectures that allow
+ * atomic operations on smaller 8-bit and 16-bit data types.
*/
#include "mcs_spinlock.h"
@@ -85,6 +89,87 @@ static inline struct mcs_spinlock *decode_tail(u32 tail)
#define _Q_LOCKED_PENDING_MASK (_Q_LOCKED_MASK | _Q_PENDING_MASK)
+/*
+ * By using the whole 2nd least significant byte for the pending bit, we
+ * can allow better optimization of the lock acquisition for the pending
+ * bit holder.
+ */
+#if _Q_PENDING_BITS == 8
+
+struct __qspinlock {
+ union {
+ atomic_t val;
+ struct {
+#ifdef __LITTLE_ENDIAN
+ u16 locked_pending;
+ u16 tail;
+#else
+ u16 tail;
+ u16 locked_pending;
+#endif
+ };
+ };
+};
+
+/**
+ * clear_pending_set_locked - take ownership and clear the pending bit.
+ * @lock: Pointer to queue spinlock structure
+ * @val : Current value of the queue spinlock 32-bit word
+ *
+ * *,1,0 -> *,0,1
+ */
+static __always_inline void
+clear_pending_set_locked(struct qspinlock *lock, u32 val)
+{
+ struct __qspinlock *l = (void *)lock;
+
+ ACCESS_ONCE(l->locked_pending) = _Q_LOCKED_VAL;
+}
+
+/*
+ * xchg_tail - Put in the new queue tail code word & retrieve previous one
+ * @lock : Pointer to queue spinlock structure
+ * @tail : The new queue tail code word
+ * @pval : Pointer to current value of the queue spinlock 32-bit word
+ * Return: The previous queue tail code word
+ *
+ * xchg(lock, tail)
+ *
+ * p,*,* -> n,*,* ; prev = xchg(lock, node)
+ */
+static __always_inline u32
+xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval)
+{
+ struct __qspinlock *l = (void *)lock;
+
+ return (u32)xchg(&l->tail, tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
+}
+
+#else /* _Q_PENDING_BITS == 8 */
+
+/**
+ * clear_pending_set_locked - take ownership and clear the pending bit.
+ * @lock: Pointer to queue spinlock structure
+ * @val : Current value of the queue spinlock 32-bit word
+ *
+ * *,1,0 -> *,0,1
+ */
+static __always_inline void
+clear_pending_set_locked(struct qspinlock *lock, u32 val)
+{
+ u32 new, old;
+
+ for (;;) {
+ new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
+
+ old = atomic_cmpxchg(&lock->val, val, new);
+ if (old == val)
+ break;
+
+ val = old;
+ }
+}
+
/**
* xchg_tail - Put in the new queue tail code word & retrieve previous one
* @lock : Pointer to queue spinlock structure
@@ -112,12 +197,17 @@ xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval)
*pval = new;
return old;
}
+#endif /* _Q_PENDING_BITS == 8 */
/**
* trylock_pending - try to acquire queue spinlock using the pending bit
* @lock : Pointer to queue spinlock structure
* @pval : Pointer to value of the queue spinlock 32-bit word
* Return: 1 if lock acquired, 0 otherwise
+ *
+ * The pending bit won't be set as soon as one or more tasks queue up.
+ * This function should only be called when lock stealing will not happen.
+ * Otherwise, it has to be disabled.
*/
static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
{
@@ -157,8 +247,13 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
* we're pending, wait for the owner to go away.
*
* *,1,1 -> *,1,0
+ *
+ * this wait loop must be a load-acquire such that we match the
+ * store-release that clears the locked bit and create lock
+ * sequentiality; this because not all clear_pending_set_locked()
+ * implementations imply full barriers.
*/
- while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
+ while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
arch_mutex_cpu_relax();
/*
@@ -166,15 +261,7 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
*
* *,1,0 -> *,0,1
*/
- for (;;) {
- new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
-
- old = atomic_cmpxchg(&lock->val, val, new);
- if (old == val)
- break;
-
- val = old;
- }
+ clear_pending_set_locked(lock, val);
return 1;
}
--
1.7.1
^ permalink raw reply related [flat|nested] 163+ messages in thread
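A closing note on the memory-ordering comment in the wait loop above:
the acquire load must pair with the release store in unlock, since the
simple-store state transitions no longer imply full barriers. In
stand-alone C11 terms (toy names, a sketch rather than the kernel code):

#include <stdatomic.h>

#define TOY_LOCKED_MASK 0xffU   /* locked byte, as in _Q_LOCKED_MASK */

/* Spin until the owner's release store clears the locked byte; the
 * acquire load guarantees we observe everything the owner wrote
 * before unlocking. */
static void toy_wait_for_owner(_Atomic unsigned int *lockword)
{
        while (atomic_load_explicit(lockword, memory_order_acquire) &
               TOY_LOCKED_MASK)
                ;       /* arch_mutex_cpu_relax() in the kernel */
}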
* [PATCH v10 05/19] qspinlock: Optimize for smaller NR_CPUS
@ 2014-05-07 15:01 ` Waiman Long
0 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Peter Zijlstra, Waiman Long
From: Peter Zijlstra <peterz@infradead.org>
When we allow for a max NR_CPUS < 2^14 we can optimize the pending
wait-acquire and the xchg_tail() operations.
By growing the pending bit to a byte, we reduce the tail to 16bit.
This means we can use xchg16 for the tail part and do away with all
the repeated compxchg() operations.
This in turn allows us to unconditionally acquire; the locked state
as observed by the wait loops cannot change. And because both locked
and pending are now a full byte we can use simple stores for the
state transition, obviating one atomic operation entirely.
All this is horribly broken on Alpha pre EV56 (and any other arch that
cannot do single-copy atomic byte stores).
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
include/asm-generic/qspinlock_types.h | 13 ++++
kernel/locking/qspinlock.c | 107 +++++++++++++++++++++++++++++---
2 files changed, 110 insertions(+), 10 deletions(-)
diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
index ed5d89a..4914abe 100644
--- a/include/asm-generic/qspinlock_types.h
+++ b/include/asm-generic/qspinlock_types.h
@@ -38,6 +38,14 @@ typedef struct qspinlock {
/*
* Bitfields in the atomic value:
*
+ * When NR_CPUS < 16K
+ * 0- 7: locked byte
+ * 8: pending
+ * 9-15: not used
+ * 16-17: tail index
+ * 18-31: tail cpu (+1)
+ *
+ * When NR_CPUS >= 16K
* 0- 7: locked byte
* 8: pending
* 9-10: tail index
@@ -50,7 +58,11 @@ typedef struct qspinlock {
#define _Q_LOCKED_MASK _Q_SET_MASK(LOCKED)
#define _Q_PENDING_OFFSET (_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
+#if CONFIG_NR_CPUS < (1U << 14)
+#define _Q_PENDING_BITS 8
+#else
#define _Q_PENDING_BITS 1
+#endif
#define _Q_PENDING_MASK _Q_SET_MASK(PENDING)
#define _Q_TAIL_IDX_OFFSET (_Q_PENDING_OFFSET + _Q_PENDING_BITS)
@@ -61,6 +73,7 @@ typedef struct qspinlock {
#define _Q_TAIL_CPU_BITS (32 - _Q_TAIL_CPU_OFFSET)
#define _Q_TAIL_CPU_MASK _Q_SET_MASK(TAIL_CPU)
+#define _Q_TAIL_OFFSET _Q_TAIL_IDX_OFFSET
#define _Q_TAIL_MASK (_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK)
#define _Q_LOCKED_VAL (1U << _Q_LOCKED_OFFSET)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index a49b82b..3e908f7 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -22,6 +22,7 @@
#include <linux/percpu.h>
#include <linux/hardirq.h>
#include <linux/mutex.h>
+#include <asm/byteorder.h>
#include <asm/qspinlock.h>
/*
@@ -48,6 +49,9 @@
* We can further change the first spinner to spin on a bit in the lock word
* instead of its node; whereby avoiding the need to carry a node from lock to
* unlock, and preserving API.
+ *
+ * N.B. The current implementation only supports architectures that allow
+ * atomic operations on smaller 8-bit and 16-bit data types.
*/
#include "mcs_spinlock.h"
@@ -85,6 +89,87 @@ static inline struct mcs_spinlock *decode_tail(u32 tail)
#define _Q_LOCKED_PENDING_MASK (_Q_LOCKED_MASK | _Q_PENDING_MASK)
+/*
+ * By using the whole 2nd least significant byte for the pending bit, we
+ * can allow better optimization of the lock acquisition for the pending
+ * bit holder.
+ */
+#if _Q_PENDING_BITS == 8
+
+struct __qspinlock {
+ union {
+ atomic_t val;
+ struct {
+#ifdef __LITTLE_ENDIAN
+ u16 locked_pending;
+ u16 tail;
+#else
+ u16 tail;
+ u16 locked_pending;
+#endif
+ };
+ };
+};
+
+/**
+ * clear_pending_set_locked - take ownership and clear the pending bit.
+ * @lock: Pointer to queue spinlock structure
+ * @val : Current value of the queue spinlock 32-bit word
+ *
+ * *,1,0 -> *,0,1
+ */
+static __always_inline void
+clear_pending_set_locked(struct qspinlock *lock, u32 val)
+{
+ struct __qspinlock *l = (void *)lock;
+
+ ACCESS_ONCE(l->locked_pending) = _Q_LOCKED_VAL;
+}
+
+/*
+ * xchg_tail - Put in the new queue tail code word & retrieve previous one
+ * @lock : Pointer to queue spinlock structure
+ * @tail : The new queue tail code word
+ * @pval : Pointer to current value of the queue spinlock 32-bit word
+ * Return: The previous queue tail code word
+ *
+ * xchg(lock, tail)
+ *
+ * p,*,* -> n,*,* ; prev = xchg(lock, node)
+ */
+static __always_inline u32
+xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval)
+{
+ struct __qspinlock *l = (void *)lock;
+
+ return (u32)xchg(&l->tail, tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
+}
+
+#else /* _Q_PENDING_BITS == 8 */
+
+/**
+ * clear_pending_set_locked - take ownership and clear the pending bit.
+ * @lock: Pointer to queue spinlock structure
+ * @val : Current value of the queue spinlock 32-bit word
+ *
+ * *,1,0 -> *,0,1
+ */
+static __always_inline void
+clear_pending_set_locked(struct qspinlock *lock, u32 val)
+{
+ u32 new, old;
+
+ for (;;) {
+ new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
+
+ old = atomic_cmpxchg(&lock->val, val, new);
+ if (old == val)
+ break;
+
+ val = old;
+ }
+}
+
/**
* xchg_tail - Put in the new queue tail code word & retrieve previous one
* @lock : Pointer to queue spinlock structure
@@ -112,12 +197,17 @@ xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval)
*pval = new;
return old;
}
+#endif /* _Q_PENDING_BITS == 8 */
/**
* trylock_pending - try to acquire queue spinlock using the pending bit
* @lock : Pointer to queue spinlock structure
* @pval : Pointer to value of the queue spinlock 32-bit word
* Return: 1 if lock acquired, 0 otherwise
+ *
+ * The pending bit will no longer be set once one or more tasks have
+ * queued up. This function should only be called when lock stealing
+ * cannot happen; otherwise, it has to be disabled.
*/
static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
{
@@ -157,8 +247,13 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
* we're pending, wait for the owner to go away.
*
* *,1,1 -> *,1,0
+ *
+ * this wait loop must be a load-acquire such that we match the
+ * store-release that clears the locked bit and create lock
+ * sequentiality; this is because not all clear_pending_set_locked()
+ * implementations imply full barriers.
*/
- while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
+ while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
arch_mutex_cpu_relax();
/*
@@ -166,15 +261,7 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
*
* *,1,0 -> *,0,1
*/
- for (;;) {
- new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
-
- old = atomic_cmpxchg(&lock->val, val, new);
- if (old == val)
- break;
-
- val = old;
- }
+ clear_pending_set_locked(lock, val);
return 1;
}
--
1.7.1
^ permalink raw reply related [flat|nested] 163+ messages in thread
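To make the layout above concrete, here is a minimal stand-alone C
sketch of the NR_CPUS < 16K word layout. It is an illustration only
(user space, little-endian assumed, plain stores standing in for the
kernel's atomic primitives): it shows why the pending bit holder can
take the lock with a single half-word store and why the tail can be
replaced with a 16-bit exchange.
	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Illustrative mirror of the NR_CPUS < 16K layout:
	 *  0- 7: locked byte, 8: pending, 9-15: unused,
	 * 16-17: tail index, 18-31: tail cpu (+1)
	 */
	union qword {
		uint32_t val;
		struct {			/* little-endian assumed */
			uint16_t locked_pending;	/* bits  0-15 */
			uint16_t tail;			/* bits 16-31 */
		};
	};

	int main(void)
	{
		union qword q = { .val = 1u << 8 };	/* *,1,0: pending set */

		/*
		 * clear_pending_set_locked(): locked and pending share
		 * the low half-word, so one 16-bit store performs the
		 * *,1,0 -> *,0,1 transition -- no cmpxchg retry loop.
		 * In the kernel this store must be single-copy atomic,
		 * hence the Alpha pre-EV56 caveat in the changelog.
		 */
		q.locked_pending = 1;		/* _Q_LOCKED_VAL */

		/*
		 * xchg_tail(): the tail occupies the high half-word by
		 * itself, so a 16-bit exchange (an atomic xchg16 in the
		 * kernel) installs the new tail and returns the old one.
		 */
		uint16_t prev_tail = q.tail;
		q.tail = 1;			/* new tail code word */

		printf("val=0x%08x prev_tail=0x%04x\n", q.val, prev_tail);
		return 0;
	}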
* [PATCH v10 06/19] qspinlock: prolong the stay in the pending bit path
2014-05-07 15:01 ` Waiman Long
@ 2014-05-07 15:01 ` Waiman Long
-1 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Waiman Long
There is a problem in the current trylock_pending() function. When the
lock is free, but the pending bit holder hasn't grabbed the lock &
cleared the pending bit yet, the trylock_pending() function will fail.
As a result, the regular queuing code path will be used most of
the time even when there are only 2 tasks contending for the lock.
Assuming that the pending bit holder is going to get the lock and
clear the pending bit soon, it is actually better to wait than to be
queued up, which has a higher overhead.
This patch modifies the trylock_pending() function to wait until the
pending bit holder gets the lock and clears the pending bit. In case
both the lock and pending bits are set, the new code will also wait
a bit to see if either one is cleared. If neither is, it will quit
and be queued.
The following tables show the before-patch execution time (in ms)
of a micro-benchmark where 5M iterations of the lock/unlock cycles
were run on a 10-core Westmere-EX x86-64 CPU with 2 different types of
loads - standalone (lock and protected data in different cachelines)
and embedded (lock and protected data in the same cacheline).
[Standalone/Embedded - same node]
# of tasks Ticket lock Queue lock %Change
---------- ----------- ---------- -------
1 135/ 111 135/ 101 0%/ -9%
2 890/ 779 1885/1990 +112%/+156%
3 1932/1859 2333/2341 +21%/ +26%
4 2829/2726 2900/2923 +3%/ +7%
5 3834/3761 3655/3648 -5%/ -3%
6 4963/4976 4336/4326 -13%/ -13%
7 6299/6269 5057/5064 -20%/ -19%
8 7691/7569 5786/5798 -25%/ -23%
With 1 task per NUMA node, the execution times are:
[Standalone - different nodes]
# of nodes Ticket lock Queue lock %Change
---------- ----------- ---------- -------
1 135 135 0%
2 4604 5087 +10%
3 10940 12224 +12%
4 21555 10555 -51%
It can be seen that the queue spinlock is slower than the ticket
spinlock when there are 2 or 3 contending tasks. In all the other cases,
the queue spinlock is either equal to or faster than the ticket spinlock.
With this patch, the performance data for 2 contending tasks are:
[Standalone/Embedded]
# of tasks Ticket lock Queue lock %Change
---------- ----------- ---------- -------
2 890/779 984/871 +11%/+12%
[Standalone - different nodes]
# of nodes Ticket lock Queue lock %Change
---------- ----------- ---------- -------
2 4604 1364 -70%
It can be seen that the queue spinlock performance for 2 contending
tasks is now comparable to the ticket spinlock on the same node, but
much faster when they are on different nodes. With 3 contending tasks,
however, the ticket spinlock is still quite a bit faster.
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
kernel/locking/qspinlock.c | 31 +++++++++++++++++++++++++++++--
1 files changed, 29 insertions(+), 2 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 3e908f7..e734acb 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -212,6 +212,7 @@ xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval)
static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
{
u32 old, new, val = *pval;
+ int retry = 1;
/*
* trylock || pending
@@ -221,11 +222,37 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
*/
for (;;) {
/*
- * If we observe any contention; queue.
+ * If we observe that the queue is not empty,
+ * return and be queued.
*/
- if (val & ~_Q_LOCKED_MASK)
+ if (val & _Q_TAIL_MASK)
return 0;
+ if (val == (_Q_LOCKED_VAL|_Q_PENDING_VAL)) {
+ /*
+ * If both the lock and pending bits are set, we wait
+ * a while to see if either bit will be cleared.
+ * If there is no change, we return and get queued.
+ */
+ if (!retry)
+ return 0;
+ retry--;
+ cpu_relax();
+ cpu_relax();
+ *pval = val = atomic_read(&lock->val);
+ continue;
+ } else if (val == _Q_PENDING_VAL) {
+ /*
+ * Pending bit is set, but not the lock bit.
+ * Assuming that the pending bit holder is going to
+ * set the lock bit and clear the pending bit soon,
+ * it is better to wait than to exit at this point.
+ */
+ cpu_relax();
+ *pval = val = atomic_read(&lock->val);
+ continue;
+ }
+
new = _Q_LOCKED_VAL;
if (val == new)
new |= _Q_PENDING_VAL;
--
1.7.1
^ permalink raw reply related [flat|nested] 163+ messages in thread
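The control flow this patch adds to trylock_pending() can be read as a
classification of the observed lock word before the normal trylock or
pending-bit attempt proceeds. The sketch below is a hypothetical
distillation (the enum, macro, and function names are illustrative and
not in the patch):
	#include <stdint.h>

	#define Q_LOCKED_VAL	(1u << 0)
	#define Q_PENDING_VAL	(1u << 8)
	#define Q_TAIL_MASK	(0xffffu << 16)	/* tail idx + cpu */

	enum action {
		DO_QUEUE,	/* queue is not empty: take the MCS path */
		WAIT_BOUNDED,	/* *,1,1: retry a bounded number of times */
		WAIT,		/* *,1,0: holder will lock & clear pending */
		TRY_ACQUIRE,	/* *,0,0 or *,0,1: attempt trylock/pending */
	};

	enum action classify(uint32_t val)
	{
		if (val & Q_TAIL_MASK)
			return DO_QUEUE;
		if (val == (Q_LOCKED_VAL | Q_PENDING_VAL))
			return WAIT_BOUNDED;
		if (val == Q_PENDING_VAL)
			return WAIT;
		return TRY_ACQUIRE;
	}

	int main(void)
	{
		/* lock free, pending set: better to wait than queue */
		return classify(Q_PENDING_VAL) == WAIT ? 0 : 1;
	}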
* [PATCH v10 07/19] qspinlock: Use a simple write to grab the lock, if applicable
2014-05-07 15:01 ` Waiman Long
@ 2014-05-07 15:01 ` Waiman Long
-1 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Waiman Long
Currently, atomic_cmpxchg() is used to get the lock. However, this is
not really necessary if there is more than one task in the queue and
the queue head doesn't need to reset the queue code word. For that case,
a simple write to set the lock bit is enough as the queue head will
be the only one eligible to get the lock as long as it checks that
both the lock and pending bits are not set. The current pending bit
waiting code will ensure that the bit will not be set once the
queue code word (tail) in the lock is set.
With that change, there is some slight improvement in the performance
of the queue spinlock in the 5M loop micro-benchmark run on a 4-socket
Westmere-EX machine, as shown in the tables below.
[Standalone/Embedded - same node]
# of tasks Before patch After patch %Change
---------- ----------- ---------- -------
3 2324/2321 2248/2265 -3%/-2%
4 2890/2896 2819/2831 -2%/-2%
5 3611/3595 3522/3512 -2%/-2%
6 4281/4276 4173/4160 -3%/-3%
7 5018/5001 4875/4861 -3%/-3%
8 5759/5750 5563/5568 -3%/-3%
[Standalone/Embedded - different nodes]
# of tasks Before patch After patch %Change
---------- ----------- ---------- -------
3 12242/12237 12087/12093 -1%/-1%
4 10688/10696 10507/10521 -2%/-2%
It was also found that this change produced a much bigger performance
improvement in the newer IvyBridge-EX chip and essentially closed
the performance gap between the ticket spinlock and the queue spinlock.
The disk workload of the AIM7 benchmark was run on a 4-socket
Westmere-EX machine with both ext4 and xfs RAM disks at 3000 users
on a 3.14 based kernel. The results of the test runs were:
AIM7 XFS Disk Test
kernel JPM Real Time Sys Time Usr Time
----- --- --------- -------- --------
ticketlock 5678233 3.17 96.61 5.81
qspinlock 5750799 3.13 94.83 5.97
AIM7 EXT4 Disk Test
kernel JPM Real Time Sys Time Usr Time
----- --- --------- -------- --------
ticketlock 1114551 16.15 509.72 7.11
qspinlock 2184466 8.24 232.99 6.01
The ext4 filesystem run had a much higher spinlock contention than
the xfs filesystem run.
The "ebizzy -m" test was also run with the following results:
kernel records/s Real Time Sys Time Usr Time
----- --------- --------- -------- --------
ticketlock 2075 10.00 216.35 3.49
qspinlock 3023 10.00 198.20 4.80
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
kernel/locking/qspinlock.c | 61 +++++++++++++++++++++++++++++++------------
1 files changed, 44 insertions(+), 17 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index e734acb..0ee1a23 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -94,23 +94,29 @@ static inline struct mcs_spinlock *decode_tail(u32 tail)
* can allow better optimization of the lock acquisition for the pending
* bit holder.
*/
-#if _Q_PENDING_BITS == 8
-
struct __qspinlock {
union {
atomic_t val;
- struct {
#ifdef __LITTLE_ENDIAN
+ u8 locked;
+ struct {
u16 locked_pending;
u16 tail;
+ };
#else
+ struct {
u16 tail;
u16 locked_pending;
-#endif
};
+ struct {
+ u8 reserved[3];
+ u8 locked;
+ };
+#endif
};
};
+#if _Q_PENDING_BITS == 8
/**
* clear_pending_set_locked - take ownership and clear the pending bit.
* @lock: Pointer to queue spinlock structure
@@ -200,6 +206,22 @@ xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval)
#endif /* _Q_PENDING_BITS == 8 */
/**
+ * get_qlock - Set the lock bit and own the lock
+ * @lock: Pointer to queue spinlock structure
+ *
+ * This routine should only be called when the caller is the only one
+ * entitled to acquire the lock.
+ */
+static __always_inline void get_qlock(struct qspinlock *lock)
+{
+ struct __qspinlock *l = (void *)lock;
+
+ barrier();
+ ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
+ barrier();
+}
+
+/**
* trylock_pending - try to acquire queue spinlock using the pending bit
* @lock : Pointer to queue spinlock structure
* @pval : Pointer to value of the queue spinlock 32-bit word
@@ -321,7 +343,7 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
{
struct mcs_spinlock *prev, *next, *node;
- u32 new, old, tail;
+ u32 old, tail;
int idx;
BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
@@ -366,10 +388,13 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
/*
* we're at the head of the waitqueue, wait for the owner & pending to
* go away.
+ * A load-acquire is used here because the get_qlock()
+ * function below may not provide a full memory barrier.
*
* *,x,y -> *,0,0
*/
- while ((val = atomic_read(&lock->val)) & _Q_LOCKED_PENDING_MASK)
+ while ((val = smp_load_acquire(&lock->val.counter))
+ & _Q_LOCKED_PENDING_MASK)
arch_mutex_cpu_relax();
/*
@@ -377,15 +402,19 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
*
* n,0,0 -> 0,0,1 : lock, uncontended
* *,0,0 -> *,0,1 : lock, contended
+ *
+ * If the queue head is the only one in the queue (lock value == tail),
+ * clear the tail code and grab the lock. Otherwise, we only need
+ * to grab the lock.
*/
for (;;) {
- new = _Q_LOCKED_VAL;
- if (val != tail)
- new |= val;
-
- old = atomic_cmpxchg(&lock->val, val, new);
- if (old == val)
+ if (val != tail) {
+ get_qlock(lock);
break;
+ }
+ old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
+ if (old == val)
+ goto release; /* No contention */
val = old;
}
@@ -393,12 +422,10 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
/*
* contended path; wait for next, release.
*/
- if (new != _Q_LOCKED_VAL) {
- while (!(next = ACCESS_ONCE(node->next)))
- arch_mutex_cpu_relax();
+ while (!(next = ACCESS_ONCE(node->next)))
+ arch_mutex_cpu_relax();
- arch_mcs_spin_unlock_contended(&next->locked);
- }
+ arch_mcs_spin_unlock_contended(&next->locked);
release:
/*
--
1.7.1
^ permalink raw reply related [flat|nested] 163+ messages in thread
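The two head-of-queue acquisition paths introduced here can be sketched
with GCC's __atomic builtins standing in for the kernel primitives (the
union and function names are illustrative; little-endian assumed):
	#include <stdint.h>

	union qword {
		uint32_t val;
		uint8_t  locked;	/* bits 0-7 on little-endian */
	};

	void head_acquire(union qword *q, uint32_t tail)
	{
		for (;;) {
			/* locked and pending are already 0 here */
			uint32_t val = __atomic_load_n(&q->val,
						       __ATOMIC_ACQUIRE);

			if (val != tail) {
				/*
				 * Others are queued behind us, so the
				 * tail code word must be preserved: a
				 * simple byte store of the locked value
				 * is enough (the get_qlock() path).
				 * Relaxed here; the ordering comes from
				 * the load-acquire in the wait loop.
				 */
				__atomic_store_n(&q->locked, 1,
						 __ATOMIC_RELAXED);
				return;
			}
			/*
			 * We are the only waiter: the tail must be
			 * cleared together with setting locked, which
			 * still requires a cmpxchg in case a new
			 * waiter has just queued up behind us.
			 */
			if (__atomic_compare_exchange_n(&q->val, &val, 1u,
							0, __ATOMIC_ACQUIRE,
							__ATOMIC_RELAXED))
				return;
		}
	}

	int main(void)
	{
		union qword q = { .val = 1u << 16 };	/* tail only */

		head_acquire(&q, 1u << 16);	/* head is alone */
		return q.val == 1u ? 0 : 1;
	}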
* [PATCH v10 08/19] qspinlock: Make a new qnode structure to support virtualization
2014-05-07 15:01 ` Waiman Long
@ 2014-05-07 15:01 ` Waiman Long
-1 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Waiman Long
In order to support additional virtualization features such as unfair
locks and para-virtualized spinlocks, it is necessary to store additional
CPU-specific data in the queue node structure. As a result, a new qnode
structure is created, and the mcs_spinlock structure becomes a member
of the new structure.
It is also necessary to expand arch_mcs_spin_lock_contended() into its
underlying while loop, as additional code will need to be inserted
into that loop.
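As a minimal sketch of the resulting shape (the placeholder comment is
hypothetical; later patches add the real per-CPU fields):
	struct qnode {
		struct mcs_spinlock mcs; /* kept first so a qnode pointer
					    can double as an mcs_spinlock
					    pointer */
		/* CPU-specific fields for unfair/PV support go here */
	};
	/* arch_mcs_spin_lock_contended(&node->mcs.locked) open-coded,
	 * so extra work can later be inserted into the wait loop:
	 */
	while (!smp_load_acquire(&node->mcs.locked))
		arch_mutex_cpu_relax();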
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
kernel/locking/qspinlock.c | 36 +++++++++++++++++++++++-------------
1 files changed, 23 insertions(+), 13 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 0ee1a23..e98d7d4 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -57,12 +57,21 @@
#include "mcs_spinlock.h"
/*
+ * To have additional features for better virtualization support, it is
+ * necessary to store additional data in the queue node structure. So
+ * a new queue node structure will have to be defined and used here.
+ */
+struct qnode {
+ struct mcs_spinlock mcs;
+};
+
+/*
* Per-CPU queue node structures; we can never have more than 4 nested
* contexts: task, softirq, hardirq, nmi.
*
* Exactly fits one cacheline.
*/
-static DEFINE_PER_CPU_ALIGNED(struct mcs_spinlock, mcs_nodes[4]);
+static DEFINE_PER_CPU_ALIGNED(struct qnode, qnodes[4]);
/*
* We must be able to distinguish between no-tail and the tail at 0:0,
@@ -79,12 +88,12 @@ static inline u32 encode_tail(int cpu, int idx)
return tail;
}
-static inline struct mcs_spinlock *decode_tail(u32 tail)
+static inline struct qnode *decode_tail(u32 tail)
{
int cpu = (tail >> _Q_TAIL_CPU_OFFSET) - 1;
int idx = (tail & _Q_TAIL_IDX_MASK) >> _Q_TAIL_IDX_OFFSET;
- return per_cpu_ptr(&mcs_nodes[idx], cpu);
+ return per_cpu_ptr(&qnodes[idx], cpu);
}
#define _Q_LOCKED_PENDING_MASK (_Q_LOCKED_MASK | _Q_PENDING_MASK)
@@ -342,7 +351,7 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
*/
void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
{
- struct mcs_spinlock *prev, *next, *node;
+ struct qnode *prev, *next, *node;
u32 old, tail;
int idx;
@@ -351,13 +360,13 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
if (trylock_pending(lock, &val))
return; /* Lock acquired */
- node = this_cpu_ptr(&mcs_nodes[0]);
- idx = node->count++;
+ node = this_cpu_ptr(&qnodes[0]);
+ idx = node->mcs.count++;
tail = encode_tail(smp_processor_id(), idx);
node += idx;
- node->locked = 0;
- node->next = NULL;
+ node->mcs.locked = 0;
+ node->mcs.next = NULL;
/*
* We touched a (possibly) cold cacheline in the per-cpu queue node;
@@ -380,9 +389,10 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
*/
if (old & _Q_TAIL_MASK) {
prev = decode_tail(old);
- ACCESS_ONCE(prev->next) = node;
+ ACCESS_ONCE(prev->mcs.next) = (struct mcs_spinlock *)node;
- arch_mcs_spin_lock_contended(&node->locked);
+ while (!smp_load_acquire(&node->mcs.locked))
+ arch_mutex_cpu_relax();
}
/*
@@ -422,15 +432,15 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
/*
* contended path; wait for next, release.
*/
- while (!(next = ACCESS_ONCE(node->next)))
+ while (!(next = (struct qnode *)ACCESS_ONCE(node->mcs.next)))
arch_mutex_cpu_relax();
- arch_mcs_spin_unlock_contended(&next->locked);
+ arch_mcs_spin_unlock_contended(&next->mcs.locked);
release:
/*
* release the node
*/
- this_cpu_dec(mcs_nodes[0].count);
+ this_cpu_dec(qnodes[0].mcs.count);
}
EXPORT_SYMBOL(queue_spin_lock_slowpath);
--
1.7.1
^ permalink raw reply related [flat|nested] 163+ messages in thread
* [PATCH v10 09/19] qspinlock: Prepare for unfair lock support
2014-05-07 15:01 ` Waiman Long
@ 2014-05-07 15:01 ` Waiman Long
-1 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Waiman Long
If unfair locks are supported, the lock acquisition loop at the end of
the queue_spin_lock_slowpath() function may need to detect that the
lock can be stolen. Code is added for the stolen-lock detection.
A new qhead macro is also defined as shorthand for mcs.locked.
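For readability, the lock acquisition loop as it looks after this patch,
assembled from the hunks below:
	for (;;) {
		if (val != tail) {
			/* others queued behind us: just set the lock byte */
			if (get_qlock(lock))
				break;
			else		/* lock was stolen, wait again */
				goto retry_queue_wait;
		}
		/* sole queue entry: clear the tail code and grab the lock */
		old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
		if (old == val)
			goto release;	/* No contention */
		else if (old & _Q_LOCKED_MASK)
			goto retry_queue_wait;	/* lock was stolen */
		val = old;
	}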
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
kernel/locking/qspinlock.c | 26 +++++++++++++++++++-------
1 files changed, 19 insertions(+), 7 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index e98d7d4..9e7659e 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -64,6 +64,7 @@
struct qnode {
struct mcs_spinlock mcs;
};
+#define qhead mcs.locked /* The queue head flag */
/*
* Per-CPU queue node structures; we can never have more than 4 nested
@@ -216,18 +217,20 @@ xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval)
/**
* get_qlock - Set the lock bit and own the lock
- * @lock: Pointer to queue spinlock structure
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 otherwise
*
* This routine should only be called when the caller is the only one
* entitled to acquire the lock.
*/
-static __always_inline void get_qlock(struct qspinlock *lock)
+static __always_inline int get_qlock(struct qspinlock *lock)
{
struct __qspinlock *l = (void *)lock;
barrier();
ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
barrier();
+ return 1;
}
/**
@@ -365,7 +368,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
tail = encode_tail(smp_processor_id(), idx);
node += idx;
- node->mcs.locked = 0;
+ node->qhead = 0;
node->mcs.next = NULL;
/*
@@ -391,7 +394,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
prev = decode_tail(old);
ACCESS_ONCE(prev->mcs.next) = (struct mcs_spinlock *)node;
- while (!smp_load_acquire(&node->mcs.locked))
+ while (!smp_load_acquire(&node->qhead))
arch_mutex_cpu_relax();
}
@@ -403,6 +406,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
*
* *,x,y -> *,0,0
*/
+retry_queue_wait:
while ((val = smp_load_acquire(&lock->val.counter))
& _Q_LOCKED_PENDING_MASK)
arch_mutex_cpu_relax();
@@ -419,12 +423,20 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
*/
for (;;) {
if (val != tail) {
- get_qlock(lock);
- break;
+ /*
+ * The get_qlock() function will fail only if the
+ * lock was stolen.
+ */
+ if (get_qlock(lock))
+ break;
+ else
+ goto retry_queue_wait;
}
old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
if (old == val)
goto release; /* No contention */
+ else if (old & _Q_LOCKED_MASK)
+ goto retry_queue_wait;
val = old;
}
@@ -435,7 +447,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
while (!(next = (struct qnode *)ACCESS_ONCE(node->mcs.next)))
arch_mutex_cpu_relax();
- arch_mcs_spin_unlock_contended(&next->mcs.locked);
+ arch_mcs_spin_unlock_contended(&next->qhead);
release:
/*
--
1.7.1
^ permalink raw reply related [flat|nested] 163+ messages in thread
* [PATCH v10 10/19] qspinlock, x86: Allow unfair spinlock in a virtual guest
2014-05-07 15:01 ` Waiman Long
@ 2014-05-07 15:01 ` Waiman Long
-1 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Waiman Long
Locking is always an issue in a virtualized environment because of 2
different types of problems:
1) Lock holder preemption
2) Lock waiter preemption
One solution to the lock waiter preemption problem is to allow unfair
lock in a virtualized environment. In this case, a new lock acquirer
can come and steal the lock if the next-in-line CPU to get the lock
is scheduled out.
A simple unfair lock is the test-and-set byte lock, where a lock
acquirer constantly spins on the lock word and attempts to grab it
when the lock is freed (see the sketch below). This simple unfair
lock has 2 main problems:
1) The constant spinning on the lock word puts a lot of contention
traffic on the affected cacheline, thus slowing down tasks that
need to access the cacheline.
2) Lock starvation is a real possibility, especially if the number of
virtual CPUs is large.
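For illustration, a test-and-set byte lock of this kind might look as
follows (a minimal sketch, not the exact code used in the tests below):
	struct tas_lock {
		u8 locked;	/* 0 = free, 1 = held */
	};
	static inline void tas_lock(struct tas_lock *l)
	{
		/* every waiter hammers the same cacheline ... */
		while (cmpxchg(&l->locked, 0, 1) != 0)
			cpu_relax();
		/* ... and any waiter may win, so others can starve */
	}
	static inline void tas_unlock(struct tas_lock *l)
	{
		smp_store_release(&l->locked, 0);
	}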
A simple unfair queue spinlock can be implemented by allowing lock
stealing in the fast path. The slowpath will still be the same as
before and all the pending lock acquirers will have to wait in the
queue in FIFO order. This cannot completely solve the lock waiter
preemption problem, but it does help to alleviate its impact.
To illustrate the performance impact of the various approaches, the
disk workload of the AIM7 benchmark and the ebizzy test were run on
a 4-socket 40-core Westmere-EX system (bare metal, HT off, ramdisk)
on a 3.14 based kernel. The table below shows the performance
of the different kernel flavors.
AIM7 XFS Disk Test
  kernel                     JPM      Real Time   Sys Time   Usr Time
  ------                     ---      ---------   --------   --------
  ticketlock               5678233      3.17        96.61      5.81
  qspinlock                5750799      3.13        94.83      5.97
  simple test-and-set      5625000      3.20        98.29      5.93
  simple unfair qspinlock  5750799      3.13        95.91      5.98
AIM7 EXT4 Disk Test
  kernel                     JPM      Real Time   Sys Time   Usr Time
  ------                     ---      ---------   --------   --------
  ticketlock               1114551     16.15       509.72      7.11
  qspinlock                2184466      8.24       232.99      6.01
  simple test-and-set       593081     30.35       967.55      9.00
  simple unfair qspinlock  2292994      7.85       222.84      5.89
Ebizzy -m test
  kernel                   records/s  Real Time   Sys Time   Usr Time
  ------                   ---------  ---------   --------   --------
  ticketlock                  2075      10.00      216.35      3.49
  qspinlock                   3023      10.00      198.20      4.80
  simple test-and-set         1667      10.00      198.93      2.89
  simple unfair qspinlock     2915      10.00      165.68      4.31
The disk-xfs workload spent only about 2.88% of CPU time in
_raw_spin_lock(), whereas the disk-ext4 workload spent 57.8% of CPU
time in _raw_spin_lock(). It can be seen that there wasn't much
difference in performance under the low spinlock contention of the
disk-xfs workload. With heavy spinlock contention, the performance
of the simple test-and-set lock can plummet compared with the ticket
and queue spinlocks.
An unfair lock in a native environment is generally not a good idea,
as there is a possibility of lock starvation on a heavily contended lock.
This patch adds a new configuration option for the x86 architecture
to enable the use of unfair queue spinlock (PARAVIRT_UNFAIR_LOCKS) in
a para-virtualized guest. A jump label (paravirt_unfairlocks_enabled)
is used to switch between a fair and an unfair version of the spinlock
code. This jump label will only be enabled in a virtual guest where
the X86_FEATURE_HYPERVISOR feature bit is set.
Enabling this configuration option decreases the performance of an
uncontended lock-unlock operation by about 1-2%, mainly due to the
use of a static key. However, uncontended lock-unlock operations are
really just a tiny percentage of a real workload, so there should be
no noticeable change in application performance.
With the unfair locking activated on a bare-metal 4-socket Westmere-EX
box, the execution times (in ms) of a spinlock micro-benchmark were
as follows:
  # of     Ticket    Fair queue   Unfair simple   Unfair
  tasks     lock        lock        queue lock    byte lock
  -----    ------    ----------   -------------   ---------
    1        135         135            137           137
    2        890        1082            421           718
    3       1932        2248            708          1263
    4       2829        2819           1030          1916
    5       3834        3522           1323          2327
    6       4963        4173           1723          2938
    7       6299        4875           2067          3292
    8       7691        5563           2360          3768
Executing one task per node, the performance data were:
  # of     Ticket    Fair queue   Unfair simple   Unfair
  nodes     lock        lock        queue lock    byte lock
  -----    ------    ----------   -------------   ---------
    1        135         135            137           137
    2       4603        1034            670           766
    3      10940       12087           1389          1934
    4      21555       10507           1869          3731
In general, the shorter the critical section, the better the
performance benefit of an unfair lock. For a large critical section,
however, there may not be much benefit.
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
arch/x86/Kconfig | 11 +++++
arch/x86/include/asm/qspinlock.h | 79 ++++++++++++++++++++++++++++++++++
arch/x86/kernel/Makefile | 1 +
arch/x86/kernel/paravirt-spinlocks.c | 26 +++++++++++
kernel/locking/qspinlock.c | 8 +++
5 files changed, 125 insertions(+), 0 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 95c9c4e..2f06976 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -585,6 +585,17 @@ config PARAVIRT_SPINLOCKS
If you are unsure how to answer this question, answer Y.
+config PARAVIRT_UNFAIR_LOCKS
+ bool "Enable unfair locks in a para-virtualized guest"
+ depends on PARAVIRT && SMP && QUEUE_SPINLOCK
+ depends on !X86_OOSTORE && !X86_PPRO_FENCE
+ ---help---
+ This changes the kernel to use unfair locks in a
+ para-virtualized guest. This will help performance in most
+ cases. However, there is a possibility of lock starvation
+ on a heavily contended lock especially in a large guest
+ with many virtual CPUs.
+
source "arch/x86/xen/Kconfig"
config KVM_GUEST
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index e4a4f5d..19af937 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -5,6 +5,10 @@
#if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE)
+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+extern struct static_key paravirt_unfairlocks_enabled;
+#endif
+
#define queue_spin_unlock queue_spin_unlock
/**
* queue_spin_unlock - release a queue spinlock
@@ -26,4 +30,79 @@ static inline void queue_spin_unlock(struct qspinlock *lock)
#include <asm-generic/qspinlock.h>
+union arch_qspinlock {
+ atomic_t val;
+ u8 locked;
+};
+
+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+/**
+ * queue_spin_trylock_unfair - try to acquire the queue spinlock unfairly
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 if failed
+ */
+static __always_inline int queue_spin_trylock_unfair(struct qspinlock *lock)
+{
+ union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+ if (!qlock->locked && (cmpxchg(&qlock->locked, 0, _Q_LOCKED_VAL) == 0))
+ return 1;
+ return 0;
+}
+
+/**
+ * queue_spin_lock_unfair - acquire a queue spinlock unfairly
+ * @lock: Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_lock_unfair(struct qspinlock *lock)
+{
+ union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+ if (likely(cmpxchg(&qlock->locked, 0, _Q_LOCKED_VAL) == 0))
+ return;
+ /*
+ * Since the lock is now unfair, we should not activate the 2-task
+ * pending bit spinning code path which disallows lock stealing.
+ */
+ queue_spin_lock_slowpath(lock, -1);
+}
+
+/*
+ * Redefine arch_spin_lock and arch_spin_trylock as inline functions that will
+ * jump to the unfair versions if the static key paravirt_unfairlocks_enabled
+ * is true.
+ */
+#undef arch_spin_lock
+#undef arch_spin_trylock
+#undef arch_spin_lock_flags
+
+/**
+ * arch_spin_lock - acquire a queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ */
+static inline void arch_spin_lock(struct qspinlock *lock)
+{
+ if (static_key_false(&paravirt_unfairlocks_enabled))
+ queue_spin_lock_unfair(lock);
+ else
+ queue_spin_lock(lock);
+}
+
+/**
+ * arch_spin_trylock - try to acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 if failed
+ */
+static inline int arch_spin_trylock(struct qspinlock *lock)
+{
+ if (static_key_false(&paravirt_unfairlocks_enabled))
+ return queue_spin_trylock_unfair(lock);
+ else
+ return queue_spin_trylock(lock);
+}
+
+#define arch_spin_lock_flags(l, f) arch_spin_lock(l)
+
+#endif /* CONFIG_PARAVIRT_UNFAIR_LOCKS */
+
#endif /* _ASM_X86_QSPINLOCK_H */
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index f4d9600..b436419 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -88,6 +88,7 @@ obj-$(CONFIG_DEBUG_NMI_SELFTEST) += nmi_selftest.o
obj-$(CONFIG_KVM_GUEST) += kvm.o kvmclock.o
obj-$(CONFIG_PARAVIRT) += paravirt.o paravirt_patch_$(BITS).o
obj-$(CONFIG_PARAVIRT_SPINLOCKS)+= paravirt-spinlocks.o
+obj-$(CONFIG_PARAVIRT_UNFAIR_LOCKS)+= paravirt-spinlocks.o
obj-$(CONFIG_PARAVIRT_CLOCK) += pvclock.o
obj-$(CONFIG_PCSPKR_PLATFORM) += pcspeaker.o
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index bbb6c73..7dfd02d 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -8,6 +8,7 @@
#include <asm/paravirt.h>
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
struct pv_lock_ops pv_lock_ops = {
#ifdef CONFIG_SMP
.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
@@ -18,3 +19,28 @@ EXPORT_SYMBOL(pv_lock_ops);
struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;
EXPORT_SYMBOL(paravirt_ticketlocks_enabled);
+#endif
+
+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+struct static_key paravirt_unfairlocks_enabled = STATIC_KEY_INIT_FALSE;
+EXPORT_SYMBOL(paravirt_unfairlocks_enabled);
+
+#include <linux/init.h>
+#include <asm/cpufeature.h>
+
+/*
+ * Enable unfair lock only if it is running under a hypervisor
+ */
+static __init int unfair_locks_init_jump(void)
+{
+ if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
+ return 0;
+
+ static_key_slow_inc(&paravirt_unfairlocks_enabled);
+ printk(KERN_INFO "Unfair spinlock enabled\n");
+
+ return 0;
+}
+early_initcall(unfair_locks_init_jump);
+
+#endif
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 9e7659e..10e87e1 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -227,6 +227,14 @@ static __always_inline int get_qlock(struct qspinlock *lock)
{
struct __qspinlock *l = (void *)lock;
+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+ if (static_key_false(&paravirt_unfairlocks_enabled))
+ /*
+ * Need to use an atomic operation to get the lock when
+ * lock stealing can happen.
+ */
+ return cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0;
+#endif
barrier();
ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
barrier();
--
1.7.1
^ permalink raw reply related [flat|nested] 163+ messages in thread
* [PATCH v10 11/19] qspinlock: Split the MCS queuing code into a separate slowerpath
2014-05-07 15:01 ` Waiman Long
@ 2014-05-07 15:01 ` Waiman Long
-1 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Waiman Long
With the pending addition of more code to support unfair locks and
PV spinlocks, the slowpath function becomes complex enough that the
scratch (caller-saved) registers available on x86-64 no longer
suffice, so additional callee-saved registers have to be used. This
has the downside of requiring those registers to be saved and
restored in the prologue and epilogue of the slowpath function,
slowing down the nominally faster pending bit and trylock code path
at its beginning.
This patch separates the actual MCS queuing code out into a
slowerpath function. This avoids slowing down the pending bit and
trylock code path at the expense of a little additional overhead in
the MCS queuing code path. The sketch below illustrates the shape of
the split.
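A minimal, kernel-independent sketch of that shape (the names
lock_slowpath/queue_slowerpath and the use of GCC's __sync builtin are
illustrative assumptions, not the patch's actual code); the point is that
the noinline boundary confines the callee-saved register spills to the
out-of-line function:

/* Sketch only: the hot path stays register-light. */
static __attribute__((noinline)) void queue_slowerpath(int *lock, int val)
{
	/*
	 * Stand-in for the register-hungry MCS queuing work; its
	 * prologue/epilogue register saves are paid only on entry here.
	 */
	while (!__sync_bool_compare_and_swap(lock, 0, val))
		;
}

void lock_slowpath(int *lock, int val)
{
	/* Cheap trylock attempt first: few live registers, lean prologue. */
	if (__sync_bool_compare_and_swap(lock, 0, val))
		return;

	/* The call and spill overhead is paid only when queuing is needed. */
	queue_slowerpath(lock, val);
}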
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
kernel/locking/qspinlock.c | 120 +++++++++++++++++++++++++------------------
1 files changed, 70 insertions(+), 50 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 10e87e1..a14241e 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -335,57 +335,23 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
}
/**
- * queue_spin_lock_slowpath - acquire the queue spinlock
+ * queue_spin_lock_slowerpath - a slower path for acquiring queue spinlock
* @lock: Pointer to queue spinlock structure
- * @val: Current value of the queue spinlock 32-bit word
- *
- * (queue tail, pending bit, lock bit)
- *
- * fast : slow : unlock
- * : :
- * uncontended (0,0,0) -:--> (0,0,1) ------------------------------:--> (*,*,0)
- * : | ^--------.------. / :
- * : v \ \ | :
- * pending : (0,1,1) +--> (0,1,0) \ | :
- * : | ^--' | | :
- * : v | | :
- * uncontended : (n,x,y) +--> (n,0,0) --' | :
- * queue : | ^--' | :
- * : v | :
- * contended : (*,x,y) +--> (*,0,0) ---> (*,0,1) -' :
- * queue : ^--' :
- *
- * The pending bit processing is in the trylock_pending() function
- * whereas the uncontended and contended queue processing is in the
- * queue_spin_lock_slowpath() function.
+ * @val : Current value of the queue spinlock 32-bit word
+ * @node: Pointer to the queue node
+ * @tail: The tail code
*
+ * The slowerpath is split out from the slowpath so that the pushing
+ * and popping of callee-saved registers, made necessary by the added
+ * complexity of the unfair and PV spinlock code, does not slow down
+ * the nominally faster pending bit and trylock code path. This is why
+ * the function is not inlined.
*/
-void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
+static noinline void queue_spin_lock_slowerpath(struct qspinlock *lock,
+ u32 val, struct qnode *node, u32 tail)
{
- struct qnode *prev, *next, *node;
- u32 old, tail;
- int idx;
-
- BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
-
- if (trylock_pending(lock, &val))
- return; /* Lock acquired */
-
- node = this_cpu_ptr(&qnodes[0]);
- idx = node->mcs.count++;
- tail = encode_tail(smp_processor_id(), idx);
-
- node += idx;
- node->qhead = 0;
- node->mcs.next = NULL;
-
- /*
- * We touched a (possibly) cold cacheline in the per-cpu queue node;
- * attempt the trylock once more in the hope someone let go while we
- * weren't watching.
- */
- if (queue_spin_trylock(lock))
- goto release;
+ struct qnode *prev, *next;
+ u32 old;
/*
* we already touched the queueing cacheline; don't bother with pending
@@ -442,7 +408,7 @@ retry_queue_wait:
}
old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
if (old == val)
- goto release; /* No contention */
+ return; /* No contention */
else if (old & _Q_LOCKED_MASK)
goto retry_queue_wait;
@@ -450,14 +416,68 @@ retry_queue_wait:
}
/*
- * contended path; wait for next, release.
+ * contended path; wait for next, return.
*/
while (!(next = (struct qnode *)ACCESS_ONCE(node->mcs.next)))
arch_mutex_cpu_relax();
arch_mcs_spin_unlock_contended(&next->qhead);
+}
+
+/**
+ * queue_spin_lock_slowpath - acquire the queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ * @val: Current value of the queue spinlock 32-bit word
+ *
+ * (queue tail, pending bit, lock bit)
+ *
+ * fast : slow : unlock
+ * : :
+ * uncontended (0,0,0) -:--> (0,0,1) ------------------------------:--> (*,*,0)
+ * : | ^--------.------. / :
+ * : v \ \ | :
+ * pending : (0,1,1) +--> (0,1,0) \ | :
+ * : | ^--' | | :
+ * : v | | :
+ * uncontended : (n,x,y) +--> (n,0,0) --' | :
+ * queue : | ^--' | :
+ * : v | :
+ * contended : (*,x,y) +--> (*,0,0) ---> (*,0,1) -' :
+ * queue : ^--' :
+ *
+ * This slowpath now contains only the faster pending bit and trylock
+ * code; the slower queuing code lives in the slowerpath function.
+ *
+ * The pending bit processing is in the trylock_pending() function
+ * whereas the uncontended and contended queue processing is in the
+ * queue_spin_lock_slowerpath() function.
+ */
+void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
+{
+ struct qnode *node;
+ u32 tail, idx;
+
+ BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
+
+ if (trylock_pending(lock, &val))
+ return; /* Lock acquired */
+
+ node = this_cpu_ptr(&qnodes[0]);
+ idx = node->mcs.count++;
+ tail = encode_tail(smp_processor_id(), idx);
+
+ node += idx;
+ node->qhead = 0;
+ node->mcs.next = NULL;
+
+ /*
+ * We touched a (possibly) cold cacheline in the per-cpu queue node;
+ * attempt the trylock once more in the hope someone let go while we
+ * weren't watching.
+ */
+ if (!queue_spin_trylock(lock))
+ queue_spin_lock_slowerpath(lock, val, node, tail);
-release:
/*
* release the node
*/
--
1.7.1
* [PATCH v10 12/19] unfair qspinlock: Variable frequency lock stealing mechanism
2014-05-07 15:01 ` Waiman Long
@ 2014-05-07 15:01 ` Waiman Long
-1 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Waiman Long
In order to fully resolve the lock waiter preemption problem in virtual
guests, it is necessary to enable lock stealing in the lock waiters.
A simple test-and-set lock, however, has 2 main problems:
1) The constant spinning on the lock word puts a lot of cacheline
contention traffic on the affected cacheline, thus slowing tasks
that need to access the cacheline.
2) Lock starvation is a real possibility, especially if the number of
virtual CPUs is large.
To alleviate these problems, this patch implements a variable frequency
(from 1/8 to 1/1024) lock stealing mechanism for the lock waiters in
the queue. The node next to the queue head tries to steal the lock once
every 8 iterations of the pause loop. The next one in the queue has half
that lock stealing frequency (once every 16 iterations), and so on,
until the frequency bottoms out at once every 1024 iterations.
This mechanism reduces the cacheline contention problem on the lock
word while trying to maintain as much of a FIFO order as possible. The
mask arithmetic is sketched below.
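That arithmetic can be sketched in a few lines of user-space C; the
helper name and the printf harness are assumptions for illustration,
but the constants and the (count & mask) == mask test mirror the patch:

#include <stdio.h>

#define LSTEAL_MIN_MASK	((1 << 3) - 1)	/* try to steal once every 8 spins */
#define LSTEAL_MAX_MASK	((1 << 10) - 1)	/* cap: once every 1024 spins */

/* Mask for a waiter that queued behind a waiter holding prev_mask. */
static int next_lsteal_mask(int prev_mask, int prev_is_qhead)
{
	int mask = prev_is_qhead ? LSTEAL_MIN_MASK : (prev_mask << 1) + 1;

	return mask > LSTEAL_MAX_MASK ? LSTEAL_MAX_MASK : mask;
}

int main(void)
{
	int mask = LSTEAL_MIN_MASK;	/* node right behind the queue head */
	int pos;

	/* In the wait loop, a steal is attempted when (cnt & mask) == mask. */
	for (pos = 1; pos <= 9; pos++) {
		printf("queue position %d: steal attempt every %d spins\n",
		       pos, mask + 1);
		mask = next_lsteal_mask(mask, 0);
	}
	return 0;
}

Running it prints a doubling interval per queue position (8, 16, ...,
1024) that stays pinned at 1024 for positions further back, matching the
1/8 to 1/1024 range stated above.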
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
kernel/locking/qspinlock.c | 147 +++++++++++++++++++++++++++++++++++++++++++-
1 files changed, 146 insertions(+), 1 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index a14241e..06dd486 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -63,6 +63,11 @@
*/
struct qnode {
struct mcs_spinlock mcs;
+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+ int lsteal_mask; /* Lock stealing frequency mask */
+ u32 prev_tail; /* Tail code of previous node */
+ struct qnode *qprev; /* Previous queue node addr */
+#endif
};
#define qhead mcs.locked /* The queue head flag */
@@ -215,6 +220,139 @@ xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval)
}
#endif /* _Q_PENDING_BITS == 8 */
+/*
+ ************************************************************************
+ * Inline functions for supporting unfair queue lock *
+ ************************************************************************
+ */
+/*
+ * Unfair lock support in a virtualized guest
+ *
+ * An unfair lock can be implemented using a simple test-and-set lock like
+ * what is being done in a read-write lock. This simple scheme has 2 major
+ * problems:
+ * 1) It needs constant reading and occasionally writing to the lock word
+ * thus putting a lot of cacheline contention traffic on the affected
+ * cacheline.
+ * 2) Lock starvation is a real possibility especially if the number of
+ * virtual CPUs is large.
+ *
+ * To reduce the undesirable side effects of an unfair lock, the queue
+ * unfair spinlock implements a more elaborate scheme. Lock stealing is
+ * allowed in the following places:
+ * 1) In the spin_lock and spin_trylock fastpaths
+ * 2) When spinning in the waiter queue before becoming the queue head
+ *
+ * A lock acquirer has only one chance of stealing the lock in the spin_lock
+ * and spin_trylock fastpath. If the attempt fails for spin_lock, the task
+ * will be queued in the wait queue.
+ *
+ * Even in the wait queue, the task can still attempt to steal the lock
+ * periodically, at a frequency roughly inversely and logarithmically
+ * proportional to its distance from the queue head. In other words, the
+ * closer it is to the queue head, the better its chance of stealing the
+ * lock. This
+ * scheme reduces the load on the lock cacheline while trying to maintain
+ * a somewhat FIFO way of getting the lock so as to reduce the chance of lock
+ * starvation.
+ */
+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+#define DEF_LOOP_CNT(c) int c = 0
+#define INC_LOOP_CNT(c) (c)++
+#define LOOP_CNT(c) c
+#define LSTEAL_MIN (1 << 3)
+#define LSTEAL_MAX (1 << 10)
+#define LSTEAL_MIN_MASK (LSTEAL_MIN - 1)
+#define LSTEAL_MAX_MASK (LSTEAL_MAX - 1)
+
+/**
+ * unfair_init_vars - initialize unfair relevant fields in queue node structure
+ * @node: Current queue node address
+ */
+static inline void unfair_init_vars(struct qnode *node)
+{
+ node->qprev = NULL;
+ node->prev_tail = 0;
+ node->lsteal_mask = LSTEAL_MIN_MASK;
+}
+
+/**
+ * unfair_set_vars - set unfair related fields in the queue node structure
+ * @node : Current queue node address
+ * @prev : Previous queue node address
+ * @prev_tail: Previous tail code
+ */
+static inline void
+unfair_set_vars(struct qnode *node, struct qnode *prev, u32 prev_tail)
+{
+ if (!static_key_false(&paravirt_unfairlocks_enabled))
+ return;
+
+ node->qprev = prev;
+ node->prev_tail = prev_tail;
+ /*
+ * This node will spin twice as many times as the previous node
+ * before attempting to steal the lock, until the spin count
+ * reaches a maximum.
+ */
+ node->lsteal_mask = prev->qhead ? LSTEAL_MIN_MASK :
+ (prev->lsteal_mask << 1) + 1;
+ if (node->lsteal_mask > LSTEAL_MAX_MASK)
+ node->lsteal_mask = LSTEAL_MAX_MASK;
+ /* Make sure the new fields are visible to others */
+ smp_wmb();
+}
+
+/**
+ * unfair_get_lock - try to steal the lock periodically
+ * @lock : Pointer to queue spinlock structure
+ * @node : Current queue node address
+ * @tail : My tail code value
+ * @count: Loop count
+ * Return: true if the lock has been stolen, false otherwise
+ *
+ * When a true value is returned, the caller will have to notify the next
+ * node only if the qhead flag is set and the next pointer in the queue
+ * node is not NULL.
+ */
+static noinline int
+unfair_get_lock(struct qspinlock *lock, struct qnode *node, u32 tail, int count)
+{
+ u32 prev_tail;
+ int isqhead;
+ struct qnode *next;
+
+ if (!static_key_false(&paravirt_unfairlocks_enabled) ||
+ ((count & node->lsteal_mask) != node->lsteal_mask))
+ return false;
+
+ if (!queue_spin_trylock_unfair(lock)) {
+ /*
+ * Lock stealing failed; re-adjust the lsteal mask so that
+ * it is about double that of the previous node.
+ */
+ struct qnode *prev = node->qprev;
+
+ node->lsteal_mask = prev->qhead ? LSTEAL_MIN_MASK :
+ (prev->lsteal_mask << 1) + 1;
+ if (node->lsteal_mask > LSTEAL_MAX_MASK)
+ node->lsteal_mask = LSTEAL_MAX_MASK;
+ return false;
+ }
+ queue_spin_unlock(lock);
+ return false;
+}
+
+#else /* CONFIG_PARAVIRT_UNFAIR_LOCKS */
+#define DEF_LOOP_CNT(c)
+#define INC_LOOP_CNT(c)
+#define LOOP_CNT(c) 0
+
+static void unfair_init_vars(struct qnode *node) {}
+static void unfair_set_vars(struct qnode *node, struct qnode *prev,
+ u32 prev_tail) {}
+static int unfair_get_lock(struct qspinlock *lock, struct qnode *node,
+ u32 tail, int count) { return false; }
+#endif /* CONFIG_PARAVIRT_UNFAIR_LOCKS */
+
/**
* get_qlock - Set the lock bit and own the lock
* @lock : Pointer to queue spinlock structure
@@ -365,11 +503,17 @@ static noinline void queue_spin_lock_slowerpath(struct qspinlock *lock,
* if there was a previous node; link it and wait.
*/
if (old & _Q_TAIL_MASK) {
+ DEF_LOOP_CNT(cnt);
+
prev = decode_tail(old);
+ unfair_set_vars(node, prev, old);
ACCESS_ONCE(prev->mcs.next) = (struct mcs_spinlock *)node;
- while (!smp_load_acquire(&node->qhead))
+ while (!smp_load_acquire(&node->qhead)) {
+ INC_LOOP_CNT(cnt);
+ unfair_get_lock(lock, node, tail, LOOP_CNT(cnt));
arch_mutex_cpu_relax();
+ }
}
/*
@@ -469,6 +613,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
node += idx;
node->qhead = 0;
node->mcs.next = NULL;
+ unfair_init_vars(node);
/*
* We touched a (possibly) cold cacheline in the per-cpu queue node;
--
1.7.1
* [PATCH v10 13/19] unfair qspinlock: Enable lock stealing in lock waiters
2014-05-07 15:01 ` Waiman Long
@ 2014-05-07 15:01 ` Waiman Long
-1 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Waiman Long
The simple unfair queue lock cannot completely solve the lock waiter
preemption problem: a preempted CPU at the front of the queue blocks
forward progress of all the other CPUs queued behind it. To allow those
CPUs to move forward, it is necessary to enable lock stealing for those
lock waiters as well; the queue-exit step that makes this safe is
sketched after the tables below.
Enabling those lock waiters to try to steal the lock increases the
cacheline pressure on the lock word. As a result, performance can
suffer on a workload with heavy spinlock contention.
The tables below show the performance (jobs/minute) of different lock
flavors of a 3.14-based kernel at 3000 users of the AIM7 disk workload
on a 4-socket Westmere-EX bare-metal system. The ebizzy test was also
run.
AIM7 XFS Disk Test
  kernel                     JPM      Real Time  Sys Time  Usr Time
  ------                     ---      ---------  --------  --------
  ticketlock                 5678233    3.17       96.61     5.81
  qspinlock                  5750799    3.13       94.83     5.97
  simple test-and-set        5625000    3.20       98.29     5.93
  simple unfair qspinlock    5750799    3.13       95.91     5.98
  complex unfair qspinlock   5678233    3.17       97.40     5.93
AIM7 EXT4 Disk Test
  kernel                     JPM      Real Time  Sys Time  Usr Time
  ------                     ---      ---------  --------  --------
  ticketlock                 1114551   16.15      509.72     7.11
  qspinlock                  2184466    8.24      232.99     6.01
  simple test-and-set         593081   30.35      967.55     9.00
  simple unfair qspinlock    2292994    7.85      222.84     5.89
  complex unfair qspinlock    972447   18.51      589.88     6.65
Ebizzy -m test
  kernel                     records/s  Real Time  Sys Time  Usr Time
  ------                     ---------  ---------  --------  --------
  ticketlock                   2075       10.00     216.35     3.49
  qspinlock                    3023       10.00     198.20     4.80
  simple test-and-set          1667       10.00     198.93     2.89
  simple unfair qspinlock      2915       10.00     165.68     4.31
  complex unfair qspinlock     1965       10.00     191.96     3.17
With heavy spinlock contention, the complex unfair lock is faster
than the simple test-and-set lock, but it is still slower than the
baseline ticketlock.
The table below shows the execution times (in ms) of a spinlock
micro-benchmark on the same 4-socket Westmere-EX system.
  # of     Ticket    Fair        Unfair simple  Unfair complex
  tasks    lock      queue lock  queue lock     queue lock
  ------   ------    ----------  -------------  --------------
  1          135         135          137             137
  2          890        1082          421             663
  3         1932        2248          708            1263
  4         2829        2819         1030            1806
  5         3834        3522         1323            2315
  6         4963        4173         1723            2831
  7         6299        4875         2067            2878
  8         7691        5563         2360            3256
Executing one task per node, the performance data were:
  # of     Ticket    Fair        Unfair simple  Unfair complex
  nodes    lock      queue lock  queue lock     queue lock
  ------   ------    ----------  -------------  --------------
  1          135         135          137             137
  2         4603        1034          670             888
  3        10940       12087         1389            2041
  4        21555       10507         1869            4307
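As referenced earlier, the queue-exit step is what makes waiter stealing
safe: a stealer that is still the tail of the queue must atomically swing
the lock's tail code back to its predecessor's before leaving. A minimal
sketch of that step, assuming a toy lock holding only a tail field and
using C11 atomics (the real lock packs the tail together with the locked
and pending bits):

#include <stdatomic.h>
#include <stdbool.h>

struct toy_lock {
	_Atomic unsigned int tail;	/* tail code only, for illustration */
};

/*
 * If this waiter is still the last in the queue (the lock's tail code
 * is its own), atomically swing the tail back to its predecessor's
 * code (0 if this waiter was the queue head).
 */
static bool try_clear_my_tail(struct toy_lock *l, unsigned int my_tail,
			      unsigned int prev_tail)
{
	unsigned int expected = my_tail;

	return atomic_compare_exchange_strong(&l->tail, &expected, prev_tail);
}

If the exchange fails because another waiter has already queued behind,
the stealer must instead wait for its next pointer to be set and splice
its qprev and prev_tail values over to that node, which is what steps 2
and 3 in the patch's unfair_get_lock() do.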
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
kernel/locking/qspinlock.c | 160 ++++++++++++++++++++++++++++++++++++++++++--
1 files changed, 154 insertions(+), 6 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 06dd486..0c86a6f 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -166,6 +166,23 @@ xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval)
return (u32)xchg(&l->tail, tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
}
+/*
+ * cmpxchg_tail - Put in the new tail code if it matches the old one
+ * @lock : Pointer to queue spinlock structure
+ * @old : The old tail code value
+ * @new : The new tail code value
+ * Return: true if the operation succeeds, false otherwise
+ */
+static __always_inline int
+cmpxchg_tail(struct qspinlock *lock, u32 old, u32 new)
+{
+ struct __qspinlock *l = (void *)lock;
+
+ old >>= _Q_TAIL_OFFSET;
+ new >>= _Q_TAIL_OFFSET;
+ return cmpxchg(&l->tail, old, new) == old;
+}
+
#else /* _Q_PENDING_BITS == 8 */
/**
@@ -218,6 +235,35 @@ xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval)
*pval = new;
return old;
}
+
+/*
+ * cmpxchg_tail - Put in the new tail code if it matches the old one
+ * @lock : Pointer to queue spinlock structure
+ * @old : The old tail code value
+ * @new : The new tail code value
+ * Return: true if the operation succeeds, false otherwise
+ *
+ * It is assumed that the caller has grabbed the lock before calling it.
+ */
+static __always_inline int
+cmpxchg_tail(struct qspinlock *lock, u32 old, u32 new)
+{
+ u32 lp = _Q_LOCKED_VAL; /* Lock & pending bits value */
+
+ for (;;) {
+ u32 val = atomic_cmpxchg(&lock->val, old | lp, new | lp);
+
+ /*
+ * If the lock or pending bits somehow change, retry
+ */
+ if ((val & _Q_LOCKED_PENDING_MASK) != lp) {
+ lp = val & _Q_LOCKED_PENDING_MASK;
+ continue;
+ }
+ return val == (old | lp);
+ }
+}
#endif /* _Q_PENDING_BITS == 8 */
/*
@@ -302,6 +348,25 @@ unfair_set_vars(struct qnode *node, struct qnode *prev, u32 prev_tail)
}
/**
+ * unfair_check_and_clear_tail - check the tail value in lock & clear it if
+ * it matches the given tail
+ * @lock : Pointer to queue spinlock structure
+ * @tail : The tail code to be checked against
+ * Return: true if the tail code matches and is cleared, false otherwise
+ */
+static inline int unfair_check_and_clear_tail(struct qspinlock *lock, u32 tail)
+{
+ if (!static_key_false(&paravirt_unfairlocks_enabled))
+ return false;
+
+ /*
+ * Try to clear the current tail code if it matches the given tail
+ */
+ return ((atomic_read(&lock->val) & _Q_TAIL_MASK) == tail) &&
+ cmpxchg_tail(lock, tail, 0);
+}
+
+/**
* unfair_get_lock - try to steal the lock periodically
* @lock : Pointer to queue spinlock structure
* @node : Current queue node address
@@ -313,7 +378,7 @@ unfair_set_vars(struct qnode *node, struct qnode *prev, u32 prev_tail)
* node only if the qhead flag is set and the next pointer in the queue
* node is not NULL.
*/
-static noinline int
+static inline int
unfair_get_lock(struct qspinlock *lock, struct qnode *node, u32 tail, int count)
{
u32 prev_tail;
@@ -337,8 +402,64 @@ unfair_get_lock(struct qspinlock *lock, struct qnode *node, u32 tail, int count)
node->lsteal_mask = LSTEAL_MAX_MASK;
return false;
}
- queue_spin_unlock(lock);
- return false;
+
+ /*
+ * Having stolen the lock, this node needs to remove itself from
+ * the wait queue.
+ * There are 3 steps that need to be done:
+ * 1) If it is at the end of the queue, change the tail code in the
+ * lock to the one before it. If the current node happens to be
+ * the queue head, the previous tail code is 0.
+ * 2) Change the next pointer in the previous queue node to point
+ * to the next one in queue or NULL if it is at the end of queue.
+ * 3) If a next node is present, copy the prev_tail and qprev values
+ * to the next node.
+ */
+ isqhead = ACCESS_ONCE(node->qhead);
+ prev_tail = isqhead ? 0 : node->prev_tail;
+
+ /* Step 1 */
+ if (((atomic_read(&lock->val) & _Q_TAIL_MASK) == tail) &&
+ cmpxchg_tail(lock, tail, prev_tail)) {
+ /*
+ * Successfully changed the tail code back to the previous one.
+ * Now the next pointer in the previous node needs to be cleared,
+ * but only if it contains this queue node's address and the node
+ * is not the queue head. The cmpxchg() call below may fail if a
+ * new task comes along and puts its node address into the next
+ * pointer; whether the operation succeeds or fails doesn't
+ * really matter.
+ */
+ /* Step 2 */
+ if (!isqhead)
+ (void)cmpxchg(&node->qprev->mcs.next, &node->mcs, NULL);
+ node->mcs.next = NULL;
+ return true;
+ }
+
+ /*
+ * A next node has to be present if the lock has a different tail
+ * code. So wait until the next pointer is set.
+ */
+ while (!(next = (struct qnode *)ACCESS_ONCE(node->mcs.next)))
+ arch_mutex_cpu_relax();
+
+ if (!isqhead) {
+ /*
+ * Change the node data only if it is not the queue head
+ * Steps 2 & 3
+ */
+ ACCESS_ONCE(node->qprev->mcs.next) =
+ (struct mcs_spinlock *)next;
+ ACCESS_ONCE(next->qprev) = node->qprev;
+ ACCESS_ONCE(next->prev_tail) = node->prev_tail;
+
+ /*
+ * Make sure all the new node information is visible
+ * before proceeding.
+ */
+ smp_wmb();
+ }
+ return true;
}
#else /* CONFIG_PARAVIRT_UNFAIR_LOCKS */
@@ -351,6 +472,8 @@ static void unfair_set_vars(struct qnode *node, struct qnode *prev,
u32 prev_tail) {}
static int unfair_get_lock(struct qspinlock *lock, struct qnode *node,
u32 tail, int count) { return false; }
+static int unfair_check_and_clear_tail(struct qspinlock *lock, u32 tail)
+ { return false; }
#endif /* CONFIG_PARAVIRT_UNFAIR_LOCKS */
/**
@@ -511,7 +634,16 @@ static noinline void queue_spin_lock_slowerpath(struct qspinlock *lock,
while (!smp_load_acquire(&node->qhead)) {
INC_LOOP_CNT(cnt);
- unfair_get_lock(lock, node, tail, LOOP_CNT(cnt));
+ if (unfair_get_lock(lock, node, tail, LOOP_CNT(cnt))) {
+ /*
+ * Need to notify the next node only if both
+ * the qhead flag and the next pointer in the
+ * queue node are set.
+ */
+ if (unlikely(node->qhead && node->mcs.next))
+ goto notify_next;
+ return;
+ }
arch_mutex_cpu_relax();
}
}
@@ -545,10 +677,25 @@ retry_queue_wait:
* The get_qlock function will only fail if the
* lock was stolen.
*/
- if (get_qlock(lock))
+ if (get_qlock(lock)) {
+ /*
+ * It is possible that in between the reading
+ * of the lock value and the acquisition of
+ * the lock, the next node may have stolen the
+ * lock and be removed from the queue. So
+ * the lock value may contain the tail code
+ * of the current node. We need to recheck
+ * the tail value here to be sure. And if
+ * the tail code was cleared, we don't need
+ * to notify the next node.
+ */
+ if (unlikely(unfair_check_and_clear_tail(lock,
+ tail)))
+ return;
break;
- else
+ } else {
goto retry_queue_wait;
+ }
}
old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
if (old == val)
@@ -562,6 +709,7 @@ retry_queue_wait:
/*
* contended path; wait for next, return.
*/
+notify_next:
while (!(next = (struct qnode *)ACCESS_ONCE(node->mcs.next)))
arch_mutex_cpu_relax();
--
1.7.1
* [PATCH v10 13/19] unfair qspinlock: Enable lock stealing in lock waiters
@ 2014-05-07 15:01 ` Waiman Long
0 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, Waiman Long, Raghavendra K T, Gleb Natapov, kvm,
Scott J Norton, x86, Paolo Bonzini, linux-kernel, virtualization,
Chegu Vinod, David Vrabel, Oleg Nesterov, xen-devel,
Boris Ostrovsky, Paul E. McKenney, Linus Torvalds
The simple unfair queue lock cannot completely solve the lock waiter
preemption problem as a preempted CPU at the front of the queue will
block forward progress in all the other CPUs behind it in the queue.
To allow those CPUs to move forward, it is necessary to enable lock
stealing for those lock waiters as well.
Enabling those lock waiters to try to steal the lock increases the
cacheline pressure on the lock word. As a result, performance can
suffer on a workload with heavy spinlock contention.
The tables below shows the the performance (jobs/minutes) of other
kernel flavors of a 3.14-based kernel at 3000 users of the AIM7 disk
workload on a 4-socket Westmere-EX bare-metal system. The ebizzy test
was run.
AIM7 XFS Disk Test
kernel JPM Real Time Sys Time Usr Time
----- --- --------- -------- --------
ticketlock 5678233 3.17 96.61 5.81
qspinlock 5750799 3.13 94.83 5.97
simple test-and-set 5625000 3.20 98.29 5.93
simple unfair 5750799 3.13 95.91 5.98
qspinlock
complex unfair 5678233 3.17 97.40 5.93
qspinlock
AIM7 EXT4 Disk Test
kernel JPM Real Time Sys Time Usr Time
----- --- --------- -------- --------
ticketlock 1114551 16.15 509.72 7.11
qspinlock 2184466 8.24 232.99 6.01
simple test-and-set 593081 30.35 967.55 9.00
simple unfair 2292994 7.85 222.84 5.89
qspinlock
complex unfair 972447 18.51 589.88 6.65
qspinlock
Ebizzy -m test
kernel records/s Real Time Sys Time Usr Time
----- --------- --------- -------- --------
ticketlock 2075 10.00 216.35 3.49
qspinlock 3023 10.00 198.20 4.80
simple test-and-set 1667 10.00 198.93 2.89
simple unfair 2915 10.00 165.68 4.31
qspinlock
complex unfair 1965 10.00 191.96 3.17
qspinlock
With heavy spinlock contention, the complex unfair lock is faster
than the simple test-and-set lock, but it is still slower than the
baseline ticketlock.
The table below shows the execution times (in ms) of a spinlock
micro-benchmark on the same 4-socket Westmere-EX system.
# of Ticket Fair Unfair simple Unfair complex
tasks lock queue lock queue lock queue lock
------ ------- ---------- ---------- ---------
1 135 135 137 137
2 890 1082 421 663
3 1932 2248 708 1263
4 2829 2819 1030 1806
5 3834 3522 1323 2315
6 4963 4173 1723 2831
7 6299 4875 2067 2878
8 7691 5563 2360 3256
Executing one task per node, the performance data were:
# of Ticket Fair Unfair simple Unfair complex
nodes lock queue lock queue lock queue lock
------ ------- ---------- ---------- ---------
1 135 135 137 137
2 4603 1034 670 888
3 10940 12087 1389 2041
4 21555 10507 1869 4307
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
kernel/locking/qspinlock.c | 160 ++++++++++++++++++++++++++++++++++++++++++--
1 files changed, 154 insertions(+), 6 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 06dd486..0c86a6f 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -166,6 +166,23 @@ xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval)
return (u32)xchg(&l->tail, tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
}
+/*
+ * cmpxchg_tail - Put in the new tail code if it matches the old one
+ * @lock : Pointer to queue spinlock structure
+ * @old : The old tail code value
+ * @new : The new tail code value
+ * Return: true if operation succeeds, fail otherwise
+ */
+static __always_inline int
+cmpxchg_tail(struct qspinlock *lock, u32 old, u32 new)
+{
+ struct __qspinlock *l = (void *)lock;
+
+ old >>= _Q_TAIL_OFFSET;
+ new >>= _Q_TAIL_OFFSET;
+ return cmpxchg(&l->tail, old, new) == old;
+}
+
#else /* _Q_PENDING_BITS == 8 */
/**
@@ -218,6 +235,35 @@ xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval)
*pval = new;
return old;
}
+
+/*
+ * cmpxchg_tail - Put in the new tail code if it matches the old one
+ * @lock : Pointer to queue spinlock structure
+ * @old : The old tail code value
+ * @new : The new tail code value
+ * Return: true if operation succeeds, fail otherwise
+ *
+ * It is assumed that the caller has grabbed the lock before calling it.
+ */
+static __always_inline int
+cmpxchg_tail(struct qspinlock *lock, u32 old, u32 new)
+{
+ u32 val;
+ u32 lp = _Q_LOCKED_VAL; /* Lock & pending bits value */
+
+ for (;;) {
+ u32 val = atomic_cmpxchg(&lock->val, old | lp, new | lp);
+
+ /*
+ * If the lock or pending bits somehow changes, redo it again
+ */
+ if ((val & _Q_LOCKED_PENDING_MASK) != lp) {
+ lp = val & _Q_LOCKED_PENDING_MASK;
+ continue;
+ }
+ return val == (old | lp);
+ }
+}
#endif /* _Q_PENDING_BITS == 8 */
/*
@@ -302,6 +348,25 @@ unfair_set_vars(struct qnode *node, struct qnode *prev, u32 prev_tail)
}
/**
+ * unfair_check_and_clear_tail - check the tail value in lock & clear it if
+ * it matches the given tail
+ * @lock : Pointer to queue spinlock structure
+ * @tail : The tail code to be checked against
+ * Return: true if the tail code matches and is cleared, false otherwise
+ */
+static inline int unfair_check_and_clear_tail(struct qspinlock *lock, u32 tail)
+{
+ if (!static_key_false(¶virt_unfairlocks_enabled))
+ return false;
+
+ /*
+ * Try to clear the current tail code if it matches the given tail
+ */
+ return ((atomic_read(&lock->val) & _Q_TAIL_MASK) == tail) &&
+ cmpxchg_tail(lock, tail, 0);
+}
+
+/**
* unfair_get_lock - try to steal the lock periodically
* @lock : Pointer to queue spinlock structure
* @node : Current queue node address
@@ -313,7 +378,7 @@ unfair_set_vars(struct qnode *node, struct qnode *prev, u32 prev_tail)
* node only if the qhead flag is set and the next pointer in the queue
* node is not NULL.
*/
-static noinline int
+static inline int
unfair_get_lock(struct qspinlock *lock, struct qnode *node, u32 tail, int count)
{
u32 prev_tail;
@@ -337,8 +402,64 @@ unfair_get_lock(struct qspinlock *lock, struct qnode *node, u32 tail, int count)
node->lsteal_mask = LSTEAL_MAX_MASK;
return false;
}
- queue_spin_unlock(lock);
- return false;
+
+ /*
+ * Having stolen the lock, the CPU must remove itself from the wait queue.
+ * There are 3 steps that need to be done:
+ * 1) If it is at the end of the queue, change the tail code in the
+ * lock to the one before it. If the current node happens to be
+ * the queue head, the previous tail code is 0.
+ * 2) Change the next pointer in the previous queue node to point
+ * to the next one in queue or NULL if it is at the end of queue.
+ * 3) If a next node is present, copy the prev_tail and qprev values
+ * to the next node.
+ */
+ isqhead = ACCESS_ONCE(node->qhead);
+ prev_tail = isqhead ? 0 : node->prev_tail;
+
+ /* Step 1 */
+ if (((atomic_read(&lock->val) & _Q_TAIL_MASK) == tail) &&
+ cmpxchg_tail(lock, tail, prev_tail)) {
+ /*
+ * Successfully changed the tail code back to the previous one.
+ * Now we need to clear the next pointer in the previous node
+ * only if it contains this queue node's address and is not
+ * the queue head. The cmpxchg() call below may fail if
+ * a new task comes along and puts its node address into the
+ * next pointer. Whether the operation succeeds or fails
+ * doesn't really matter.
+ */
+ /* Step 2 */
+ if (!isqhead)
+ (void)cmpxchg(&node->qprev->mcs.next, &node->mcs, NULL);
+ node->mcs.next = NULL;
+ return true;
+ }
+
+ /*
+ * A next node has to be present if the lock has a different tail
+ * code. So wait until the next pointer is set.
+ */
+ while (!(next = (struct qnode *)ACCESS_ONCE(node->mcs.next)))
+ arch_mutex_cpu_relax();
+
+ if (!isqhead) {
+ /*
+ * Change the node data only if it is not the queue head
+ * Steps 2 & 3
+ */
+ ACCESS_ONCE(node->qprev->mcs.next) =
+ (struct mcs_spinlock *)next;
+ ACCESS_ONCE(next->qprev) = node->qprev;
+ ACCESS_ONCE(next->prev_tail) = node->prev_tail;
+
+ /*
+ * Make sure all the new node information is visible
+ * before proceeding.
+ */
+ smp_wmb();
+ }
+ return true;
}
#else /* CONFIG_PARAVIRT_UNFAIR_LOCKS */
@@ -351,6 +472,8 @@ static void unfair_set_vars(struct qnode *node, struct qnode *prev,
u32 prev_tail) {}
static int unfair_get_lock(struct qspinlock *lock, struct qnode *node,
u32 tail, int count) { return false; }
+static int unfair_check_and_clear_tail(struct qspinlock *lock, u32 tail)
+ { return false; }
#endif /* CONFIG_PARAVIRT_UNFAIR_LOCKS */
/**
@@ -511,7 +634,16 @@ static noinline void queue_spin_lock_slowerpath(struct qspinlock *lock,
while (!smp_load_acquire(&node->qhead)) {
INC_LOOP_CNT(cnt);
- unfair_get_lock(lock, node, tail, LOOP_CNT(cnt));
+ if (unfair_get_lock(lock, node, tail, LOOP_CNT(cnt))) {
+ /*
+ * Need to notify the next node only if both
+ * the qhead flag and the next pointer in the
+ * queue node are set.
+ */
+ if (unlikely(node->qhead && node->mcs.next))
+ goto notify_next;
+ return;
+ }
arch_mutex_cpu_relax();
}
}
@@ -545,10 +677,25 @@ retry_queue_wait:
* The get_qlock function will only fail if the
* lock was stolen.
*/
- if (get_qlock(lock))
+ if (get_qlock(lock)) {
+ /*
+ * It is possible that in between the reading
+ * of the lock value and the acquisition of
+ * the lock, the next node may have stolen the
+ * lock and been removed from the queue. So
+ * the lock value may contain the tail code
+ * of the current node. We need to recheck
+ * the tail value here to be sure. And if
+ * the tail code was cleared, we don't need
+ * to notify the next node.
+ */
+ if (unlikely(unfair_check_and_clear_tail(lock,
+ tail)))
+ return;
break;
- else
+ } else {
goto retry_queue_wait;
+ }
}
old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
if (old == val)
@@ -562,6 +709,7 @@ retry_queue_wait:
/*
* contended path; wait for next, return.
*/
+notify_next:
while (!(next = (struct qnode *)ACCESS_ONCE(node->mcs.next)))
arch_mutex_cpu_relax();
--
1.7.1
* [PATCH v10 14/19] pvqspinlock, x86: Rename paravirt_ticketlocks_enabled
@ 2014-05-07 15:01 ` Waiman Long
0 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Waiman Long
This patch renames the paravirt_ticketlocks_enabled static key to a
more generic paravirt_spinlocks_enabled name.
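For reference, consumers of the renamed key follow the usual static-key
pattern. The sketch below is illustrative only; pv_spinlock_init_jump is
a hypothetical name modeled on the KVM and Xen initcalls in the diff:

	static __init int pv_spinlock_init_jump(void)
	{
		if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
			return 0;	/* bare metal: leave the key off */

		/* Flip the key so arch_spin_unlock() takes the PV path */
		static_key_slow_inc(&paravirt_spinlocks_enabled);
		return 0;
	}
	early_initcall(pv_spinlock_init_jump);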
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
arch/x86/include/asm/spinlock.h | 4 ++--
arch/x86/kernel/kvm.c | 2 +-
arch/x86/kernel/paravirt-spinlocks.c | 4 ++--
arch/x86/xen/spinlock.c | 2 +-
4 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 958d20f..428d0d1 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -39,7 +39,7 @@
/* How long a lock should spin before we consider blocking */
#define SPIN_THRESHOLD (1 << 15)
-extern struct static_key paravirt_ticketlocks_enabled;
+extern struct static_key paravirt_spinlocks_enabled;
static __always_inline bool static_key_false(struct static_key *key);
#ifdef CONFIG_QUEUE_SPINLOCK
@@ -150,7 +150,7 @@ static inline void __ticket_unlock_slowpath(arch_spinlock_t *lock,
static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
{
if (TICKET_SLOWPATH_FLAG &&
- static_key_false(¶virt_ticketlocks_enabled)) {
+ static_key_false(¶virt_spinlocks_enabled)) {
arch_spinlock_t prev;
prev = *lock;
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 0331cb3..7ab8ab3 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -817,7 +817,7 @@ static __init int kvm_spinlock_init_jump(void)
if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
return 0;
- static_key_slow_inc(¶virt_ticketlocks_enabled);
+ static_key_slow_inc(¶virt_spinlocks_enabled);
printk(KERN_INFO "KVM setup paravirtual spinlock\n");
return 0;
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 7dfd02d..6d36731 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -17,8 +17,8 @@ struct pv_lock_ops pv_lock_ops = {
};
EXPORT_SYMBOL(pv_lock_ops);
-struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;
-EXPORT_SYMBOL(paravirt_ticketlocks_enabled);
+struct static_key paravirt_spinlocks_enabled = STATIC_KEY_INIT_FALSE;
+EXPORT_SYMBOL(paravirt_spinlocks_enabled);
#endif
#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 0ba5f3b..d1b6a32 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -293,7 +293,7 @@ static __init int xen_init_spinlocks_jump(void)
if (!xen_domain())
return 0;
- static_key_slow_inc(¶virt_ticketlocks_enabled);
+ static_key_slow_inc(¶virt_spinlocks_enabled);
return 0;
}
early_initcall(xen_init_spinlocks_jump);
--
1.7.1
* [PATCH v10 15/19] pvqspinlock, x86: Add PV data structure & methods
@ 2014-05-07 15:01 ` Waiman Long
0 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Waiman Long
This patch modifies the para-virtualization (PV) infrastructure code
of the x86-64 architecture to support the PV queue spinlock. Three
new virtual methods are added to support PV qspinlock:
1) kick_cpu - schedule in a virtual CPU
2) halt_cpu - schedule out a virtual CPU
3) lockstat - update statistical data for debugfs
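As an illustration of how a hypervisor back end might consume these
hooks, here is a minimal sketch. The example_* names are hypothetical;
the actual KVM and Xen implementations come in later patches of this
series:

	static void example_kick_cpu(int cpu)
	{
		/* Ask the host to unhalt the target vCPU */
		kvm_hypercall2(KVM_HC_KICK_CPU, 0,
			       per_cpu(x86_cpu_to_apicid, cpu));
	}

	static void example_halt_cpu(enum pv_lock_stats type, s8 *state, s8 sval)
	{
		/*
		 * Halt only if the state byte still matches what the caller
		 * saw; otherwise a kick has already been delivered.
		 */
		if (ACCESS_ONCE(*state) == sval)
			halt();
	}

	static __init int example_qspinlock_init(void)
	{
		pv_lock_ops.kick_cpu = example_kick_cpu;
		pv_lock_ops.halt_cpu = example_halt_cpu;
		pv_lock_ops.lockstat = paravirt_nop;	/* stats are optional */
		return 0;
	}
	early_initcall(example_qspinlock_init);

Keeping lockstat a separate hook lets the statistics code remain a
paravirt_nop unless a debugfs-enabled back end is installed.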
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
arch/x86/include/asm/paravirt.h | 18 +++++++++++++++++-
arch/x86/include/asm/paravirt_types.h | 17 +++++++++++++++++
arch/x86/kernel/paravirt-spinlocks.c | 6 ++++++
3 files changed, 40 insertions(+), 1 deletions(-)
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index cd6e161..d71e123 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -711,7 +711,23 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
}
#if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
+#ifdef CONFIG_QUEUE_SPINLOCK
+static __always_inline void __queue_kick_cpu(int cpu)
+{
+ PVOP_VCALL1(pv_lock_ops.kick_cpu, cpu);
+}
+
+static __always_inline void
+__queue_halt_cpu(enum pv_lock_stats type, s8 *state, s8 sval)
+{
+ PVOP_VCALL3(pv_lock_ops.halt_cpu, type, state, sval);
+}
+static __always_inline void __queue_lockstat(enum pv_lock_stats type)
+{
+ PVOP_VCALL1(pv_lock_ops.lockstat, type);
+}
+#else
static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
__ticket_t ticket)
{
@@ -723,7 +739,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
{
PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
}
-
+#endif
#endif
#ifdef CONFIG_X86_32
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 7549b8b..549b3a0 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -333,9 +333,26 @@ struct arch_spinlock;
typedef u16 __ticket_t;
#endif
+#ifdef CONFIG_QUEUE_SPINLOCK
+enum pv_lock_stats {
+ PV_HALT_QHEAD, /* Queue head halting */
+ PV_HALT_QNODE, /* Other queue node halting */
+ PV_HALT_ABORT, /* Halting aborted */
+ PV_WAKE_KICKED, /* Wakeup by kicking */
+ PV_WAKE_SPURIOUS, /* Spurious wakeup */
+ PV_KICK_NOHALT /* Kick but CPU not halted */
+};
+#endif
+
struct pv_lock_ops {
+#ifdef CONFIG_QUEUE_SPINLOCK
+ void (*kick_cpu)(int cpu);
+ void (*halt_cpu)(enum pv_lock_stats type, s8 *state, s8 sval);
+ void (*lockstat)(enum pv_lock_stats type);
+#else
struct paravirt_callee_save lock_spinning;
void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
+#endif
};
/* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 6d36731..8d15bea 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -11,9 +11,15 @@
#ifdef CONFIG_PARAVIRT_SPINLOCKS
struct pv_lock_ops pv_lock_ops = {
#ifdef CONFIG_SMP
+#ifdef CONFIG_QUEUE_SPINLOCK
+ .kick_cpu = paravirt_nop,
+ .halt_cpu = paravirt_nop,
+ .lockstat = paravirt_nop,
+#else
.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
.unlock_kick = paravirt_nop,
#endif
+#endif
};
EXPORT_SYMBOL(pv_lock_ops);
--
1.7.1
* [PATCH v10 16/19] pvqspinlock: Enable coexistence with the unfair lock
@ 2014-05-07 15:01 ` Waiman Long
0 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Waiman Long
This patch enables the PV qspinlock and the unfair lock to coexist.
When both are enabled, only the lock fastpath will perform lock
stealing; the slowpath has stealing disabled so as to get the best
of both features.
We also need to transition a CPU that has been spinning too long in
the pending bit code path back to the regular queuing code path so
that it can be properly halted by the PV qspinlock code.
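The resulting policy can be summarized with the sketch below. This is
simplified and not the exact code in this series; in particular the
real slowpath takes the lock value as a second argument:

	static __always_inline void queue_spin_lock(struct qspinlock *lock)
	{
		/*
		 * Unfair fastpath: any CPU, queued or not, may grab a free
		 * lock byte here. This stays enabled even with PV on.
		 */
		if (likely(cmpxchg((u8 *)lock, 0, _Q_LOCKED_VAL) == 0))
			return;

		/*
		 * Slowpath: when pv_qspinlock_enabled() is true, the
		 * waiter-side stealing (unfair_get_lock & friends) is
		 * branched out, so queued CPUs keep strict order and can
		 * be safely halted and kicked.
		 */
		queue_spin_lock_slowpath(lock);
	}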
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
kernel/locking/qspinlock.c | 74 ++++++++++++++++++++++++++++++++++++++------
1 files changed, 64 insertions(+), 10 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 0c86a6f..fb05e64 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -72,6 +72,30 @@ struct qnode {
#define qhead mcs.locked /* The queue head flag */
/*
+ * Allow spinning loop count only if either PV spinlock or unfair lock is
+ * configured.
+ */
+#if defined(CONFIG_PARAVIRT_UNFAIR_LOCKS) || defined(CONFIG_PARAVIRT_SPINLOCKS)
+#define DEF_LOOP_CNT(c) int c = 0
+#define INC_LOOP_CNT(c) (c)++
+#define LOOP_CNT(c) c
+#else
+#define DEF_LOOP_CNT(c)
+#define INC_LOOP_CNT(c)
+#define LOOP_CNT(c) 0
+#endif
+
+/*
+ * Check the pending bit spinning threshold only if PV qspinlock is enabled
+ */
+#define PSPIN_THRESHOLD (1 << 10)
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#define pv_qspinlock_enabled() static_key_false(¶virt_spinlocks_enabled)
+#else
+#define pv_qspinlock_enabled() false
+#endif
+
+/*
* Per-CPU queue node structures; we can never have more than 4 nested
* contexts: task, softirq, hardirq, nmi.
*
@@ -302,9 +326,6 @@ cmpxchg_tail(struct qspinlock *lock, u32 old, u32 new)
* starvation.
*/
#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
-#define DEF_LOOP_CNT(c) int c = 0
-#define INC_LOOP_CNT(c) (c)++
-#define LOOP_CNT(c) c
#define LSTEAL_MIN (1 << 3)
#define LSTEAL_MAX (1 << 10)
#define LSTEAL_MIN_MASK (LSTEAL_MIN - 1)
@@ -330,7 +351,11 @@ static inline void unfair_init_vars(struct qnode *node)
static inline void
unfair_set_vars(struct qnode *node, struct qnode *prev, u32 prev_tail)
{
- if (!static_key_false(¶virt_unfairlocks_enabled))
+ /*
+ * Disable waiter lock stealing if PV spinlock is enabled
+ */
+ if (pv_qspinlock_enabled() ||
+ !static_key_false(¶virt_unfairlocks_enabled))
return;
node->qprev = prev;
@@ -356,7 +381,11 @@ unfair_set_vars(struct qnode *node, struct qnode *prev, u32 prev_tail)
*/
static inline int unfair_check_and_clear_tail(struct qspinlock *lock, u32 tail)
{
- if (!static_key_false(¶virt_unfairlocks_enabled))
+ /*
+ * Disable waiter lock stealing if PV spinlock is enabled
+ */
+ if (pv_qspinlock_enabled() ||
+ !static_key_false(¶virt_unfairlocks_enabled))
return false;
/*
@@ -385,7 +414,11 @@ unfair_get_lock(struct qspinlock *lock, struct qnode *node, u32 tail, int count)
int isqhead;
struct qnode *next;
- if (!static_key_false(¶virt_unfairlocks_enabled) ||
+ /*
+ * Disable waiter lock stealing if PV spinlock is enabled
+ */
+ if (pv_qspinlock_enabled() ||
+ !static_key_false(¶virt_unfairlocks_enabled) ||
((count & node->lsteal_mask) != node->lsteal_mask))
return false;
@@ -463,9 +496,6 @@ unfair_get_lock(struct qspinlock *lock, struct qnode *node, u32 tail, int count)
}
#else /* CONFIG_PARAVIRT_UNFAIR_LOCKS */
-#define DEF_LOOP_CNT(c)
-#define INC_LOOP_CNT(c)
-#define LOOP_CNT(c) 0
static void unfair_init_vars(struct qnode *node) {}
static void unfair_set_vars(struct qnode *node, struct qnode *prev,
@@ -582,9 +612,28 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
* store-release that clears the locked bit and create lock
* sequentiality; this because not all clear_pending_set_locked()
* implementations imply full barriers.
+ *
+ * When PV qspinlock is enabled, exit the pending bit code path and
+ * go back to the regular queuing path if the lock isn't available
+ * within a certain threshold.
*/
- while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
+ if (pv_qspinlock_enabled())
+ retry = PSPIN_THRESHOLD;
+ while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK) {
+ if (pv_qspinlock_enabled() && (--retry == 0)) {
+ /*
+ * Clear the pending bit and exit
+ */
+ for (;;) {
+ new = val & ~_Q_PENDING_MASK;
+ old = atomic_cmpxchg(&lock->val, val, new);
+ if (old == val)
+ return 0;
+ val = old;
+ }
+ }
arch_mutex_cpu_relax();
+ }
/*
* take ownership and clear the pending bit.
@@ -646,6 +695,8 @@ static noinline void queue_spin_lock_slowerpath(struct qspinlock *lock,
}
arch_mutex_cpu_relax();
}
+ } else {
+ ACCESS_ONCE(node->qhead) = true;
}
/*
@@ -713,6 +764,9 @@ notify_next:
while (!(next = (struct qnode *)ACCESS_ONCE(node->mcs.next)))
arch_mutex_cpu_relax();
+ /*
+ * The next one in queue is now at the head
+ */
arch_mcs_spin_unlock_contended(&next->qhead);
}
--
1.7.1
* [PATCH v10 17/19] pvqspinlock: Add qspinlock para-virtualization support
@ 2014-05-07 15:01 ` Waiman Long
0 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Waiman Long
This patch adds base para-virtualization support to the queue
spinlock in the same way as was done in the PV ticket lock code. In
essence, a lock waiter will spin for a specified number of times
(QSPIN_THRESHOLD = 2^14) and then halt itself. The queue head waiter,
unlike the other waiters, will spin 2*QSPIN_THRESHOLD times before
halting itself. Before halting, the queue head waiter will set
a flag (_Q_LOCKED_SLOWPATH) in the lock byte to indicate that the
unlock slowpath has to be invoked.
In the unlock slowpath, the current lock holder will find the queue
head by following the previous-node pointer links stored in the queue
node structure until it finds one that has the qhead flag turned
on. It then attempts to kick the CPU of the queue head.
After the queue head has acquired the lock, it will also check the
status of the next node and set the _Q_LOCKED_SLOWPATH flag if that
node has been halted.
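The unlock side of this handshake can be sketched as follows; the helper
names queue_spin_unlock_slowpath() and __queue_spin_unlock() are
stand-ins for the real code in the x86 header changes of this patch:

	static inline void queue_spin_unlock(struct qspinlock *lock)
	{
		barrier();
		if (static_key_false(&paravirt_spinlocks_enabled)) {
			/*
			 * Atomically clear the lock byte. If it no longer
			 * holds plain _Q_LOCKED_VAL, the queue head has set
			 * _Q_LOCKED_SLOWPATH and may have halted itself, so
			 * walk the queue and kick it.
			 */
			if (unlikely(cmpxchg((u8 *)lock, _Q_LOCKED_VAL, 0)
					!= _Q_LOCKED_VAL))
				queue_spin_unlock_slowpath(lock);
			return;
		}
		__queue_spin_unlock(lock);	/* plain byte store */
	}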
Enabling the PV code does have a performance impact on spinlock
acquisitions and releases. The following table shows the execution
time (in ms) of a spinlock micro-benchmark that does lock/unlock
operations 5M times for each task versus the number of contending
tasks on a Westmere-EX system.
 # of    Ticket lock            Queue lock
 tasks   PV off/PV on/%Change   PV off/PV on/%Change
 ------  ---------------------  ---------------------
    1      135/  179/+33%         137/  168/+23%
    2     1045/ 1103/ +6%        1161/ 1248/ +7%
    3     1827/ 2683/+47%        2357/ 2600/+10%
    4     2689/ 4191/+56%        2882/ 3115/ +8%
    5     3736/ 5830/+56%        3493/ 3571/ +2%
    6     4942/ 7609/+54%        4239/ 4198/ -1%
    7     6304/ 9570/+52%        4931/ 4895/ -1%
    8     7736/11323/+46%        5632/ 5588/ -1%
It can be seen that the ticket lock PV code has a fairly big decrease
in performance when there are 3 or more contending tasks. The queue
spinlock PV code, on the other hand, only has a relatively minor drop
in performance with 1-4 contending tasks. With 5 or more contending
tasks, there is practically no difference in performance. When coupled
with the unfair lock, the queue spinlock can be much faster than the
PV ticket lock.
When both the unfair lock and PV spinlock features are turned on,
lock stealing will still be allowed in the fastpath, but not in
the slowpath.
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
arch/x86/include/asm/pvqspinlock.h | 306 ++++++++++++++++++++++++++++++++++++
arch/x86/include/asm/qspinlock.h | 33 ++++
kernel/locking/qspinlock.c | 91 ++++++++++-
3 files changed, 427 insertions(+), 3 deletions(-)
create mode 100644 arch/x86/include/asm/pvqspinlock.h
diff --git a/arch/x86/include/asm/pvqspinlock.h b/arch/x86/include/asm/pvqspinlock.h
new file mode 100644
index 0000000..fea21aa
--- /dev/null
+++ b/arch/x86/include/asm/pvqspinlock.h
@@ -0,0 +1,306 @@
+#ifndef _ASM_X86_PVQSPINLOCK_H
+#define _ASM_X86_PVQSPINLOCK_H
+
+/*
+ * Queue Spinlock Para-Virtualization (PV) Support
+ *
+ * +------+ +-----+ next +----+
+ * | Lock | |Queue|----------->|Next|
+ * |Holder|<-----------|Head |<-----------|Node|
+ * +------+ prev_tail +-----+ prev_tail +----+
+ *
+ * The PV support code for queue spinlock is roughly the same as that
+ * of the ticket spinlock. Each CPU waiting for the lock will spin until it
+ * reaches a threshold. When that happens, it will halt itself so that the
+ * hypervisor can reuse the CPU cycles in other guests as well as return
+ * held-up CPUs to service faster.
+ *
+ * A major difference between the two versions of PV spinlock is the fact
+ * that the spin threshold of the queue spinlock is half of that of the
+ * ticket spinlock. However, the queue head will spin twice as long as the
+ * other nodes before it puts itself to halt. The reason for that is to
+ * make halting more likely on heavily contended locks so as to favor
+ * lightly contended locks (queue depth of 1 or less).
+ *
+ * There are 2 places where races can happen:
+ * 1) Halting of the queue head CPU (in pv_head_spin_check) and the CPU
+ * kicking by the lock holder in the unlock path (in pv_kick_node).
+ * 2) Halting of the queue node CPU (in pv_queue_spin_check) and the
+ * status check by the previous queue head (in pv_halt_check).
+ * See the comments on those functions to see how the races are being
+ * addressed.
+ */
+
+/*
+ * Spin threshold for queue spinlock
+ */
+#define QSPIN_THRESHOLD (1U<<14)
+#define MAYHALT_THRESHOLD (QSPIN_THRESHOLD - 0x10)
+
+/*
+ * CPU state flags
+ */
+#define PV_CPU_ACTIVE 1 /* This CPU is active */
+#define PV_CPU_KICKED 2 /* This CPU is being kicked */
+#define PV_CPU_HALTED -1 /* This CPU is halted */
+
+/*
+ * Additional fields to be added to the qnode structure
+ */
+#if CONFIG_NR_CPUS >= (1 << 16)
+#define _cpuid_t u32
+#else
+#define _cpuid_t u16
+#endif
+
+struct qnode;
+
+struct pv_qvars {
+ s8 cpustate; /* CPU status flag */
+ s8 mayhalt; /* May be halted soon */
+ _cpuid_t mycpu; /* CPU number of this node */
+ struct qnode *prev; /* Pointer to previous node */
+};
+
+/*
+ * Macro to be used by the unfair lock code to access the previous node pointer
+ * in the pv structure.
+ */
+#define qprev pv.prev
+
+/**
+ * pv_init_vars - initialize fields in struct pv_qvars
+ * @pv : pointer to struct pv_qvars
+ * @cpu: current CPU number
+ */
+static __always_inline void pv_init_vars(struct pv_qvars *pv, int cpu)
+{
+ pv->cpustate = PV_CPU_ACTIVE;
+ pv->prev = NULL;
+ pv->mayhalt = false;
+ pv->mycpu = cpu;
+}
+
+/**
+ * pv_head_spin_check - perform para-virtualization checks for queue head
+ * @pv : pointer to struct pv_qvars
+ * @count : loop count
+ * @qcode : queue code of the supposed lock holder
+ * @lock : pointer to the qspinlock structure
+ *
+ * The following checks will be done:
+ * 1) If it gets a kick signal, reset loop count and flag
+ * 2) Halt itself if the lock is still unavailable after 2*QSPIN_THRESHOLD
+ */
+static __always_inline void pv_head_spin_check(struct pv_qvars *pv, int *count,
+ u32 qcode, struct qspinlock *lock)
+{
+ if (!static_key_false(¶virt_spinlocks_enabled))
+ return;
+
+ if (pv->cpustate == PV_CPU_KICKED) {
+ /*
+ * Reset count and flag
+ */
+ *count = 0;
+ pv->cpustate = PV_CPU_ACTIVE;
+
+ } else if (unlikely(*count >= 2*QSPIN_THRESHOLD)) {
+ u8 lockval;
+ s8 oldstate;
+
+ /*
+ * Set the lock byte to _Q_LOCKED_SLOWPATH before
+ * trying to halt itself. It is possible that the
+ * lock byte had been set to _Q_LOCKED_SLOWPATH
+ * already (spurious wakeup of queue head after a halt
+ * or opportunistic setting in pv_halt_check()).
+ * In this case, just proceed to sleep.
+ *
+ * queue head lock holder
+ * ---------- -----------
+ * cpustate = PV_CPU_HALTED
+ * [1] cmpxchg(_Q_LOCKED_VAL [2] cmpxchg(_Q_LOCKED_VAL => 0)
+ * => _Q_LOCKED_SLOWPATH) if (cmpxchg fails &&
+ * if (cmpxchg succeeds) cpustate == PV_CPU_HALTED)
+ * halt() kick()
+ *
+ * Sequence:
+ * 1,2 - slowpath flag set, queue head halted & lock holder
+ * will call slowpath
+ * 2,1 - queue head cmpxchg fails, halt is aborted
+ *
+ * If the queue head CPU is woken up by a spurious interrupt
+ * at the same time as the lock holder checks the cpustate,
+ * it is possible that the lock holder will try to kick
+ * a queue head CPU that is no longer halted.
+ */
+ oldstate = cmpxchg(&pv->cpustate, PV_CPU_ACTIVE, PV_CPU_HALTED);
+ if (oldstate == PV_CPU_KICKED)
+ goto reset; /* Reset count and state */
+
+ lockval = cmpxchg((u8 *)lock,
+ _Q_LOCKED_VAL, _Q_LOCKED_SLOWPATH);
+ if (lockval != 0) {
+ __queue_halt_cpu(PV_HALT_QHEAD, &pv->cpustate,
+ PV_CPU_HALTED);
+ __queue_lockstat((pv->cpustate == PV_CPU_KICKED)
+ ? PV_WAKE_KICKED : PV_WAKE_SPURIOUS);
+ }
+ /*
+ * Else, the lock is free and no halting is needed
+ */
+reset:
+ ACCESS_ONCE(pv->cpustate) = PV_CPU_ACTIVE;
+ *count = 0; /* Reset count */
+ }
+}
+
+/**
+ * pv_queue_spin_check - perform para-virtualization checks for queue member
+ * @pv : pointer to struct pv_qvars
+ * @mcs : pointer to the mcs_spinlock structure of this node
+ * @count: loop count
+ */
+static __always_inline void
+pv_queue_spin_check(struct pv_qvars *pv, struct mcs_spinlock *mcs, int *count)
+{
+ if (!static_key_false(¶virt_spinlocks_enabled))
+ return;
+ /*
+ * Attempt to halt oneself after QSPIN_THRESHOLD spins
+ */
+ if (unlikely(*count >= QSPIN_THRESHOLD)) {
+ /*
+ * Time to halt itself
+ */
+ ACCESS_ONCE(pv->cpustate) = PV_CPU_HALTED;
+ /*
+ * One way to avoid the race between pv_halt_check()
+ * and pv_queue_spin_check() is to use a memory barrier or
+ * an atomic instruction to synchronize the two competing
+ * threads. However, that will slow down the queue spinlock
+ * slowpath. One way to eliminate this overhead in the normal
+ * case is to use another flag (mayhalt) to indicate that a
+ * race may happen. This flag is set when the loop count is
+ * getting close to the halting threshold.
+ *
+ * When that happens, a two-variable (cpustate & qhead
+ * [=mcs.locked]) handshake is used to make sure that
+ * pv_halt_check() won't miss setting _Q_LOCKED_SLOWPATH
+ * when the CPU is about to be halted.
+ *
+ * pv_halt_check pv_queue_spin_check
+ * ------------- -------------------
+ * [1] qhead = true [3] cpustate = PV_CPU_HALTED
+ * smp_mb() smp_mb()
+ * [2] if (cpustate [4] if (qhead)
+ * == PV_CPU_HALTED)
+ *
+ * Sequence:
+ * *,1,*,4,* - halt is aborted as the qhead flag is set,
+ * _Q_LOCKED_SLOWPATH may or may not be set
+ * 3,4,1,2 - the CPU is halted and _Q_LOCKED_SLOWPATH is set
+ */
+ smp_mb();
+ if (!ACCESS_ONCE(mcs->locked)) {
+ /*
+ * Halt the CPU only if it is not the queue head
+ */
+ __queue_halt_cpu(PV_HALT_QNODE, &pv->cpustate,
+ PV_CPU_HALTED);
+ __queue_lockstat((pv->cpustate == PV_CPU_KICKED)
+ ? PV_WAKE_KICKED : PV_WAKE_SPURIOUS);
+ }
+ ACCESS_ONCE(pv->cpustate) = PV_CPU_ACTIVE;
+ *count = 0; /* Reset count & flag */
+ pv->mayhalt = false;
+ } else if (*count == MAYHALT_THRESHOLD) {
+ pv->mayhalt = true;
+ /*
+ * Make sure that the mayhalt flag is visible to others
+ * before proceeding.
+ */
+ smp_mb();
+ }
+}
+
+/**
+ * pv_halt_check - check if the CPU has been halted & set _Q_LOCKED_SLOWPATH
+ * @pv : pointer to struct pv_qvars
+ * @lock : pointer to the qspinlock structure
+ *
+ * The current CPU should have gotten the lock and the queue head flag set
+ * before calling this function.
+ */
+static __always_inline void
+pv_halt_check(struct pv_qvars *pv, struct qspinlock *lock)
+{
+ if (!static_key_false(¶virt_spinlocks_enabled))
+ return;
+ /*
+ * Halt state checking will only be done if the mayhalt flag is set
+ * to avoid the overhead of the memory barrier in normal cases.
+ * It is highly unlikely that the actual write to the qhead flag
+ * will happen more than 0x10 iterations after the mayhalt flag is
+ * read, so missing the PV_CPU_HALTED state (and the lost wakeup
+ * that would cause) should not happen in practice.
+ */
+ if (ACCESS_ONCE(pv->mayhalt)) {
+ /*
+ * A memory barrier is used here to make sure that the setting
+ * of queue head flag prior to this function call is visible
+ * to others before checking the cpustate flag.
+ */
+ smp_mb();
+ if (pv->cpustate == PV_CPU_HALTED)
+ ACCESS_ONCE(*(u8 *)lock) = _Q_LOCKED_SLOWPATH;
+ }
+}
+
+/**
+ * pv_set_prev - set previous queue node pointer
+ * @pv : pointer to struct pv_qvars to be set
+ * @prev: pointer to the previous node
+ */
+static __always_inline void pv_set_prev(struct pv_qvars *pv, struct qnode *prev)
+{
+ ACCESS_ONCE(pv->prev) = prev;
+ /*
+ * Make sure the prev field is set up before the node is linked in
+ */
+ smp_wmb();
+}
+
+/*
+ * The following inlined functions are being used by the
+ * queue_spin_unlock_slowpath() function.
+ */
+
+/**
+ * pv_get_prev - get previous queue node pointer
+ * @pv : pointer to struct pv_qvars
+ * Return: the previous queue node pointer
+ */
+static __always_inline struct qnode *pv_get_prev(struct pv_qvars *pv)
+{
+ return ACCESS_ONCE(pv->prev);
+}
+
+/**
+ * pv_kick_node - kick up the CPU of the given node
+ * @pv : pointer to struct pv_qvars of the node to be kicked
+ */
+static __always_inline void pv_kick_node(struct pv_qvars *pv)
+{
+ s8 oldstate = xchg(&pv->cpustate, PV_CPU_KICKED);
+
+ /*
+ * Kick up the CPU only if the state was set to PV_CPU_HALTED
+ */
+ if (oldstate != PV_CPU_HALTED)
+ __queue_lockstat(PV_KICK_NOHALT);
+ else
+ __queue_kick_cpu(pv->mycpu);
+}
+
+#endif /* _ASM_X86_PVQSPINLOCK_H */
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 19af937..a145c31 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -19,13 +19,46 @@ extern struct static_key paravirt_unfairlocks_enabled;
* that clearing the lock bit is done ASAP without artificial delay
* due to compiler optimization.
*/
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+static __always_inline void __queue_spin_unlock(struct qspinlock *lock)
+#else
static inline void queue_spin_unlock(struct qspinlock *lock)
+#endif
{
barrier();
ACCESS_ONCE(*(u8 *)lock) = 0;
barrier();
}
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+/*
+ * The lock byte can have a value of _Q_LOCKED_SLOWPATH to indicate
+ * that it needs to go through the slowpath to do the unlocking.
+ */
+#define _Q_LOCKED_SLOWPATH (_Q_LOCKED_VAL | 2)
+
+extern void queue_spin_unlock_slowpath(struct qspinlock *lock);
+
+static inline void queue_spin_unlock(struct qspinlock *lock)
+{
+ barrier();
+ if (static_key_false(¶virt_spinlocks_enabled)) {
+ /*
+ * Need to atomically clear the lock byte to avoid racing with
+ * queue head waiter trying to set _Q_LOCKED_SLOWPATH.
+ */
+ if (likely(cmpxchg((u8 *)lock, _Q_LOCKED_VAL, 0)
+ == _Q_LOCKED_VAL))
+ return;
+ else
+ queue_spin_unlock_slowpath(lock);
+
+ } else {
+ __queue_spin_unlock(lock);
+ }
+ barrier();
+}
+#endif /* CONFIG_PARAVIRT_SPINLOCKS */
#endif /* !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE */
#include <asm-generic/qspinlock.h>
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index fb05e64..37b5c7f 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -57,17 +57,45 @@
#include "mcs_spinlock.h"
/*
+ * Para-virtualized queue spinlock support
+ */
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#include <asm/pvqspinlock.h>
+#else
+
+struct qnode;
+struct pv_qvars {};
+static inline void pv_init_vars(struct pv_qvars *pv, int cpu_nr) {}
+static inline void pv_head_spin_check(struct pv_qvars *pv, int *count,
+ u32 qcode, struct qspinlock *lock) {}
+static inline void pv_queue_spin_check(struct pv_qvars *pv,
+ struct mcs_spinlock *mcs, int *count) {}
+static inline void pv_halt_check(struct pv_qvars *pv, void *lock) {}
+static inline void pv_kick_node(struct pv_qvars *pv) {}
+static inline void pv_set_prev(struct pv_qvars *pv, struct qnode *prev) {}
+static inline struct qnode *pv_get_prev(struct pv_qvars *pv)
+ { return NULL; }
+#endif
+
+/*
* To have additional features for better virtualization support, it is
* necessary to store additional data in the queue node structure. So
* a new queue node structure will have to be defined and used here.
+ *
+ * If CONFIG_PARAVIRT_SPINLOCKS is turned on, the previous node pointer in
+ * the pv structure will be used by the unfair lock code.
+ *
*/
struct qnode {
struct mcs_spinlock mcs;
#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
int lsteal_mask; /* Lock stealing frequency mask */
u32 prev_tail; /* Tail code of previous node */
+#ifndef CONFIG_PARAVIRT_SPINLOCKS
struct qnode *qprev; /* Previous queue node addr */
#endif
+#endif
+ struct pv_qvars pv; /* For para-virtualization */
};
#define qhead mcs.locked /* The queue head flag */
@@ -662,6 +690,7 @@ static noinline void queue_spin_lock_slowerpath(struct qspinlock *lock,
{
struct qnode *prev, *next;
u32 old;
+ DEF_LOOP_CNT(hcnt);
/*
* we already touched the queueing cacheline; don't bother with pending
@@ -679,6 +708,7 @@ static noinline void queue_spin_lock_slowerpath(struct qspinlock *lock,
prev = decode_tail(old);
unfair_set_vars(node, prev, old);
+ pv_set_prev(&node->pv, prev);
ACCESS_ONCE(prev->mcs.next) = (struct mcs_spinlock *)node;
while (!smp_load_acquire(&node->qhead)) {
@@ -693,6 +723,8 @@ static noinline void queue_spin_lock_slowerpath(struct qspinlock *lock,
goto notify_next;
return;
}
+ pv_queue_spin_check(&node->pv, &node->mcs,
+ LOOP_CNT(&cnt));
arch_mutex_cpu_relax();
}
} else {
@@ -709,8 +741,14 @@ static noinline void queue_spin_lock_slowerpath(struct qspinlock *lock,
*/
retry_queue_wait:
while ((val = smp_load_acquire(&lock->val.counter))
- & _Q_LOCKED_PENDING_MASK)
+ & _Q_LOCKED_PENDING_MASK) {
+ INC_LOOP_CNT(hcnt);
+ /*
+ * Perform queue head para-virtualization checks
+ */
+ pv_head_spin_check(&node->pv, LOOP_CNT(&hcnt), old, lock);
arch_mutex_cpu_relax();
+ }
/*
* claim the lock:
@@ -723,6 +761,7 @@ retry_queue_wait:
* to grab the lock.
*/
for (;;) {
+ LOOP_CNT(hcnt = 0); /* Reset loop count */
if (val != tail) {
/*
* The get_qlock function will only fail if the
@@ -768,6 +807,7 @@ notify_next:
* The next one in queue is now at the head
*/
arch_mcs_spin_unlock_contended(&next->qhead);
+ pv_halt_check(&next->pv, lock);
}
/**
@@ -801,7 +841,7 @@ notify_next:
void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
{
struct qnode *node;
- u32 tail, idx;
+ u32 tail, idx, cpu_nr;
BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
@@ -810,12 +850,13 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
node = this_cpu_ptr(&qnodes[0]);
idx = node->mcs.count++;
- tail = encode_tail(smp_processor_id(), idx);
+ tail = encode_tail(cpu_nr = smp_processor_id(), idx);
node += idx;
node->qhead = 0;
node->mcs.next = NULL;
unfair_init_vars(node);
+ pv_init_vars(&node->pv, cpu_nr);
/*
* We touched a (possibly) cold cacheline in the per-cpu queue node;
@@ -831,3 +872,47 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
this_cpu_dec(qnodes[0].mcs.count);
}
EXPORT_SYMBOL(queue_spin_lock_slowpath);
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+/**
+ * queue_spin_unlock_slowpath - kick up the CPU of the queue head
+ * @lock : Pointer to queue spinlock structure
+ *
+ * The lock is released after finding the queue head to avoid a race
+ * condition between the queue head and the lock holder.
+ */
+void queue_spin_unlock_slowpath(struct qspinlock *lock)
+{
+ struct qnode *node, *prev;
+
+ /*
+ * Get the queue tail node
+ */
+ node = decode_tail(atomic_read(&lock->val));
+
+ /*
+ * Locate the queue head node by following the prev pointer from
+ * tail to head.
+ * It is assumed that the PV guests won't have that many CPUs so
+ * that it won't take a long time to follow the pointers.
+ */
+ while (!ACCESS_ONCE(node->qhead)) {
+ prev = pv_get_prev(&node->pv);
+ if (prev)
+ node = prev;
+ else
+ /*
+ * Delay a bit to allow the prev pointer to be set up
+ */
+ arch_mutex_cpu_relax();
+ }
+ /*
+ * Found the queue head, now release the lock before waking it up.
+ * If unfair lock is enabled, this allows other ready tasks to get
+ * the lock before the halted CPU is woken up.
+ */
+ __queue_spin_unlock(lock);
+ pv_kick_node(&node->pv);
+}
+EXPORT_SYMBOL(queue_spin_unlock_slowpath);
+#endif /* CONFIG_PARAVIRT_SPINLOCKS */
--
1.7.1
^ permalink raw reply related [flat|nested] 163+ messages in thread
* [PATCH v10 18/19] pvqspinlock, x86: Enable PV qspinlock for KVM
@ 2014-05-07 15:01 ` Waiman Long
2014-05-07 19:07 ` Konrad Rzeszutek Wilk
2014-05-07 19:07 ` Konrad Rzeszutek Wilk
-1 siblings, 2 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Waiman Long
This patch adds the necessary KVM specific code to allow KVM to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
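The kick operation is a KVM_HC_KICK_CPU hypercall (kvm_kick_cpu() in
the diff below); the halt operation boils down to an interrupt-safe
halt that re-checks the CPU state first. A minimal sketch of the halt
side (the function name and exact checks here are illustrative):

	static void kvm_halt_cpu(u8 type, s8 *state, s8 sval)
	{
		unsigned long flags;

		if (in_nmi())
			return;	/* cannot safely halt in NMI context */

		/*
		 * Disable interrupts so that a kick cannot slip in
		 * between the final state check and the halt below.
		 */
		local_irq_save(flags);
		if (ACCESS_ONCE(*state) == sval)
			halt();	/* resumes when this vCPU is kicked */
		local_irq_restore(flags);
	}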
Two KVM guests of 20 CPU cores (2 nodes) were created for performance
testing in one of the following three configurations:
1) Only 1 VM is active
2) Both VMs are active and they share the same 20 physical CPUs
(200% overcommit)
3) Both VMs are active and they share 30 physical CPUs (10 dedicated
and 10 shared - 133% overcommit)
The tests run included the disk workload of the AIM7 benchmark on both
ext4 and xfs RAM disks at 3000 users on a 3.15-rc1 based kernel. The
"ebizzy -m" test was was also run and its performance data were
recorded. With two VMs running, the "idle=poll" kernel option was
added to simulate a busy guest. The entry "unfair + PV qspinlock"
below means that both the unfair lock and PV spinlock configuration
options were turned on.
AIM7 XFS Disk Test (no overcommit)
kernel JPM Real Time Sys Time Usr Time
----- --- --------- -------- --------
PV ticketlock 2489626 7.23 101.08 5.30
qspinlock 2531646 7.11 100.75 5.43
PV qspinlock 2500000 7.20 101.94 5.40
unfair qspinlock 2549575 7.06 99.81 5.35
unfair + PV qspinlock 2486188 7.24 101.55 5.51
AIM7 XFS Disk Test (133% overcommit)
kernel JPM Real Time Sys Time Usr Time
----- --- --------- -------- --------
PV ticketlock 1114551 16.15 220.17 10.75
qspinlock 1159047 15.53 216.60 10.24
PV qspinlock 1170351 15.38 216.16 11.03
unfair qspinlock 1188119 15.15 209.37 10.82
unfair + PV qspinlock 1178782 15.27 211.37 11.25
AIM7 XFS Disk Test (200% overcommit)
kernel JPM Real Time Sys Time Usr Time
----- --- --------- -------- --------
PV ticketlock 587467 30.64 444.95 11.92
qspinlock 593276 30.34 439.39 14.59
PV qspinlock 601403 29.93 426.04 14.49
unfair qspinlock 654070 27.52 400.82 10.86
unfair + PV qspinlock 614334 29.30 393.38 28.56
AIM7 EXT4 Disk Test (no overcommit)
kernel JPM Real Time Sys Time Usr Time
----- --- --------- -------- --------
PV ticketlock 2002225 9.07 105.62 5.43
qspinlock 2006689 8.97 105.65 5.26
PV qspinlock 2002225 8.99 103.19 5.19
unfair qspinlock 1988950 9.05 103.81 5.03
unfair + PV qspinlock 1993355 9.03 107.99 5.68
AIM7 EXT4 Disk Test (133% overcommit)
kernel JPM Real Time Sys Time Usr Time
----- --- --------- -------- --------
PV ticketlock 987383 18.23 221.63 8.89
qspinlock 1050788 17.13 206.87 8.35
PV qspinlock 1058823 17.00 205.22 9.18
unfair qspinlock 1161290 15.50 184.22 8.84
unfair + PV qspinlock 1122894 16.03 195.86 9.34
AIM7 EXT4 Disk Test (200% overcommit)
kernel JPM Real Time Sys Time Usr Time
----- --- --------- -------- --------
PV ticketlock 420757 42.78 565.96 5.84
qspinlock 427452 42.11 543.08 11.12
PV qspinlock 420659 42.79 548.30 10.56
unfair qspinlock 504909 35.65 466.71 5.38
unfair + PV qspinlock 500974 35.93 469.02 6.77
EBIZZY-M Test (no overcommit)
kernel Rec/s Real Time Sys Time Usr Time
----- ----- --------- -------- --------
PV ticketlock 1230 10.00 88.34 1.42
qspinlock 1212 10.00 68.25 1.47
PV qspinlock 1265 10.00 91.50 1.41
unfair qspinlock 1304 10.00 77.94 1.49
unfair + PV qspinlock 1445 10.00 75.45 1.68
EBIZZY-M Test (133% overcommit)
kernel Rec/s Real Time Sys Time Usr Time
----- ----- --------- -------- --------
PV ticketlock 467 10.00 88.16 0.73
qspinlock 463 10.00 89.44 0.78
PV qspinlock 441 10.00 95.10 0.74
unfair qspinlock 1233 10.00 35.76 1.76
unfair + PV qspinlock 1555 10.00 32.12 1.96
EBIZZY-M Test (200% overcommit)
kernel Rec/s Real Time Sys Time Usr Time
----- ----- --------- -------- --------
PV ticketlock 263 10.00 84.48 4.27
qspinlock 226 10.00 87.74 2.02
PV qspinlock 253 10.00 98.28 2.63
unfair qspinlock 338 10.00 61.15 1.68
unfair + PV qspinlock 346 10.00 60.47 3.31
Raghavendra KT had done some performance testing on this patch with
the following results:
Overall we are seeing good improvement for the pv-unfair version.
System: 32-cpu Sandy Bridge with HT on (4 nodes with 32 GB each)
Guest : 8GB with 16 vcpu/VM.
Average was taken over 8-10 data points.
Base = 3.15-rc2 with PRAVIRT_SPINLOCK = y
A = 3.15-rc2 + qspinlock v9 patch with QUEUE_SPINLOCK = y
PRAVIRT_SPINLOCK = y PARAVIRT_UNFAIR_LOCKS = y (unfair lock)
B = 3.15-rc2 + qspinlock v9 patch with QUEUE_SPINLOCK = y
PRAVIRT_SPINLOCK = n PARAVIRT_UNFAIR_LOCKS = n
(queue spinlock without paravirt)
C = 3.15-rc2 + qspinlock v9 patch with QUEUE_SPINLOCK = y
PRAVIRT_SPINLOCK = y PARAVIRT_UNFAIR_LOCKS = n
(queue spinlock with paravirt)
Ebizzy %improvements
====================
overcommit A B C
0.5x 4.4265 2.0611 1.5824
1.0x 0.9015 -7.7828 4.5443
1.5x 46.1162 -2.9845 -3.5046
2.0x 99.8150 -2.7116 4.7461
Dbench %improvements
====================
overcommit A B C
0.5x 3.2617 3.5436 2.5676
1.0x 0.6302 2.2342 5.2201
1.5x 5.0027 4.8275 3.8375
2.0x 23.8242 4.5782 12.6067
Absolute values of base results: (overcommit, value, stdev)
Ebizzy ( records / sec with 120 sec run)
0.5x 20941.8750 (2%)
1.0x 17623.8750 (5%)
1.5x 5874.7778 (15%)
2.0x 3581.8750 (7%)
Dbench (throughput in MB/sec)
0.5x 10009.6610 (5%)
1.0x 6583.0538 (1%)
1.5x 3991.9622 (4%)
2.0x 2527.0613 (2.5%)
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Tested-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
arch/x86/kernel/kvm.c | 135 +++++++++++++++++++++++++++++++++++++++++++++++++
kernel/Kconfig.locks | 2 +-
2 files changed, 136 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 7ab8ab3..eef427b 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -567,6 +567,7 @@ static void kvm_kick_cpu(int cpu)
kvm_hypercall2(KVM_HC_KICK_CPU, flags, apicid);
}
+#ifndef CONFIG_QUEUE_SPINLOCK
enum kvm_contention_stat {
TAKEN_SLOW,
TAKEN_SLOW_PICKUP,
@@ -794,6 +795,134 @@ static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
}
}
}
+#else /* !CONFIG_QUEUE_SPINLOCK */
+
+#ifdef CONFIG_KVM_DEBUG_FS
+static struct dentry *d_spin_debug;
+static struct dentry *d_kvm_debug;
+static u32 kick_nohlt_stats; /* Kick but not halt count */
+static u32 halt_qhead_stats; /* Queue head halting count */
+static u32 halt_qnode_stats; /* Queue node halting count */
+static u32 halt_abort_stats; /* Halting abort count */
+static u32 wake_kick_stats; /* Wakeup by kicking count */
+static u32 wake_spur_stats; /* Spurious wakeup count */
+static u64 time_blocked; /* Total blocking time */
+
+static int __init kvm_spinlock_debugfs(void)
+{
+ d_kvm_debug = debugfs_create_dir("kvm-guest", NULL);
+ if (!d_kvm_debug) {
+ printk(KERN_WARNING
+ "Could not create 'kvm' debugfs directory\n");
+ return -ENOMEM;
+ }
+ d_spin_debug = debugfs_create_dir("spinlocks", d_kvm_debug);
+
+ debugfs_create_u32("kick_nohlt_stats",
+ 0644, d_spin_debug, &kick_nohlt_stats);
+ debugfs_create_u32("halt_qhead_stats",
+ 0644, d_spin_debug, &halt_qhead_stats);
+ debugfs_create_u32("halt_qnode_stats",
+ 0644, d_spin_debug, &halt_qnode_stats);
+ debugfs_create_u32("halt_abort_stats",
+ 0644, d_spin_debug, &halt_abort_stats);
+ debugfs_create_u32("wake_kick_stats",
+ 0644, d_spin_debug, &wake_kick_stats);
+ debugfs_create_u32("wake_spur_stats",
+ 0644, d_spin_debug, &wake_spur_stats);
+ debugfs_create_u64("time_blocked",
+ 0644, d_spin_debug, &time_blocked);
+ return 0;
+}
+
+static inline void kvm_halt_stats(enum pv_lock_stats type)
+{
+ if (type == PV_HALT_QHEAD)
+ add_smp(&halt_qhead_stats, 1);
+ else if (type == PV_HALT_QNODE)
+ add_smp(&halt_qnode_stats, 1);
+ else /* type == PV_HALT_ABORT */
+ add_smp(&halt_abort_stats, 1);
+}
+
+static inline void kvm_lock_stats(enum pv_lock_stats type)
+{
+ if (type == PV_WAKE_KICKED)
+ add_smp(&wake_kick_stats, 1);
+ else if (type == PV_WAKE_SPURIOUS)
+ add_smp(&wake_spur_stats, 1);
+ else /* type == PV_KICK_NOHALT */
+ add_smp(&kick_nohlt_stats, 1);
+}
+
+static inline u64 spin_time_start(void)
+{
+ return sched_clock();
+}
+
+static inline void spin_time_accum_blocked(u64 start)
+{
+ u64 delta;
+
+ delta = sched_clock() - start;
+ add_smp(&time_blocked, delta);
+}
+
+fs_initcall(kvm_spinlock_debugfs);
+
+#else /* CONFIG_KVM_DEBUG_FS */
+static inline void kvm_halt_stats(enum pv_lock_stats type)
+{
+}
+
+static inline void kvm_lock_stats(enum pv_lock_stats type)
+{
+}
+
+static inline u64 spin_time_start(void)
+{
+ return 0;
+}
+
+static inline void spin_time_accum_blocked(u64 start)
+{
+}
+#endif /* CONFIG_KVM_DEBUG_FS */
+
+/*
+ * Halt the current CPU & release it back to the host
+ */
+static void kvm_halt_cpu(enum pv_lock_stats type, s8 *state, s8 sval)
+{
+ unsigned long flags;
+ u64 start;
+
+ if (in_nmi())
+ return;
+
+ /*
+ * Make sure an interrupt handler can't upset things in a
+ * partially setup state.
+ */
+ local_irq_save(flags);
+ /*
+ * Don't halt if the CPU state has been changed.
+ */
+ if (ACCESS_ONCE(*state) != sval) {
+ kvm_halt_stats(PV_HALT_ABORT);
+ goto out;
+ }
+ start = spin_time_start();
+ kvm_halt_stats(type);
+ if (arch_irqs_disabled_flags(flags))
+ halt();
+ else
+ safe_halt();
+ spin_time_accum_blocked(start);
+out:
+ local_irq_restore(flags);
+}
+#endif /* !CONFIG_QUEUE_SPINLOCK */
/*
* Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
@@ -806,8 +935,14 @@ void __init kvm_spinlock_init(void)
if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
return;
+#ifdef CONFIG_QUEUE_SPINLOCK
+ pv_lock_ops.kick_cpu = kvm_kick_cpu;
+ pv_lock_ops.halt_cpu = kvm_halt_cpu;
+ pv_lock_ops.lockstat = kvm_lock_stats;
+#else
pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
pv_lock_ops.unlock_kick = kvm_unlock_kick;
+#endif
}
static __init int kvm_spinlock_init_jump(void)
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index f185584..a70fdeb 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -229,4 +229,4 @@ config ARCH_USE_QUEUE_SPINLOCK
config QUEUE_SPINLOCK
def_bool y if ARCH_USE_QUEUE_SPINLOCK
- depends on SMP && !PARAVIRT_SPINLOCKS
+ depends on SMP && (!PARAVIRT_SPINLOCKS || !XEN)
--
1.7.1
^ permalink raw reply related [flat|nested] 163+ messages in thread
* [PATCH v10 19/19] pvqspinlock, x86: Enable PV qspinlock for XEN
2014-05-07 15:01 ` Waiman Long
@ 2014-05-07 15:01 ` Waiman Long
-1 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-07 15:01 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra
Cc: linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod, Waiman Long
This patch adds the necessary XEN-specific code to support the CPU
halting and kicking operations needed by the queue spinlock PV code.
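The Xen flavor of the same handshake blocks on the per-CPU kicker
event channel instead of executing hlt. A rough sketch, with the setup
details omitted and the helper names assumed (the real code is
xen_halt_cpu()/xen_kick_cpu() in the diff below):

	/* Illustrative sketch only */
	static void xen_wait_sketch(s8 *state, s8 sval)
	{
		int irq = __this_cpu_read(lock_kicker_irq);

		xen_clear_irq_pending(irq);	/* consume stale kicks */
		if (ACCESS_ONCE(*state) != sval)
			return;			/* kicked before blocking */
		xen_poll_irq(irq);		/* block until irq pends */
	}

	static void xen_kick_sketch(int cpu)
	{
		/* marks the event channel pending, waking xen_poll_irq() */
		xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
	}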
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
arch/x86/xen/spinlock.c | 147 +++++++++++++++++++++++++++++++++++++++++++++--
kernel/Kconfig.locks | 2 +-
2 files changed, 143 insertions(+), 6 deletions(-)
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index d1b6a32..2a259bb 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -17,6 +17,12 @@
#include "xen-ops.h"
#include "debugfs.h"
+static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+static DEFINE_PER_CPU(char *, irq_name);
+static bool xen_pvspin = true;
+
+#ifndef CONFIG_QUEUE_SPINLOCK
+
enum xen_contention_stat {
TAKEN_SLOW,
TAKEN_SLOW_PICKUP,
@@ -100,12 +106,9 @@ struct xen_lock_waiting {
__ticket_t want;
};
-static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
-static DEFINE_PER_CPU(char *, irq_name);
static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
static cpumask_t waiting_cpus;
-static bool xen_pvspin = true;
__visible void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
{
int irq = __this_cpu_read(lock_kicker_irq);
@@ -213,6 +216,118 @@ static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
}
}
+#else /* CONFIG_QUEUE_SPINLOCK */
+
+#ifdef CONFIG_XEN_DEBUG_FS
+static u32 kick_nohlt_stats; /* Kick but not halt count */
+static u32 halt_qhead_stats; /* Queue head halting count */
+static u32 halt_qnode_stats; /* Queue node halting count */
+static u32 halt_abort_stats; /* Halting abort count */
+static u32 wake_kick_stats; /* Wakeup by kicking count */
+static u32 wake_spur_stats; /* Spurious wakeup count */
+static u64 time_blocked; /* Total blocking time */
+
+static inline void xen_halt_stats(enum pv_lock_stats type)
+{
+ if (type == PV_HALT_QHEAD)
+ add_smp(&halt_qhead_stats, 1);
+ else if (type == PV_HALT_QNODE)
+ add_smp(&halt_qnode_stats, 1);
+ else /* type == PV_HALT_ABORT */
+ add_smp(&halt_abort_stats, 1);
+}
+
+static inline void xen_lock_stats(enum pv_lock_stats type)
+{
+ if (type == PV_WAKE_KICKED)
+ add_smp(&wake_kick_stats, 1);
+ else if (type == PV_WAKE_SPURIOUS)
+ add_smp(&wake_spur_stats, 1);
+ else /* type == PV_KICK_NOHALT */
+ add_smp(&kick_nohlt_stats, 1);
+}
+
+static inline u64 spin_time_start(void)
+{
+ return sched_clock();
+}
+
+static inline void spin_time_accum_blocked(u64 start)
+{
+ u64 delta;
+
+ delta = sched_clock() - start;
+ add_smp(&time_blocked, delta);
+}
+#else /* CONFIG_XEN_DEBUG_FS */
+static inline void xen_halt_stats(enum pv_lock_stats type)
+{
+}
+
+static inline void xen_lock_stats(enum pv_lock_stats type)
+{
+}
+
+static inline u64 spin_time_start(void)
+{
+ return 0;
+}
+
+static inline void spin_time_accum_blocked(u64 start)
+{
+}
+#endif /* CONFIG_XEN_DEBUG_FS */
+
+static void xen_kick_cpu(int cpu)
+{
+ xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
+}
+
+/*
+ * Halt the current CPU & release it back to the host
+ */
+static void xen_halt_cpu(enum pv_lock_stats type, s8 *state, s8 sval)
+{
+ int irq = __this_cpu_read(lock_kicker_irq);
+ unsigned long flags;
+ u64 start;
+
+ /* If kicker interrupts not initialized yet, just spin */
+ if (irq == -1)
+ return;
+
+ /*
+ * Make sure an interrupt handler can't upset things in a
+ * partially setup state.
+ */
+ local_irq_save(flags);
+ start = spin_time_start();
+
+ xen_halt_stats(type);
+ /* clear pending */
+ xen_clear_irq_pending(irq);
+
+ /* Allow interrupts while blocked */
+ local_irq_restore(flags);
+ /*
+ * Don't halt if the CPU state has been changed.
+ */
+ if (ACCESS_ONCE(*state) != sval) {
+ xen_halt_stats(PV_HALT_ABORT);
+ return;
+ }
+ /*
+ * If an interrupt happens here, it will leave the wakeup irq
+ * pending, which will cause xen_poll_irq() to return
+ * immediately.
+ */
+
+ /* Block until irq becomes pending (or perhaps a spurious wakeup) */
+ xen_poll_irq(irq);
+ spin_time_accum_blocked(start);
+}
+#endif /* CONFIG_QUEUE_SPINLOCK */
+
static irqreturn_t dummy_handler(int irq, void *dev_id)
{
BUG();
@@ -258,7 +373,6 @@ void xen_uninit_lock_cpu(int cpu)
per_cpu(irq_name, cpu) = NULL;
}
-
/*
* Our init of PV spinlocks is split in two init functions due to us
* using paravirt patching and jump labels patching and having to do
@@ -275,8 +389,15 @@ void __init xen_init_spinlocks(void)
return;
}
printk(KERN_DEBUG "xen: PV spinlocks enabled\n");
+
+#ifdef CONFIG_QUEUE_SPINLOCK
+ pv_lock_ops.kick_cpu = xen_kick_cpu;
+ pv_lock_ops.halt_cpu = xen_halt_cpu;
+ pv_lock_ops.lockstat = xen_lock_stats;
+#else
pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
pv_lock_ops.unlock_kick = xen_unlock_kick;
+#endif
}
/*
@@ -321,6 +442,7 @@ static int __init xen_spinlock_debugfs(void)
d_spin_debug = debugfs_create_dir("spinlocks", d_xen);
+#ifndef CONFIG_QUEUE_SPINLOCK
debugfs_create_u8("zero_stats", 0644, d_spin_debug, &zero_stats);
debugfs_create_u32("taken_slow", 0444, d_spin_debug,
@@ -340,7 +462,22 @@ static int __init xen_spinlock_debugfs(void)
debugfs_create_u32_array("histo_blocked", 0444, d_spin_debug,
spinlock_stats.histo_spin_blocked, HISTO_BUCKETS + 1);
-
+#else /* CONFIG_QUEUE_SPINLOCK */
+ debugfs_create_u32("kick_nohlt_stats",
+ 0644, d_spin_debug, &kick_nohlt_stats);
+ debugfs_create_u32("halt_qhead_stats",
+ 0644, d_spin_debug, &halt_qhead_stats);
+ debugfs_create_u32("halt_qnode_stats",
+ 0644, d_spin_debug, &halt_qnode_stats);
+ debugfs_create_u32("halt_abort_stats",
+ 0644, d_spin_debug, &halt_abort_stats);
+ debugfs_create_u32("wake_kick_stats",
+ 0644, d_spin_debug, &wake_kick_stats);
+ debugfs_create_u32("wake_spur_stats",
+ 0644, d_spin_debug, &wake_spur_stats);
+ debugfs_create_u64("time_blocked",
+ 0644, d_spin_debug, &time_blocked);
+#endif /* CONFIG_QUEUE_SPINLOCK */
return 0;
}
fs_initcall(xen_spinlock_debugfs);
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index a70fdeb..451e392 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -229,4 +229,4 @@ config ARCH_USE_QUEUE_SPINLOCK
config QUEUE_SPINLOCK
def_bool y if ARCH_USE_QUEUE_SPINLOCK
- depends on SMP && (!PARAVIRT_SPINLOCKS || !XEN)
+ depends on SMP
--
1.7.1
^ permalink raw reply related [flat|nested] 163+ messages in thread
* Re: [PATCH v10 18/19] pvqspinlock, x86: Enable PV qspinlock for KVM
2014-05-07 15:01 ` Waiman Long
@ 2014-05-07 19:07 ` Konrad Rzeszutek Wilk
2014-05-07 19:07 ` Konrad Rzeszutek Wilk
1 sibling, 0 replies; 163+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-05-07 19:07 UTC (permalink / raw)
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra,
linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Boris Ostrovsky, Paul E. McKenney, Rik van Riel,
Linus Torvalds, Raghavendra K T, David Vrabel, Oleg Nesterov,
Gleb Natapov, Scott J Norton, Chegu Vinod
> Raghavendra KT had done some performance testing on this patch with
> the following results:
>
> Overall we are seeing good improvement for the pv-unfair version.
>
> System: 32-cpu Sandy Bridge with HT on (4 nodes with 32 GB each)
> Guest : 8GB with 16 vcpu/VM.
> Average was taken over 8-10 data points.
>
> Base = 3.15-rc2 with PRAVIRT_SPINLOCK = y
>
> A = 3.15-rc2 + qspinlock v9 patch with QUEUE_SPINLOCK = y
> PRAVIRT_SPINLOCK = y PARAVIRT_UNFAIR_LOCKS = y (unfair lock)
>
> B = 3.15-rc2 + qspinlock v9 patch with QUEUE_SPINLOCK = y
> PRAVIRT_SPINLOCK = n PARAVIRT_UNFAIR_LOCKS = n
> (queue spinlock without paravirt)
>
> C = 3.15-rc2 + qspinlock v9 patch with QUEUE_SPINLOCK = y
> PRAVIRT_SPINLOCK = y PARAVIRT_UNFAIR_LOCKS = n
> (queue spinlock with paravirt)
Could you do s/PRAVIRT/PARAVIRT/ please?
>
> Ebizzy %improvements
> ====================
> overcommit A B C
> 0.5x 4.4265 2.0611 1.5824
> 1.0x 0.9015 -7.7828 4.5443
> 1.5x 46.1162 -2.9845 -3.5046
> 2.0x 99.8150 -2.7116 4.7461
Considering B sucks
>
> Dbench %improvements
> ====================
> overcommit A B C
> 0.5x 3.2617 3.5436 2.5676
> 1.0x 0.6302 2.2342 5.2201
> 1.5x 5.0027 4.8275 3.8375
> 2.0x 23.8242 4.5782 12.6067
>
> Absolute values of base results: (overcommit, value, stdev)
> Ebizzy ( records / sec with 120 sec run)
> 0.5x 20941.8750 (2%)
> 1.0x 17623.8750 (5%)
> 1.5x 5874.7778 (15%)
> 2.0x 3581.8750 (7%)
>
> Dbench (throughput in MB/sec)
> 0.5x 10009.6610 (5%)
> 1.0x 6583.0538 (1%)
> 1.5x 3991.9622 (4%)
> 2.0x 2527.0613 (2.5%)
>
> Signed-off-by: Waiman Long <Waiman.Long@hp.com>
> Tested-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> ---
> arch/x86/kernel/kvm.c | 135 +++++++++++++++++++++++++++++++++++++++++++++++++
> kernel/Kconfig.locks | 2 +-
> 2 files changed, 136 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 7ab8ab3..eef427b 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -567,6 +567,7 @@ static void kvm_kick_cpu(int cpu)
> kvm_hypercall2(KVM_HC_KICK_CPU, flags, apicid);
> }
>
> +#ifndef CONFIG_QUEUE_SPINLOCK
> enum kvm_contention_stat {
> TAKEN_SLOW,
> TAKEN_SLOW_PICKUP,
> @@ -794,6 +795,134 @@ static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
> }
> }
> }
> +#else /* !CONFIG_QUEUE_SPINLOCK */
> +
> +#ifdef CONFIG_KVM_DEBUG_FS
> +static struct dentry *d_spin_debug;
> +static struct dentry *d_kvm_debug;
> +static u32 kick_nohlt_stats; /* Kick but not halt count */
> +static u32 halt_qhead_stats; /* Queue head halting count */
> +static u32 halt_qnode_stats; /* Queue node halting count */
> +static u32 halt_abort_stats; /* Halting abort count */
> +static u32 wake_kick_stats; /* Wakeup by kicking count */
> +static u32 wake_spur_stats; /* Spurious wakeup count */
> +static u64 time_blocked; /* Total blocking time */
> +
> +static int __init kvm_spinlock_debugfs(void)
> +{
> + d_kvm_debug = debugfs_create_dir("kvm-guest", NULL);
> + if (!d_kvm_debug) {
> + printk(KERN_WARNING
> + "Could not create 'kvm' debugfs directory\n");
> + return -ENOMEM;
> + }
> + d_spin_debug = debugfs_create_dir("spinlocks", d_kvm_debug);
> +
> + debugfs_create_u32("kick_nohlt_stats",
> + 0644, d_spin_debug, &kick_nohlt_stats);
> + debugfs_create_u32("halt_qhead_stats",
> + 0644, d_spin_debug, &halt_qhead_stats);
> + debugfs_create_u32("halt_qnode_stats",
> + 0644, d_spin_debug, &halt_qnode_stats);
> + debugfs_create_u32("halt_abort_stats",
> + 0644, d_spin_debug, &halt_abort_stats);
> + debugfs_create_u32("wake_kick_stats",
> + 0644, d_spin_debug, &wake_kick_stats);
> + debugfs_create_u32("wake_spur_stats",
> + 0644, d_spin_debug, &wake_spur_stats);
> + debugfs_create_u64("time_blocked",
> + 0644, d_spin_debug, &time_blocked);
> + return 0;
> +}
> +
> +static inline void kvm_halt_stats(enum pv_lock_stats type)
> +{
> + if (type == PV_HALT_QHEAD)
> + add_smp(&halt_qhead_stats, 1);
> + else if (type == PV_HALT_QNODE)
> + add_smp(&halt_qnode_stats, 1);
> + else /* type == PV_HALT_ABORT */
> + add_smp(&halt_abort_stats, 1);
> +}
> +
> +static inline void kvm_lock_stats(enum pv_lock_stats type)
> +{
> + if (type == PV_WAKE_KICKED)
> + add_smp(&wake_kick_stats, 1);
> + else if (type == PV_WAKE_SPURIOUS)
> + add_smp(&wake_spur_stats, 1);
> + else /* type == PV_KICK_NOHALT */
> + add_smp(&kick_nohlt_stats, 1);
> +}
> +
> +static inline u64 spin_time_start(void)
> +{
> + return sched_clock();
> +}
> +
> +static inline void spin_time_accum_blocked(u64 start)
> +{
> + u64 delta;
> +
> + delta = sched_clock() - start;
> + add_smp(&time_blocked, delta);
> +}
> +
> +fs_initcall(kvm_spinlock_debugfs);
> +
> +#else /* CONFIG_KVM_DEBUG_FS */
> +static inline void kvm_halt_stats(enum pv_lock_stats type)
> +{
> +}
> +
> +static inline void kvm_lock_stats(enum pv_lock_stats type)
> +{
> +}
> +
> +static inline u64 spin_time_start(void)
> +{
> + return 0;
> +}
> +
> +static inline void spin_time_accum_blocked(u64 start)
> +{
> +}
> +#endif /* CONFIG_KVM_DEBUG_FS */
> +
> +/*
> + * Halt the current CPU & release it back to the host
> + */
> +static void kvm_halt_cpu(enum pv_lock_stats type, s8 *state, s8 sval)
> +{
> + unsigned long flags;
> + u64 start;
> +
> + if (in_nmi())
> + return;
> +
> + /*
> + * Make sure an interrupt handler can't upset things in a
> + * partially setup state.
> + */
> + local_irq_save(flags);
> + /*
> + * Don't halt if the CPU state has been changed.
> + */
> + if (ACCESS_ONCE(*state) != sval) {
> + kvm_halt_stats(PV_HALT_ABORT);
> + goto out;
> + }
> + start = spin_time_start();
> + kvm_halt_stats(type);
> + if (arch_irqs_disabled_flags(flags))
> + halt();
> + else
> + safe_halt();
> + spin_time_accum_blocked(start);
> +out:
> + local_irq_restore(flags);
> +}
> +#endif /* !CONFIG_QUEUE_SPINLOCK */
>
> /*
> * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
> @@ -806,8 +935,14 @@ void __init kvm_spinlock_init(void)
> if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
> return;
>
> +#ifdef CONFIG_QUEUE_SPINLOCK
> + pv_lock_ops.kick_cpu = kvm_kick_cpu;
> + pv_lock_ops.halt_cpu = kvm_halt_cpu;
> + pv_lock_ops.lockstat = kvm_lock_stats;
> +#else
> pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
> pv_lock_ops.unlock_kick = kvm_unlock_kick;
> +#endif
> }
>
> static __init int kvm_spinlock_init_jump(void)
> diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
> index f185584..a70fdeb 100644
> --- a/kernel/Kconfig.locks
> +++ b/kernel/Kconfig.locks
> @@ -229,4 +229,4 @@ config ARCH_USE_QUEUE_SPINLOCK
>
> config QUEUE_SPINLOCK
> def_bool y if ARCH_USE_QUEUE_SPINLOCK
> - depends on SMP && !PARAVIRT_SPINLOCKS
> + depends on SMP && (!PARAVIRT_SPINLOCKS || !XEN)
> --
> 1.7.1
>
* Re: [PATCH v10 00/19] qspinlock: a 4-byte queue spinlock with PV support
2014-05-07 15:01 ` Waiman Long
@ 2014-05-07 19:07 ` Konrad Rzeszutek Wilk
0 siblings, 0 replies; 163+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-05-07 19:07 UTC (permalink / raw)
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra,
linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Boris Ostrovsky, Paul E. McKenney, Rik van Riel,
Linus Torvalds, Raghavendra K T, David Vrabel, Oleg Nesterov,
Gleb Natapov, Scott J Norton, Chegu Vinod
On Wed, May 07, 2014 at 11:01:28AM -0400, Waiman Long wrote:
> v9->v10:
> - Make some minor changes to qspinlock.c to accommodate review feedback.
> - Change author to PeterZ for 2 of the patches.
> - Include Raghavendra KT's test results in patch 18.
Any chance you can post these on a git tree? Thanks.
* Re: [PATCH v10 18/19] pvqspinlock, x86: Enable PV qspinlock PV for KVM
2014-05-07 19:07 ` Konrad Rzeszutek Wilk
@ 2014-05-08 17:54 ` Waiman Long
0 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-08 17:54 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra,
linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Boris Ostrovsky, Paul E. McKenney, Rik van Riel,
Linus Torvalds, Raghavendra K T, David Vrabel, Oleg Nesterov,
Gleb Natapov, Scott J Norton, Chegu Vinod
On 05/07/2014 03:07 PM, Konrad Rzeszutek Wilk wrote:
>> Raghavendra KT had done some performance testing on this patch with
>> the following results:
>>
>> Overall we are seeing good improvement for pv-unfair version.
>>
>> System: 32 cpu sandybridge with HT on (4 node with 32 GB each)
>> Guest : 8GB with 16 vcpu/VM.
>> Average was taken over 8-10 data points.
>>
>> Base = 3.15-rc2 with PRAVIRT_SPINLOCK = y
>>
>> A = 3.15-rc2 + qspinlock v9 patch with QUEUE_SPINLOCK = y
>> PRAVIRT_SPINLOCK = y PARAVIRT_UNFAIR_LOCKS = y (unfair lock)
>>
>> B = 3.15-rc2 + qspinlock v9 patch with QUEUE_SPINLOCK = y
>> PRAVIRT_SPINLOCK = n PARAVIRT_UNFAIR_LOCKS = n
>> (queue spinlock without paravirt)
>>
>> C = 3.15-rc2 + qspinlock v9 patch with QUEUE_SPINLOCK = y
>> PRAVIRT_SPINLOCK = y PARAVIRT_UNFAIR_LOCKS = n
>> (queue spinlock with paravirt)
> Could you do s/PRAVIRT/PARAVIRT/ please?
>
Sorry for the typo; I didn't check the text carefully enough when I
cut and pasted it from Raghavendra's email.
>> Ebizzy %improvements
>> ====================
>> overcommit A B C
>> 0.5x 4.4265 2.0611 1.5824
>> 1.0x 0.9015 -7.7828 4.5443
>> 1.5x 46.1162 -2.9845 -3.5046
>> 2.0x 99.8150 -2.7116 4.7461
> Considering B sucks
Yes, I don't expect the plain qspinlock code to perform well in a
guest without either unfair or pvspinlock support.
-Longman
* Re: [PATCH v10 00/19] qspinlock: a 4-byte queue spinlock with PV support
2014-05-07 19:07 ` Konrad Rzeszutek Wilk
@ 2014-05-08 17:54 ` Waiman Long
0 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-08 17:54 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra,
linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Boris Ostrovsky, Paul E. McKenney, Rik van Riel,
Linus Torvalds, Raghavendra K T, David Vrabel, Oleg Nesterov,
Gleb Natapov, Scott J Norton, Chegu Vinod
On 05/07/2014 03:07 PM, Konrad Rzeszutek Wilk wrote:
> On Wed, May 07, 2014 at 11:01:28AM -0400, Waiman Long wrote:
>> v9->v10:
>> - Make some minor changes to qspinlock.c to accommodate review feedback.
>> - Change author to PeterZ for 2 of the patches.
>> - Include Raghavendra KT's test results in patch 18.
> Any chance you can post these on a git tree? Thanks.
I have pushed the bits to https://github.com/longman88/kernel-qspinlock-v10.
-Longman
* Re: [PATCH v10 03/19] qspinlock: Add pending bit
2014-05-07 15:01 ` Waiman Long
@ 2014-05-08 18:57 ` Peter Zijlstra
2014-05-10 0:49 ` Waiman Long
0 siblings, 2 replies; 163+ messages in thread
From: Peter Zijlstra @ 2014-05-08 18:57 UTC (permalink / raw)
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-arch, x86,
linux-kernel, virtualization, xen-devel, kvm, Paolo Bonzini,
Konrad Rzeszutek Wilk, Boris Ostrovsky, Paul E. McKenney,
Rik van Riel, Linus Torvalds, Raghavendra K T, David Vrabel,
Oleg Nesterov, Gleb Natapov, Scott J Norton, Chegu Vinod
On Wed, May 07, 2014 at 11:01:31AM -0400, Waiman Long wrote:
> +/**
> + * trylock_pending - try to acquire queue spinlock using the pending bit
> + * @lock : Pointer to queue spinlock structure
> + * @pval : Pointer to value of the queue spinlock 32-bit word
> + * Return: 1 if lock acquired, 0 otherwise
> + */
> +static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
I still don't like that you put it in a separate function, but either
way you don't need the pointer argument. Note how, after
trylock_pending() fails, you touch the second (node) cacheline.
> @@ -110,6 +184,9 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>
> BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
>
> + if (trylock_pending(lock, &val))
> + return; /* Lock acquired */
> +
> node = this_cpu_ptr(&mcs_nodes[0]);
> idx = node->count++;
> tail = encode_tail(smp_processor_id(), idx);
> @@ -119,15 +196,18 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> node->next = NULL;
>
> /*
> + * we already touched the queueing cacheline; don't bother with pending
> + * stuff.
> + *
> * trylock || xchg(lock, node)
> *
> - * 0,0 -> 0,1 ; trylock
> - * p,x -> n,x ; prev = xchg(lock, node)
> + * 0,0,0 -> 0,0,1 ; trylock
> + * p,y,x -> n,y,x ; prev = xchg(lock, node)
> */
And any value of @val we might have had here is completely outdated.
The only thing that makes sense is to set:
val = 0;
which makes us start with a trylock; alternatively we can re-read val.
> for (;;) {
> new = _Q_LOCKED_VAL;
> if (val)
> - new = tail | (val & _Q_LOCKED_MASK);
> + new = tail | (val & _Q_LOCKED_PENDING_MASK);
>
> old = atomic_cmpxchg(&lock->val, val, new);
> if (old == val)
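A minimal sketch of those two suggestions combined (hypothetical; the
names come from the quoted patch, and passing @val by value is only one
way of dropping the pointer):

	if (trylock_pending(lock, val))	/* pass @val by value, not by pointer */
		return;			/* Lock acquired */

	/*
	 * Any @val observed before queueing is stale by the time the
	 * node is set up, so start the queueing loop from a clean
	 * trylock attempt.
	 */
	val = 0;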
* Re: [PATCH v10 06/19] qspinlock: prolong the stay in the pending bit path
2014-05-07 15:01 ` Waiman Long
@ 2014-05-08 18:58 ` Peter Zijlstra
0 siblings, 0 replies; 163+ messages in thread
From: Peter Zijlstra @ 2014-05-08 18:58 UTC (permalink / raw)
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-arch, x86,
linux-kernel, virtualization, xen-devel, kvm, Paolo Bonzini,
Konrad Rzeszutek Wilk, Boris Ostrovsky, Paul E. McKenney,
Rik van Riel, Linus Torvalds, Raghavendra K T, David Vrabel,
Oleg Nesterov, Gleb Natapov, Scott J Norton, Chegu Vinod
On Wed, May 07, 2014 at 11:01:34AM -0400, Waiman Long wrote:
> @@ -221,11 +222,37 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
> */
> for (;;) {
> /*
> - * If we observe any contention; queue.
> + * If we observe that the queue is not empty,
> + * return and be queued.
> */
> - if (val & ~_Q_LOCKED_MASK)
> + if (val & _Q_TAIL_MASK)
> return 0;
>
> + if (val == (_Q_LOCKED_VAL|_Q_PENDING_VAL)) {
> + /*
> + * If both the lock and pending bits are set, we wait
> + * a while to see if either bit will be cleared.
> + * If there is no change, we return and get queued.
> + */
> + if (!retry)
> + return 0;
> + retry--;
> + cpu_relax();
> + cpu_relax();
> + *pval = val = atomic_read(&lock->val);
> + continue;
> + } else if (val == _Q_PENDING_VAL) {
> + /*
> + * Pending bit is set, but not the lock bit.
> + * Assuming that the pending bit holder is going to
> + * set the lock bit and clear the pending bit soon,
> + * it is better to wait than to exit at this point.
> + */
> + cpu_relax();
> + *pval = val = atomic_read(&lock->val);
> + continue;
> + }
Didn't I give a much saner alternative to this mess last time?
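For reference, a rough sketch of the simpler shape such an alternative
can take (illustrative only; reconstructed from the constants in the
quoted hunk, not necessarily the exact code proposed earlier):

	/*
	 * Bounded wait for a pending->locked hand-over to finish;
	 * under sustained contention we still fall through and queue.
	 */
	while ((val = atomic_read(&lock->val)) == _Q_PENDING_VAL && retry--)
		cpu_relax();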
* Re: [PATCH v10 07/19] qspinlock: Use a simple write to grab the lock, if applicable
2014-05-07 15:01 ` Waiman Long
@ 2014-05-08 19:00 ` Peter Zijlstra
0 siblings, 0 replies; 163+ messages in thread
From: Peter Zijlstra @ 2014-05-08 19:00 UTC (permalink / raw)
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-arch, x86,
linux-kernel, virtualization, xen-devel, kvm, Paolo Bonzini,
Konrad Rzeszutek Wilk, Boris Ostrovsky, Paul E. McKenney,
Rik van Riel, Linus Torvalds, Raghavendra K T, David Vrabel,
Oleg Nesterov, Gleb Natapov, Scott J Norton, Chegu Vinod
On Wed, May 07, 2014 at 11:01:35AM -0400, Waiman Long wrote:
> @@ -94,23 +94,29 @@ static inline struct mcs_spinlock *decode_tail(u32 tail)
> * can allow better optimization of the lock acquisition for the pending
> * bit holder.
> */
> -#if _Q_PENDING_BITS == 8
> -
> struct __qspinlock {
> union {
> atomic_t val;
> - struct {
> #ifdef __LITTLE_ENDIAN
> + u8 locked;
> + struct {
> u16 locked_pending;
> u16 tail;
> + };
> #else
> + struct {
> u16 tail;
> u16 locked_pending;
> -#endif
> };
> + struct {
> + u8 reserved[3];
> + u8 locked;
> + };
> +#endif
> };
> };
>
> +#if _Q_PENDING_BITS == 8
That doesn't make sense; struct __qspinlock only makes sense when
_Q_PENDING_BITS == 8.
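In other words, the whole definition would stay under the guard, e.g.
(a sketch; the layout is copied from the quoted hunk):

#if _Q_PENDING_BITS == 8
struct __qspinlock {
	union {
		atomic_t val;
#ifdef __LITTLE_ENDIAN
		u8 locked;
		struct {
			u16 locked_pending;
			u16 tail;
		};
#else
		struct {
			u16 tail;
			u16 locked_pending;
		};
		struct {
			u8 reserved[3];
			u8 locked;
		};
#endif
	};
};
#endif /* _Q_PENDING_BITS == 8 */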
* Re: [PATCH v10 07/19] qspinlock: Use a simple write to grab the lock, if applicable
2014-05-07 15:01 ` Waiman Long
@ 2014-05-08 19:02 ` Peter Zijlstra
0 siblings, 0 replies; 163+ messages in thread
From: Peter Zijlstra @ 2014-05-08 19:02 UTC (permalink / raw)
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-arch, x86,
linux-kernel, virtualization, xen-devel, kvm, Paolo Bonzini,
Konrad Rzeszutek Wilk, Boris Ostrovsky, Paul E. McKenney,
Rik van Riel, Linus Torvalds, Raghavendra K T, David Vrabel,
Oleg Nesterov, Gleb Natapov, Scott J Norton, Chegu Vinod
On Wed, May 07, 2014 at 11:01:35AM -0400, Waiman Long wrote:
> /**
> + * get_qlock - Set the lock bit and own the lock
> + * @lock: Pointer to queue spinlock structure
> + *
> + * This routine should only be called when the caller is the only one
> + * entitled to acquire the lock.
> + */
> +static __always_inline void get_qlock(struct qspinlock *lock)
set_locked()
> +{
> + struct __qspinlock *l = (void *)lock;
> +
> + barrier();
> + ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
> + barrier();
> +}
get_qlock() is just horrible. The function doesn't actually _get_
anything, and qlock is not in line with the rest of the naming.
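A sketch of the same helper under the suggested name (body unchanged
from the quoted patch):

static __always_inline void set_locked(struct qspinlock *lock)
{
	struct __qspinlock *l = (void *)lock;

	barrier();
	ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
	barrier();
}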
* Re: [PATCH v10 08/19] qspinlock: Make a new qnode structure to support virtualization
2014-05-07 15:01 ` Waiman Long
@ 2014-05-08 19:04 ` Peter Zijlstra
0 siblings, 0 replies; 163+ messages in thread
From: Peter Zijlstra @ 2014-05-08 19:04 UTC (permalink / raw)
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-arch, x86,
linux-kernel, virtualization, xen-devel, kvm, Paolo Bonzini,
Konrad Rzeszutek Wilk, Boris Ostrovsky, Paul E. McKenney,
Rik van Riel, Linus Torvalds, Raghavendra K T, David Vrabel,
Oleg Nesterov, Gleb Natapov, Scott J Norton, Chegu Vinod
On Wed, May 07, 2014 at 11:01:36AM -0400, Waiman Long wrote:
> /*
> + * To have additional features for better virtualization support, it is
> + * necessary to store additional data in the queue node structure. So
> + * a new queue node structure will have to be defined and used here.
> + */
> +struct qnode {
> + struct mcs_spinlock mcs;
> +};
You can ditch this entire patch; it's pointless. Just add a new
DEFINE_PER_CPU for the para-virt muck.
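A sketch of that alternative (struct pv_qnode and its fields are
hypothetical names for illustration, not from the patch):

struct pv_qnode {
	struct mcs_spinlock	mcs;	/* must remain the first member */
	s8			state;	/* halted/running, for halt/kick */
	int			cpu;	/* CPU number to kick */
};

/* separate per-CPU nodes for the para-virt slowpath, mirroring mcs_nodes */
static DEFINE_PER_CPU_ALIGNED(struct pv_qnode, pv_nodes[4]);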
* Re: [PATCH v10 09/19] qspinlock: Prepare for unfair lock support
2014-05-07 15:01 ` Waiman Long
@ 2014-05-08 19:06 ` Peter Zijlstra
0 siblings, 0 replies; 163+ messages in thread
From: Peter Zijlstra @ 2014-05-08 19:06 UTC (permalink / raw)
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-arch, x86,
linux-kernel, virtualization, xen-devel, kvm, Paolo Bonzini,
Konrad Rzeszutek Wilk, Boris Ostrovsky, Paul E. McKenney,
Rik van Riel, Linus Torvalds, Raghavendra K T, David Vrabel,
Oleg Nesterov, Gleb Natapov, Scott J Norton, Chegu Vinod
On Wed, May 07, 2014 at 11:01:37AM -0400, Waiman Long wrote:
> If unfair locking is supported, the lock acquisition loop at the end of
> the queue_spin_lock_slowpath() function may need to detect the fact
> that the lock can be stolen. Code is added for the stolen-lock detection.
>
> A new qhead macro is also defined as a shorthand for mcs.locked.
NAK, unfair should be a pure test-and-set lock.
> /**
> * get_qlock - Set the lock bit and own the lock
> - * @lock: Pointer to queue spinlock structure
> + * @lock : Pointer to queue spinlock structure
> + * Return: 1 if lock acquired, 0 otherwise
> *
> * This routine should only be called when the caller is the only one
> * entitled to acquire the lock.
> */
> -static __always_inline void get_qlock(struct qspinlock *lock)
> +static __always_inline int get_qlock(struct qspinlock *lock)
> {
> struct __qspinlock *l = (void *)lock;
>
> barrier();
> ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
> barrier();
> + return 1;
> }
and here you make a horribly named function more horrible;
try_set_locked() is what it is now.
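As for the NAK, a pure test-and-set fallback is about as small as locks
get, e.g. (sketch, hypothetical helper name):

static __always_inline void unfair_lock(struct qspinlock *lock)
{
	/* spin on a plain trylock; no queue, no fairness */
	while (!queue_spin_trylock(lock))
		cpu_relax();
}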
* Re: [PATCH v10 10/19] qspinlock, x86: Allow unfair spinlock in a virtual guest
2014-05-07 15:01 ` Waiman Long
@ 2014-05-08 19:12 ` Peter Zijlstra
0 siblings, 0 replies; 163+ messages in thread
From: Peter Zijlstra @ 2014-05-08 19:12 UTC (permalink / raw)
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-arch, x86,
linux-kernel, virtualization, xen-devel, kvm, Paolo Bonzini,
Konrad Rzeszutek Wilk, Boris Ostrovsky, Paul E. McKenney,
Rik van Riel, Linus Torvalds, Raghavendra K T, David Vrabel,
Oleg Nesterov, Gleb Natapov, Scott J Norton, Chegu Vinod
On Wed, May 07, 2014 at 11:01:38AM -0400, Waiman Long wrote:
No, we want the unfair thing for VIRT, not PARAVIRT.
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index 9e7659e..10e87e1 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -227,6 +227,14 @@ static __always_inline int get_qlock(struct qspinlock *lock)
> {
> struct __qspinlock *l = (void *)lock;
>
> +#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
> + if (static_key_false(&paravirt_unfairlocks_enabled))
> + /*
> + * Need to use atomic operation to get the lock when
> + * lock stealing can happen.
> + */
> + return cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0;
That's missing {}.
> +#endif
> barrier();
> ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
> barrier();
But no, what you want is:
static __always_inline bool virt_lock(struct qspinlock *lock)
{
#ifdef CONFIG_VIRT_MUCK
	if (static_key_false(&virt_unfairlocks_enabled)) {
		while (!queue_spin_trylock(lock))
			cpu_relax();
		return true;
	}
#endif
	return false;
}

void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
{
	if (virt_lock(lock))
		return;
	...
}
* Re: [PATCH v10 12/19] unfair qspinlock: Variable frequency lock stealing mechanism
2014-05-07 15:01 ` Waiman Long
@ 2014-05-08 19:19 ` Peter Zijlstra
-1 siblings, 0 replies; 163+ messages in thread
From: Peter Zijlstra @ 2014-05-08 19:19 UTC (permalink / raw)
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-arch, x86,
linux-kernel, virtualization, xen-devel, kvm, Paolo Bonzini,
Konrad Rzeszutek Wilk, Boris Ostrovsky, Paul E. McKenney,
Rik van Riel, Linus Torvalds, Raghavendra K T, David Vrabel,
Oleg Nesterov, Gleb Natapov, Scott J Norton, Chegu Vinod
On Wed, May 07, 2014 at 11:01:40AM -0400, Waiman Long wrote:
> +#define DEF_LOOP_CNT(c) int c = 0
> +#define INC_LOOP_CNT(c) (c)++
> +#define LOOP_CNT(c) c
> +#define LSTEAL_MIN (1 << 3)
> +#define LSTEAL_MAX (1 << 10)
> +#define LSTEAL_MIN_MASK (LSTEAL_MIN - 1)
> +#define LSTEAL_MAX_MASK (LSTEAL_MAX - 1)
*groan*.. why do you have to write the most obfuscated code ever? We're
not ioccc.org.
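Presumably the wrappers are there so the counter can compile away when
lock stealing is configured out; a sketch of that shape (not the patch's
actual definitions):

#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
#define DEF_LOOP_CNT(c)	int c = 0
#define INC_LOOP_CNT(c)	(c)++
#define LOOP_CNT(c)	(c)
#else
#define DEF_LOOP_CNT(c)
#define INC_LOOP_CNT(c)
#define LOOP_CNT(c)	0
#endif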
^ permalink raw reply [flat|nested] 163+ messages in thread
* Re: [PATCH v10 03/19] qspinlock: Add pending bit
2014-05-08 18:57 ` Peter Zijlstra
@ 2014-05-10 0:49 ` Waiman Long
2014-05-10 0:49 ` Waiman Long
1 sibling, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-10 0:49 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-arch, x86,
linux-kernel, virtualization, xen-devel, kvm, Paolo Bonzini,
Konrad Rzeszutek Wilk, Boris Ostrovsky, Paul E. McKenney,
Rik van Riel, Linus Torvalds, Raghavendra K T, David Vrabel,
Oleg Nesterov, Gleb Natapov, Scott J Norton, Chegu Vinod
On 05/08/2014 02:57 PM, Peter Zijlstra wrote:
> On Wed, May 07, 2014 at 11:01:31AM -0400, Waiman Long wrote:
>> +/**
>> + * trylock_pending - try to acquire queue spinlock using the pending bit
>> + * @lock : Pointer to queue spinlock structure
>> + * @pval : Pointer to value of the queue spinlock 32-bit word
>> + * Return: 1 if lock acquired, 0 otherwise
>> + */
>> +static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
> Still don't like that you put it in a separate function, but you don't need
> the pointer thing. Note how after you fail the trylock_pending() you
> touch the second (node) cacheline.
>
>> @@ -110,6 +184,9 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>>
>> BUILD_BUG_ON(CONFIG_NR_CPUS>= (1U<< _Q_TAIL_CPU_BITS));
>>
>> + if (trylock_pending(lock,&val))
>> + return; /* Lock acquired */
>> +
>> node = this_cpu_ptr(&mcs_nodes[0]);
>> idx = node->count++;
>> tail = encode_tail(smp_processor_id(), idx);
>> @@ -119,15 +196,18 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>> node->next = NULL;
>>
>> /*
>> + * we already touched the queueing cacheline; don't bother with pending
>> + * stuff.
>> + *
>> * trylock || xchg(lock, node)
>> *
>> - * 0,0 -> 0,1 ; trylock
>> - * p,x -> n,x ; prev = xchg(lock, node)
>> + * 0,0,0 -> 0,0,1 ; trylock
>> + * p,y,x -> n,y,x ; prev = xchg(lock, node)
>> */
> And any value of @val we might have had here is completely out-dated.
> The only thing that makes sense is to set:
>
> val = 0;
>
> Which makes us start with a trylock, alternatively we can re-read val.
That is true. I will make the change to get rid of the pointer thing.
As for the separate trylock_pending function, my original goal was to
have a better delineation of different portions of the code. Given the
fact that I broke up the slowpath function into 2 in a later patch, I
may not really need to separate it out. I will pull it back in the next
version.
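Something like this is what the pulled-back version would start from (an
illustration only, not the actual next revision):

void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
{
	/* pending-bit fast path, formerly trylock_pending(), goes here */

	/*
	 * Any earlier value of @val is out-dated by now; restart the
	 * queueing part from a trylock against a fresh value.
	 */
	val = atomic_read(&lock->val);	/* or simply: val = 0; */
	...
}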
-Longman
^ permalink raw reply [flat|nested] 163+ messages in thread
* Re: [PATCH v10 06/19] qspinlock: prolong the stay in the pending bit path
2014-05-08 18:58 ` Peter Zijlstra
@ 2014-05-10 0:58 ` Waiman Long
-1 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-10 0:58 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-arch, x86,
linux-kernel, virtualization, xen-devel, kvm, Paolo Bonzini,
Konrad Rzeszutek Wilk, Boris Ostrovsky, Paul E. McKenney,
Rik van Riel, Linus Torvalds, Raghavendra K T, David Vrabel,
Oleg Nesterov, Gleb Natapov, Scott J Norton, Chegu Vinod
On 05/08/2014 02:58 PM, Peter Zijlstra wrote:
> On Wed, May 07, 2014 at 11:01:34AM -0400, Waiman Long wrote:
>> @@ -221,11 +222,37 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
>> */
>> for (;;) {
>> /*
>> - * If we observe any contention; queue.
>> + * If we observe that the queue is not empty,
>> + * return and be queued.
>> */
>> - if (val& ~_Q_LOCKED_MASK)
>> + if (val& _Q_TAIL_MASK)
>> return 0;
>>
>> + if (val == (_Q_LOCKED_VAL|_Q_PENDING_VAL)) {
>> + /*
>> + * If both the lock and pending bits are set, we wait
>> + * a while to see if either bit will be cleared.
>> + * If there is no change, we return and get queued.
>> + */
>> + if (!retry)
>> + return 0;
>> + retry--;
>> + cpu_relax();
>> + cpu_relax();
>> + *pval = val = atomic_read(&lock->val);
>> + continue;
>> + } else if (val == _Q_PENDING_VAL) {
>> + /*
>> + * Pending bit is set, but not the lock bit.
>> + * Assuming that the pending bit holder is going to
>> + * set the lock bit and clear the pending bit soon,
>> + * it is better to wait than to exit at this point.
>> + */
>> + cpu_relax();
>> + *pval = val = atomic_read(&lock->val);
>> + continue;
>> + }
> Didn't I give a much saner alternative to this mess last time?
I don't recall you having made any suggestion last time. Anyway, if you
think the code is too messy, I can give up the first if statement, which
is more of an optimistic-spinning kind of code for short critical
sections. The 2nd if statement is still needed to improve the chance of
using this code path, due to timing reasons. I will rerun my performance
test to make sure it won't have too much performance impact.
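The 2nd statement, by itself, could be as simple as this (a sketch of the
idea, not necessarily the alternative being referred to):

	if (val == _Q_PENDING_VAL) {
		/* wait for the pending -> locked hand-over to complete */
		while ((val = atomic_read(&lock->val)) == _Q_PENDING_VAL)
			cpu_relax();
	}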
-Longman
^ permalink raw reply [flat|nested] 163+ messages in thread
* Re: [PATCH v10 07/19] qspinlock: Use a simple write to grab the lock, if applicable
2014-05-08 19:00 ` Peter Zijlstra
@ 2014-05-10 1:05 ` Waiman Long
-1 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-10 1:05 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-arch, x86,
linux-kernel, virtualization, xen-devel, kvm, Paolo Bonzini,
Konrad Rzeszutek Wilk, Boris Ostrovsky, Paul E. McKenney,
Rik van Riel, Linus Torvalds, Raghavendra K T, David Vrabel,
Oleg Nesterov, Gleb Natapov, Scott J Norton, Chegu Vinod
On 05/08/2014 03:00 PM, Peter Zijlstra wrote:
> On Wed, May 07, 2014 at 11:01:35AM -0400, Waiman Long wrote:
>> @@ -94,23 +94,29 @@ static inline struct mcs_spinlock *decode_tail(u32 tail)
>> * can allow better optimization of the lock acquisition for the pending
>> * bit holder.
>> */
>> -#if _Q_PENDING_BITS == 8
>> -
>> struct __qspinlock {
>> union {
>> atomic_t val;
>> - struct {
>> #ifdef __LITTLE_ENDIAN
>> + u8 locked;
>> + struct {
>> u16 locked_pending;
>> u16 tail;
>> + };
>> #else
>> + struct {
>> u16 tail;
>> u16 locked_pending;
>> -#endif
>> };
>> + struct {
>> + u8 reserved[3];
>> + u8 locked;
>> + };
>> +#endif
>> };
>> };
>>
>> +#if _Q_PENDING_BITS == 8
> That doesn't make sense, that struct __qspinlock only makes sense when
> _Q_PENDING_BITS == 8.
I need to use the locked field (the 2nd struct) in get_qlock() where I
grab the lock by setting the lock byte directly. Since the endian-aware
structure is already in place, I reused it and have to expose it even
when _Q_PENDING_BITS isn't 8. I will document that more clearly in the
code to avoid this confusion.
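One way to make that dependency explicit would be a compile-time check of
the overlay the byte store relies on (illustrative only, not part of the
series):

static inline void __qspinlock_layout_check(void)
{
#ifdef __LITTLE_ENDIAN
	BUILD_BUG_ON(offsetof(struct __qspinlock, locked) != 0);
#else
	BUILD_BUG_ON(offsetof(struct __qspinlock, locked) != 3);
#endif
}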
-Longman
^ permalink raw reply [flat|nested] 163+ messages in thread
* Re: [PATCH v10 07/19] qspinlock: Use a simple write to grab the lock, if applicable
2014-05-08 19:02 ` Peter Zijlstra
@ 2014-05-10 1:06 ` Waiman Long
-1 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-10 1:06 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-arch, x86,
linux-kernel, virtualization, xen-devel, kvm, Paolo Bonzini,
Konrad Rzeszutek Wilk, Boris Ostrovsky, Paul E. McKenney,
Rik van Riel, Linus Torvalds, Raghavendra K T, David Vrabel,
Oleg Nesterov, Gleb Natapov, Scott J Norton, Chegu Vinod
On 05/08/2014 03:02 PM, Peter Zijlstra wrote:
> On Wed, May 07, 2014 at 11:01:35AM -0400, Waiman Long wrote:
>> /**
>> + * get_qlock - Set the lock bit and own the lock
>> + * @lock: Pointer to queue spinlock structure
>> + *
>> + * This routine should only be called when the caller is the only one
>> + * entitled to acquire the lock.
>> + */
>> +static __always_inline void get_qlock(struct qspinlock *lock)
> set_locked()
>
>> +{
>> + struct __qspinlock *l = (void *)lock;
>> +
>> + barrier();
>> + ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
>> + barrier();
>> +}
> get_qlock() is just horrible. The function doesn't actually _get_
> anything, and qlock is not in line with the rest of the naming.
Sure, I will make the change.
-Longman
^ permalink raw reply [flat|nested] 163+ messages in thread
* Re: [PATCH v10 08/19] qspinlock: Make a new qnode structure to support virtualization
2014-05-08 19:04 ` Peter Zijlstra
@ 2014-05-10 1:08 ` Waiman Long
-1 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-10 1:08 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-arch, x86,
linux-kernel, virtualization, xen-devel, kvm, Paolo Bonzini,
Konrad Rzeszutek Wilk, Boris Ostrovsky, Paul E. McKenney,
Rik van Riel, Linus Torvalds, Raghavendra K T, David Vrabel,
Oleg Nesterov, Gleb Natapov, Scott J Norton, Chegu Vinod
On 05/08/2014 03:04 PM, Peter Zijlstra wrote:
> On Wed, May 07, 2014 at 11:01:36AM -0400, Waiman Long wrote:
>> /*
>> + * To have additional features for better virtualization support, it is
>> + * necessary to store additional data in the queue node structure. So
>> + * a new queue node structure will have to be defined and used here.
>> + */
>> +struct qnode {
>> + struct mcs_spinlock mcs;
>> +};
> You can ditch this entire patch; its pointless, just add a new
> DEFINE_PER_CPU for the para-virt muck.
Yes, I can certainly merge it into the next one in the series. I broke it
out to make each individual patch smaller, more single-purpose, and
easier to review.
-Longman
^ permalink raw reply [flat|nested] 163+ messages in thread
* Re: [PATCH v10 09/19] qspinlock: Prepare for unfair lock support
2014-05-08 19:06 ` Peter Zijlstra
(?)
@ 2014-05-10 1:19 ` Waiman Long
2014-05-10 14:13 ` Peter Zijlstra
2014-05-10 14:13 ` Peter Zijlstra
-1 siblings, 2 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-10 1:19 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-arch, x86,
linux-kernel, virtualization, xen-devel, kvm, Paolo Bonzini,
Konrad Rzeszutek Wilk, Boris Ostrovsky, Paul E. McKenney,
Rik van Riel, Linus Torvalds, Raghavendra K T, David Vrabel,
Oleg Nesterov, Gleb Natapov, Scott J Norton, Chegu Vinod
On 05/08/2014 03:06 PM, Peter Zijlstra wrote:
> On Wed, May 07, 2014 at 11:01:37AM -0400, Waiman Long wrote:
>> If unfair lock is supported, the lock acquisition loop at the end of
>> the queue_spin_lock_slowpath() function may need to detect the fact that
>> the lock can be stolen. Code is added for the stolen lock detection.
>>
>> A new qhead macro is also defined as a shorthand for mcs.locked.
> NAK, unfair should be a pure test-and-set lock.
I have performance data showing that a simple test-and-set lock does not
scale well. That is the primary reason for ditching the test-and-set lock
and using a more complicated scheme which scales better. Also, it will be
hard to make the unfair test-and-set lock code coexist nicely with PV
spinlock code.
>> /**
>> * get_qlock - Set the lock bit and own the lock
>> - * @lock: Pointer to queue spinlock structure
>> + * @lock : Pointer to queue spinlock structure
>> + * Return: 1 if lock acquired, 0 otherwise
>> *
>> * This routine should only be called when the caller is the only one
>> * entitled to acquire the lock.
>> */
>> -static __always_inline void get_qlock(struct qspinlock *lock)
>> +static __always_inline int get_qlock(struct qspinlock *lock)
>> {
>> struct __qspinlock *l = (void *)lock;
>>
>> barrier();
>> ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
>> barrier();
>> + return 1;
>> }
> and here you make a horribly named function more horrible;
> try_set_locked() is what it is now.
Will do.
-Longman
^ permalink raw reply [flat|nested] 163+ messages in thread
* Re: [PATCH v10 06/19] qspinlock: prolong the stay in the pending bit path
2014-05-10 0:58 ` Waiman Long
@ 2014-05-10 13:38 ` Peter Zijlstra
-1 siblings, 0 replies; 163+ messages in thread
From: Peter Zijlstra @ 2014-05-10 13:38 UTC (permalink / raw)
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-arch, x86,
linux-kernel, virtualization, xen-devel, kvm, Paolo Bonzini,
Konrad Rzeszutek Wilk, Boris Ostrovsky, Paul E. McKenney,
Rik van Riel, Linus Torvalds, Raghavendra K T, David Vrabel,
Oleg Nesterov, Gleb Natapov, Scott J Norton, Chegu Vinod
On Fri, May 09, 2014 at 08:58:47PM -0400, Waiman Long wrote:
> On 05/08/2014 02:58 PM, Peter Zijlstra wrote:
> >On Wed, May 07, 2014 at 11:01:34AM -0400, Waiman Long wrote:
> >>@@ -221,11 +222,37 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
> >> */
> >> for (;;) {
> >> /*
> >>- * If we observe any contention; queue.
> >>+ * If we observe that the queue is not empty,
> >>+ * return and be queued.
> >> */
> >>- if (val& ~_Q_LOCKED_MASK)
> >>+ if (val& _Q_TAIL_MASK)
> >> return 0;
> >>
> >>+ if (val == (_Q_LOCKED_VAL|_Q_PENDING_VAL)) {
> >>+ /*
> >>+ * If both the lock and pending bits are set, we wait
> >>+ * a while to see if that either bit will be cleared.
> >>+ * If that is no change, we return and be queued.
> >>+ */
> >>+ if (!retry)
> >>+ return 0;
> >>+ retry--;
> >>+ cpu_relax();
> >>+ cpu_relax();
> >>+ *pval = val = atomic_read(&lock->val);
> >>+ continue;
> >>+ } else if (val == _Q_PENDING_VAL) {
> >>+ /*
> >>+ * Pending bit is set, but not the lock bit.
> >>+ * Assuming that the pending bit holder is going to
> >>+ * set the lock bit and clear the pending bit soon,
> >>+ * it is better to wait than to exit at this point.
> >>+ */
> >>+ cpu_relax();
> >>+ *pval = val = atomic_read(&lock->val);
> >>+ continue;
> >>+ }
> >Didn't I give a much saner alternative to this mess last time?
>
> I don't recall you having made any suggestion last time. Anyway, if you
> think the code is too messy, I can give up the first if statement, which
> is more of an optimistic-spinning kind of code for short critical
> sections. The 2nd if statement is still needed to improve the chance of
> using this code path, due to timing reasons. I will rerun my performance
> test to make sure it won't have too much performance impact.
lkml.kernel.org/r/20140417163640.GT11096@twins.programming.kicks-ass.net
^ permalink raw reply [flat|nested] 163+ messages in thread
* Re: [PATCH v10 09/19] qspinlock: Prepare for unfair lock support
2014-05-10 1:19 ` Waiman Long
@ 2014-05-10 14:13 ` Peter Zijlstra
2014-05-10 14:13 ` Peter Zijlstra
1 sibling, 0 replies; 163+ messages in thread
From: Peter Zijlstra @ 2014-05-10 14:13 UTC (permalink / raw)
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-arch, x86,
linux-kernel, virtualization, xen-devel, kvm, Paolo Bonzini,
Konrad Rzeszutek Wilk, Boris Ostrovsky, Paul E. McKenney,
Rik van Riel, Linus Torvalds, Raghavendra K T, David Vrabel,
Oleg Nesterov, Gleb Natapov, Scott J Norton, Chegu Vinod
On Fri, May 09, 2014 at 09:19:32PM -0400, Waiman Long wrote:
> On 05/08/2014 03:06 PM, Peter Zijlstra wrote:
> >On Wed, May 07, 2014 at 11:01:37AM -0400, Waiman Long wrote:
> >>If unfair lock is supported, the lock acquisition loop at the end of
> >>the queue_spin_lock_slowpath() function may need to detect the fact that
> >>the lock can be stolen. Code is added for the stolen lock detection.
> >>
> >>A new qhead macro is also defined as a shorthand for mcs.locked.
> >NAK, unfair should be a pure test-and-set lock.
>
> I have performance data showing that a simple test-and-set lock does not
> scale well. That is the primary reason for ditching the test-and-set lock
> and using a more complicated scheme which scales better.
Nobody should give a fuck about scalability in this case anyway.
Also, as I explained/asked earlier:
lkml.kernel.org/r/20140314083001.GN27965@twins.programming.kicks-ass.net
Lock holder preemption is _way_ worse with any kind of queueing. You've
not explained how the simple 3 cpu example in that email gets better
performance than a test-and-set lock.
> Also, it will be hard to
> make the unfair test-and-set lock code coexist nicely with PV spinlock
> code.
That's just complete crap as the test-and-set lock is like 3 lines of
code.
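For reference, the variant being argued for is roughly the following (an
illustration; any actual patch may differ):

static __always_inline void tas_spin_lock(struct qspinlock *lock)
{
	while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0)
		cpu_relax();
}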
^ permalink raw reply [flat|nested] 163+ messages in thread
* Re: [PATCH v10 08/19] qspinlock: Make a new qnode structure to support virtualization
2014-05-10 1:08 ` Waiman Long
@ 2014-05-10 14:14 ` Peter Zijlstra
-1 siblings, 0 replies; 163+ messages in thread
From: Peter Zijlstra @ 2014-05-10 14:14 UTC (permalink / raw)
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-arch, x86,
linux-kernel, virtualization, xen-devel, kvm, Paolo Bonzini,
Konrad Rzeszutek Wilk, Boris Ostrovsky, Paul E. McKenney,
Rik van Riel, Linus Torvalds, Raghavendra K T, David Vrabel,
Oleg Nesterov, Gleb Natapov, Scott J Norton, Chegu Vinod
On Fri, May 09, 2014 at 09:08:56PM -0400, Waiman Long wrote:
> On 05/08/2014 03:04 PM, Peter Zijlstra wrote:
> >On Wed, May 07, 2014 at 11:01:36AM -0400, Waiman Long wrote:
> >> /*
> >>+ * To have additional features for better virtualization support, it is
> >>+ * necessary to store additional data in the queue node structure. So
> >>+ * a new queue node structure will have to be defined and used here.
> >>+ */
> >>+struct qnode {
> >>+ struct mcs_spinlock mcs;
> >>+};
> >You can ditch this entire patch; its pointless, just add a new
> >DEFINE_PER_CPU for the para-virt muck.
>
> Yes, I can certainly merge it into the next one in the series. I broke it
> out to make each individual patch smaller, more single-purpose, and easier
> to review.
No, don't merge it, _drop_ it. Wrapping things in a struct generates a
ton of pointless change.
Put the new data in a new DEFINE_PER_CPU and leave the existing code as
is.
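I.e., something like this (names are made up for illustration):

struct pv_qnode {
	u8 cpustate;	/* hypothetical PV-only per-node state */
};

/* the existing MCS nodes stay exactly as they are */
static DEFINE_PER_CPU_ALIGNED(struct pv_qnode, pv_nodes[4]);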
^ permalink raw reply [flat|nested] 163+ messages in thread
* Re: [PATCH v10 08/19] qspinlock: Make a new qnode structure to support virtualization
2014-05-10 14:14 ` Peter Zijlstra
@ 2014-05-10 18:21 ` Peter Zijlstra
From: Peter Zijlstra @ 2014-05-10 18:21 UTC (permalink / raw)
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-arch, x86,
linux-kernel, virtualization, xen-devel, kvm, Paolo Bonzini,
Konrad Rzeszutek Wilk, Boris Ostrovsky, Paul E. McKenney,
Rik van Riel, Linus Torvalds, Raghavendra K T, David Vrabel,
Oleg Nesterov, Gleb Natapov, Scott J Norton, Chegu Vinod
On Sat, May 10, 2014 at 04:14:17PM +0200, Peter Zijlstra wrote:
> On Fri, May 09, 2014 at 09:08:56PM -0400, Waiman Long wrote:
> > On 05/08/2014 03:04 PM, Peter Zijlstra wrote:
> > >On Wed, May 07, 2014 at 11:01:36AM -0400, Waiman Long wrote:
> > >> /*
> > >>+ * To have additional features for better virtualization support, it is
> > >>+ * necessary to store additional data in the queue node structure. So
> > >>+ * a new queue node structure will have to be defined and used here.
> > >>+ */
> > >>+struct qnode {
> > >>+ struct mcs_spinlock mcs;
> > >>+};
> > >You can ditch this entire patch; it's pointless, just add a new
> > >DEFINE_PER_CPU for the para-virt muck.
> >
> > Yes, I can certainly merge it into the next one in the series. I broke it out
> > to make each individual patch smaller, more single-purpose, and easier to
> > review.
>
> No, don't merge it, _drop_ it. Wrapping things in a struct generates a
> ton of pointless change.
>
> Put the new data in a new DEFINE_PER_CPU and leave the existing code as
> is.
So I had a look at the resulting code:
struct qnode {
	struct mcs_spinlock mcs;
#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
	int		lsteal_mask;	/* Lock stealing frequency mask */
	u32		prev_tail;	/* Tail code of previous node	*/
#ifndef CONFIG_PARAVIRT_SPINLOCKS
	struct qnode	*qprev;		/* Previous queue node addr	*/
#endif
#endif
	struct pv_qvars	pv;		/* For para-virtualization	*/
};
With all the bells and whistles on (say an enterprise distro), that
single node will now fill an entire cacheline on its own.
That means that the normal case for normal people who stay the heck away
from virt shit will very often hit _3_ cachelines for their spin_lock().
1 - the cacheline that has the spinlock_t in,
2 - the cacheline that has node[0].count in to find which node to use
3 - the cacheline that has the actual right node in
That's of course complete and utter crap.
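In code, the three touches line up roughly like this; a sketch based on
the series' per-CPU node array, with the queueing details elided (once
each qnode fills its own cacheline, (2) and (3) stop sharing a line):

	static DEFINE_PER_CPU_ALIGNED(struct mcs_spinlock, mcs_nodes[4]);

	void queue_spin_lock_slowpath_sketch(struct qspinlock *lock)
	{
		struct mcs_spinlock *node;
		int idx;

		node = this_cpu_ptr(&mcs_nodes[0]);	/* (2) node[0].count line */
		idx = node->count++;			/* pick task/irq/NMI slot */
		node += idx;				/* (3) the right node's line */

		/* ... publish node as the tail, spin on node->locked ... */

		atomic_read(&lock->val);		/* (1) the spinlock_t line */
	}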
Not to mention that the final result of those 19 patches is going to
take me days to untangle :-(
Days I don't really have because I get to go hunt bugs in existing code
before thinking about adding shiny new stuff.
* Re: [PATCH v10 03/19] qspinlock: Add pending bit
2014-05-07 15:01 ` Waiman Long
@ 2014-05-12 15:22 ` Radim Krčmář
From: Radim Krčmář @ 2014-05-12 15:22 UTC (permalink / raw)
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra,
linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod
2014-05-07 11:01-0400, Waiman Long:
> From: Peter Zijlstra <peterz@infradead.org>
>
> Because the qspinlock needs to touch a second cacheline; add a pending
> bit and allow a single in-word spinner before we punt to the second
> cacheline.
I think there is an unwanted scenario on virtual machines:
1) VCPU sets the pending bit and starts spinning.
2) Pending VCPU gets descheduled.
    - we have PLE and the lock holder isn't running [1]
    - or the hypervisor randomly preempts us
3) Lock holder unlocks while the pending VCPU is waiting in queue.
4) Subsequent lockers will see a free lock with the pending bit set and
   will loop in trylock's 'for (;;)'
    - the worst case is lock starvation [2]
    - PLE can save us from wasting the whole timeslice
A retry threshold is the easiest solution, regardless of its ugliness [4].
Another minor design flaw is that the formerly-first VCPU gets appended
to the tail when it decides to queue;
is the performance gain worth it?
Thanks.
---
1: Pause Loop Exiting is almost certain to vmexit in that case: we
default to 4096 TSC cycles on KVM, and the pending loop runs longer than
that, as one iteration takes more than 4 cycles (4096/PSPIN_THRESHOLD).
We would also vmexit if the critical section were longer than 4k cycles.
2: In this example, vcpus 1 and 2 use the lock while 3 never gets there.
   VCPU:  1           2           3
          lock()                            // we are the holder
                      pend()                // we have pending bit
                      vmexit                // while in PSPIN_THRESHOLD loop
          unlock()
                      vmentry
                                  SPINNING  // for {;;} loop
                                  vmexit
                                  vmentry
                      lock()
          pend()
          vmexit
                      unlock()
          vmentry
                                  SPINNING
                                  vmexit
                                  vmentry
          --- loop ---
   The window is (should be) too small to happen on bare metal.
3: Pending VCPU was first in line, but when it decides to queue, it must
go to the tail.
4: The idea is to prevent unfairness by queueing after a while of useless
looping.  The magic value should be set a bit above the time it takes an
active pending-bit holder to go through the loop; 4 looks like enough.
We can use either pv_qspinlock_enabled() or cpu_has_hypervisor.
I presume that we never want this to happen in a VM and that we won't
have pv_qspinlock_enabled() without cpu_has_hypervisor.
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 37b5c7f..cd45c27 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -573,7 +573,7 @@ static __always_inline int get_qlock(struct qspinlock *lock)
 static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
 {
 	u32 old, new, val = *pval;
-	int retry = 1;
+	int retry = 0;
 
 	/*
 	 * trylock || pending
@@ -595,9 +595,9 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
 			 * a while to see if that either bit will be cleared.
 			 * If that is no change, we return and be queued.
 			 */
-			if (!retry)
+			if (retry)
 				return 0;
-			retry--;
+			retry++;
 			cpu_relax();
 			cpu_relax();
 			*pval = val = atomic_read(&lock->val);
@@ -608,7 +608,11 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
 			 * Assuming that the pending bit holder is going to
 			 * set the lock bit and clear the pending bit soon,
 			 * it is better to wait than to exit at this point.
+			 * Our assumption does not hold on hypervisors, where
+			 * the pending bit holder doesn't have to be running.
 			 */
+			if (cpu_has_hypervisor && ++retry > MAGIC)
+				return 0;
 			cpu_relax();
 			*pval = val = atomic_read(&lock->val);
 			continue;
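MAGIC is left undefined in the RFC diff above; going by footnote 4, a
hedged stand-in could be as simple as this, with 4 being an eyeballed
guess rather than a measured value:

	/* Hypothetical retry threshold for a pending-bit spinner on a
	 * hypervisor; 4 is footnote 4's guess, not a tuned constant. */
	#define MAGIC	4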
* Re: [PATCH v10 03/19] qspinlock: Add pending bit
2014-05-12 15:22 ` Radim Krčmář
@ 2014-05-12 17:29 ` Peter Zijlstra
From: Peter Zijlstra @ 2014-05-12 17:29 UTC (permalink / raw)
To: Radim Krčmář
Cc: Waiman Long, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod
On Mon, May 12, 2014 at 05:22:08PM +0200, Radim Krčmář wrote:
> 2014-05-07 11:01-0400, Waiman Long:
> > From: Peter Zijlstra <peterz@infradead.org>
> >
> > Because the qspinlock needs to touch a second cacheline; add a pending
> > bit and allow a single in-word spinner before we punt to the second
> > cacheline.
>
> I think there is an unwanted scenario on virtual machines:
> 1) VCPU sets the pending bit and starts spinning.
> 2) Pending VCPU gets descheduled.
>     - we have PLE and the lock holder isn't running [1]
>     - or the hypervisor randomly preempts us
> 3) Lock holder unlocks while the pending VCPU is waiting in queue.
> 4) Subsequent lockers will see a free lock with the pending bit set and
>    will loop in trylock's 'for (;;)'
>     - the worst case is lock starvation [2]
>     - PLE can save us from wasting the whole timeslice
>
> A retry threshold is the easiest solution, regardless of its ugliness [4].
>
> Another minor design flaw is that the formerly-first VCPU gets appended
> to the tail when it decides to queue;
> is the performance gain worth it?
This is all for real hardware; I've not yet stared at the (para)virt
crap.
My primary concern is that native hardware runs well and that the
(para)virt support doesn't wreck the code -- so far it's failing hard on
the second.
* Re: [PATCH v10 10/19] qspinlock, x86: Allow unfair spinlock in a virtual guest
2014-05-07 15:01 ` Waiman Long
@ 2014-05-12 18:57 ` Radim Krčmář
From: Radim Krčmář @ 2014-05-12 18:57 UTC (permalink / raw)
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra,
linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod
(tl;dr: paravirtualization could be better than unfair qspinlock)
2014-05-07 11:01-0400, Waiman Long:
> Locking is always an issue in a virtualized environment because of 2
> different types of problems:
> 1) Lock holder preemption
> 2) Lock waiter preemption
Paravirtualized ticketlocks have a shortcoming:
we don't know which VCPU the ticket belongs to, so the hypervisor can
only blindly yield to runnable VCPUs after waiters halt in the slowpath.
There aren't enough "free" bits in the ticket struct to improve on this,
so we have resorted to unfairness.
Qspinlock is different.
Most queued VCPUs already know the VCPU ahead of them, so we have what
it takes to mitigate lock waiter preemption: we can include the
preempted CPU's id in a hypercall, the hypervisor will schedule it, and
we'll be woken up from the unlock slowpath [1].
This still isn't perfect: we can wake up a VCPU that got preempted
before it could hypercall, and these hypercalls will propagate one by
one through our queue to the preempted lock holder.
(We'd have to share the whole waiter-list to avoid this.
We could also try to send the holder's id instead and unconditionally
kick the next-in-line on unlock; I think it would be slower.)
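A hedged sketch of that waiter-kick idea, assuming a KVM-style kick/halt
hypercall interface; pv_kick_cpu(), pv_halt() and SPIN_THRESHOLD are
illustrative names, not an existing API:

	/*
	 * Illustrative only: a queued waiter that has spun too long first
	 * asks the hypervisor to run its predecessor (whose CPU id it
	 * knows from the queue), then halts until it is kicked from the
	 * unlock slowpath.
	 */
	static void pv_wait_node(struct mcs_spinlock *node, int prev_cpu)
	{
		int loops = 0;

		while (!smp_load_acquire(&node->locked)) {
			if (++loops > SPIN_THRESHOLD) {
				pv_kick_cpu(prev_cpu);	/* schedule our predecessor */
				pv_halt();		/* sleep until kicked */
				loops = 0;
			}
			cpu_relax();
		}
	}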
The lock holder problem is tougher because we don't always share who it
is.  The tail bits can be used for this, as we don't really use them
before a queue has formed.  It would cost us one bit to differentiate
between a holder and a tail CPU id [2] and complicate operations a
little, but only for the paravirt case, where the benefits are expected
to be far greater.  A hypercall from the lock slowpath could then
schedule the preempted VCPU right away.
I think this could obsolete unfair locks, and I will prepare RFC patches
soon-ish [3].  (If the idea isn't proved infeasible before.)
---
1: It is possible that we could avoid the O(N) traversal and hypercall
in the unlock slowpath by often scheduling VCPUs in the right order.
2: Or even less.  idx=3 is a bug: if we are spinning in NMI, we are
almost deadlocked, so we should WARN/BUG if it were to happen; that
leaves the combination free to mean that the CPU id is a sole holder,
not a tail.  (I prefer clean code, though.)
3: I already tried and quickly got fed up with the refactoring, so it
might get postponed till the series gets merged.
* Re: [PATCH v10 03/19] qspinlock: Add pending bit
2014-05-12 15:22 ` Radim Krčmář
@ 2014-05-13 19:47 ` Waiman Long
From: Waiman Long @ 2014-05-13 19:47 UTC (permalink / raw)
To: Radim Krčmář
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra,
linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod
On 05/12/2014 11:22 AM, Radim Krčmář wrote:
> 2014-05-07 11:01-0400, Waiman Long:
>> From: Peter Zijlstra<peterz@infradead.org>
>>
>> Because the qspinlock needs to touch a second cacheline; add a pending
>> bit and allow a single in-word spinner before we punt to the second
>> cacheline.
> I think there is an unwanted scenario on virtual machines:
> 1) VCPU sets the pending bit and starts spinning.
> 2) Pending VCPU gets descheduled.
>     - we have PLE and the lock holder isn't running [1]
>     - or the hypervisor randomly preempts us
> 3) Lock holder unlocks while the pending VCPU is waiting in queue.
> 4) Subsequent lockers will see a free lock with the pending bit set and
>    will loop in trylock's 'for (;;)'
>     - the worst case is lock starvation [2]
>     - PLE can save us from wasting the whole timeslice
>
> A retry threshold is the easiest solution, regardless of its ugliness [4].
Yes, that can be a real issue.  Some sort of retry threshold, as you
said, should be able to handle it.
BTW, the relevant patch is 16/19, where the PV spinlock stuff should be
discussed.  This patch is perfectly fine.
> Another minor design flaw is that the formerly-first VCPU gets appended
> to the tail when it decides to queue;
> is the performance gain worth it?
>
> Thanks.
Yes, the performance gain is worth it.  The primary goal is to be no
worse than the ticket spinlock in the light-load situation, which is the
most common case.  This feature is needed to achieve that.
-Longman
* Re: [PATCH v10 03/19] qspinlock: Add pending bit
2014-05-13 19:47 ` Waiman Long
@ 2014-05-14 16:51 ` Radim Krčmář
From: Radim Krčmář @ 2014-05-14 16:51 UTC (permalink / raw)
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Peter Zijlstra,
linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod
2014-05-13 15:47-0400, Waiman Long:
> On 05/12/2014 11:22 AM, Radim Krčmář wrote:
> >I think there is an unwanted scenario on virtual machines:
> >1) VCPU sets the pending bit and starts spinning.
> >2) Pending VCPU gets descheduled.
> >    - we have PLE and the lock holder isn't running [1]
> >    - or the hypervisor randomly preempts us
> >3) Lock holder unlocks while the pending VCPU is waiting in queue.
> >4) Subsequent lockers will see a free lock with the pending bit set and
> >   will loop in trylock's 'for (;;)'
> >    - the worst case is lock starvation [2]
> >    - PLE can save us from wasting the whole timeslice
> >
> >A retry threshold is the easiest solution, regardless of its ugliness [4].
>
> Yes, that can be a real issue. Some sort of retry threshold, as you said,
> should be able to handle it.
>
> BTW, the relevant patch is 16/19, where the PV spinlock stuff should
> be discussed.  This patch is perfectly fine.
Ouch, my apology to Peter didn't make it ... Agreed, I should have split
the comment under patches
[06/19] (part quoted above; does not depend on PV),
[16/19] (part quoted below) and
[17/19] (general doubts).
> >Another minor design flaw is that the formerly-first VCPU gets
> >appended to the tail when it decides to queue;
> >is the performance gain worth it?
> >
> >Thanks.
>
> Yes, the performance gain is worth it.  The primary goal is to be no
> worse than the ticket spinlock in the light-load situation, which is
> the most common case.  This feature is needed to achieve that.
Ok.
I've seen merit in pvqspinlock even with a slightly slower first waiter,
so I would have happily sacrificed those horrible branches.
(I prefer elegant to optimized code, but I can see why we want to be
strictly better than the ticketlock.)
Peter mentioned that we are focusing on the bare-metal patches, so I'll
withhold my other paravirt rants until they are polished.
And to forcefully bring this thread a little bit on-topic:
The pending bit is effectively a lock in a lock, so I was wondering why
we don't use more pending bits; the advantages are the same, just
diminished by the probability of having an ideally contended lock:
- a waiter won't be blocked on a RAM access if the critical section (or
  more) ends sooner
- some unlucky cacheline is not forgotten
- faster unlock (no need for tail operations)
(- ?)
The disadvantages are magnified:
- increased complexity
- intense cacheline sharing
  (I thought that this is the main disadvantage of ticketlock.)
(- ?)
One bit still improved performance; is it the best we got?
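As a purely illustrative reading of "more pending bits", the single bit
could become a small counter of in-word spinners; this layout is
hypothetical, not something the series proposes:

	/*
	 * Hypothetical lock-word layout for "more pending bits": the
	 * pending byte counts in-word spinners instead of flagging one.
	 * Illustrative only; the series uses a single pending bit.
	 */
	struct qspinlock_sketch {
		u8	locked;		/* lock byte */
		u8	pending;	/* number of in-word spinners, 0..N */
		u16	tail;		/* queue tail encoding, as in the series */
	};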
Thanks.
* Re: [PATCH v10 03/19] qspinlock: Add pending bit
2014-05-14 16:51 ` Radim Krčmář
@ 2014-05-14 17:00 ` Peter Zijlstra
From: Peter Zijlstra @ 2014-05-14 17:00 UTC (permalink / raw)
To: Radim Krčmář
Cc: Waiman Long, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod
On Wed, May 14, 2014 at 06:51:24PM +0200, Radim Krčmář wrote:
> Ok.
> I've seen merit in pvqspinlock even with a slightly slower first waiter,
> so I would have happily sacrificed those horrible branches.
> (I prefer elegant to optimized code, but I can see why we want to be
> strictly better than the ticketlock.)
> Peter mentioned that we are focusing on the bare-metal patches, so I'll
> withhold my other paravirt rants until they are polished.
Well, paravirt must happen too, but it comes later in this series;
patch 3, which we're replying to, is still very much in the bare-metal
part of the series.
I've not had time yet to decode all that Waiman has done to make
paravirt work.
But as a general rule I like patches that start with something simple
and working and then optimize it; this series doesn't seem to quite
grasp that.
> And to forcefully bring this thread a little bit on-topic:
>
> The pending bit is effectively a lock in a lock, so I was wondering why
> we don't use more pending bits; the advantages are the same, just
> diminished by the probability of having an ideally contended lock:
> - a waiter won't be blocked on a RAM access if the critical section (or
>   more) ends sooner
> - some unlucky cacheline is not forgotten
> - faster unlock (no need for tail operations)
> (- ?)
> The disadvantages are magnified:
> - increased complexity
> - intense cacheline sharing
>   (I thought that this is the main disadvantage of ticketlock.)
> (- ?)
>
> One bit still improved performance; is it the best we got?
So, the advantage of one bit is that if we use a whole byte for that one
bit we can avoid some atomic ops.
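A minimal sketch of that byte-for-a-bit trade, assuming a little-endian
layout where locked and pending each own a byte of the lock word; the
struct and helper below are illustrative, not the series' exact code:

	/*
	 * With pending in its own byte, the handover from pending spinner
	 * to lock holder needs no lock-prefixed read-modify-write: one
	 * plain 16-bit store sets locked = 1 and pending = 0 at once.
	 * Little-endian byte order is assumed here.
	 */
	struct qsl_bytes {
		u8	locked;		/* bits 0-7 of the lock word */
		u8	pending;	/* bits 8-15: a whole byte */
		u16	tail;		/* queue tail encoding */
	};

	static inline void clear_pending_set_locked(struct qsl_bytes *l)
	{
		WRITE_ONCE(*(u16 *)&l->locked, 1);
	}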
The entire reason for this in-word spinner is to amortize the cost of
hitting the external node cacheline.
Traditional locks like test-and-test-and-set and the ticket lock only
ever access the spinlock word itself; this MCS-style queueing lock has a
second (and, see my other rants in this thread, when done wrong more
than 2) cacheline to touch.
That said, all our benchmarking is pretty much for the cache-hot case,
so I'm not entirely convinced yet that the one pending bit makes up for
it; it does in the cache-hot case.
But... writing cache-cold benchmarks is _hard_ :/
^ permalink raw reply [flat|nested] 163+ messages in thread
* Re: [PATCH v10 03/19] qspinlock: Add pending bit
2014-05-14 17:00 ` Peter Zijlstra
` (2 preceding siblings ...)
@ 2014-05-14 19:13 ` Radim Krčmář
2014-05-19 20:17 ` Waiman Long
2014-05-19 20:17 ` [PATCH v10 03/19] qspinlock: Add pending bit Waiman Long
-1 siblings, 2 replies; 163+ messages in thread
From: Radim Krčmář @ 2014-05-14 19:13 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Waiman Long, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod
2014-05-14 19:00+0200, Peter Zijlstra:
> On Wed, May 14, 2014 at 06:51:24PM +0200, Radim Krčmář wrote:
> > Ok.
> > I've seen merit in pvqspinlock even with slightly slower first-waiter,
> > so I would have happily sacrificed those horrible branches.
> > (I prefer elegant to optimized code, but I can see why we want to be
> > strictly better than ticketlock.)
> > Peter mentioned that we are focusing on bare-metal patches, so I'll
> > withhold my other paravirt rants until they are polished.
(It was an ambiguous sentence; I have comments for later patches.)
> Well, paravirt must happen too, but comes later in this series, patch 3
> which we're replying to is still very much in the bare metal part of the
> series.
(I think that bare metal spans the first 7 patches.)
> I've not had time yet to decode all that Waiman has done to make
> paravirt work.
>
> But as a general rule I like patches that start with something simple
> and working and then optimize it, this series doesn't seem to quite
> grasp that.
>
> > And to forcefully bring this thread a little bit on-topic:
> >
> > Pending-bit is effectively a lock in a lock, so I was wondering why
> > don't we use more pending bits; advantages are the same, just diminished
> > by the probability of having an ideally contended lock:
> > - waiter won't be blocked on RAM access if critical section (or more)
> > ends sooner
> > - some unlucky cacheline is not forgotten
> > - faster unlock (no need for tail operations)
> > (- ?)
> > disadvantages are magnified:
> > - increased complexity
> > - intense cacheline sharing
> > (I thought that this is the main disadvantage of ticketlock.)
> > (- ?)
> >
> > One bit still improved performance, is it the best we got?
>
> So, the advantage of one bit is that if we use a whole byte for 1 bit we
> can avoid some atomic ops.
>
> The entire reason for this in-word spinner is to amortize the cost of
> hitting the external node cacheline.
Every pending CPU removes one critical-section length from the delay
caused by cacheline propagation, and a really cold cache miss is
hundreds(?) of cycles, so we could burn some cycles to ensure correctness
and still be waiting when the first pending CPU unlocks.
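Back-of-envelope, with made-up numbers just to make that concrete:
/* Illustration only; both numbers are assumptions, not measurements. */
#define CS_CYCLES	50	/* assumed critical-section length */
#define COLD_MISS	300	/* assumed cold-cacheline fetch cost */
/*
 * With k pending slots draining ahead of a waiter, it has roughly
 * k * CS_CYCLES cycles of slack to absorb bookkeeping before its turn;
 * k = 6 would already cover a COLD_MISS with the numbers above.
 */
static inline int pending_slack(int k)
{
	return k * CS_CYCLES;
}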
> So traditional locks like test-and-test and the ticket lock only ever
> access the spinlock word itself, this MCS style queueing lock has a
> second (and, see my other rants in this thread, when done wrong more
> than 2) cacheline to touch.
>
> That said, all our benchmarking is pretty much for the cache-hot case,
> so I'm not entirely convinced yet that the one pending bit makes up for
> it, it does in the cache-hot case.
Yeah, we probably use the faster pre-lock quite a lot.
The cover letter states that queue depths 1-3 are a bit slower than the
ticket spinlock, so we might not be losing if we implemented a faster
in-word lock of this capacity. (Not that I'm a fan of the hybrid lock.)
> But... writing cache-cold benchmarks is _hard_ :/
Wouldn't a clflush of the second cacheline before trying for the lock give
us a rough estimate?
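A rough sketch of what I mean (the node type and names here are made up;
only the clflush idea matters):
#include <x86intrin.h>
struct mcs_node { struct mcs_node *next; int locked; };
static struct mcs_node node;	/* stand-in for the per-CPU queue node */
/*
 * Approximate a cache-cold acquisition: evict the node's cacheline,
 * fence, then time the lock attempt itself.
 */
static void lock_cold(void (*lock_fn)(void))
{
	_mm_clflush(&node);	/* CLFLUSH the second cacheline */
	_mm_mfence();		/* make sure the eviction completed */
	lock_fn();		/* measure this call */
}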
^ permalink raw reply [flat|nested] 163+ messages in thread
* Re: [PATCH v10 03/19] qspinlock: Add pending bit
2014-05-14 19:13 ` Radim Krčmář
@ 2014-05-19 20:17 ` Waiman Long
2014-05-19 20:17 ` [PATCH v10 03/19] qspinlock: Add pending bit Waiman Long
1 sibling, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-19 20:17 UTC (permalink / raw)
To: Radim Krčmář
Cc: Peter Zijlstra, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod
On 05/14/2014 03:13 PM, Radim Krčmář wrote:
> 2014-05-14 19:00+0200, Peter Zijlstra:
>> On Wed, May 14, 2014 at 06:51:24PM +0200, Radim Krčmář wrote:
>>> Ok.
>>> I've seen merit in pvqspinlock even with slightly slower first-waiter,
>>> so I would have happily sacrificed those horrible branches.
>>> (I prefer elegant to optimized code, but I can see why we want to be
>>> strictly better than ticketlock.)
>>> Peter mentioned that we are focusing on bare-metal patches, so I'll
>>> withhold my other paravirt rants until they are polished.
> (It was an ambiguous sentence, I have comments for later patches.)
>
>> Well, paravirt must happen too, but comes later in this series, patch 3
>> which we're replying to is still very much in the bare metal part of the
>> series.
> (I think that bare metal spans the first 7 patches.)
>
>> I've not had time yet to decode all that Waiman has done to make
>> paravirt work.
>>
>> But as a general rule I like patches that start with something simple
>> and working and then optimize it, this series doesn't seem to quite
>> grasp that.
>>
>>> And to forcefully bring this thread a little bit on-topic:
>>>
>>> Pending-bit is effectively a lock in a lock, so I was wondering why
>>> don't we use more pending bits; advantages are the same, just diminished
>>> by the probability of having an ideally contended lock:
>>> - waiter won't be blocked on RAM access if critical section (or more)
>>> ends sooner
>>> - some unlucky cacheline is not forgotten
>>> - faster unlock (no need for tail operations)
>>> (- ?)
>>> disadvantages are magnified:
>>> - increased complexity
>>> - intense cacheline sharing
>>> (I thought that this is the main disadvantage of ticketlock.)
>>> (- ?)
>>>
>>> One bit still improved performance, is it the best we got?
>> So, the advantage of one bit is that if we use a whole byte for 1 bit we
>> can avoid some atomic ops.
>>
>> The entire reason for this in-word spinner is to amortize the cost of
>> hitting the external node cacheline.
> Every pending CPU removes one length of the critical section from the
> delay caused by cacheline propagation and really cold cache is
> hundreds(?) of cycles, so we could burn some to ensure correctness and
> still be waiting when the first pending CPU unlocks.
Assuming that taking a spinlock is fairly frequent in the kernel, the
node structure cacheline won't be so cold after all.
>> So traditional locks like test-and-test and the ticket lock only ever
>> access the spinlock word itself, this MCS style queueing lock has a
>> second (and, see my other rants in this thread, when done wrong more
>> than 2) cacheline to touch.
>>
>> That said, all our benchmarking is pretty much for the cache-hot case,
>> so I'm not entirely convinced yet that the one pending bit makes up for
>> it, it does in the cache-hot case.
> Yeah, we probably use the faster pre-lock quite a lot.
> Cover letter states that queue depth 1-3 is a bit slower than ticket
> spinlock, so we might not be losing if we implemented a faster
> in-word-lock of this capacity. (Not that I'm a fan of the hybrid lock.)
I had tried an experimental patch with 2 pending bits. However, the
benchmark test that I used showed that performance was even worse than
without any pending bit. I probably need to revisit later why this is the
case. For now, I will focus on just having one pending bit. If we can
find a way to get better performance out of more than one pending bit
later on, we can always submit another patch to do that.
>> But... writing cache-cold benchmarks is _hard_ :/
> Wouldn't clflush of the second cacheline before trying for the lock give
> us a rough estimate?
clflush is a very expensive operation and I doubt that it will be
indicative of real-life performance at all. BTW, there is no way to
write a cache-cold benchmark for that 2nd cacheline, as any call to
spin_lock is likely to access it if there is enough contention.
-Longman
^ permalink raw reply [flat|nested] 163+ messages in thread
* Re: [PATCH v10 10/19] qspinlock, x86: Allow unfair spinlock in a virtual guest
2014-05-08 19:12 ` Peter Zijlstra
@ 2014-05-19 20:30 ` Waiman Long
-1 siblings, 0 replies; 163+ messages in thread
From: Waiman Long @ 2014-05-19 20:30 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-arch, x86,
linux-kernel, virtualization, xen-devel, kvm, Paolo Bonzini,
Konrad Rzeszutek Wilk, Boris Ostrovsky, Paul E. McKenney,
Rik van Riel, Linus Torvalds, Raghavendra K T, David Vrabel,
Oleg Nesterov, Gleb Natapov, Scott J Norton, Chegu Vinod
On 05/08/2014 03:12 PM, Peter Zijlstra wrote:
> On Wed, May 07, 2014 at 11:01:38AM -0400, Waiman Long wrote:
>
>
> No, we want the unfair thing for VIRT, not PARAVIRT.
>
Yes, you are right. I will change that to VIRT.
>> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
>> index 9e7659e..10e87e1 100644
>> --- a/kernel/locking/qspinlock.c
>> +++ b/kernel/locking/qspinlock.c
>> @@ -227,6 +227,14 @@ static __always_inline int get_qlock(struct qspinlock *lock)
>> {
>> struct __qspinlock *l = (void *)lock;
>>
>> +#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
>> + if (static_key_false(&paravirt_unfairlocks_enabled))
>> + /*
>> + * Need to use atomic operation to get the lock when
>> + * lock stealing can happen.
>> + */
>> + return cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0;
> That's missing {}.
It is a single statement, which doesn't need braces according to the kernel
coding style. I could move the comment up a bit to make it easier to read.
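For instance, something like this (an illustrative rearrangement of the
hunk above, nothing more):
#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
	/*
	 * Lock stealing can happen here, so an atomic operation is
	 * needed to take the lock.
	 */
	if (static_key_false(&paravirt_unfairlocks_enabled))
		return cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0;
#endif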
>> +#endif
>
>> barrier();
>> ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
>> barrier();
>
> But no, what you want is:
>
> static __always_inline bool virt_lock(struct qspinlock *lock)
> {
> #ifdef CONFIG_VIRT_MUCK
> if (static_key_false(&virt_unfairlocks_enabled)) {
> while (!queue_spin_trylock(lock))
> cpu_relax();
>
> return true;
> }
> #endif
> return false;
> }
>
>
> void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> {
> if (virt_lock(lock))
> return;
>
> ...
> }
This is a possible way of doing it. I can do that in the patch series to
simplify it. Hopefully that will speed up the review process and get it
done quicker.
-Longman
^ permalink raw reply [flat|nested] 163+ messages in thread
* Re: [RFC 08/07] qspinlock: integrate pending bit into queue
[not found] ` <20140521164930.GA26199@potion.brq.redhat.com>
@ 2014-05-21 17:02 ` Radim Krčmář
2014-05-21 17:02 ` Radim Krčmář
1 sibling, 0 replies; 163+ messages in thread
From: Radim Krčmář @ 2014-05-21 17:02 UTC (permalink / raw)
To: Waiman Long
Cc: Peter Zijlstra, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
linux-arch, x86, linux-kernel, virtualization, xen-devel, kvm,
Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
Paul E. McKenney, Rik van Riel, Linus Torvalds, Raghavendra K T,
David Vrabel, Oleg Nesterov, Gleb Natapov, Scott J Norton,
Chegu Vinod
2014-05-21 18:49+0200, Radim Krčmář:
> 2014-05-19 16:17-0400, Waiman Long:
> > As for now, I will focus on just having one pending bit.
>
> I'll throw some ideas at it,
One of the ideas follows; it seems sound, but I haven't benchmarked it
thoroughly. (I wasted a lot of time writing and playing with various tools
and loads.)
Dbench on an ext4 ramdisk, hackbench, and ebizzy have shown a small
improvement in performance, but my main motivation was the weird design of
the Pending Bit.
Does your setup yield improvements too?
(A minor code swap noted in the patch might help things.)
It is meant to be applied on top of the first 7 patches, because the virt
stuff would just get in the way.
I have preserved a lot of dead code and made some questionable decisions
just to keep the diff short and in one patch; sorry about that.
(It is work in progress; double-slashed lines mark points of interest.)
---8<---
The Pending Bit wasn't used if we already had a node queue with one CPU,
which meant that we suffered from these drawbacks again:
- the unlock path was more complicated
(the last queued CPU had to clear the tail)
- the cold node cacheline was just one critical section away
With this patch, the Pending Bit is used as an additional step in the queue.
Waiting for the lock is the same: we try the Pending Bit and, if it is
taken, we append to the Node Queue.
Unlock is different: pending CPU moves into critical section and first
CPU from Node Queue takes Pending Bit and notifies next in line or
clears the tail.
This allows the pending CPU to take the lock as fast as possible,
because all bookkeeping was done when entering Pending Queue.
Node Queue operations can also be slower without affecting the
performance, because we have an additional buffer of one critical
section.
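To make the handoff order concrete, here is a sketch of the intended flow
(states are (tail, pending, locked); CPUs A, B and C arrive in that order):
A: takes the lock			(0,0,1)
B: takes the Pending Bit		(0,1,1)
C: queues on the MCS tail		(c,1,1)
A: unlocks				(c,1,0)
B: flips pending into locked and
   enters the critical section		(c,0,1)
C: sees pending free, takes it and,
   being last, clears the tail		(0,1,1)
B: unlocks				(0,1,0)
C: flips pending into locked		(0,0,1)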
---
kernel/locking/qspinlock.c | 180 +++++++++++++++++++++++++++++++++------------
1 file changed, 135 insertions(+), 45 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 0ee1a23..76cafb0 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -98,7 +98,10 @@ struct __qspinlock {
union {
atomic_t val;
#ifdef __LITTLE_ENDIAN
- u8 locked;
+ struct {
+ u8 locked;
+ u8 pending;
+ };
struct {
u16 locked_pending;
u16 tail;
@@ -109,7 +112,8 @@ struct __qspinlock {
u16 locked_pending;
};
struct {
- u8 reserved[3];
+ u8 reserved[2];
+ u8 pending;
u8 locked;
};
#endif
@@ -314,6 +318,59 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
return 1;
}
+// nice comment here
+static inline bool trylock(struct qspinlock *lock, u32 *val) {
+ if (!(*val = atomic_read(&lock->val)) &&
+ (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) == 0)) {
+ *val = _Q_LOCKED_VAL;
+ return 1;
+ }
+ return 0;
+}
+
+// here
+static inline bool trypending(struct qspinlock *lock, u32 *pval) {
+ u32 old, val = *pval;
+ // optimizer might produce the same code if we use *pval directly
+
+ // we could use 'if' and a xchg that touches only the pending bit to
+ // save some cycles at the price of a longer line cutting window
+ // (and I think it would bug without changing the rest)
+ while (!(val & (_Q_PENDING_MASK | _Q_TAIL_MASK))) {
+ old = atomic_cmpxchg(&lock->val, val, val | _Q_PENDING_MASK);
+ if (old == val) {
+ *pval = val | _Q_PENDING_MASK;
+ return 1;
+ }
+ val = old;
+ }
+ *pval = val;
+ return 0;
+}
+
+// here
+static inline void set_pending(struct qspinlock *lock, u8 pending)
+{
+ struct __qspinlock *l = (void *)lock;
+
+ // take a look if this is necessary, and if we don't have an
+ // abstraction already
+ barrier();
+ ACCESS_ONCE(l->pending) = pending;
+ barrier();
+}
+
+// and here
+static inline u32 cmpxchg_tail(struct qspinlock *lock, u32 tail, u32 newtail)
+// API-incompatible with set_pending and the shifting is ugly, so I'd rather
+// refactor this one, xchg_tail() and encode_tail() ... another day
+{
+ struct __qspinlock *l = (void *)lock;
+
+ return (u32)cmpxchg(&l->tail, tail >> _Q_TAIL_OFFSET,
+ newtail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
+}
+
/**
* queue_spin_lock_slowpath - acquire the queue spinlock
* @lock: Pointer to queue spinlock structure
@@ -324,21 +381,21 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
* fast : slow : unlock
* : :
* uncontended (0,0,0) -:--> (0,0,1) ------------------------------:--> (*,*,0)
- * : | ^--------.------. / :
- * : v \ \ | :
- * pending : (0,1,1) +--> (0,1,0) \ | :
- * : | ^--' | | :
- * : v | | :
- * uncontended : (n,x,y) +--> (n,0,0) --' | :
+ * : | ^--------. / :
+ * : v \ | :
+ * pending : (0,1,1) +--> (0,1,0) | :
+ * : | ^--' ^----------. | :
+ * : v | | :
+ * uncontended : (n,x,y) +--> (n,0,y) ---> (0,1,y) | :
* queue : | ^--' | :
* : v | :
- * contended : (*,x,y) +--> (*,0,0) ---> (*,0,1) -' :
- * queue : ^--' :
- *
- * The pending bit processing is in the trylock_pending() function
- * whereas the uncontended and contended queue processing is in the
- * queue_spin_lock_slowpath() function.
+ * contended : (*,x,y) +--> (*,0,y) (*,0,1) -' :
+ * queue : ^--' | ^ :
+ * : v | :
+ * : (*,1,y) ---> (*,1,0) :
+ * // diagram might be wrong (and definitely isn't obvious)
*
+ * // give some insight about the hybrid locking
*/
void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
{
@@ -348,8 +405,20 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
- if (trylock_pending(lock, &val))
- return; /* Lock acquired */
+ /*
+ * Check if nothing changed while we were calling this function.
+ * (Cold code cacheline could have delayed us.)
+ */
+ // this should go into a separate patch with micro-optimizations
+ if (trylock(lock, &val))
+ return;
+ /*
+ * The lock is still held, wait without touching the node unless there
+ * is at least one cpu waiting before us.
+ */
+ // create structured code out of this mess
+ if (trypending(lock, &val))
+ goto pending;
node = this_cpu_ptr(&mcs_nodes[0]);
idx = node->count++;
@@ -364,15 +433,18 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
* attempt the trylock once more in the hope someone let go while we
* weren't watching.
*/
- if (queue_spin_trylock(lock))
+ // is some of the re-checking counterproductive?
+ if (trylock(lock, &val)) {
+ this_cpu_dec(mcs_nodes[0].count); // ugly
+ return;
+ }
+ if (trypending(lock, &val))
goto release;
/*
- * we already touched the queueing cacheline; don't bother with pending
- * stuff.
- *
* p,*,* -> n,*,*
*/
+ // racing for pending/queue till here; safe
old = xchg_tail(lock, tail, &val);
/*
@@ -386,41 +458,45 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
}
/*
- * we're at the head of the waitqueue, wait for the owner & pending to
- * go away.
- * Load-acquired is used here because the get_qlock()
- * function below may not be a full memory barrier.
- *
- * *,x,y -> *,0,0
+ * We are now waiting for the pending bit to get cleared.
*/
- while ((val = smp_load_acquire(&lock->val.counter))
- & _Q_LOCKED_PENDING_MASK)
+ // make a get_pending(lock, &val) helper
+ while ((val = smp_load_acquire(&lock->val.counter)) & _Q_PENDING_MASK)
+ // would longer body ease cacheline contention?
+ // would it be better to use monitor/mwait instead?
+ // (we can tolerate some delay because we aren't pending ...)
arch_mutex_cpu_relax();
/*
- * claim the lock:
+ * The pending bit is free, take it.
*
- * n,0,0 -> 0,0,1 : lock, uncontended
- * *,0,0 -> *,0,1 : lock, contended
+ * *,0,* -> *,1,*
+ */
+ // might add &val param and do |= _Q_PENDING_VAL when refactoring ...
+ set_pending(lock, 1);
+
+ /*
+ * Clear the tail if noone queued after us.
*
- * If the queue head is the only one in the queue (lock value == tail),
- * clear the tail code and grab the lock. Otherwise, we only need
- * to grab the lock.
+ * n,1,y -> 0,1,y
*/
- for (;;) {
- if (val != tail) {
- get_qlock(lock);
- break;
- }
- old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
- if (old == val)
- goto release; /* No contention */
+ if ((val & _Q_TAIL_MASK) == tail &&
+ cmpxchg_tail(lock, tail, 0) == tail)
+ goto release;
+ // negate the condition and obliterate the goto with braces
- val = old;
- }
+ // fun fact:
+ // if ((val & _Q_TAIL_MASK) == tail) {
+ // val = cmpxchg_tail(&lock, tail, 0);
+ // if ((val & _Q_TAIL_MASK) == tail)
+ // goto release;
+ // produced significantly faster code in my benchmarks ...
+ // (I haven't looked why, seems like a fluke.)
+ // swap the code if you want performance at any cost
/*
- * contended path; wait for next, release.
+ * Tell the next node that we are pending, so it can start spinning to
+ * replace us in the future.
*/
while (!(next = ACCESS_ONCE(node->next)))
arch_mutex_cpu_relax();
@@ -432,5 +508,19 @@ release:
* release the node
*/
this_cpu_dec(mcs_nodes[0].count);
+pending:
+ /*
+ * we're at the head of the waitqueue, wait for the owner to go away.
+ * Flip pending and locked bit then.
+ *
+ * *,1,0 -> *,0,1
+ */
+ while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
+ arch_mutex_cpu_relax();
+ clear_pending_set_locked(lock, val);
+
+ /*
+ * We have the lock.
+ */
}
EXPORT_SYMBOL(queue_spin_lock_slowpath);
--
1.9.0
^ permalink raw reply related [flat|nested] 163+ messages in thread
* Re: [RFC 08/07] qspinlock: integrate pending bit into queue
@ 2014-05-21 17:02 ` Radim Krčmář
0 siblings, 0 replies; 163+ messages in thread
From: Radim Krčmář @ 2014-05-21 17:02 UTC (permalink / raw)
To: Waiman Long
Cc: x86, Gleb Natapov, Peter Zijlstra, linux-kernel, H. Peter Anvin,
Boris Ostrovsky, linux-arch, kvm, Raghavendra K T, Ingo Molnar,
xen-devel, Paul E. McKenney, Rik van Riel, Konrad Rzeszutek Wilk,
Scott J Norton, Paolo Bonzini, Thomas Gleixner, virtualization,
Chegu Vinod, Oleg Nesterov, David Vrabel, Linus Torvalds
2014-05-21 18:49+0200, Radim Krčmář:
> 2014-05-19 16:17-0400, Waiman Long:
> > As for now, I will focus on just having one pending bit.
>
> I'll throw some ideas at it,
One of the ideas follows; it seems sound, but I haven't benchmarked it
thoroughly. (Wasted a lot of time by writing/playing with various tools
and loads.)
Dbench on ext4 ramdisk, hackbench and ebizzy have shown a small
improvement in performance, but my main drive was the weird design of
Pending Bit.
Does your setup yield improvements too?
(A minor code swap noted in the patch might help things.)
It is meant to be aplied on top of first 7 patches, because the virt
stuff would just get in the way.
I have preserved a lot of dead code and made some questionable decisions
just to keep the diff short and in one patch, sorry about that.
(It is work in progress, double slashed lines mark points of interest.)
---8<---
Pending Bit wasn't used if we already had a node queue with one cpu,
which meant that we suffered from these drawbacks again:
- unlock path was more complicated
(last queued CPU had to clear the tail)
- cold node cacheline was just one critical section away
With this patch, Pending Bit is used as an additional step in the queue.
Waiting for lock is the same: we try Pending Bit and if it is taken, we
append to Node Queue.
Unlock is different: pending CPU moves into critical section and first
CPU from Node Queue takes Pending Bit and notifies next in line or
clears the tail.
This allows the pending CPU to take the lock as fast as possible,
because all bookkeeping was done when entering Pending Queue.
Node Queue operations can also be slower without affecting the
performance, because we have an additional buffer of one critical
section.
---
kernel/locking/qspinlock.c | 180 +++++++++++++++++++++++++++++++++------------
1 file changed, 135 insertions(+), 45 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 0ee1a23..76cafb0 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -98,7 +98,10 @@ struct __qspinlock {
union {
atomic_t val;
#ifdef __LITTLE_ENDIAN
- u8 locked;
+ struct {
+ u8 locked;
+ u8 pending;
+ };
struct {
u16 locked_pending;
u16 tail;
@@ -109,7 +112,8 @@ struct __qspinlock {
u16 locked_pending;
};
struct {
- u8 reserved[3];
+ u8 reserved[2];
+ u8 pending;
u8 locked;
};
#endif
@@ -314,6 +318,59 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
return 1;
}
+// nice comment here
+static inline bool trylock(struct qspinlock *lock, u32 *val) {
+ if (!(*val = atomic_read(&lock->val)) &&
+ (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) == 0)) {
+ *val = _Q_LOCKED_VAL;
+ return 1;
+ }
+ return 0;
+}
+
+// here
+static inline bool trypending(struct qspinlock *lock, u32 *pval) {
+ u32 old, val = *pval;
+ // optimizer might produce the same code if we use *pval directly
+
+ // we could use 'if' and a xchg that touches only the pending bit to
+ // save some cycles at the price of a longer line cutting window
+ // (and I think it would bug without changing the rest)
+ while (!(val & (_Q_PENDING_MASK | _Q_TAIL_MASK))) {
+ old = atomic_cmpxchg(&lock->val, val, val | _Q_PENDING_MASK);
+ if (old == val) {
+ *pval = val | _Q_PENDING_MASK;
+ return 1;
+ }
+ val = old;
+ }
+ *pval = val;
+ return 0;
+}
+
+// here
+static inline void set_pending(struct qspinlock *lock, u8 pending)
+{
+ struct __qspinlock *l = (void *)lock;
+
+ // take a look if this is necessary, and if we don't have an
+ // abstraction already
+ barrier();
+ ACCESS_ONCE(l->pending) = pending;
+ barrier();
+}
+
+// and here
+static inline u32 cmpxchg_tail(struct qspinlock *lock, u32 tail, u32 newtail)
+// API-incompatible with set_pending and the shifting is ugly, so I'd rather
+// refactor this one, xchg_tail() and encode_tail() ... another day
+{
+ struct __qspinlock *l = (void *)lock;
+
+ return (u32)cmpxchg(&l->tail, tail >> _Q_TAIL_OFFSET,
+ newtail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
+}
+
/**
* queue_spin_lock_slowpath - acquire the queue spinlock
* @lock: Pointer to queue spinlock structure
@@ -324,21 +381,21 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
* fast : slow : unlock
* : :
* uncontended (0,0,0) -:--> (0,0,1) ------------------------------:--> (*,*,0)
- * : | ^--------.------. / :
- * : v \ \ | :
- * pending : (0,1,1) +--> (0,1,0) \ | :
- * : | ^--' | | :
- * : v | | :
- * uncontended : (n,x,y) +--> (n,0,0) --' | :
+ * : | ^--------. / :
+ * : v \ | :
+ * pending : (0,1,1) +--> (0,1,0) | :
+ * : | ^--' ^----------. | :
+ * : v | | :
+ * uncontended : (n,x,y) +--> (n,0,y) ---> (0,1,y) | :
* queue : | ^--' | :
* : v | :
- * contended : (*,x,y) +--> (*,0,0) ---> (*,0,1) -' :
- * queue : ^--' :
- *
- * The pending bit processing is in the trylock_pending() function
- * whereas the uncontended and contended queue processing is in the
- * queue_spin_lock_slowpath() function.
+ * contended : (*,x,y) +--> (*,0,y) (*,0,1) -' :
+ * queue : ^--' | ^ :
+ * : v | :
+ * : (*,1,y) ---> (*,1,0) :
+ * // diagram might be wrong (and definitely isn't obvious)
*
+ * // give some insight about the hybrid locking
*/
void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
{
@@ -348,8 +405,20 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
- if (trylock_pending(lock, &val))
- return; /* Lock acquired */
+ /*
+ * Check if nothing changed while we were calling this function.
+ * (Cold code cacheline could have delayed us.)
+ */
+ // this should go into a separate patch with micro-optimizations
+ if (trylock(lock, &val))
+ return;
+ /*
+ * The lock is still held, wait without touching the node unless there
+ * is at least one cpu waiting before us.
+ */
+ // create structured code out of this mess
+ if (trypending(lock, &val))
+ goto pending;
node = this_cpu_ptr(&mcs_nodes[0]);
idx = node->count++;
@@ -364,15 +433,18 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
* attempt the trylock once more in the hope someone let go while we
* weren't watching.
*/
- if (queue_spin_trylock(lock))
+ // is some of the re-checking counterproductive?
+ if (trylock(lock, &val)) {
+ this_cpu_dec(mcs_nodes[0].count); // ugly
+ return;
+ }
+ if (trypending(lock, &val))
goto release;
/*
- * we already touched the queueing cacheline; don't bother with pending
- * stuff.
- *
* p,*,* -> n,*,*
*/
+ // racing for pending/queue till here; safe
old = xchg_tail(lock, tail, &val);
/*
@@ -386,41 +458,45 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
}
/*
- * we're at the head of the waitqueue, wait for the owner & pending to
- * go away.
- * Load-acquired is used here because the get_qlock()
- * function below may not be a full memory barrier.
- *
- * *,x,y -> *,0,0
+ * We are now waiting for the pending bit to get cleared.
*/
- while ((val = smp_load_acquire(&lock->val.counter))
- & _Q_LOCKED_PENDING_MASK)
+ // make a get_pending(lock, &val) helper
+ while ((val = smp_load_acquire(&lock->val.counter)) & _Q_PENDING_MASK)
+ // would longer body ease cacheline contention?
+ // would it be better to use monitor/mwait instead?
+ // (we can tolerate some delay because we aren't pending ...)
arch_mutex_cpu_relax();
/*
- * claim the lock:
+ * The pending bit is free, take it.
*
- * n,0,0 -> 0,0,1 : lock, uncontended
- * *,0,0 -> *,0,1 : lock, contended
+ * *,0,* -> *,1,*
+ */
+ // might add &val param and do |= _Q_PENDING_VAL when refactoring ...
+ set_pending(lock, 1);
+
+ /*
+ * Clear the tail if noone queued after us.
*
- * If the queue head is the only one in the queue (lock value == tail),
- * clear the tail code and grab the lock. Otherwise, we only need
- * to grab the lock.
+ * n,1,y -> 0,1,y
*/
- for (;;) {
- if (val != tail) {
- get_qlock(lock);
- break;
- }
- old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
- if (old == val)
- goto release; /* No contention */
+ if ((val & _Q_TAIL_MASK) == tail &&
+ cmpxchg_tail(lock, tail, 0) == tail)
+ goto release;
+ // negate the condition and obliterate the goto with braces
- val = old;
- }
+ // fun fact:
+ // if ((val & _Q_TAIL_MASK) == tail) {
+ // val = cmpxchg_tail(lock, tail, 0);
+ // if ((val & _Q_TAIL_MASK) == tail)
+ // goto release;
+ // }
+ // produced significantly faster code in my benchmarks ...
+ // (I haven't looked into why; it seems like a fluke.)
+ // swap the code in if you want performance at any cost
/*
- * contended path; wait for next, release.
+ * Tell the next node that we are pending, so it can start spinning to
+ * replace us in the future.
*/
while (!(next = ACCESS_ONCE(node->next)))
arch_mutex_cpu_relax();
@@ -432,5 +508,19 @@ release:
* release the node
*/
this_cpu_dec(mcs_nodes[0].count);
+pending:
+ /*
+ * We own the pending bit: wait for the lock owner to go away, then
+ * flip the pending and locked bits.
+ *
+ * *,1,0 -> *,0,1
+ */
+ while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
+ arch_mutex_cpu_relax();
+ clear_pending_set_locked(lock, val);
+
+ /*
+ * We have the lock.
+ */
}
EXPORT_SYMBOL(queue_spin_lock_slowpath);
--
1.9.0
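To make the (tail, pending, locked) triplets in the diagram above
concrete, here is a minimal stand-alone sketch of the lock word. It is
not part of the patch: the Q_* constants are user-space stand-ins
assumed to mirror the kernel's _Q_* masks (8-bit locked byte, 8-bit
pending byte, 16-bit tail, matching the __qspinlock struct earlier in
the patch), and main() simply walks one acquisition through the
pending path.

#include <stdint.h>
#include <stdio.h>

#define Q_LOCKED_VAL	(1U << 0)	/* bit 0: lock holder present */
#define Q_PENDING_VAL	(1U << 8)	/* bit 8: pending slot taken */
#define Q_TAIL_OFFSET	16
#define Q_TAIL_MASK	(0xffffU << Q_TAIL_OFFSET)

/* Print a lock word as the (tail, pending, locked) triplet of the diagram. */
static void show(const char *when, uint32_t v)
{
	printf("%-18s (%u,%u,%u)\n", when,
	       (unsigned)((v & Q_TAIL_MASK) >> Q_TAIL_OFFSET),
	       (unsigned)!!(v & Q_PENDING_VAL),
	       (unsigned)!!(v & Q_LOCKED_VAL));
}

int main(void)
{
	uint32_t v = 0;

	show("idle", v);				/* (0,0,0) */
	v |= Q_LOCKED_VAL;				/* trylock():    -> (0,0,1) */
	show("fast path", v);
	v |= Q_PENDING_VAL;				/* trypending(): -> (0,1,1) */
	show("pending waiter", v);
	v &= ~Q_LOCKED_VAL;				/* unlock:       -> (0,1,0) */
	show("owner left", v);
	v = (v & ~Q_PENDING_VAL) | Q_LOCKED_VAL;	/* handoff:      -> (0,0,1) */
	show("pending CPU locks", v);
	return 0;
}

A queue head does the same walk with a nonzero tail: it waits at
(n,0,y), sets pending to reach (n,1,y), clears its own tail encoding to
reach (0,1,y) if it was alone, and finishes with the same
(*,1,0) -> (*,0,1) handoff.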
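The entry and handoff steps of the slowpath above can likewise be
modeled outside the kernel. A minimal sketch, assuming <stdatomic.h>
compare-exchange as a stand-in for the kernel's atomic_cmpxchg()/
cmpxchg() helpers: trylock() and trypending() mirror the helpers of the
same name in the patch, while pending_acquire() is a made-up name for
the code at the pending: label, and the busy loops omit the kernel's
arch_mutex_cpu_relax().

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define Q_LOCKED_VAL	(1U << 0)
#define Q_PENDING_VAL	(1U << 8)
#define Q_TAIL_MASK	(0xffffU << 16)

struct qspinlock { _Atomic uint32_t val; };

/* (0,0,0) -> (0,0,1): succeeds only when the whole word is idle. */
static bool trylock(struct qspinlock *lock, uint32_t *val)
{
	uint32_t expected = 0;

	*val = atomic_load(&lock->val);
	if (*val == 0 &&
	    atomic_compare_exchange_strong(&lock->val, &expected,
					   Q_LOCKED_VAL)) {
		*val = Q_LOCKED_VAL;
		return true;
	}
	return false;
}

/* (0,0,y) -> (0,1,y): take the pending slot iff nobody is pending/queued. */
static bool trypending(struct qspinlock *lock, uint32_t *val)
{
	uint32_t old = *val;

	while (!(old & (Q_PENDING_VAL | Q_TAIL_MASK))) {
		/* on failure the CAS reloads 'old' and we re-check */
		if (atomic_compare_exchange_weak(&lock->val, &old,
						 old | Q_PENDING_VAL)) {
			*val = old | Q_PENDING_VAL;
			return true;
		}
	}
	*val = old;
	return false;
}

/* (*,1,0) -> (*,0,1): the pending: label -- wait out the owner, lock. */
static void pending_acquire(struct qspinlock *lock)
{
	uint32_t old;

	while (atomic_load_explicit(&lock->val, memory_order_acquire) &
	       Q_LOCKED_VAL)
		;	/* spin until the owner leaves */

	old = atomic_load(&lock->val);
	while (!atomic_compare_exchange_weak(&lock->val, &old,
					     (old & ~Q_PENDING_VAL) |
					     Q_LOCKED_VAL))
		;	/* the tail may change underneath us; flip only our bits */
}

int main(void)
{
	struct qspinlock lock = { 0 };
	uint32_t val;

	/* single-threaded demo, so the trylock fast path wins */
	if (!trylock(&lock, &val) && trypending(&lock, &val))
		pending_acquire(&lock);
	return 0;
}

pending_acquire() uses a CAS loop only because C11 atomics cannot
portably store just a halfword of the atomic word; in the patch, only
the pending CPU may change the locked and pending bytes at that point,
so its clear_pending_set_locked() helper can be cheaper.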
Thread overview: 163+ messages
2014-05-07 15:01 [PATCH v10 00/19] qspinlock: a 4-byte queue spinlock with PV support Waiman Long
2014-05-07 15:01 ` [PATCH v10 01/19] qspinlock: A simple generic 4-byte queue spinlock Waiman Long
2014-05-07 15:01 ` [PATCH v10 02/19] qspinlock, x86: Enable x86-64 to use " Waiman Long
2014-05-07 15:01 ` [PATCH v10 03/19] qspinlock: Add pending bit Waiman Long
2014-05-08 18:57 ` Peter Zijlstra
2014-05-10 0:49 ` Waiman Long
2014-05-12 15:22 ` Radim Krčmář
2014-05-12 17:29 ` Peter Zijlstra
2014-05-13 19:47 ` Waiman Long
2014-05-14 16:51 ` Radim Krčmář
2014-05-14 17:00 ` Peter Zijlstra
2014-05-14 19:13 ` Radim Krčmář
2014-05-19 20:17 ` Waiman Long
[not found] ` <20140521164930.GA26199@potion.brq.redhat.com>
2014-05-21 17:02 ` [RFC 08/07] qspinlock: integrate pending bit into queue Radim Krčmář
2014-05-07 15:01 ` [PATCH v10 04/19] qspinlock: Extract out the exchange of tail code word Waiman Long
2014-05-07 15:01 ` [PATCH v10 05/19] qspinlock: Optimize for smaller NR_CPUS Waiman Long
2014-05-07 15:01 ` [PATCH v10 06/19] qspinlock: prolong the stay in the pending bit path Waiman Long
2014-05-08 18:58 ` Peter Zijlstra
2014-05-10 0:58 ` Waiman Long
2014-05-10 13:38 ` Peter Zijlstra
2014-05-07 15:01 ` [PATCH v10 07/19] qspinlock: Use a simple write to grab the lock, if applicable Waiman Long
2014-05-08 19:00 ` Peter Zijlstra
2014-05-10 1:05 ` Waiman Long
2014-05-08 19:02 ` Peter Zijlstra
2014-05-10 1:06 ` Waiman Long
2014-05-07 15:01 ` [PATCH v10 08/19] qspinlock: Make a new qnode structure to support virtualization Waiman Long
2014-05-08 19:04 ` Peter Zijlstra
2014-05-10 1:08 ` Waiman Long
2014-05-10 14:14 ` Peter Zijlstra
2014-05-10 18:21 ` Peter Zijlstra
2014-05-07 15:01 ` [PATCH v10 09/19] qspinlock: Prepare for unfair lock support Waiman Long
2014-05-08 19:06 ` Peter Zijlstra
2014-05-10 1:19 ` Waiman Long
2014-05-10 14:13 ` Peter Zijlstra
2014-05-07 15:01 ` [PATCH v10 10/19] qspinlock, x86: Allow unfair spinlock in a virtual guest Waiman Long
2014-05-08 19:12 ` Peter Zijlstra
2014-05-19 20:30 ` Waiman Long
2014-05-12 18:57 ` Radim Krčmář
2014-05-07 15:01 ` [PATCH v10 11/19] qspinlock: Split the MCS queuing code into a separate slowerpath Waiman Long
2014-05-07 15:01 ` [PATCH v10 12/19] unfair qspinlock: Variable frequency lock stealing mechanism Waiman Long
2014-05-08 19:19 ` Peter Zijlstra
2014-05-07 15:01 ` [PATCH v10 13/19] unfair qspinlock: Enable lock stealing in lock waiters Waiman Long
2014-05-07 15:01 ` [PATCH v10 14/19] pvqspinlock, x86: Rename paravirt_ticketlocks_enabled Waiman Long
2014-05-07 15:01 ` [PATCH v10 15/19] pvqspinlock, x86: Add PV data structure & methods Waiman Long
2014-05-07 15:01 ` [PATCH v10 16/19] pvqspinlock: Enable coexistence with the unfair lock Waiman Long
2014-05-07 15:01 ` [PATCH v10 17/19] pvqspinlock: Add qspinlock para-virtualization support Waiman Long
2014-05-07 15:01 ` [PATCH v10 18/19] pvqspinlock, x86: Enable PV qspinlock PV for KVM Waiman Long
2014-05-07 19:07 ` Konrad Rzeszutek Wilk
2014-05-08 17:54 ` Waiman Long
2014-05-07 15:01 ` [PATCH v10 19/19] pvqspinlock, x86: Enable PV qspinlock for XEN Waiman Long
2014-05-07 19:07 ` [PATCH v10 00/19] qspinlock: a 4-byte queue spinlock with PV support Konrad Rzeszutek Wilk
2014-05-08 17:54 ` Waiman Long